<div dir="ltr"><div>Hey Look.  I'm actually reading your patch now!<br><br></div>I read through the whole thing and overall I think it looks fairly good.<br><div><div><div class="gmail_extra"><br><div class="gmail_quote">On Sun, Nov 27, 2016 at 11:23 AM, Ilia Mirkin <span dir="ltr"><<a href="mailto:imirkin@alum.mit.edu" target="_blank">imirkin@alum.mit.edu</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">The strategy is to just keep n anv_query_pool_slot entries per query<br>
instead of one. The available bit is only valid in the last one.<br></blockquote><div><br></div><div>Seems like a reasonable approach.  To be honest, I'm not a huge fan of the "available" bit (or 64 bits as the case may be) but I'm not sure how we'd get away without it.<br><br></div><div>Maybe it would be better to do something like:<br><br></div><div>struct anv_query_entry {<br></div><div>   uint64_t begin;<br></div><div>   uint64_t end;<br>};<br><br></div><div>struct anv_query_pool_slot {<br></div><div>   uint64_t available;<br></div><div>   struct anv_query_entry entries[0];<br></div><div>};<br><br></div><div>Food for thought.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
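</blockquote><div><br></div><div>To make that concrete, here's a quick host-side sketch of how the variable-length slot could be sized and indexed with a flexible array member (hypothetical helper names, not actual anv code):<br></div>

```c
#include <stddef.h>
#include <stdint.h>

struct anv_query_entry {
   uint64_t begin;
   uint64_t end;
};

struct anv_query_pool_slot {
   uint64_t available;
   struct anv_query_entry entries[];
};

/* Bytes per slot when n counters are enabled (n = 1 for occlusion). */
static size_t
query_slot_size(uint32_t n)
{
   return sizeof(struct anv_query_pool_slot) +
          (size_t)n * sizeof(struct anv_query_entry);
}

/* Slot for a given query index within the pool's BO mapping. */
static struct anv_query_pool_slot *
query_slot(void *map, uint32_t n, uint32_t query)
{
   return (struct anv_query_pool_slot *)
      ((char *)map + (size_t)query * query_slot_size(n));
}
```

<div>With one availability word per query, GetQueryPoolResults wouldn't have to walk to the last entry to find it.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">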
Signed-off-by: Ilia Mirkin <<a href="mailto:imirkin@alum.mit.edu" target="_blank">imirkin@alum.mit.edu</a>><br>
---<br>
<br>
I think this is in a pretty good state now. I've tested both the direct and<br>
buffer paths with a hacked up cube application, and I'm seeing non-ridiculous<br>
values for the various counters, although I haven't 100% verified them for<br>
accuracy.<br>
<br>
This also implements the hsw/bdw workaround for dividing frag invocations by 4,<br>
copied from hsw_queryobj. I tested this on SKL and it seems to divide the values<br>
as expected.<br>
<br>
The cube patch I've been testing with is at <a href="http://paste.debian.net/899374/" rel="noreferrer" target="_blank">http://paste.debian.net/899374/</a><br>
You can flip between copying to a buffer and explicit retrieval by commenting<br>
out the relevant function calls.<br>
<br>
 src/intel/vulkan/anv_device.c      |   2 +-<br>
 src/intel/vulkan/anv_private.h     |   4 +<br>
 src/intel/vulkan/anv_query.c       |  99 ++++++++++----<br>
 src/intel/vulkan/genX_cmd_buffer.c | 260 ++++++++++++++++++++++++++++++++-----<br>
 4 files changed, 308 insertions(+), 57 deletions(-)<br>
<br>
diff --git a/src/intel/vulkan/anv_device.c b/src/intel/vulkan/anv_device.c<br>
index 99eb73c..7ad1970 100644<br>
--- a/src/intel/vulkan/anv_device.c<br>
+++ b/src/intel/vulkan/anv_device.c<br>
@@ -427,7 +427,7 @@ void anv_GetPhysicalDeviceFeatures(<br>
       .textureCompressionASTC_LDR               = pdevice->info.gen >= 9, /* FINISHME CHV */<br>
       .textureCompressionBC                     = true,<br>
       .occlusionQueryPrecise                    = true,<br>
-      .pipelineStatisticsQuery                  = false,<br>
+      .pipelineStatisticsQuery                  = true,<br>
       .fragmentStoresAndAtomics                 = true,<br>
       .shaderTessellationAndGeometryPointSize   = true,<br>
       .shaderImageGatherExtended                = false,<br>
diff --git a/src/intel/vulkan/anv_private.h b/src/intel/vulkan/anv_private.h<br>
index 2fc543d..7271609 100644<br>
--- a/src/intel/vulkan/anv_private.h<br>
+++ b/src/intel/vulkan/anv_private.h<br>
@@ -1763,6 +1763,8 @@ struct anv_render_pass {<br>
    struct anv_subpass                           subpasses[0];<br>
 };<br>
<br>
+#define ANV_PIPELINE_STATISTICS_COUNT 11<br>
+<br>
 struct anv_query_pool_slot {<br>
    uint64_t begin;<br>
    uint64_t end;<br>
@@ -1772,6 +1774,8 @@ struct anv_query_pool_slot {<br>
 struct anv_query_pool {<br>
    VkQueryType                                  type;<br>
    uint32_t                                     slots;<br>
+   uint32_t                                     pipeline_statistics;<br>
+   uint32_t                                     slot_stride;<br>
    struct anv_bo                                bo;<br>
 };<br>
<br>
diff --git a/src/intel/vulkan/anv_query.c b/src/intel/vulkan/anv_query.c<br>
index 293257b..dc00859 100644<br>
--- a/src/intel/vulkan/anv_query.c<br>
+++ b/src/intel/vulkan/anv_query.c<br>
@@ -38,8 +38,10 @@ VkResult anv_CreateQueryPool(<br>
    ANV_FROM_HANDLE(anv_device, device, _device);<br>
    struct anv_query_pool *pool;<br>
    VkResult result;<br>
-   uint32_t slot_size;<br>
-   uint64_t size;<br>
+   uint32_t slot_size = sizeof(struct anv_query_pool_slot); <br></blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
+   uint32_t slot_stride = 1;<br></blockquote><div><br><div>Strides are usually in bytes, not slots...<br></div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
+   uint64_t size = pCreateInfo->queryCount * slot_size;<br></blockquote><div><br></div><div>Might make sense to move this to after we compute the slot_stride.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
+   uint32_t pipeline_statistics = 0;<br>
<br>
    assert(pCreateInfo->sType == VK_STRUCTURE_TYPE_QUERY_POOL_CREATE_INFO);<br>
<br>
@@ -48,12 +50,16 @@ VkResult anv_CreateQueryPool(<br>
    case VK_QUERY_TYPE_TIMESTAMP:<br>
       break;<br>
    case VK_QUERY_TYPE_PIPELINE_STATISTICS:<br>
-      return VK_ERROR_INCOMPATIBLE_DRIVER;<br>
+      pipeline_statistics = pCreateInfo->pipelineStatistics &<br>
+         ((1 << ANV_PIPELINE_STATISTICS_COUNT) - 1);<br>
+      slot_stride = _mesa_bitcount(pipeline_statistics);<br>
+      size *= slot_stride;<br>
+      break;<br>
    default:<br>
       assert(!"Invalid query type");<br>
+      return VK_ERROR_INCOMPATIBLE_DRIVER;<br>
    }<br>
<br>
-   slot_size = sizeof(struct anv_query_pool_slot);<br>
    pool = vk_alloc2(&device->alloc, pAllocator, sizeof(*pool), 8,<br>
                      VK_SYSTEM_ALLOCATION_SCOPE_OBJECT);<br>
    if (pool == NULL)<br>
@@ -61,8 +67,9 @@ VkResult anv_CreateQueryPool(<br>
<br>
    pool->type = pCreateInfo->queryType;<br>
    pool->slots = pCreateInfo->queryCount;<br>
+   pool->pipeline_statistics = pipeline_statistics;<br>
+   pool->slot_stride = slot_stride;<br>
<br>
-   size = pCreateInfo->queryCount * slot_size;<br>
    result = anv_bo_init_new(&pool->bo, device, size);<br>
    if (result != VK_SUCCESS)<br>
       goto fail;<br>
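</blockquote><div><br></div><div>For reference, the size computation being reordered here boils down to the following (host-side sketch; <code>__builtin_popcount</code> standing in for _mesa_bitcount, names hypothetical):<br></div>

```c
#include <stdint.h>

#define ANV_PIPELINE_STATISTICS_COUNT 11

/* begin/end/available: three uint64_t per enabled statistic. */
#define QUERY_SLOT_SIZE 24u

static uint32_t
pool_slot_stride(uint32_t pipeline_statistics)
{
   /* Drop any bits beyond the statistics we implement, count the rest. */
   pipeline_statistics &= (1u << ANV_PIPELINE_STATISTICS_COUNT) - 1;
   return (uint32_t)__builtin_popcount(pipeline_statistics);
}

/* Computing the BO size only after the stride is known avoids the
 * awkward "size *= slot_stride" inside the switch. */
static uint64_t
pool_bo_size(uint32_t query_count, uint32_t pipeline_statistics)
{
   uint32_t stride = pool_slot_stride(pipeline_statistics);
   if (stride == 0)
      stride = 1; /* occlusion/timestamp pools keep one slot per query */
   return (uint64_t)query_count * stride * QUERY_SLOT_SIZE;
}
```

<div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">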
@@ -95,6 +102,27 @@ void anv_DestroyQueryPool(<br>
    vk_free2(&device->alloc, pAllocator, pool);<br>
 }<br>
<br>
+static void *<br>
+store_query_result(void *pData, VkQueryResultFlags flags,<br>
+                   uint64_t result, uint64_t available)<br>
+{<br>
+   if (flags & VK_QUERY_RESULT_64_BIT) {<br>
+      uint64_t *dst = pData;<br>
+      *dst++ = result;<br>
+      if (flags & VK_QUERY_RESULT_WITH_AVAILABILITY_BIT)<br>
+         *dst++ = available;<br>
+      return dst;<br>
+   } else {<br>
+      uint32_t *dst = pData;<br>
+      if (result > UINT32_MAX)<br>
+         result = UINT32_MAX;<br>
+      *dst++ = result;<br>
+      if (flags & VK_QUERY_RESULT_WITH_AVAILABILITY_BIT)<br>
+         *dst++ = available;<br>
+      return dst;<br>
+   }<br>
+}<br>
+<br>
 VkResult anv_GetQueryPoolResults(<br>
     VkDevice                                    _device,<br>
     VkQueryPool                                 queryPool,<br>
@@ -112,6 +140,7 @@ VkResult anv_GetQueryPoolResults(<br>
    int ret;<br>
<br>
    assert(pool->type == VK_QUERY_TYPE_OCCLUSION ||<br>
+          pool->type == VK_QUERY_TYPE_PIPELINE_STATISTICS ||<br>
           pool->type == VK_QUERY_TYPE_TIMESTAMP);<br>
<br>
    if (pData == NULL)<br>
@@ -129,14 +158,42 @@ VkResult anv_GetQueryPoolResults(<br>
    void *data_end = pData + dataSize;<br>
    struct anv_query_pool_slot *slot = pool->bo.map;<br>
<br>
-   for (uint32_t i = 0; i < queryCount; i++) {<br>
+   for (uint32_t i = 0; i < queryCount && pData < data_end;<br></blockquote><div><br></div><div>I think this condition is broken if dataSize is not a multiple of the query slot stride.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
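</blockquote><div>Concretely, the check needs to account for the size of the element about to be written, not just that pData hasn't already passed the end. Something along these lines (hypothetical helpers, not actual anv code):<br></div>

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Bytes one query's results occupy in the destination buffer. */
static size_t
query_element_size(bool use_64bit, bool with_availability,
                   uint32_t values_per_query)
{
   size_t value_size = use_64bit ? 8 : 4;
   return value_size * (values_per_query + (with_availability ? 1 : 0));
}

/* True if a full element starting at pos still fits before end. */
static bool
element_fits(const void *pos, const void *end, size_t element_size)
{
   return (size_t)((const char *)end - (const char *)pos) >= element_size;
}
```

<div>i.e. the loop condition would be element_fits(pData, data_end, elem) rather than pData < data_end.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">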
+        i++, pData += stride) {<br>
+      if (pool->type == VK_QUERY_TYPE_PIPELINE_STATISTICS) {<br>
+         VkQueryResultFlags f = flags & ~VK_QUERY_RESULT_WITH_AVAILABILITY_BIT;<br>
+         void *pos = pData;<br>
+         uint32_t pipeline_statistics = pool->pipeline_statistics;<br>
+         struct anv_query_pool_slot *base =<br>
+            &slot[(firstQuery + i) * pool->slot_stride];<br>
+<br>
+         while (pipeline_statistics) {<br>
+            uint32_t stat = u_bit_scan(&pipeline_statistics);<br>
+            uint64_t result = base->end - base->begin;<br>
+<br>
+            /* WaDividePSInvocationCountBy4:HSW,BDW */<br>
+            if ((device->info.gen == 8 || device->info.is_haswell) &&<br>
+                (1 << stat) == VK_QUERY_PIPELINE_STATISTIC_FRAGMENT_SHADER_INVOCATIONS_BIT)<br>
+               result >>= 2;<br>
+<br>
+            pos = store_query_result(pos, f, result, 0); <br></blockquote><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
+            base++;<br>
+         }<br>
+         if (flags & VK_QUERY_RESULT_WITH_AVAILABILITY_BIT) {<br>
+            base--;<br>
+            if (flags & VK_QUERY_RESULT_64_BIT)<br>
+               *(uint64_t *)pos = base->available;<br>
+            else<br>
+               *(uint32_t *)pos = base->available;<br></blockquote><div><br></div><div>Given what I'm reading here, I think my suggestion above about reworking query_slot makes even more sense.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
+         }<br>
+         continue;<br>
+      }<br>
+<br>
       switch (pool->type) {<br>
       case VK_QUERY_TYPE_OCCLUSION: {<br>
          result = slot[firstQuery + i].end - slot[firstQuery + i].begin;<br>
          break;<br>
       }<br>
-      case VK_QUERY_TYPE_PIPELINE_STATISTICS:<br>
-         unreachable("pipeline stats not supported");<br>
       case VK_QUERY_TYPE_TIMESTAMP: {<br>
          result = slot[firstQuery + i].begin;<br>
          break;<br>
@@ -145,23 +202,7 @@ VkResult anv_GetQueryPoolResults(<br>
          unreachable("invalid pool type");<br>
       }<br>
<br>
-      if (flags & VK_QUERY_RESULT_64_BIT) {<br>
-         uint64_t *dst = pData;<br>
-         dst[0] = result;<br>
-         if (flags & VK_QUERY_RESULT_WITH_AVAILABILITY_BIT)<br>
-            dst[1] = slot[firstQuery + i].available;<br>
-      } else {<br>
-         uint32_t *dst = pData;<br>
-         if (result > UINT32_MAX)<br>
-            result = UINT32_MAX;<br>
-         dst[0] = result;<br>
-         if (flags & VK_QUERY_RESULT_WITH_AVAILABILITY_BIT)<br>
-            dst[1] = slot[firstQuery + i].available;<br>
-      }<br>
-<br>
-      pData += stride;<br>
-      if (pData >= data_end)<br>
-         break;<br>
+      store_query_result(pData, flags, result, slot[firstQuery + i].available);<br>
    }<br>
<br>
    return VK_SUCCESS;<br>
@@ -183,6 +224,14 @@ void anv_CmdResetQueryPool(<br>
          slot[firstQuery + i].available = 0;<br>
          break;<br>
       }<br>
+      case VK_QUERY_TYPE_PIPELINE_STATISTICS: {<br>
+         struct anv_query_pool_slot *slot = pool->bo.map;<br>
+<br>
+         slot = &slot[(firstQuery + i) * pool->slot_stride];<br>
+         for (uint32_t j = 0; j < pool->slot_stride; j++)<br>
+            slot[j].available = 0;<br>
+         break;<br>
+      }<br>
       default:<br>
          assert(!"Invalid query type");<br>
       }<br>
diff --git a/src/intel/vulkan/genX_cmd_buffer.c b/src/intel/vulkan/genX_cmd_buffer.c<br>
index a965cd6..1369ac2 100644<br>
--- a/src/intel/vulkan/genX_cmd_buffer.c<br>
+++ b/src/intel/vulkan/genX_cmd_buffer.c<br>
@@ -2272,6 +2272,50 @@ emit_query_availability(struct anv_cmd_buffer *cmd_buffer,<br>
    }<br>
 }<br>
<br>
+#define IA_VERTICES_COUNT               0x2310<br>
+#define IA_PRIMITIVES_COUNT             0x2318<br>
+#define VS_INVOCATION_COUNT             0x2320<br>
+#define HS_INVOCATION_COUNT             0x2300<br>
+#define DS_INVOCATION_COUNT             0x2308<br>
+#define GS_INVOCATION_COUNT             0x2328<br>
+#define GS_PRIMITIVES_COUNT             0x2330<br>
+#define CL_INVOCATION_COUNT             0x2338<br>
+#define CL_PRIMITIVES_COUNT             0x2340<br>
+#define PS_INVOCATION_COUNT             0x2348<br>
+#define CS_INVOCATION_COUNT             0x2290<br></blockquote><div><br></div><div>I think the "right" thing to do would be to add genxml for these.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
+<br>
+static const uint32_t PIPELINE_STAT_TO_REG[] = {<br>
+   IA_VERTICES_COUNT,<br>
+   IA_PRIMITIVES_COUNT,<br>
+   VS_INVOCATION_COUNT,<br>
+   GS_INVOCATION_COUNT,<br>
+   GS_PRIMITIVES_COUNT,<br>
+   CL_INVOCATION_COUNT,<br>
+   CL_PRIMITIVES_COUNT,<br>
+   PS_INVOCATION_COUNT,<br>
+   HS_INVOCATION_COUNT,<br>
+   DS_INVOCATION_COUNT,<br>
+   CS_INVOCATION_COUNT<br>
+};<br>
+<br>
+static void<br>
+emit_pipeline_stat(struct anv_cmd_buffer *cmd_buffer, uint32_t stat,<br>
+                   struct anv_bo *bo, uint32_t offset) {<br>
+   STATIC_ASSERT(ARRAY_SIZE(PIPELINE_STAT_TO_REG) ==<br>
+                 ANV_PIPELINE_STATISTICS_COUNT);<br>
+<br>
+   uint32_t reg = PIPELINE_STAT_TO_REG[stat];<br>
+<br>
+   anv_batch_emit(&cmd_buffer->batch, GENX(MI_STORE_REGISTER_MEM), lrm) {<br>
+      lrm.RegisterAddress  = reg,<br>
+      lrm.MemoryAddress    = (struct anv_address) { bo, offset };<br>
+   }<br>
+   anv_batch_emit(&cmd_buffer->batch, GENX(MI_STORE_REGISTER_MEM), lrm) {<br>
+      lrm.RegisterAddress  = reg + 4,<br>
+      lrm.MemoryAddress    = (struct anv_address) { bo, offset + 4 };<br>
+   }<br>
+}<br>
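</blockquote><div><br></div><div>Host-side model of what the back-to-back MI_STORE_REGISTER_MEM pair here accomplishes: the 64-bit counter lands as two little-endian dwords at offset and offset + 4, which read back as one 64-bit value (sketch assuming a little-endian host, which matches the GPU's layout):<br></div>

```c
#include <stdint.h>
#include <string.h>

/* The two 4-byte stores emitted by emit_pipeline_stat, modeled on the CPU. */
static void
store_reg64_as_two_dwords(uint8_t *bo, uint32_t offset, uint64_t reg)
{
   uint32_t lo = (uint32_t)reg;
   uint32_t hi = (uint32_t)(reg >> 32);
   memcpy(bo + offset, &lo, sizeof(lo));
   memcpy(bo + offset + 4, &hi, sizeof(hi));
}

/* What anv_GetQueryPoolResults later reads back from the mapping. */
static uint64_t
load_counter(const uint8_t *bo, uint32_t offset)
{
   uint64_t v;
   memcpy(&v, bo + offset, sizeof(v));
   return v;
}
```

<div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">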
+<br>
 void genX(CmdBeginQuery)(<br>
     VkCommandBuffer                             commandBuffer,<br>
     VkQueryPool                                 queryPool,<br>
@@ -2301,7 +2345,25 @@ void genX(CmdBeginQuery)(<br>
                           query * sizeof(struct anv_query_pool_slot));<br>
       break;<br>
<br>
-   case VK_QUERY_TYPE_PIPELINE_STATISTICS:<br>
+   case VK_QUERY_TYPE_PIPELINE_STATISTICS: {<br>
+      uint32_t pipeline_statistics = pool->pipeline_statistics;<br>
+      uint32_t slot_offset = query * pool->slot_stride *<br>
+         sizeof(struct anv_query_pool_slot);<br>
+<br>
+      /* TODO: This might only be necessary for certain stats */<br>
+      anv_batch_emit(&cmd_buffer->batch, GENX(PIPE_CONTROL), pc) {<br>
+         pc.CommandStreamerStallEnable = true;<br>
+         pc.StallAtPixelScoreboard = true;<br>
+      }<br>
+<br>
+      while (pipeline_statistics) {<br>
+         uint32_t stat = u_bit_scan(&pipeline_statistics);<br>
+<br>
+         emit_pipeline_stat(cmd_buffer, stat, &pool->bo, slot_offset);<br>
+         slot_offset += sizeof(struct anv_query_pool_slot);<br>
+      }<br>
+      break;<br>
+   }<br>
    default:<br>
       unreachable("");<br>
    }<br>
@@ -2314,17 +2376,35 @@ void genX(CmdEndQuery)(<br>
 {<br>
    ANV_FROM_HANDLE(anv_cmd_buffer, cmd_buffer, commandBuffer);<br>
    ANV_FROM_HANDLE(anv_query_pool, pool, queryPool);<br>
+   uint32_t slot_offset = query * pool->slot_stride *<br>
+      sizeof(struct anv_query_pool_slot);<br>
<br>
    switch (pool->type) {<br>
    case VK_QUERY_TYPE_OCCLUSION:<br>
-      emit_ps_depth_count(cmd_buffer, &pool->bo,<br>
-                          query * sizeof(struct anv_query_pool_slot) + 8);<br>
+      emit_ps_depth_count(cmd_buffer, &pool->bo, slot_offset + 8);<br>
+      emit_query_availability(cmd_buffer, &pool->bo, slot_offset + 16);<br>
+      break;<br>
+<br>
+   case VK_QUERY_TYPE_PIPELINE_STATISTICS: {<br>
+      uint32_t pipeline_statistics = pool->pipeline_statistics;<br>
+      /* TODO: This might only be necessary for certain stats */<br>
+      anv_batch_emit(&cmd_buffer->batch, GENX(PIPE_CONTROL), pc) {<br>
+         pc.CommandStreamerStallEnable = true;<br>
+         pc.StallAtPixelScoreboard = true;<br>
+      }<br>
+<br>
+      while (pipeline_statistics) {<br>
+         uint32_t stat = u_bit_scan(&pipeline_statistics);<br>
<br>
-      emit_query_availability(cmd_buffer, &pool->bo,<br>
-                              query * sizeof(struct anv_query_pool_slot) + 16);<br>
+         emit_pipeline_stat(cmd_buffer, stat, &pool->bo, slot_offset + 8);<br>
+         slot_offset += sizeof(struct anv_query_pool_slot);<br>
+      }<br>
+<br>
+      slot_offset -= sizeof(struct anv_query_pool_slot);<br>
+      emit_query_availability(cmd_buffer, &pool->bo, slot_offset + 16);<br>
       break;<br>
+   }<br>
<br>
-   case VK_QUERY_TYPE_PIPELINE_STATISTICS:<br>
    default:<br>
       unreachable("");<br>
    }<br>
@@ -2421,6 +2501,31 @@ emit_load_alu_reg_u64(struct anv_batch *batch, uint32_t reg,<br>
 }<br>
<br>
 static void<br>
+emit_load_alu_reg_imm32(struct anv_batch *batch, uint32_t reg, uint32_t imm)<br>
+{<br>
+   anv_batch_emit(batch, GENX(MI_LOAD_REGISTER_IMM), lri) {<br>
+      lri.RegisterOffset   = reg;<br>
+      lri.DataDWord        = imm;<br>
+   }<br>
+}<br>
+<br>
+static void<br>
+emit_load_alu_reg_imm64(struct anv_batch *batch, uint32_t reg, uint64_t imm)<br>
+{<br>
+   emit_load_alu_reg_imm32(batch, reg, (uint32_t)imm);<br>
+   emit_load_alu_reg_imm32(batch, reg + 4, (uint32_t)(imm >> 32));<br></blockquote><div><br></div><div>I don't think the casts are needed here.  They don't hurt though.<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
+}<br>
+<br>
+static void<br>
+emit_load_alu_reg_reg32(struct anv_batch *batch, uint32_t src, uint32_t dst)<br>
+{<br>
+   anv_batch_emit(batch, GENX(MI_LOAD_REGISTER_REG), lrr) {<br>
+      lrr.SourceRegisterAddress      = src;<br>
+      lrr.DestinationRegisterAddress = dst;<br>
+   }<br>
+}<br>
+<br>
+static uint32_t<br>
 store_query_result(struct anv_batch *batch, uint32_t reg,<br>
                    struct anv_bo *bo, uint32_t offset, VkQueryResultFlags flags)<br>
 {<br>
@@ -2434,9 +2539,88 @@ store_query_result(struct anv_batch *batch, uint32_t reg,<br>
          srm.RegisterAddress  = reg + 4;<br>
          srm.MemoryAddress    = (struct anv_address) { bo, offset + 4 };<br>
       }<br>
+<br>
+      return offset + 8;<br>
+   }<br>
+<br>
+   return offset + 4;<br>
+}<br>
+<br>
+/*<br>
+ * GPR0 = GPR0 & ((1ull << n) - 1);<br>
+ */<br>
+static void<br>
+keep_gpr0_lower_n_bits(struct anv_batch *batch, uint32_t n)<br>
+{<br>
+   assert(n < 64);<br>
+   emit_load_alu_reg_imm64(batch, CS_GPR(1), (1ull << n) - 1);<br>
+<br>
+   uint32_t *dw = anv_batch_emitn(batch, 5, GENX(MI_MATH));<br>
+   dw[1] = alu(OPCODE_LOAD, OPERAND_SRCA, OPERAND_R0);<br>
+   dw[2] = alu(OPCODE_LOAD, OPERAND_SRCB, OPERAND_R1);<br>
+   dw[3] = alu(OPCODE_AND, 0, 0);<br>
+   dw[4] = alu(OPCODE_STORE, OPERAND_R0, OPERAND_ACCU);<br>
+}<br>
+<br>
+/*<br>
+ * GPR0 = GPR0 << 30;<br>
+ */<br>
+static void<br>
+shl_gpr0_by_30_bits(struct anv_batch *batch)<br>
+{<br>
+   /* First we mask 34 bits of GPR0 to prevent overflow */<br>
+   keep_gpr0_lower_n_bits(batch, 34);<br>
+<br>
+   const uint32_t outer_count = 5;<br>
+   const uint32_t inner_count = 6;<br>
+   STATIC_ASSERT(outer_count * inner_count == 30);<br>
+   const uint32_t cmd_len = 1 + inner_count * 4;<br>
+<br>
+   /* We'll emit 5 commands, each shifting GPR0 left by 6 bits, for a total of<br>
+    * 30 left shifts.<br></blockquote><div><br></div><div>Why do we need 5 MI_MATH commands?  Can't we do it in 1?<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
+    */<br>
+   for (int o = 0; o < outer_count; o++) {<br>
+      /* Submit one MI_MATH to shift left by 6 bits */<br>
+      uint32_t *dw = anv_batch_emitn(batch, cmd_len, GENX(MI_MATH));<br>
+      dw++;<br>
+      for (int i = 0; i < inner_count; i++, dw += 4) {<br>
+         dw[0] = alu(OPCODE_LOAD, OPERAND_SRCA, OPERAND_R0);<br>
+         dw[1] = alu(OPCODE_LOAD, OPERAND_SRCB, OPERAND_R0);<br>
+         dw[2] = alu(OPCODE_ADD, 0, 0);<br>
+         dw[3] = alu(OPCODE_STORE, OPERAND_R0, OPERAND_ACCU);<br>
+      }<br>
    }<br>
 }<br>
<br>
+/*<br>
+ * GPR0 = GPR0 >> 2;<br>
+ *<br>
+ * Note that the upper 30 bits of GPR are lost!<br>
+ */<br>
+static void<br>
+shr_gpr0_by_2_bits(struct anv_batch *batch)<br>
+{<br>
+   shl_gpr0_by_30_bits(batch);<br>
+   emit_load_alu_reg_reg32(batch, CS_GPR(0) + 4, CS_GPR(0));<br>
+   emit_load_alu_reg_imm32(batch, CS_GPR(0) + 4, 0);<br>
+}<br>
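</blockquote><div><br></div><div>The arithmetic here is subtle enough that a host-side model helps: each MI_MATH LOAD/LOAD/ADD/STORE round doubles GPR0, so 30 rounds are a left shift by 30, and the >> 2 then falls out of moving the upper dword into the lower one (sketch, not actual anv code):<br></div>

```c
#include <stdint.h>

/* keep_gpr0_lower_n_bits: GPR0 &= (1ull << n) - 1 */
static uint64_t
mask_lower_n(uint64_t gpr0, unsigned n)
{
   return gpr0 & ((1ull << n) - 1);
}

/* shl_gpr0_by_30_bits: mask to 34 bits so the shift can't overflow,
 * then 30 add-to-self rounds, each one a left shift by 1. */
static uint64_t
shl_by_30(uint64_t gpr0)
{
   gpr0 = mask_lower_n(gpr0, 34);
   for (int i = 0; i < 30; i++)
      gpr0 += gpr0; /* one MI_MATH "ADD R0, R0" round */
   return gpr0;
}

/* shr_gpr0_by_2_bits: the LRR + LRI pair is a net shift right by 32,
 * so combined with the << 30 we get >> 2 (upper bits are lost). */
static uint64_t
shr_by_2(uint64_t gpr0)
{
   return shl_by_30(gpr0) >> 32;
}
```

<div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">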
+<br>
+static void<br>
+compute_query_result(struct anv_batch *batch, struct anv_bo *bo,<br>
+                     uint32_t dst_reg, uint32_t offset)<br>
+{<br>
+   emit_load_alu_reg_u64(batch, CS_GPR(0), bo, offset);<br>
+   emit_load_alu_reg_u64(batch, CS_GPR(1), bo, offset + 8);<br>
+<br>
+   /* FIXME: We need to clamp the result for 32 bit. */<br>
+<br>
+   uint32_t *dw = anv_batch_emitn(batch, 5, GENX(MI_MATH));<br>
+   dw[1] = alu(OPCODE_LOAD, OPERAND_SRCA, OPERAND_R1);<br>
+   dw[2] = alu(OPCODE_LOAD, OPERAND_SRCB, OPERAND_R0);<br>
+   dw[3] = alu(OPCODE_SUB, 0, 0);<br>
+   dw[4] = alu(OPCODE_STORE, dst_reg, OPERAND_ACCU);<br>
+}<br></blockquote><div><br></div><div>Ugh...  So, I've been thinking about this a bit and I'm actually starting to wonder if we don't want to take a completely different approach to CmdCopyQueryPoolResults.  Namely, to use a vertex shader and trandform-feedback to pull query buffer data in one end and dump computed query results out the other.  For small quantities of queries, the command streamer math may be faster, but if they're pulling a lot of queries, a VS may be more efficient.  Also, it's way more flexible in terms of the math it allows you to do.  Talos pulls queries in blocks of 256.  If you were to try and pull that many pipeline statistics queries, the amount of batch space it would burn is insane.<br><br></div><div>I'm happy to be the one to play with this.  I'm also reasonably happy to land all the CS math and then clean it up later with a VS if it turns out to be the most practical path.  Thoughts?<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
+<br>
 void genX(CmdCopyQueryPoolResults)(<br>
     VkCommandBuffer                             commandBuffer,<br>
     VkQueryPool                                 queryPool,<br>
@@ -2459,50 +2643,64 @@ void genX(CmdCopyQueryPoolResults)(<br>
       }<br>
    }<br>
<br>
-   dst_offset = buffer->offset + destOffset;<br>
    for (uint32_t i = 0; i < queryCount; i++) {<br>
-<br>
+      dst_offset = buffer->offset + destOffset + destStride * i;<br>
       slot_offset = (firstQuery + i) * sizeof(struct anv_query_pool_slot);<br>
       switch (pool->type) {<br>
       case VK_QUERY_TYPE_OCCLUSION:<br>
-         emit_load_alu_reg_u64(&cmd_buffer->batch,<br>
-                               CS_GPR(0), &pool->bo, slot_offset);<br>
-         emit_load_alu_reg_u64(&cmd_buffer->batch,<br>
-                               CS_GPR(1), &pool->bo, slot_offset + 8);<br>
-<br>
-         /* FIXME: We need to clamp the result for 32 bit. */<br>
-<br>
-         uint32_t *dw = anv_batch_emitn(&cmd_buffer->batch, 5, GENX(MI_MATH));<br>
-         dw[1] = alu(OPCODE_LOAD, OPERAND_SRCA, OPERAND_R1);<br>
-         dw[2] = alu(OPCODE_LOAD, OPERAND_SRCB, OPERAND_R0);<br>
-         dw[3] = alu(OPCODE_SUB, 0, 0);<br>
-         dw[4] = alu(OPCODE_STORE, OPERAND_R2, OPERAND_ACCU);<br>
+         compute_query_result(&cmd_buffer->batch, &pool->bo, OPERAND_R2,<br>
+                              slot_offset);<br>
+         dst_offset = store_query_result(<br>
+               &cmd_buffer->batch,<br>
+               CS_GPR(2), buffer->bo, dst_offset, flags);<br>
          break;<br>
<br>
       case VK_QUERY_TYPE_TIMESTAMP:<br>
          emit_load_alu_reg_u64(&cmd_buffer->batch,<br>
                                CS_GPR(2), &pool->bo, slot_offset);<br>
+         dst_offset = store_query_result(<br>
+               &cmd_buffer->batch,<br>
+               CS_GPR(2), buffer->bo, dst_offset, flags);<br>
          break;<br>
<br>
+      case VK_QUERY_TYPE_PIPELINE_STATISTICS: {<br>
+         uint32_t pipeline_statistics = pool->pipeline_statistics;<br>
+<br>
+         slot_offset *= pool->slot_stride;<br>
+         while (pipeline_statistics) {<br>
+            uint32_t stat = u_bit_scan(&pipeline_statistics);<br>
+<br>
+            compute_query_result(&cmd_buffer->batch, &pool->bo, OPERAND_R0,<br>
+                                 slot_offset);<br>
+<br>
+            /* WaDividePSInvocationCountBy4:HSW,BDW */<br>
+            if ((cmd_buffer->device->info.gen == 8 ||<br>
+                 cmd_buffer->device->info.is_haswell) &&<br>
+                (1 << stat) == VK_QUERY_PIPELINE_STATISTIC_FRAGMENT_SHADER_INVOCATIONS_BIT) {<br>
+               shr_gpr0_by_2_bits(&cmd_buffer->batch);<br>
+            }<br>
+            dst_offset = store_query_result(<br>
+                  &cmd_buffer->batch,<br>
+                  CS_GPR(0), buffer->bo, dst_offset, flags);<br>
+            slot_offset += sizeof(struct anv_query_pool_slot);<br>
+         }<br>
+<br>
+         /* Get the slot offset to where it's supposed to be for the<br>
+          * availability bit.<br>
+          */<br>
+         slot_offset -= sizeof(struct anv_query_pool_slot);<br>
+         break;<br>
+      }<br>
       default:<br>
          unreachable("unhandled query type");<br>
       }<br>
<br>
-      store_query_result(&cmd_buffer->batch,<br>
-                         CS_GPR(2), buffer->bo, dst_offset, flags);<br>
-<br>
       if (flags & VK_QUERY_RESULT_WITH_AVAILABILITY_BIT) {<br>
          emit_load_alu_reg_u64(&cmd_buffer->batch, CS_GPR(0),<br>
                                &pool->bo, slot_offset + 16);<br>
-         if (flags & VK_QUERY_RESULT_64_BIT)<br>
-            store_query_result(&cmd_buffer->batch,<br>
-                               CS_GPR(0), buffer->bo, dst_offset + 8, flags);<br>
-         else<br>
-            store_query_result(&cmd_buffer->batch,<br>
-                               CS_GPR(0), buffer->bo, dst_offset + 4, flags);<br>
+         store_query_result(&cmd_buffer->batch,<br>
+                            CS_GPR(0), buffer->bo, dst_offset, flags);<br>
       }<br>
-<br>
-      dst_offset += destStride;<br>
    }<br>
 }<br>
<span class="m_5307687833192365919gmail-m_-1036097962548183864HOEnZb"><font color="#888888"><br>
--<br>
2.7.3<br>
<br>
_______________________________________________<br>
mesa-dev mailing list<br>
<a href="mailto:mesa-dev@lists.freedesktop.org" target="_blank">mesa-dev@lists.freedesktop.org</a><br>
<a href="https://lists.freedesktop.org/mailman/listinfo/mesa-dev" rel="noreferrer" target="_blank">https://lists.freedesktop.org/mailman/listinfo/mesa-dev</a><br>
</font></span></blockquote></div><br></div></div></div></div>