<div dir="ltr"><div class="gmail_extra"><div class="gmail_quote">On Wed, Apr 5, 2017 at 11:18 AM, Chris Wilson <span dir="ltr"><<a href="mailto:chris@chris-wilson.co.uk" target="_blank">chris@chris-wilson.co.uk</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span>On Wed, Apr 05, 2017 at 11:02:18AM -0700, Jason Ekstrand wrote:<br>
> On Wed, Apr 5, 2017 at 10:45 AM, Chris Wilson<br>
</span><div><div class="m_7071355497396330583h5">> <[1]<a href="mailto:chris@chris-wilson.co.uk" target="_blank">chris@chris-wilson.co.uk</a>> wrote:<br>
><br>
> On Wed, Apr 05, 2017 at 10:28:53AM -0700, Jason Ekstrand wrote:<br>
> > Before, we were just looking at whether or not the user wanted us to<br>
> > wait and waiting on the BO. Some clients, such as the Serious engine,<br>
> > use a single query pool for hundreds of individual query results where<br>
> > the writes for those queries may be split across several command<br>
> > buffers. In this scenario, the individual query we're looking for may<br>
> > become available long before the BO is idle, so waiting on the query<br>
> > pool BO to be finished is wasteful. This commit makes us instead<br>
> > busy-loop on each query until it's available.<br>
> ><br>
> > This significantly reduces pipeline bubbles and improves performance of<br>
> > The Talos Principle on medium settings (where the GPU isn't overloaded<br>
> > with drawing) by around 20% on my Skylake GT4.<br>
> > ---<br>
> >  src/intel/vulkan/genX_query.c | 52 ++++++++++++++++++++++++++++++++++++++++-----<br>
> > 1 file changed, 46 insertions(+), 6 deletions(-)<br>
> ><br>
> > diff --git a/src/intel/vulkan/genX_query.c b/src/intel/vulkan/genX_query.c<br>
> > index 7ea9404..0d303a6 100644<br>
> > --- a/src/intel/vulkan/genX_query.c<br>
> > +++ b/src/intel/vulkan/genX_query.c<br>
> > @@ -131,6 +131,44 @@ cpu_write_query_result(void *dst_slot, VkQueryResultFlags flags,<br>
> > }<br>
> > }<br>
> ><br>
> > +static bool<br>
> > +query_is_available(struct anv_device *device, volatile uint64_t *slot)<br>
> > +{<br>
> > + if (!device->info.has_llc)<br>
> > + __builtin_ia32_clflush(slot);<br>
> > +<br>
> > + return slot[0];<br>
> > +}<br>
> > +<br>
> > +static VkResult<br>
> > +wait_for_available(struct anv_device *device,<br>
> > + struct anv_query_pool *pool, uint64_t *slot)<br>
> > +{<br>
> > + while (true) {<br>
> > + if (query_is_available(device, slot))<br>
> > + return VK_SUCCESS;<br>
> > +<br>
> > + VkResult result = anv_device_bo_busy(device, &pool->bo);<br>
><br>
> Ah, but you can use the simpler check here because you follow up with a<br>
> query_is_available() so you know whether or not the hang clobbered the<br>
> result.<br>
><br>
> If the query is not available but the bo is idle, you might then want to<br>
> check for a reset in case it was due to a lost device. GEM_BUSY is<br>
> lockless, but GEM_RESET_STATS currently takes the big struct_mutex and<br>
> so has non-deterministic and often quite large latencies.<br>
><br>
> anv_device_bo_busy does that for us. :-) In particular, it queries<br>
> GEM_RESET_STATS but only if the BO is not busy. This way, so long as the<br>
> BO is busy, we only do GEM_BUSY but as soon as it is not busy we do a<br>
> single GEM_RESET_STATS and return VK_ERROR_DEVICE_LOST if we've gotten a<br>
> hang. The theory (the same for wait) is that, so long as the BO is still<br>
> busy, we either haven't hung or we haven't figured out that we've hung. <br>
> We could, in theory, have two batches where the first one hangs and the<br>
> second is ok, so this isn't quite true. However, the important thing is<br>
> to never return VK_SUCCESS to the user when results are invalid. Both<br>
> anv_device_wait and anv_device_bo_busy only do an early return of<br>
> VK_NOT_READY or VK_TIMEOUT and don't return VK_SUCCESS until after they've<br>
> called GEM_RESET_STATS to ensure that we haven't hung.<br>
<br>
</div></div>It's the ordering though. Currently it is<br>
<br>
!query_is_available<br>
!busy<br>
get_reset_stats <-- slow<br>
return query_is_available.<br>
<br>
I'm arguing that you want<br>
<br>
!query_is_available<br>
!busy<br>
if (query_is_available) return SUCCESS<br>
return get_reset_stats <-- slow.<br>
<br>
with the observation that if the GPU completed the availability write<br>
before the hang, the result is not affected by the (unlikely) hang. You<br>
really, really want to avoid get_reset_stats on low-latency paths (for<br>
the next few months at least).<br>
</blockquote></div><br></div><div class="gmail_extra">Ok, that makes more sense. <br></div></div>