[Mesa-dev] [PATCH 6/4] radv: make use of on-disk cache

Timothy Arceri tarceri at itsqueeze.com
Thu Mar 16 00:43:53 UTC 2017


On 16/03/17 11:10, Jason Ekstrand wrote:
> On Tue, Mar 14, 2017 at 10:57 PM, Timothy Arceri <tarceri at itsqueeze.com> wrote:
>
>     If the app-provided and in-memory pipeline caches don't contain
>     what we are looking for, then we fall back to the on-disk cache.
>
>
> <soapbox>
>
> Not my driver so my opinion doesn't count for much here, but this seems
> like a very un-Vulkan thing to do.  The API has a VkPipelineCache and
> apps should be using it.  If an app doesn't take advantage of that API
> and its users suffer from crummy load times or stutter, then it's a
> broken app.

In theory you are probably right; the problem is that Nvidia and AMD are 
already doing this with their binary drivers, so I don't see much point 
in trying to fight it.
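
(For anyone following along, the app-side flow Jason is describing looks
roughly like the fragment below. This is only an illustrative sketch, not
code from the patch: it assumes an existing VkDevice, blob/blob_size stand
in for whatever the app previously saved to its own cache file (NULL/0 on a
first run), and error handling is omitted.)

    VkPipelineCacheCreateInfo info = {
            .sType = VK_STRUCTURE_TYPE_PIPELINE_CACHE_CREATE_INFO,
            .initialDataSize = blob_size,   /* app's saved cache blob, if any */
            .pInitialData = blob,
    };
    VkPipelineCache app_cache;
    vkCreatePipelineCache(device, &info, NULL, &app_cache);

    /* ... create all pipelines against app_cache ... */

    /* At shutdown, serialize the cache and write it back to the app's file. */
    size_t size = 0;
    vkGetPipelineCacheData(device, app_cache, &size, NULL);
    void *data = malloc(size);
    vkGetPipelineCacheData(device, app_cache, &size, data);
    /* ... app writes (data, size) out to its own cache file ... */
    free(data);
    vkDestroyPipelineCache(device, app_cache, NULL);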

>  The only thing this is doing (assuming good apps) is using
> double the disk space to cache shaders twice.

We could possibly write to disk only when there is no app-provided 
pipeline cache; however, that would limit the possibilities for 
distributing pre-built shader caches in order to avoid first-time 
compiles, something that will be extra important for VR.
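
If we did want that behaviour, it would only be a small tweak to the insert
path. Roughly (an untested, hypothetical variant reusing the names from the
patch below; the lookup path already uses the same !cache test to mean "no
app-provided cache"):

    /* Hypothetical: only mirror the entry to the on-disk cache when the app
     * did not supply its own pipeline cache, i.e. when we are using the
     * driver's internal mem_cache. */
    if (!cache)
            disk_cache_put(device->physical_device->disk_cache,
                           sha1, entry, sizeof(*entry) + code_size);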

I've been informed that Nvidia just caches everything driver-side as 
well. Since we will be sharing the GL cache (and its max size limit), and 
entries will be compressed, I'm not too concerned about the extra space.

>
> In GL, there's just not much we can do besides provide a transparent
> on-disk cache.  There are things like GL_ARB_get_program_binary but,
> thanks to the way the GL API is designed, there is no real solution to
> the recompile problem.  You can guess all you want, but on hardware like
> AMD's, where you have lots of recompiles, it's kind of worthless to do
> so.  The answer: the driver has a transparent cache.  In the Vulkan
> world, this problem has been solved and we should use this solution.
> We're also not supposed to be doing things behind the client's back and,
> given that we have a pipeline cache API, this definitely falls into that
> category.
>
> Finally, I really don't want, in these early days of Vulkan, for drivers
> to start doing transparent caching because it sends exactly the wrong
> message to developers.  "You know that caching API?  If you find it hard
> to use, just don't bother.  The driver will do it for you."  That's not
> the message we want to send.
>
> </soapbox>

As above, the binary drivers are already doing it. By doing the arguably 
right thing we are just hurting users.

>
> Do whatever you want. #notmydriver
>
>
>     ---
>      src/amd/vulkan/radv_pipeline_cache.c | 13 +++++++++++--
>      1 file changed, 11 insertions(+), 2 deletions(-)
>
>     diff --git a/src/amd/vulkan/radv_pipeline_cache.c b/src/amd/vulkan/radv_pipeline_cache.c
>     index 3a58f6a..ffcb8e5 100644
>     --- a/src/amd/vulkan/radv_pipeline_cache.c
>     +++ b/src/amd/vulkan/radv_pipeline_cache.c
>     @@ -16,20 +16,21 @@
>       * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
>       * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
>       * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
>       * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
>       * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
>       * IN THE SOFTWARE.
>       */
>
>      #include "util/mesa-sha1.h"
>      #include "util/debug.h"
>     +#include "util/disk_cache.h"
>      #include "radv_private.h"
>
>      #include "ac_nir_to_llvm.h"
>
>      struct cache_entry {
>             union {
>                     unsigned char sha1[20];
>                     uint32_t sha1_dw[5];
>             };
>             uint32_t code_size;
>     @@ -152,22 +153,28 @@ radv_create_shader_variant_from_pipeline_cache(struct radv_device *device,
>                                                    struct radv_pipeline_cache *cache,
>                                                    const unsigned char *sha1)
>      {
>             struct cache_entry *entry = NULL;
>
>             if (cache)
>                     entry = radv_pipeline_cache_search(cache, sha1);
>             else
>                     entry = radv_pipeline_cache_search(device->mem_cache, sha1);
>
>     -       if (!entry)
>     -               return NULL;
>     +       if (!entry) {
>     +               size_t entry_size;
>     +               entry = (struct cache_entry *)
>     +                       disk_cache_get(device->physical_device->disk_cache,
>     +                                      sha1, &entry_size);
>     +               if (!entry)
>     +                       return NULL;
>     +       }
>
>             if (!entry->variant) {
>                     struct radv_shader_variant *variant;
>
>                     variant = calloc(1, sizeof(struct radv_shader_variant));
>                     if (!variant)
>                             return NULL;
>
>                     variant->config = entry->config;
>                     variant->info = entry->variant_info;
>     @@ -294,20 +301,22 @@ radv_pipeline_cache_insert_shader(struct radv_device *device,
>             memcpy(entry->code, code, code_size);
>             entry->config = variant->config;
>             entry->variant_info = variant->info;
>             entry->rsrc1 = variant->rsrc1;
>             entry->rsrc2 = variant->rsrc2;
>             entry->code_size = code_size;
>             entry->variant = variant;
>             __sync_fetch_and_add(&variant->ref_count, 1);
>
>             radv_pipeline_cache_add_entry(cache, entry);
>     +       disk_cache_put(device->physical_device->disk_cache,
>     +                      sha1, entry, sizeof(*entry) + code_size);
>
>             cache->modified = true;
>             pthread_mutex_unlock(&cache->mutex);
>             return variant;
>      }
>
>      struct cache_header {
>             uint32_t header_size;
>             uint32_t header_version;
>             uint32_t vendor_id;
>     --
>     2.9.3
>
>     _______________________________________________
>     mesa-dev mailing list
>     mesa-dev at lists.freedesktop.org
>     https://lists.freedesktop.org/mailman/listinfo/mesa-dev
>
>

