[Intel-gfx] [PATCH 1/3] drm/i915: Use pagecache write to prepopulate shmemfs from pwrite-ioctl

Tvrtko Ursulin tvrtko.ursulin at linux.intel.com
Tue Mar 7 07:30:30 UTC 2017


On 06/03/2017 21:49, Chris Wilson wrote:
> On Mon, Mar 06, 2017 at 04:32:45PM +0000, Tvrtko Ursulin wrote:
>>
>> On 06/03/2017 14:14, Chris Wilson wrote:
>>> Remember when I said that nobody would touch pages without using them,
>>> (and so could defer the update for the shrinker until we had the
>>> struct_mutex) and certainly not 16GiB of written-but-unused pages on a
>>> small box? libva happened.
>>
>> Oh dear.. Ok, going back to the previous reply..
>>
>> I can see the benefit of avoiding the shrinker and struct mutex but
>> haven't found that other benefit.
>>
>> I've been rummaging through shmem.c & co but so far haven't found
>> anything that explains how it would avoid clearing or swapping in
>> the pages. It looks like both our normal page allocation and this
>> new path boil down to the same shmem_getpage.
>>
>> Could you explain what I am missing?
>
> Normally we use shmem_getpage(SGP_CACHE); write_begin uses SGP_WRITE.
> That gives us the clear avoidance, but alas I can't see it avoiding a
> swapin as well - probably because SGP_WRITE doesn't differentiate
> between a full and a partial page write at that point, though it has
> the information to do so. (Swapin avoidance is then just a pipe
> dream.) Bonus points for also handling high-order pages...
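
(For the record, my mental model of the new path on the i915 side is the
sketch below - not the actual patch code, with a made-up
pwrite_via_pagecache() name and simplified error handling, just assuming
the generic pagecache_write_begin()/pagecache_write_end() helpers:)

static int pwrite_via_pagecache(struct drm_i915_gem_object *obj, u64 offset,
                                u64 size, const char __user *user_data)
{
        struct address_space *mapping = obj->base.filp->f_mapping;
        u64 remain = size;
        int err;

        do {
                unsigned int len, unwritten, pg = offset_in_page(offset);
                struct page *page;
                void *fsdata, *vaddr;

                len = min_t(u64, remain, PAGE_SIZE - pg);

                /* Ask shmemfs for the page via its write_begin() aop
                 * (i.e. SGP_WRITE) instead of pinning obj->mm.pages.
                 */
                err = pagecache_write_begin(obj->base.filp, mapping,
                                            offset, len, 0,
                                            &page, &fsdata);
                if (err < 0)
                        return err;

                vaddr = kmap(page);
                unwritten = copy_from_user(vaddr + pg, user_data, len);
                kunmap(page);

                /* shmem's write_end marks the page uptodate and dirty,
                 * then unlocks and releases it.
                 */
                err = pagecache_write_end(obj->base.filp, mapping,
                                          offset, len, len - unwritten,
                                          page, fsdata);
                if (err < 0)
                        return err;
                if (unwritten)
                        return -EFAULT;

                remain -= len;
                user_data += len;
                offset += len;
        } while (remain);

        return 0;
}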

I've looked in that code but it's too deep in page handling for me at 
the moment to say one way or the other...

>> Also, would we have any IGT coverage for this new path? And would we
>> keep a solid amount of coverage for the old paths as well?
>
> Virtually every test has some form of
> gem_pwrite(gem_create(4096), 0, &bbe, sizeof(bbe));
> and this path is extensively tested by gem_concurrent_blit and
> gem_pwrite, which should both exercise initial pwrites plus pwrites
> following shrinking, as well as the ordinary pwrite with obj->mm.pages.
>
> In real workloads though, while pwrite is fairly widespread in mesa for
> uploading the batch buffer (on !llc at least), the userspace bo cache
> means we can expect to hit this path only rarely. Which just leaves the
> confusing case of libva.
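
(For anyone reading along, that one-liner is shorthand; spelled out as a
minimal IGT-style test it would look roughly like the below - the fd and
handle plumbing is my addition, not lifted from any particular test:)

#include "igt.h"

igt_simple_main
{
        const uint32_t bbe = MI_BATCH_BUFFER_END;
        int fd = drm_open_driver(DRIVER_INTEL);
        uint32_t handle = gem_create(fd, 4096);

        /* First write into a freshly created object: obj->mm.pages does
         * not exist yet, so this is the path being discussed, as opposed
         * to the ordinary pwrite into already-instantiated pages.
         */
        gem_pwrite(fd, handle, 0, &bbe, sizeof(bbe));

        gem_close(fd, handle);
        close(fd);
}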

.. but since it has the benefit of avoiding the shrinker and the IGT
coverage you describe is good:

Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin at intel.com>

Regards,

Tvrtko



