[Intel-gfx] [PATCH 07/10] drm/i915: Support for pread/pwrite from/to non shmem backed objects
Tvrtko Ursulin
tvrtko.ursulin at linux.intel.com
Mon Jan 11 09:15:54 PST 2016
On 11/01/16 17:03, Chris Wilson wrote:
> On Mon, Jan 11, 2016 at 03:11:07PM +0000, Tvrtko Ursulin wrote:
>>
>> On 11/01/16 14:45, Chris Wilson wrote:
>>> On Mon, Jan 11, 2016 at 02:21:33PM +0000, Tvrtko Ursulin wrote:
>>>>
>>>> On 22/12/15 17:40, Chris Wilson wrote:
>>>>> On Tue, Dec 22, 2015 at 11:58:33AM +0000, Tvrtko Ursulin wrote:
>>>>>> Maybe:
>>>>>>
>>>>>> if (!obj->base.filp || cpu_write_needs_clflush(obj))
>>>>>> 	ret = i915_gem_gtt_pwrite_fast(...);
>>>>>>
>>>>>> if (ret == -EFAULT && !obj->base.filp) {
>>>>>> 	ret = i915_gem_gtt_pwrite_slow(...); /* New function, doing the
>>>>>> 		slow_user_access loop for !filp objects, extracted from
>>>>>> 		gtt_pwrite_fast above. */
>>>>>
>>>>> The point is that "gtt_pwrite_slow" is going to be preferable in the
>>>>> cases where it is possible. It just wasn't the full fallback path for
>>>>> all objects previously, so we didn't bother to write a partial fallback
>>>>> handler.
>>>>
>>>> Maybe I don't get this - is fast_user_write expected to always fail
>>>> for non-shmem-backed objects? And so to revert to the slow_user_access
>>>> path always and immediately? Because fast_user_write is still the
>>>> primary choice for everything.
>>>
>>> If we already have a GTT mapping available, then WC writes into the
>>> object are about as fast as we can get, especially if we don't have
>>> direct page access. They also have the benefit of not polluting the
>>> cache further - though that may be a downside as well, in which case
>>> pwrite/pread was the wrong interface to use.
>>>
>>> fast_user_write is no more likely to fail for stolen objs than for
>>> shmemfs objs, it is just that we cannot fall back to direct page access
>>> for stolen objs and so need a fallback path that writes through the GTT.
>>> That fallback path would also be preferable to falling back from the
>>> middle of a GTT write to the direct page paths. The issue was simply
>>> that the GTT paths cannot be assumed to be universally available,
>>> whereas historically the direct page access paths were. *That* changes,
>>> and now we cannot rely on either path being universally available.
>>
>> So it sounds like we don't need code which falls back in the
>> middle of the write, and it could be written more cleanly as
>> separate helpers?
>>
>> Because I really dislike that new loop...
>
> What new loop? We don't need a new loop...
>
> i915_gem_gtt_pwrite():
> 	/* Important and exceedingly complex setup/teardown code
> 	 * removed for brevity.
> 	 */
> 	for_each_page() {
> 		... get limits of operation in page ...
>
> 		if (fast_gtt_write(##args)) {
> 			/* Beware dragons */
> 			mutex_unlock();
> 			hit_slow_path = 1;
> 			slow_gtt_write(##args);
> 			mutex_lock();
> 		}
> 	}
>
> 	if (hit_slow_path) {
> 		/* Beware dragons that bite */
> 		ret = i915_gem_object_set_to_gtt_domain(obj, true);
> 	}
>
> Is that not what was written? I take it my telepathy isn't working
> again.
Sorry, not a new loop, but a new case in an old loop. This is the hunk I
think is not helping readability:
@@ -869,11 +967,29 @@ i915_gem_gtt_pwrite_fast(struct drm_i915_private *i915,
 		/* If we get a fault while copying data, then (presumably) our
 		 * source page isn't available. Return the error and we'll
 		 * retry in the slow path.
+		 * If the object is non-shmem backed, we retry again with the
+		 * path that handles page fault.
 		 */
-		if (fast_user_write(i915->gtt.mappable, page_base,
-				    page_offset, user_data, page_length)) {
-			ret = -EFAULT;
-			goto out_flush;
+		if (faulted || fast_user_write(i915->gtt.mappable,
+					       page_base, page_offset,
+					       user_data, page_length)) {
+			if (!obj->base.filp) {
+				faulted = true;
+				mutex_unlock(&dev->struct_mutex);
+				if (slow_user_access(i915->gtt.mappable,
+						     page_base,
+						     page_offset, user_data,
+						     page_length, true)) {
+					ret = -EFAULT;
+					mutex_lock(&dev->struct_mutex);
+					goto out_flush;
+				}
+
+				mutex_lock(&dev->struct_mutex);
+			} else {
+				ret = -EFAULT;
+				goto out_flush;
+			}
Because the concept is now different for page faults on shmem-backed and
non-shmem-backed objects. The former falls out on a fault and ends up in
i915_gem_shmem_pwrite, while the latter keeps banging on in
i915_gem_gtt_pwrite_fast.
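To spell out the resulting control flow (a rough paraphrase, not the
actual patch; the function names are real but the setup and bodies are
elided):

	ret = i915_gem_gtt_pwrite_fast(i915, obj, args, file);

	/* A shmem-backed object aborts the fast path with -EFAULT on the
	 * first fault and falls back to the page-based path ...
	 */
	if (ret == -EFAULT && obj->base.filp)
		ret = i915_gem_shmem_pwrite(dev, obj, args, file);

	/* ... while a non-shmem object never returns here on a fault:
	 * i915_gem_gtt_pwrite_fast itself drops struct_mutex and retries
	 * each faulting page via slow_user_access(), hence the "faulted"
	 * state threaded through its copy loop in the hunk above.
	 */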
I find the code organization and naming confusing. So I suggested that
the new path (!shmem + fault) be added as a separate new function, called
from i915_gem_pwrite_ioctl in the same way as i915_gem_shmem_pwrite, but
you objected:
	if (!obj->base.filp || cpu_write_needs_clflush(obj))
		ret = i915_gem_gtt_pwrite_fast(...);

	if (ret == -EFAULT && !obj->base.filp) {
		ret = i915_gem_gtt_pwrite_slow(...); /* New function, doing
			the slow_user_access loop for !filp objects,
			extracted from gtt_pwrite_fast above. */
	} else if (ret == -EFAULT || ret == -ENOSPC) {
		if (obj->phys_handle)
			...
	...
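For illustration, a rough sketch of the extracted helper I have in mind
(hypothetical and untested; pinning, offset bookkeeping and the flush on
exit are elided, and it reuses slow_user_access() from your patch):

	static int
	i915_gem_gtt_pwrite_slow(struct drm_i915_private *i915,
				 struct drm_i915_gem_object *obj,
				 struct drm_i915_gem_pwrite *args,
				 struct drm_file *file)
	{
		char __user *user_data = to_user_ptr(args->data_ptr);
		int ret = 0;

		/* pin to GTT, set up dev/remain/page_base etc., as in
		 * the prologue of the fast path ... */

		while (remain > 0) {
			/* ... get limits of operation in page ... */

			/* Drop the lock so the fault handler can run
			 * while we touch the user pages. */
			mutex_unlock(&dev->struct_mutex);
			if (slow_user_access(i915->gtt.mappable, page_base,
					     page_offset, user_data,
					     page_length, true))
				ret = -EFAULT;
			mutex_lock(&dev->struct_mutex);
			if (ret)
				break;

			/* ... advance remain/user_data/offset ... */
		}

		/* set_to_gtt_domain / flush as in the slow-path epilogue
		 * of your sketch */
		return ret;
	}

That keeps i915_gem_gtt_pwrite_fast free of the mid-loop locking dance,
and i915_gem_pwrite_ioctl picks the path up front, the same way it
already picks i915_gem_shmem_pwrite.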
Regards,
Tvrtko