[PATCH] drm/amdgpu: guarantee 128 dws between vm flush and IB (v3)

Grazvydas Ignotas notasas at gmail.com
Thu Jan 19 13:51:44 UTC 2017


On Thu, Jan 19, 2017 at 11:10 AM, Christian König
<deathsimple at vodafone.de> wrote:
> Am 18.01.2017 um 12:42 schrieb Monk Liu:
>> @@ -6743,6 +6741,15 @@ static void gfx_v8_ring_emit_cntxcntl(struct amdgpu_ring *ring, uint32_t flags)
>>         if (amdgpu_sriov_vf(ring->adev))
>>                 gfx_v8_0_ring_emit_de_meta_init(ring,
>>                         (flags & AMDGPU_VM_DOMAIN) ? AMDGPU_CSA_VADDR : ring->adev->virt.csa_vmid0_addr);
>> +
>> +       /* We need to pad some NOPs before emit_ib to prevent CE run ahead of
>> +        * vm_flush, which may trigger VM fault. */
>> +       if (ring->wptr > ring->last_vm_flush_pos) /* no wptr wrapping to RB head */
>> +               amdgpu_ring_insert_nop(ring, 128 - (ring->wptr - ring->last_vm_flush_pos));
>
>
> This can easily result in a negative number, can't it?
>
>> +       else
>> +               if (ring->ptr_mask + 1 - ring->last_vm_flush_pos + ring->wptr < 128)
>> +                       amdgpu_ring_insert_nop(ring,
>> +                               128 - (ring->ptr_mask + 1 - ring->last_vm_flush_pos + ring->wptr));
>
>
> I think it would be cleaner if you first calculate the number of NOPs needed
> in both cases and then check that the result isn't negative.
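
Spelled out, that suggestion would look roughly like this (only a sketch;
last_vm_flush_pos, ptr_mask and the 128-dword requirement are taken from the
quoted patch, not from whatever finally lands upstream):

        /* Sketch: compute the gap since the last VM flush first, then pad
         * only if it is still below 128 dwords, so the NOP count can never
         * go negative in either the wrapped or the non-wrapped case. */
        uint32_t gap;

        if (ring->wptr >= ring->last_vm_flush_pos)  /* no wrap back to the RB head */
                gap = ring->wptr - ring->last_vm_flush_pos;
        else                                        /* wptr wrapped past the ring end */
                gap = ring->ptr_mask + 1 - ring->last_vm_flush_pos + ring->wptr;

        if (gap < 128)
                amdgpu_ring_insert_nop(ring, 128 - gap);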

What about this:
128 - ((ring->wptr - ring->last_vm_flush_pos) & 127)
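
Since the ring size (ptr_mask + 1) is a power of two and a multiple of 128,
the unsigned subtraction plus the mask still give the distance since the last
VM flush modulo 128, so no explicit wrap check is needed. A minimal sketch of
how it could be used, under the same assumptions as above:

        /* Sketch only: the unsigned subtraction wraps modulo 2^32 and the
         * "& 127" keeps the low seven bits, i.e. the distance since the
         * last VM flush modulo 128. The result is always 1..128 NOPs. */
        amdgpu_ring_insert_nop(ring,
                128 - ((ring->wptr - ring->last_vm_flush_pos) & 127));

The count can never go negative this way, at the cost of inserting up to 128
NOPs even when the gap is already larger than 128 dwords.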

Gražvydas

