RE: RE: RE: [PATCH] drm/amdgpu:impl vgt_flush for VI
Deucher, Alexander
Alexander.Deucher at amd.com
Mon Nov 14 15:28:54 UTC 2016
> -----Original Message-----
> From: amd-gfx [mailto:amd-gfx-bounces at lists.freedesktop.org] On Behalf
> Of Liu, Monk
> Sent: Monday, November 14, 2016 6:40 AM
> To: Christian König; amd-gfx at freedesktop.org
> Subject: RE: RE: RE: [PATCH] drm/amdgpu:impl vgt_flush for VI
>
> I have no strong opinion. I checked the Windows KMD and it keeps
> vgt-flush and cntx-ctrl separate as well, so I'm fine with your
> suggestion.
Please also make sure you update the packet counts in the ring structure.
Alex
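
A minimal sketch of that packet-count bookkeeping, assuming the ring description carries an emit_frame_size dword budget as contemporary amdgpu code does; the field name and the constants below are assumptions for illustration, not part of this patch:

/*
 * Sketch only: each PACKET3_EVENT_WRITE emitted by the VGT flush is
 * 2 dwords (the PACKET3 header plus the EVENT_TYPE/EVENT_INDEX word),
 * and the flush emits two of them, so the declared per-frame dword
 * budget has to grow by 4.
 */
#define VGT_FLUSH_EVENT_WRITES	2	/* VS_PARTIAL_FLUSH + VGT_FLUSH */
#define EVENT_WRITE_DWORDS	2	/* header + event control word */

/* dwords gfx_v8_0_ring_emit_vgt_flush() adds to a frame */
#define GFX_V8_VGT_FLUSH_SIZE	(VGT_FLUSH_EVENT_WRITES * EVENT_WRITE_DWORDS)

/* e.g. in gfx_v8_0_ring_funcs_gfx:
 *	.emit_frame_size = <existing dword count> +
 *		GFX_V8_VGT_FLUSH_SIZE,	// gfx_v8_0_ring_emit_vgt_flush
 */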
>
> BR Monk
>
> -----Original Message-----
> From: Christian König [mailto:deathsimple at vodafone.de]
> Sent: Monday, November 14, 2016 7:33 PM
> To: Liu, Monk; amd-gfx at freedesktop.org
> Subject: Re: RE: RE: [PATCH] drm/amdgpu:impl vgt_flush for VI
>
> The callbacks are use case driven, so it doesn't matter what packets they use.
> I would really prefer not to add too many of them.
>
> Maybe rename the emit_cntxcntl callback to just emit_context_preamble or
> something like that, to make it clearer what it is good for.
>
> Regards,
> Christian.
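
Purely as a hypothetical illustration of that suggestion (the driver keeps the emit_cntxcntl name; emit_context_preamble below is only the proposed rename), the hook would read roughly like this:

#include <stdint.h>

struct amdgpu_ring;	/* opaque here */

struct amdgpu_ring_funcs_sketch {
	/* emits everything that must precede the IBs of a new context:
	 * CONTEXT_CONTROL, and possibly the VGT flush as well */
	void (*emit_context_preamble)(struct amdgpu_ring *ring,
				      uint32_t flags);
};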
>
> On 14.11.2016 at 11:01, Liu, Monk wrote:
> > Although the effect is the same, cntxcntl uses CONTEXT_CONTROL only,
> > while vgt-flush uses EVENT_WRITE for vgt_flush and vs_partial_flush
> > only. And the VGT flush only touches tessellation-related registers,
> > so I'd prefer it not be mixed into the CONTEXT_CONTROL packet ...
> > I think putting them together doesn't seem elegant ...
> >
> > BR Monk
> >
> > -----Original Message-----
> > From: Christian König [mailto:deathsimple at vodafone.de]
> > Sent: Monday, November 14, 2016 5:46 PM
> > To: Liu, Monk; amd-gfx at freedesktop.org
> > Subject: Re: RE: [PATCH] drm/amdgpu:impl vgt_flush for VI
> >
> > On 14.11.2016 at 04:17, Liu, Monk wrote:
> >> Could anyone review this patch?
> > Looks good in general, but is there any reason not to put it into the
> > existing emit_cntxcntl callback?
> >
> > Regards,
> > Christian.
> >
> >> This patch fixes a tessellation bug when shadowing is enabled; we
> >> should always insert a vgt_flush when there is a context switch.
> >>
> >> BR Monk
> >>
> >> -----Original Message-----
> >> From: Monk Liu [mailto:Monk.Liu at amd.com]
> >> Sent: Friday, November 11, 2016 6:32 PM
> >> To: amd-gfx at freedesktop.org
> >> Cc: Liu, Monk
> >> Subject: [PATCH] drm/amdgpu:impl vgt_flush for VI
> >>
> >> When hardware shadowing is enabled, tessellation triggers a VM fault;
> >> the root cause is that VGT_FLUSH was not emitted by the KMD. This
> >> fixes the tessellation crash.
> >>
> >> Change-Id: I77d87d93ce6580e559e734fb41d97ee8c59d245b
> >> Signed-off-by: Monk Liu <Monk.Liu at amd.com>
> >> ---
> >> drivers/gpu/drm/amd/amdgpu/amdgpu.h | 1 +
> >> drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 5 ++++-
> >> drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h | 1 +
> >> drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 13 +++++++++++++
> >> 4 files changed, 19 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> >> index 15015bc..f46e96b 100644
> >> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> >> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> >> @@ -1630,6 +1630,7 @@ amdgpu_get_sdma_instance(struct amdgpu_ring *ring)
> >>  #define amdgpu_ring_emit_fence(r, addr, seq, flags) (r)->funcs->emit_fence((r), (addr), (seq), (flags))
> >>  #define amdgpu_ring_emit_gds_switch(r, v, db, ds, wb, ws, ab, as) (r)->funcs->emit_gds_switch((r), (v), (db), (ds), (wb), (ws), (ab), (as))
> >>  #define amdgpu_ring_emit_hdp_flush(r) (r)->funcs->emit_hdp_flush((r))
> >> +#define amdgpu_ring_emit_vgt_flush(r) (r)->funcs->emit_vgt_flush((r))
> >>  #define amdgpu_ring_emit_hdp_invalidate(r) (r)->funcs->emit_hdp_invalidate((r))
> >>  #define amdgpu_ring_emit_switch_buffer(r) (r)->funcs->emit_switch_buffer((r))
> >>  #define amdgpu_ring_emit_cntxcntl(r, d) (r)->funcs->emit_cntxcntl((r), (d))
> >> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> >> index acf48de..c039890 100644
> >> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> >> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
> >> @@ -175,11 +175,14 @@ int amdgpu_ib_schedule(struct amdgpu_ring *ring, unsigned num_ibs,
> >>  	if (ring->funcs->emit_hdp_flush)
> >>  		amdgpu_ring_emit_hdp_flush(ring);
> >>
> >> +	need_ctx_switch = ring->current_ctx != fence_ctx;
> >> +	if (ring->funcs->emit_vgt_flush && need_ctx_switch)
> >> +		amdgpu_ring_emit_vgt_flush(ring);
> >> +
> >>  	/* always set cond_exec_polling to CONTINUE */
> >>  	*ring->cond_exe_cpu_addr = 1;
> >>
> >>  	skip_preamble = ring->current_ctx == fence_ctx;
> >> -	need_ctx_switch = ring->current_ctx != fence_ctx;
> >>  	if (job && ring->funcs->emit_cntxcntl) {
> >>  		if (need_ctx_switch)
> >>  			status |= AMDGPU_HAVE_CTX_SWITCH;
> >> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> >> index 92bc89b..c3a7329 100644
> >> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> >> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
> >> @@ -116,6 +116,7 @@ struct amdgpu_ring_funcs {
> >>  	void (*emit_vm_flush)(struct amdgpu_ring *ring, unsigned vm_id,
> >>  			      uint64_t pd_addr);
> >>  	void (*emit_hdp_flush)(struct amdgpu_ring *ring);
> >> +	void (*emit_vgt_flush)(struct amdgpu_ring *ring);
> >>  	void (*emit_hdp_invalidate)(struct amdgpu_ring *ring);
> >>  	void (*emit_gds_switch)(struct amdgpu_ring *ring, uint32_t vmid,
> >>  				uint32_t gds_base, uint32_t gds_size,
> >> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
> >> index 9017803..1d407d76 100644
> >> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
> >> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
> >> @@ -6187,6 +6187,18 @@ static void gfx_v8_0_ring_emit_hdp_flush(struct amdgpu_ring *ring)
> >>  	amdgpu_ring_write(ring, 0x20); /* poll interval */
> >>  }
> >>
> >> +static void gfx_v8_0_ring_emit_vgt_flush(struct amdgpu_ring *ring)
> >> +{
> >> +	amdgpu_ring_write(ring, PACKET3(PACKET3_EVENT_WRITE, 0));
> >> +	amdgpu_ring_write(ring, EVENT_TYPE(VS_PARTIAL_FLUSH) |
> >> +		EVENT_INDEX(4));
> >> +
> >> +	amdgpu_ring_write(ring, PACKET3(PACKET3_EVENT_WRITE, 0));
> >> +	amdgpu_ring_write(ring, EVENT_TYPE(VGT_FLUSH) |
> >> +		EVENT_INDEX(0));
> >> +}
> >> +
> >> +
> >>  static void gfx_v8_0_ring_emit_hdp_invalidate(struct amdgpu_ring *ring)
> >>  {
> >>  	amdgpu_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 3));
> >> @@ -6590,6 +6602,7 @@ static const struct amdgpu_ring_funcs gfx_v8_0_ring_funcs_gfx = {
> >>  	.pad_ib = amdgpu_ring_generic_pad_ib,
> >>  	.emit_switch_buffer = gfx_v8_ring_emit_sb,
> >>  	.emit_cntxcntl = gfx_v8_ring_emit_cntxcntl,
> >> +	.emit_vgt_flush = gfx_v8_0_ring_emit_vgt_flush,
> >>  	};
> >>
> >>  static const struct amdgpu_ring_funcs gfx_v8_0_ring_funcs_compute = {
> >> --
> >> 1.9.1
> >>
> >
>
> _______________________________________________
> amd-gfx mailing list
> amd-gfx at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/amd-gfx