<p dir="ltr">Do we need to bump the DRM version for this bug fix?</p>
<p dir="ltr">Marek</p>
<div class="gmail_extra"><br><div class="gmail_quote">On Oct 4, 2016 10:20 AM, "Christian König" <<a href="mailto:deathsimple@vodafone.de">deathsimple@vodafone.de</a>> wrote:<br type="attribution"><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Am 04.10.2016 um 09:45 schrieb Nicolai Hähnle:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
From: Nicolai Hähnle <<a href="mailto:nicolai.haehnle@amd.com" target="_blank">nicolai.haehnle@amd.com</a>><br>
<br>
Ensure that we really only report a GPU reset if one has happened since the<br>
creation of the context.<br>
<br>
Signed-off-by: Nicolai Hähnle <<a href="mailto:nicolai.haehnle@amd.com" target="_blank">nicolai.haehnle@amd.com</a>><br>
</blockquote>
<br>
Reviewed-by: Christian König <<a href="mailto:christian.koenig@amd.com" target="_blank">christian.koenig@amd.com</a>><br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
---<br>
  drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 3 +++<br>
  1 file changed, 3 insertions(+)<br>
<br>
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c<br>
index e203e55..a5e2fcb 100644<br>
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c<br>
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c<br>
@@ -36,20 +36,23 @@ static int amdgpu_ctx_init(struct amdgpu_device *adev, struct amdgpu_ctx *ctx)<br>
        spin_lock_init(&ctx->ring_lock);<br>
        ctx->fences = kcalloc(amdgpu_sched_jobs * AMDGPU_MAX_RINGS,<br>
                              sizeof(struct fence*), GFP_KERNEL);<br>
        if (!ctx->fences)<br>
                return -ENOMEM;<br>
        for (i = 0; i < AMDGPU_MAX_RINGS; ++i) {<br>
                ctx->rings[i].sequence = 1;<br>
                ctx->rings[i].fences = &ctx->fences[amdgpu_sched_jobs * i];<br>
        }<br>
+<br>
+       ctx->reset_counter = atomic_read(&adev->gpu_reset_counter);<br>
+<br>
        /* create context entity for each ring */<br>
        for (i = 0; i < adev->num_rings; i++) {<br>
                struct amdgpu_ring *ring = adev->rings[i];<br>
                struct amd_sched_rq *rq;<br>
                rq = &ring->sched.sched_rq[AMD_SCHED_PRIORITY_NORMAL];<br>
                r = amd_sched_entity_init(&ring->sched, &ctx->rings[i].entity,<br>
                                          rq, amdgpu_sched_jobs);<br>
                if (r)<br>
                        break;<br>
</blockquote>
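For context, the counter snapshotted at context creation is what a later query can compare against the device's current reset counter, so only resets that occurred after the context was created are reported. A minimal standalone sketch of that comparison (hypothetical struct and function names, not the actual amdgpu types):<br>
<br>
```c
/* Hypothetical stand-ins for amdgpu_device / amdgpu_ctx: each holds a
 * reset counter; the context's copy is taken at creation time. */
struct device_state {
        unsigned int gpu_reset_counter;   /* bumped on every GPU reset */
};

struct context_state {
        unsigned int reset_counter;       /* snapshot from creation time */
};

/* Returns 1 only if a reset happened after this context was created,
 * i.e. the device counter has moved past the creation-time snapshot. */
static int context_reset_happened(const struct device_state *dev,
                                  const struct context_state *ctx)
{
        return dev->gpu_reset_counter != ctx->reset_counter;
}
```
<br>
Without the creation-time snapshot added by this patch, a context created after a reset would compare against a stale (zero-initialized) value and wrongly report that reset to userspace.<br>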
<br>
<br>
_______________________________________________<br>
amd-gfx mailing list<br>
<a href="mailto:amd-gfx@lists.freedesktop.org" target="_blank">amd-gfx@lists.freedesktop.org</a><br>
<a href="https://lists.freedesktop.org/mailman/listinfo/amd-gfx" rel="noreferrer" target="_blank">https://lists.freedesktop.org/<wbr>mailman/listinfo/amd-gfx</a><br>
</blockquote></div></div>