On 28.06.2016 at 11:33, zhoucm1 wrote:
> On 2016-06-28 17:36, Christian König wrote:
<blockquote type="cite">Am 28.06.2016 um 09:27 schrieb Huang Rui:
<br>
<blockquote type="cite">On Tue, Jun 28, 2016 at 03:04:18PM
+0800, Chunming Zhou wrote:
<br>
<blockquote type="cite">ring_mirror_list is only used kthread
context, no need to spinlock.
<br>
otherwise deadlock happens when kthread_park.
<br>
<br>
</blockquote>
>>>
>>> Yes, in process context we prefer a mutex, because it avoids grabbing
>>> the CPU all the time.
>>>
>>> Reviewed-by: Huang Rui <ray.huang@amd.com>
>>
>> NAK, the patch won't apply because I changed the irqsave spin lock to a
>> normal one quite a while ago. But I'm not sure whether Alex has picked
>> up that patch yet.
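
(The change referred to above amounts to dropping the irqsave variant,
along the lines of:

	-	spin_lock_irqsave(&sched->job_list_lock, flags);
	+	spin_lock(&sched->job_list_lock);

a sketch of the idea from memory, not the exact commit; the lock is never
taken from interrupt context, so saving the IRQ flags buys nothing.)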
>>
>> You shouldn't use a mutex here when you don't have a reason to do so.
>> Spin locks have less overhead, and we don't expect any contention here.
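
(To put the overhead in concrete terms: these critical sections only touch
a few list pointers, so a spin lock is held for a handful of instructions,
while mutex_lock() may put the caller to sleep and go through the
scheduler on contention. A minimal sketch, using the names from the patch:

	/* short busy-wait at worst; the caller never sleeps */
	spin_lock(&sched->job_list_lock);
	list_del_init(&s_job->node);
	spin_unlock(&sched->job_list_lock);

A sleeping lock only pays off when the critical section is long or
actually contended, and this one is neither.)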
>>
>> In addition to that, how would this cause a deadlock with kthread_park?
</blockquote>
you can apply another patch to drop hw ring, and then run
glxgears, and hang the gfx with the attached, then you will find
that deadlock info.
<br>
</blockquote>

Please provide the deadlock info. Since the spin lock only protects the
linked list, we should be able to easily avoid any lock inversion that
could lead to a deadlock.
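
(For reference, a lock inversion needs two locks taken in opposite order
by two threads, along the lines of:

	/* thread A */
	spin_lock(&lock_a);
	spin_lock(&lock_b);

	/* thread B, running concurrently */
	spin_lock(&lock_b);
	spin_lock(&lock_a);	/* A and B now wait on each other forever */

lock_a and lock_b are made-up names for illustration. A lock that only
guards ring_mirror_list and is never nested inside another lock cannot be
part of such a cycle.)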

BTW: the debug patch you attached is not such a good idea either, because
it modifies the gfx ring from outside the worker thread.

Regards,
Christian.

> Regards,
> David Zhou
>>
>> Regards,
>> Christian.
<blockquote type="cite">Change-Id:
I906022297015faf14a0ddb5f62a728af3e5f9448
<br>
Signed-off-by: Chunming Zhou <a class="moz-txt-link-rfc2396E" href="mailto:David1.Zhou@amd.com"><David1.Zhou@amd.com></a>
<br>
---
<br>
drivers/gpu/drm/amd/scheduler/gpu_scheduler.c | 12
+++++-------
<br>
drivers/gpu/drm/amd/scheduler/gpu_scheduler.h | 2 +-
<br>
2 files changed, 6 insertions(+), 8 deletions(-)
<br>
<br>
diff --git a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
<br>
index b53cf58..3373d97 100644
<br>
--- a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
<br>
+++ b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
<br>
@@ -331,10 +331,9 @@ static void amd_sched_job_finish(struct
work_struct *work)
<br>
struct amd_sched_job *s_job = container_of(work,
struct amd_sched_job,
<br>
finish_work);
<br>
struct amd_gpu_scheduler *sched = s_job->sched;
<br>
- unsigned long flags;
<br>
/* remove job from ring_mirror_list */
<br>
- spin_lock_irqsave(&sched->job_list_lock, flags);
<br>
+ mutex_lock(&sched->job_list_lock);
<br>
list_del_init(&s_job->node);
<br>
if (sched->timeout != MAX_SCHEDULE_TIMEOUT) {
<br>
struct amd_sched_job *next;
<br>
@@ -348,7 +347,7 @@ static void amd_sched_job_finish(struct
work_struct *work)
<br>
if (next)
<br>
schedule_delayed_work(&next->work_tdr,
sched->timeout);
<br>
}
<br>
- spin_unlock_irqrestore(&sched->job_list_lock,
flags);
<br>
+ mutex_unlock(&sched->job_list_lock);
<br>
sched->ops->free_job(s_job);
<br>
}
<br>
@@ -362,15 +361,14 @@ static void
amd_sched_job_finish_cb(struct fence *f, struct fence_cb
*cb)
<br>
static void amd_sched_job_begin(struct amd_sched_job
*s_job)
<br>
{
<br>
struct amd_gpu_scheduler *sched = s_job->sched;
<br>
- unsigned long flags;
<br>
- spin_lock_irqsave(&sched->job_list_lock,
flags);
<br>
+ mutex_lock(&sched->job_list_lock);
<br>
list_add_tail(&s_job->node,
&sched->ring_mirror_list);
<br>
if (sched->timeout != MAX_SCHEDULE_TIMEOUT
&&
<br>
list_first_entry_or_null(&sched->ring_mirror_list,
<br>
struct amd_sched_job, node) == s_job)
<br>
schedule_delayed_work(&s_job->work_tdr,
sched->timeout);
<br>
- spin_unlock_irqrestore(&sched->job_list_lock,
flags);
<br>
+ mutex_unlock(&sched->job_list_lock);
<br>
}
<br>
static void amd_sched_job_timedout(struct work_struct
*work)
<br>
@@ -564,7 +562,7 @@ int amd_sched_init(struct
amd_gpu_scheduler *sched,
<br>
init_waitqueue_head(&sched->wake_up_worker);
<br>
init_waitqueue_head(&sched->job_scheduled);
<br>
INIT_LIST_HEAD(&sched->ring_mirror_list);
<br>
- spin_lock_init(&sched->job_list_lock);
<br>
+ mutex_init(&sched->job_list_lock);
<br>
atomic_set(&sched->hw_rq_count, 0);
<br>
if (atomic_inc_return(&sched_fence_slab_ref) == 1)
{
<br>
sched_fence_slab = kmem_cache_create(
<br>
diff --git a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h
b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h
<br>
index 221a515..5675906 100644
<br>
--- a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h
<br>
+++ b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.h
<br>
@@ -132,7 +132,7 @@ struct amd_gpu_scheduler {
<br>
atomic_t hw_rq_count;
<br>
struct task_struct *thread;
<br>
struct list_head ring_mirror_list;
<br>
- spinlock_t job_list_lock;
<br>
+ struct mutex job_list_lock;
<br>
};
<br>
int amd_sched_init(struct amd_gpu_scheduler *sched,
<br>
-- <br>
1.9.1
<br>