[Intel-gfx] [PATCH v2 3/3] drm/dp_mst: Fix flushing the delayed port/mstb destroy work
Lyude Paul
lyude at redhat.com
Wed Jun 10 15:54:30 UTC 2020
My crunch time is over, so I can review these on time now :)
One small comment below, although it doesn't stop me from giving my R-B here:
Reviewed-by: Lyude Paul <lyude at redhat.com>
On Wed, 2020-06-10 at 16:47 +0300, Imre Deak wrote:
> Atm, a pending delayed destroy work during module removal will be
> canceled, leaving behind MST ports, mstbs. Fix this by using a dedicated
> workqueue which will be drained of requeued items as well when
> destroying it.
>
> v2:
> - Check if wq is NULL before calling destroy_workqueue().
>
> Cc: Lyude Paul <lyude at redhat.com>
> Cc: Stanislav Lisovskiy <stanislav.lisovskiy at intel.com>
> Reviewed-by: Stanislav Lisovskiy <stanislav.lisovskiy at intel.com>
> Signed-off-by: Imre Deak <imre.deak at intel.com>
> ---
> drivers/gpu/drm/drm_dp_mst_topology.c | 19 ++++++++++++++++---
> include/drm/drm_dp_mst_helper.h | 8 ++++++++
> 2 files changed, 24 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/gpu/drm/drm_dp_mst_topology.c b/drivers/gpu/drm/drm_dp_mst_topology.c
> index eff8d6ac0273..a5f67b9db7fa 100644
> --- a/drivers/gpu/drm/drm_dp_mst_topology.c
> +++ b/drivers/gpu/drm/drm_dp_mst_topology.c
> @@ -1604,7 +1604,7 @@ static void drm_dp_destroy_mst_branch_device(struct kref *kref)
> mutex_lock(&mgr->delayed_destroy_lock);
> list_add(&mstb->destroy_next, &mgr->destroy_branch_device_list);
> mutex_unlock(&mgr->delayed_destroy_lock);
> - schedule_work(&mgr->delayed_destroy_work);
> + queue_work(mgr->delayed_destroy_wq, &mgr->delayed_destroy_work);
> }
>
> /**
> @@ -1721,7 +1721,7 @@ static void drm_dp_destroy_port(struct kref *kref)
> mutex_lock(&mgr->delayed_destroy_lock);
> list_add(&port->next, &mgr->destroy_port_list);
> mutex_unlock(&mgr->delayed_destroy_lock);
> - schedule_work(&mgr->delayed_destroy_work);
> + queue_work(mgr->delayed_destroy_wq, &mgr->delayed_destroy_work);
> }
>
> /**
> @@ -5182,6 +5182,15 @@ int drm_dp_mst_topology_mgr_init(struct drm_dp_mst_topology_mgr *mgr,
> INIT_LIST_HEAD(&mgr->destroy_port_list);
> INIT_LIST_HEAD(&mgr->destroy_branch_device_list);
> INIT_LIST_HEAD(&mgr->up_req_list);
> +
> + /*
> + * delayed_destroy_work will be queued on a dedicated WQ, so that any
> + * requeuing will be also flushed when deiniting the topology manager.
> + */
> + mgr->delayed_destroy_wq = alloc_ordered_workqueue("drm_dp_mst_wq", 0);
> + if (mgr->delayed_destroy_wq == NULL)
> + return -ENOMEM;
> +
> INIT_WORK(&mgr->work, drm_dp_mst_link_probe_work);
> INIT_WORK(&mgr->tx_work, drm_dp_tx_work);
> INIT_WORK(&mgr->delayed_destroy_work, drm_dp_delayed_destroy_work);
> @@ -5226,7 +5235,11 @@ void drm_dp_mst_topology_mgr_destroy(struct drm_dp_mst_topology_mgr *mgr)
> {
> drm_dp_mst_topology_mgr_set_mst(mgr, false);
> flush_work(&mgr->work);
> - cancel_work_sync(&mgr->delayed_destroy_work);
> + /* The following will also drain any requeued work on the WQ. */
> + if (mgr->delayed_destroy_wq) {
> + destroy_workqueue(mgr->delayed_destroy_wq);
> + mgr->delayed_destroy_wq = NULL;
> + }
We should definitely clean up the teardown in this function. I don't mind
submitting some patches to do that today if you poke me on IRC once you've got
this pushed to drm-misc-next.
> mutex_lock(&mgr->payload_lock);
> kfree(mgr->payloads);
> mgr->payloads = NULL;
> diff --git a/include/drm/drm_dp_mst_helper.h b/include/drm/drm_dp_mst_helper.h
> index 9e1ffcd7cb68..17b568c6f4f8 100644
> --- a/include/drm/drm_dp_mst_helper.h
> +++ b/include/drm/drm_dp_mst_helper.h
> @@ -672,6 +672,14 @@ struct drm_dp_mst_topology_mgr {
> * @destroy_branch_device_list.
> */
> struct mutex delayed_destroy_lock;
> +
> + /**
> + * @delayed_destroy_wq: Workqueue used for delayed_destroy_work items.
> + * A dedicated WQ makes it possible to drain any requeued work items
> + * on it.
> + */
> + struct workqueue_struct *delayed_destroy_wq;
> +
> /**
> * @delayed_destroy_work: Work item to destroy MST port and branch
> * devices, needed to avoid locking inversion.