[RESEND PATCH 2/5] drm: Add Reusable task barrier.

Ma, Le Le.Ma at amd.com
Thu Dec 12 04:04:34 UTC 2019


[AMD Official Use Only - Internal Distribution Only]






-----Original Message-----
From: Andrey Grodzovsky <andrey.grodzovsky at amd.com>
Sent: Thursday, December 12, 2019 4:39 AM
To: dri-devel at lists.freedesktop.org; amd-gfx at lists.freedesktop.org
Cc: Deucher, Alexander <Alexander.Deucher at amd.com>; Ma, Le <Le.Ma at amd.com>; Zhang, Hawking <Hawking.Zhang at amd.com>; Quan, Evan <Evan.Quan at amd.com>; Grodzovsky, Andrey <Andrey.Grodzovsky at amd.com>
Subject: [RESEND PATCH 2/5] drm: Add Reusable task barrier.



It is used to synchronize N threads at a rendezvous point before execution of critical code that has to be started by all the threads at approximately the same time.



Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky at amd.com>

---
 include/drm/task_barrier.h | 106 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 106 insertions(+)
 create mode 100644 include/drm/task_barrier.h



diff --git a/include/drm/task_barrier.h b/include/drm/task_barrier.h
new file mode 100644
index 0000000..81fb0f7
--- /dev/null
+++ b/include/drm/task_barrier.h
@@ -0,0 +1,106 @@

+/*
+ * Copyright 2019 Advanced Micro Devices, Inc.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.  IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ *
+ */

+#include <linux/semaphore.h>
+#include <linux/atomic.h>
+
+/*
+ * Reusable two-phase task barrier (rendezvous point) implementation for N tasks.
+ * Based on The Little Book of Semaphores - https://greenteapress.com/wp/semaphores/
+ */
+
+#ifndef DRM_TASK_BARRIER_H_
+#define DRM_TASK_BARRIER_H_
+

[Le]: It might be better to prefix "drm_" to the functions and structure below, and even to this header file name.



+/*
+ * Represents an instance of a task barrier.
+ */
+struct task_barrier {
+	unsigned int n;

[Le]: We can define it as a signed type here for more common use.

+	atomic_t count;
+	struct semaphore enter_turnstile;
+	struct semaphore exit_turnstile;
+};
+
+static inline void task_barrier_signal_turnstile(struct semaphore *turnstile,
+						 unsigned int n)
+{
+	int i;
+
+	for (i = 0; i < n; i++)
+		up(turnstile);
+}
+
+static inline void task_barrier_init(struct task_barrier *tb)
+{
+	tb->n = 0;
+	atomic_set(&tb->count, 0);
+	sema_init(&tb->enter_turnstile, 0);
+	sema_init(&tb->exit_turnstile, 0);
+}
+
+static inline void task_barrier_add_task(struct task_barrier *tb)
+{
+	tb->n++;
+}
+
+static inline void task_barrier_rem_task(struct task_barrier *tb)
+{
+	tb->n--;
+}
+
+/*
+ * Lines up all the threads BEFORE the critical point.
+ *
+ * When all threads have passed this code, the entry barrier is back to
+ * its locked state.
+ */
+static inline void task_barrier_enter(struct task_barrier *tb)
+{
+	if (atomic_inc_return(&tb->count) == tb->n)
+		task_barrier_signal_turnstile(&tb->enter_turnstile, tb->n);
+
+	down(&tb->enter_turnstile);
+}
+
+/*
+ * Lines up all the threads AFTER the critical point.
+ *
+ * This function is used to avoid any one thread running ahead of the reset if

[Le]: No need to mention "reset" here.



With the above addressed, Acked-by: Le Ma <Le.Ma at amd.com>



Regards,

Ma Le

+ * the barrier is used in a loop (repeatedly).
+ */
+static inline void task_barrier_exit(struct task_barrier *tb)
+{
+	if (atomic_dec_return(&tb->count) == 0)
+		task_barrier_signal_turnstile(&tb->exit_turnstile, tb->n);
+
+	down(&tb->exit_turnstile);
+}
+
+static inline void task_barrier_full(struct task_barrier *tb)
+{
+	task_barrier_enter(tb);
+	task_barrier_exit(tb);
+}
+
+#endif

--
2.7.4
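
For readers new to the two-phase pattern above, a minimal usage sketch (not part of the patch): two kernel threads repeatedly rendezvous at the barrier, run their critical section together, and rendezvous again before looping. The worker function, do_critical_work() and the thread names are made-up placeholders for illustration; only the task_barrier_* calls come from the header being added.

#include <linux/kthread.h>
#include <drm/task_barrier.h>

static struct task_barrier tb;

/* Hypothetical critical section that all tasks must start together. */
static void do_critical_work(void)
{
	/* ... */
}

static int barrier_worker(void *data)
{
	int i;

	for (i = 0; i < 16; i++) {
		task_barrier_enter(&tb);  /* phase 1: wait until all N tasks arrive   */
		do_critical_work();       /* every task starts this at about the same time */
		task_barrier_exit(&tb);   /* phase 2: wait until all N tasks are done */
	}
	return 0;
}

static void barrier_example_start(void)
{
	task_barrier_init(&tb);
	task_barrier_add_task(&tb);       /* register once per participating task */
	task_barrier_add_task(&tb);

	kthread_run(barrier_worker, NULL, "barrier-worker-0");
	kthread_run(barrier_worker, NULL, "barrier-worker-1");
}

The exit turnstile is what makes the barrier reusable: no task can re-enter task_barrier_enter() for iteration i+1 until every task has left iteration i, so the shared count is never mixed between iterations.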

