[Intel-gfx] [PATCH] drm: fix call_kern.cocci warnings v2

Koenig, Christian Christian.Koenig at amd.com
Thu Oct 25 09:30:31 UTC 2018


On 25.10.18 at 11:28, zhoucm1 wrote:

On 2018-10-25 17:23, Koenig, Christian wrote:
On 25.10.18 at 11:20, zhoucm1 wrote:

On 2018-10-25 17:11, Koenig, Christian wrote:
On 25.10.18 at 11:03, zhoucm1 wrote:

On 2018-10-25 16:56, Christian König wrote:
+++ b/drivers/gpu/drm/drm_syncobj.c
@@ -111,15 +111,16 @@ static struct dma_fence
                         uint64_t point)
 {
     struct drm_syncobj_signal_pt *signal_pt;
+    struct dma_fence *f = NULL;
+    struct drm_syncobj_stub_fence *fence =
+        kzalloc(sizeof(struct drm_syncobj_stub_fence),
+            GFP_KERNEL);
+
+    if (!fence)
+        return NULL;
+    spin_lock(&syncobj->pt_lock);
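For context, call_kern.cocci warns when a GFP_KERNEL allocation is made with a spinlock held, because GFP_KERNEL may sleep and sleeping is not allowed under a spinlock; the hunk above therefore moves the kzalloc() in front of spin_lock(). A minimal sketch of the pattern being fixed (names illustrative, not the actual drm_syncobj code):

    /* Broken: kzalloc(GFP_KERNEL) may sleep while the spinlock is held. */
    spin_lock(&lock);
    obj = kzalloc(sizeof(*obj), GFP_KERNEL);
    spin_unlock(&lock);

    /* Fixed: allocate first (may sleep), then take the lock. */
    obj = kzalloc(sizeof(*obj), GFP_KERNEL);
    spin_lock(&lock);
    /* ... use obj under the lock ... */
    spin_unlock(&lock);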

How about using a single static stub fence like I suggested?
Sorry, I don't get your meaning. How would I do that?

Add a new function drm_syncobj_stub_fence_init(), which is called from drm_core_init() when the module is loaded.

In drm_syncobj_stub_fence_init() you initialize one static stub_fence which is then used over and over again.
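A minimal sketch of that suggestion, reusing the drm_syncobj_stub_fence struct and ops already in drm_syncobj.c (the init function name and its exact call site in drm_core_init() are assumptions for illustration):

    #include <linux/dma-fence.h>
    #include <linux/spinlock.h>

    /* Mirrors the drm_syncobj_stub_fence already defined in drm_syncobj.c. */
    struct drm_syncobj_stub_fence {
        struct dma_fence base;
        spinlock_t lock;
    };

    static const char *drm_syncobj_stub_fence_get_name(struct dma_fence *fence)
    {
        return "syncobjstub";
    }

    static const struct dma_fence_ops drm_syncobj_stub_fence_ops = {
        .get_driver_name = drm_syncobj_stub_fence_get_name,
        .get_timeline_name = drm_syncobj_stub_fence_get_name,
    };

    /* One static instance shared by the whole module. */
    static struct drm_syncobj_stub_fence stub_fence;

    /* Hypothetical init hook, called once from drm_core_init(). */
    void drm_syncobj_stub_fence_init(void)
    {
        spin_lock_init(&stub_fence.lock);
        dma_fence_init(&stub_fence.base, &drm_syncobj_stub_fence_ops,
                       &stub_fence.lock, 0, 0);
        /* Signal immediately; users only ever see a signaled fence. */
        dma_fence_signal(&stub_fence.base);
    }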
It seems that would not work; we could need more than one stub fence.

Mhm, why? I mean it is just a signaled fence, context and sequence number are irrelevant.

Christian.

If A gets the global stub fence and has not put it yet when B comes along, how does B re-use the global stub fence? Is there anything I misunderstand?

David

dma_fence_get()? The whole thing is reference counted; every time you need it you grab another reference.

Since we initialize it globally, the reference count never becomes zero, so it is never released.

Christian.
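A sketch of the refcounting flow Christian describes (the helper name is hypothetical): every caller gets its own reference via dma_fence_get(), and the initial reference taken at init time is never dropped, so concurrent users A and B simply hold independent references:

    /* Hand out the shared stub fence with an extra reference. The
     * initial reference from dma_fence_init() is never put, so user
     * dma_fence_put() calls can never drop the count to zero. */
    static struct dma_fence *drm_syncobj_get_stub_fence(void)
    {
        return dma_fence_get(&stub_fence.base);
    }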

Since its reference count never goes down to zero it should never be freed. If in doubt, maybe add a .release callback which just calls BUG() to catch reference counting issues.

Christian.
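A sketch of that safety net; dma_fence_ops provides a .release callback, and hooking it with BUG() makes any unexpected final dma_fence_put() fail loudly:

    /* The global stub fence must never be freed; if its reference
     * count ever drops to zero, crash immediately instead of
     * silently releasing a statically allocated fence. */
    static void drm_syncobj_stub_fence_release(struct dma_fence *f)
    {
        BUG();
    }

    static const struct dma_fence_ops drm_syncobj_stub_fence_ops = {
        .get_driver_name   = drm_syncobj_stub_fence_get_name,
        .get_timeline_name = drm_syncobj_stub_fence_get_name,
        .release           = drm_syncobj_stub_fence_release,
    };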


Thanks,
David




