[PATCH 1/2] drm/panfrost: Allow passing extra information about BOs used by a job
steven.price at arm.com
Mon Sep 23 13:41:19 UTC 2019
On 17/09/2019 08:15, Boris Brezillon wrote:
> On Mon, 16 Sep 2019 17:20:28 -0500
> Rob Herring <robh at kernel.org> wrote:
>> On Fri, Sep 13, 2019 at 6:17 AM Boris Brezillon
>> <boris.brezillon at collabora.com> wrote:
>>> The READ/WRITE flags are particularly useful if we want to avoid
>>> serialization of jobs that read from the same BO but never write to it.
>> Any data on performance differences?
> Unfortunately no. When I initially added support for BO flags I thought
> it would fix a regression I had on one glmark2 test (ideas), but the
> problem ended up being something completely different (overhead of
> calling ioctl(WAIT_BO) on already idle BOs).
> I just ran glmark2 again, and there doesn't seem to be a noticeable
> improvement with those 2 patches applied (and mesa patched to use the
> new flags). This being said, the improvement is likely to be workload
> dependent, so I wouldn't consider these patches useless, but I'm
> fine putting them on hold until we see a real need.
> Maybe Steven has some real use cases that could help outline the
> benefit of these patches.
To be honest I don't really know. The DDK does track this read/write
information internally - but it doesn't involve the kernel in tracking
it. I was presuming that Mesa (because it exports the buffer usage to
the kernel) would benefit from being able to have multiple readers -
for example of a buffer used as a texture by multiple render passes.
It's possible we don't see this benefit because we haven't got job
queuing merged yet?
There might also be some benefits when it comes to interaction with
other drivers, but I don't have any concrete examples.
>>> The NO_IMPLICIT_FENCE might be useful when the user knows the BO is
>>> shared but jobs are using different portions of the buffer.
>> Why don't we add this when it is useful rather than might be?
> I don't have a need for that one yet, but etnaviv has it in place so I
> thought I'd add both at the same time.
> Note that it could help us reduce the number of fences returned by
> panfrost_job_dependency(), but I'm not sure it makes a difference in
> terms of performance.
I'm not aware of any current need for NO_IMPLICIT_FENCE. I found it
somewhat odd that it is effectively one way (the job doesn't wait for
existing fences, but other jobs can still end up waiting on the fence
this job adds). If we don't have a use for it yet in Mesa then it's
probably best to wait until we know how it is actually going to be used.
There is of course already the option of simply omitting the BO from
the job to prevent any dependencies being created :)