[PATCH RFC 102/111] staging: etnaviv: separate GPU pipes from execution state

Rob Clark robdclark at gmail.com
Tue Apr 7 15:14:25 PDT 2015


On Tue, Apr 7, 2015 at 12:59 PM, Christian Gmeiner
<christian.gmeiner at gmail.com> wrote:
>>> And each Core(/FE) has its own device node. Does this make any sense?
>>>
>> And I don't get why each core needs to have its own device node. IMHO
>> this is purely an implementation decision whether to have one device
>> node for all cores or one device node per core.
>>
>
> It is an important decision. And I think that one device node per
> core reflects the hardware design 100%.
>

Although I haven't really added support for devices with multiple
pipes, the pipe param in msm ioctls is intended to deal with hw that
has multiple pipes.  (And I assume someday adreno will sprout an extra
compute pipe, where we'll need this.)
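
For context, in msm the pipe is just a field passed into the ioctls; a
rough, from-memory sketch of the shape (names here are illustrative,
not the exact uapi):

    /* sketch of a pipe-scoped parameter query; __u32/__u64 come from
     * <linux/types.h>.  Names are made up, but the shape follows the
     * msm idea of passing a pipe id into each ioctl.
     */
    #define FOO_PIPE_NONE  0x00
    #define FOO_PIPE_2D0   0x01
    #define FOO_PIPE_3D0   0x10

    struct drm_foo_param {
    	__u32 pipe;    /* in: FOO_PIPE_x the query applies to */
    	__u32 param;   /* in: which parameter to read */
    	__u64 value;   /* out: value of that parameter */
    };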

In your case, it sounds a bit like you should have an ioctl to
enumerate the pipes, and a getcap that returns a bitmask of compute
engine(s) supported by a given pipe.  Or something roughly like that.
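
Something along these lines, maybe (all names invented, just to
illustrate the idea):

    /* hypothetical pipe enumeration + per-pipe capability query */
    #define ETNA_ENGINE_2D  (1 << 0)
    #define ETNA_ENGINE_3D  (1 << 1)
    #define ETNA_ENGINE_VG  (1 << 2)

    struct drm_etna_get_pipes {
    	__u32 num_pipes;     /* out: how many pipes this device exposes */
    	__u32 pad;
    };

    struct drm_etna_pipe_caps {
    	__u32 pipe;          /* in: pipe index, 0..num_pipes-1 */
    	__u32 engine_mask;   /* out: bitmask of ETNA_ENGINE_x behind the pipe */
    };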

>> For now I can only see that one device node per core makes things
>> harder to get right, while I don't see a single benefit.
>>
>
> What makes it harder to get right? The needed changes to the kernel
> driver are not that hard. The user space is another story, but that's
> because of the render-only thing, where we need to pass (prime)
> buffers around, do fence syncs, etc. In the end I do not see a
> showstopper in the user space.

I assume the hw gives you a way to do fencing between pipes?  It seems
at least convenient not to need to expose that via dmabuf+fence, since
that is a bit heavyweight if you end up needing to do things like
texture uploads/downloads or msaa resolve on one pipe synchronized to
rendering happening on another.
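
If the hw can do it, cross-pipe sync could just be part of submit,
e.g. (again invented names, only to show the shape):

    /* hypothetical submit ioctl carrying an explicit cross-pipe dependency,
     * so e.g. a 2D upload/resolve on one pipe can wait on a 3D fence from
     * another pipe without exporting/importing a dmabuf in between.
     */
    struct drm_etna_submit {
    	__u32 pipe;         /* in: pipe to execute the cmdstream on */
    	__u32 wait_pipe;    /* in: pipe owning the fence to wait on (~0 = none) */
    	__u32 wait_fence;   /* in: fence seqno on wait_pipe */
    	__u32 fence;        /* out: fence seqno emitted on 'pipe' */
    	__u64 cmds;         /* in: pointer to cmdstream descriptor(s) */
    	__u32 nr_cmds;      /* in: number of cmd buffers */
    	__u32 pad;
    };

That would keep the heavyweight dmabuf+fence path for the real
cross-device sharing case.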

BR,
-R

