On Thu, Apr 29, 2021 at 7:54 AM Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com> wrote:
On 29/04/2021 13:24, Daniel Vetter wrote:
On Wed, Apr 28, 2021 at 04:51:19PM +0100, Tvrtko Ursulin wrote:
On 23/04/2021 23:31, Jason Ekstrand wrote:
This adds a bunch of complexity which the media driver has never actually used. The media driver does technically bond a balanced engine to another engine, but the balanced engine only has one engine in its sibling set, so this doesn't actually result in a virtual engine.
For historical reference, this is not because uapi was over-engineered but because certain SKUs never materialized.
Jason said that for SKUs with lots of media engines the media driver sets up a set of contexts in userspace with all the pairings (and, I guess, then load-balances in userspace or something like that). Tony Ye also seems to have confirmed that. So I'm not clear on which SKU this is?
Not sure if I should disclose it here. But anyway, the platform which is currently upstream and was supposed to be the first to use this uapi was expected to have at least 4 vcs engines initially, or even 8 vcs + 4 vecs at some point. That was the requirement the uapi was designed for. For that kind of platform there were supposed to be two virtual engines created, with bonding; for instance parent = [vcs0, vcs2], child = [vcs1, vcs3]; bonds = [vcs0 - vcs1, vcs2 - vcs3]. The more engines, the merrier.
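For illustration only, the pairing described above can be sketched as follows. This is a toy model of the topology, not the actual i915 uapi (which would use I915_CONTEXT_ENGINES_EXT_BOND extensions); the helper name and string engine labels are hypothetical:

```python
# Toy sketch (assumed names, not real i915 uapi): given the sibling sets of
# two virtual engines, produce the bond pairs that tie each parent sibling
# to its child counterpart, so both halves of a split frame land on
# adjacent physical engines.

def build_bonds(parent_siblings, child_siblings):
    """Pair sibling i of the parent virtual engine with sibling i of
    the child virtual engine."""
    if len(parent_siblings) != len(child_siblings):
        raise ValueError("parent and child sibling sets must match in size")
    return list(zip(parent_siblings, child_siblings))

parent = ["vcs0", "vcs2"]
child = ["vcs1", "vcs3"]
bonds = build_bonds(parent, child)
# bonds == [("vcs0", "vcs1"), ("vcs2", "vcs3")]
```

With 8 vcs engines the same scheme would simply extend to larger sibling sets, which is the "more engines the merrier" point.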
I've added the following to the commit message:
This functionality was originally added to handle cases where we may have more than two video engines and media might want to load-balance their bonded submits by, for instance, submitting to a balanced vcs0-1 as the primary and then vcs2-3 as the secondary. However, no such hardware has shipped thus far and, if we ever want to enable such use-cases in the future, we'll use the up-and-coming parallel submit API which targets GuC submission.
--Jason
Userspace load balancing, from memory, came into the picture only as a consequence of balancing between two types of media pipelines, which was either working around rcs contention or the lack of an SFC, or both. Along the lines of: one stage of a media pipeline can be done either as GPGPU work or on the media engine, and so userspace was deciding to spawn "a bit of these and a bit of those" to utilise all the GPU blocks. Not really about frame-split virtual engines and bonding, but a completely different kind of load balancing, between GPGPU and the fixed pipeline.
Or maybe the real deal is only future platforms, and there we have the GuC scheduler backend.
Yes, because SKUs never materialised.
Not against adding a bit more context to the commit message, but we need to make sure what we put there is actually correct. Maybe best to ask Tony/Carl as part of getting an ack from them.
I think there is no need; the fact that the uapi was designed for way more engines than we ended up having is straightforward enough.
The only unasked-for flexibility in the uapi was the fact that bonding can express any dependency, not only the N consecutive engines that media fixed-function needed at the time. I say "at the time" because in fact the "consecutive" engines requirement also got more complicated / broken in a following gen (via fusing and logical instance remapping), proving the point that keeping the uapi disassociated from the hw limitations of the _day_ was a good call.
Regards,
Tvrtko