[PATCH 6/6] drm/msm: add OCMEM driver

Rob Clark robdclark at gmail.com
Thu Oct 1 17:26:22 PDT 2015


On Thu, Oct 1, 2015 at 3:25 PM, Rob Clark <robdclark at gmail.com> wrote:
> On Thu, Oct 1, 2015 at 2:00 PM, Stephen Boyd <sboyd at codeaurora.org> wrote:
>> On 10/01, Stanimir Varbanov wrote:
>>> On 09/30/2015 02:45 PM, Rob Clark wrote:
>>> > On Wed, Sep 30, 2015 at 7:31 AM, Rob Clark <robdclark at gmail.com> wrote:
>>> >> On Wed, Sep 30, 2015 at 3:51 AM, Stanimir Varbanov
>>> >> <stanimir.varbanov at linaro.org> wrote:
>>> >>> On 09/29/2015 10:48 PM, Rob Clark wrote:
>>> >> was mandatory or just power optimization..  but yes, the plan was to
>>> >> move it somewhere else (not sure quite where, drivers/soc/qcom?)
>>> >> sometime..  Preferably when someone who understands all the other
>>> >> ocmem use-cases better could figure out what we really need to add to
>>> >> the driver.
>>> >>
>>> >> In the downstream driver there is a lot of complexity that appears to
>>> >> exist in order to allow two clients to each allocate a portion of a
>>> >> macro within a region (ie. aggregate_macro_state(),
>>> >> apply_macro_vote(), etc), and I wanted to figure out whether that is
>>> >> even a valid use-case before trying to make ocmem something that
>>> >> could actually support multiple clients.
>>> >>
>>> >> There is also some complexity around ensuring that, if clients aren't
>>> >> split up on region boundaries, you don't have one client in a region
>>> >> asking for wide-mode and another for narrow-mode
>>> >> (switch_region_mode()).. but maybe we could handle that by just
>>> >> allocating wide from the bottom and narrow from the top.  There also
>>> >> seems to be some craziness for allowing one client to pre-empt/evict
>>> >> another.. a DMA engine, etc, etc..
>>> >>
>>> >> All I know is the gpu just statically allocates one big
>>> >> region-aligned chunk of ocmem, so I ignored the rest of the crazy
>>> >> (maybe or maybe not hypothetical) use-cases for now...
>>>
>>> OK, I will try to sort out ocmem use cases for vidc driver.
>>>
>>
>> The simplest thing to do is to split the memory between GPU and
>> vidc statically. The other use cases with preemption and eviction
>> and DMA add a lot of complexity that we can explore at a later
>> time if need be.
>
> true, as long as one of the clients is the static gpu client, I guess
> we could reasonably easily support up to two clients...
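
To make the static-split idea concrete, I'd imagine something like the
table below (purely illustrative.. the struct, offsets, and sizes are
all made up here, not taken from any actual driver):

#include <linux/sizes.h>
#include <linux/types.h>

/* hypothetical static carve-up of the OCMEM aperture between the two
 * known clients; offsets/sizes are invented for illustration */
struct ocmem_static_region {
        const char *client;
        u32 offset;
        u32 size;
};

static const struct ocmem_static_region ocmem_map[] = {
        { "gpu",  0,       SZ_512K },   /* wide mode, from the bottom */
        { "vidc", SZ_512K, SZ_512K },   /* narrow mode, from the top */
};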

btw, random thought..  drm_mm is a utility in drm that serves a
similar function to genalloc, letting graphics drivers manage their
address space(s).  It is used for everything from the mmap virtual
address space of buffers allocated against the device, to managing
vram and gart allocations, etc.  (When a vram carveout is used with
drm/msm, due to no iommu, I use it to manage allocations from the
carveout.)  It has some potentially convenient twists, like supporting
allocation from the top of the "address space" instead of the bottom.
I'm thinking in particular of allocating "narrow mode" allocations
from the top and "wide mode" from the bottom, since wide vs narrow can
only be set per region and not per macro within the region.  (It can
also search by first-fit or best-fit.. although I'm not sure if that
is useful to us, since the OCMEM size is relatively constrained.)
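
Roughly, a sketch of what I mean (written against the current drm_mm
API.. exact function and flag names have changed across kernel
versions, so treat this as illustrative rather than actual driver
code):

#include <drm/drm_mm.h>
#include <linux/limits.h>

static struct drm_mm ocmem_mm; /* drm_mm_init(&ocmem_mm, 0, ocmem_size) at probe */

/* narrow-mode buffers come from the top of the aperture, wide-mode
 * from the bottom, so the two never end up sharing a region */
static int ocmem_mm_alloc(struct drm_mm *mm, struct drm_mm_node *node,
                          u64 size, bool narrow)
{
        return drm_mm_insert_node_in_range(mm, node, size, 0, 0,
                                           0, U64_MAX,
                                           narrow ? DRM_MM_INSERT_HIGH
                                                  : DRM_MM_INSERT_LOW);
}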

Not that I really want to keep the ocmem allocator in drm.. I'd really
rather it be someone else's headache once it comes to implementing the
crazy stuff for all the random use-cases of other OCMEM users, since
the gpu's use of OCMEM is rather simple/static..

The way the downstream driver does this is with a bunch of extra
bookkeeping on top of genalloc, so it can do a dummy allocation to
force a "from the top" allocation (and then immediately free the
dummy allocation)..  Maybe it just makes sense to teach genalloc how
to do from-top vs from-bottom allocations?  Not really sure..
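
For reference, the dummy-allocation trick is roughly this (a hand-wavy
sketch, not the actual downstream code.. it assumes the pool is
otherwise empty/unfragmented):

#include <linux/genalloc.h>

/* pin down everything below the top 'size' bytes so the real
 * allocation can only come from the top, then release the padding */
static unsigned long alloc_from_top(struct gen_pool *pool, size_t size)
{
        size_t avail = gen_pool_avail(pool);
        unsigned long dummy, addr;

        if (avail < size)
                return 0;

        dummy = gen_pool_alloc(pool, avail - size);
        addr = gen_pool_alloc(pool, size);
        if (dummy)
                gen_pool_free(pool, dummy, avail - size);

        return addr;
}

which only works while the pool is unfragmented.. a proper from-top
genpool_algo_t that scans the bitmap backwards would be cleaner.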

BR,
-R

