[Mesa-dev] [PATCH 0/5] [RFC] r600g/compute: Adding support for defragmenting compute_memory_pool

Tom Stellard tom at stellard.net
Wed Jul 23 07:33:32 PDT 2014


On Fri, Jul 18, 2014 at 01:09:03PM +0200, Bruno Jimenez wrote:
> On Thu, 2014-07-17 at 22:56 -0400, Tom Stellard wrote:
> > On Wed, Jul 16, 2014 at 11:12:42PM +0200, Bruno Jiménez wrote:
> > > Hi,
> > > 
> > > This series finally adds support for defragmenting the pool for
> > > OpenCL buffers in the r600g driver. It is mostly a rewrite of
> > > the series that I sent some months ago.
> > > 
> > > For defragmenting the pool I have thought of two different
> > > possibilities:
> > > 
> > > - Creating a new pool and moving every item into it at the
> > >     correct position. This has the advantage of being very
> > >     simple to implement, and it allows the pool to be grown at
> > >     the same time. But it has a couple of problems: peak memory
> > >     usage is high (current pool + new pool), and even when the
> > >     pool is only slightly fragmented every item has to be
> > >     copied to its new place.
> > > - Moving the items within the same pool. This has the
> > >     advantage of using less memory (current pool + biggest
> > >     item in it), and it makes the case of only a few elements
> > >     being out of place easier to handle. The disadvantages
> > >     are that it doesn't allow growing the pool at the same
> > >     time, and that in the worst case it may involve twice the
> > >     number of item copies.
> > > 
> > > I have chosen to implement the second option, but if you think
> > > the first one is better I can rewrite the series for it.
> > > (^_^)
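> > > 
> > > The second option boils down to walking the item list in offset
> > > order and copying each item down to the lowest free offset. A
> > > minimal sketch in C (hypothetical names, not the actual
> > > compute_memory_pool code):

```c
/* Sketch of in-place pool compaction. The struct and function names
 * here are illustrative only, not the real r600g data structures. */
#include <assert.h>

struct item {
    int start; /* offset of the item within the pool */
    int size;  /* size of the item */
};

/* Walk the items in offset order and move each one down to the
 * lowest free offset. Each item is copied at most once, so the
 * extra space needed never exceeds the largest single item. */
static void defrag_pool(struct item *items, int n)
{
    int next_free = 0;
    for (int i = 0; i < n; i++) {
        if (items[i].start != next_free) {
            /* in the real driver this would be a GPU-side copy */
            items[i].start = next_free;
        }
        next_free += items[i].size;
    }
}
```

> > > In the real driver each of those moves would be a buffer copy on
> > > the GPU, which is where the "current pool + biggest item" memory
> > > figure above comes from.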
> > > 
> > > The worst case I mentioned is this: imagine a series of items
> > > in which the first is at least 1 'unit' smaller than the rest.
> > > You now free this item and create a new one of the same size
> > > [why would anyone do this? I don't know]. For now, the
> > > defragmenter code is dumb enough that it will move every item
> > > to the front of the pool without first trying to fit the new
> > > item into the available space.
> > > 
> > > Hopefully situations like this won't be very common.
> > > 
> > > If you want me to explain any detail about any of the patches
> > > just ask. And as said, if you prefer the first version of the
> > > defragmenter, just ask. [In fact, after having written this,
> > > I may add it for the case grow+defrag]
> > > 
> > > Also, no regressions found in piglit.
> > > 
> > > Thanks in advance!
> > > Bruno
> > > 
> > > Bruno Jiménez (5):
> > >   r600g/compute: Add a function for moving items in the pool
> > >   r600g/compute: Add a function for defragmenting the pool
> > >   r600g/compute: Defrag the pool if it's necessary
> > >   r600g/compute: Quick exit if there's nothing to add to the pool
> > >   r600g/compute: Remove unneeded code from compute_memory_promote_item
> > > 
> > >  src/gallium/drivers/r600/compute_memory_pool.c | 196 ++++++++++++++++++-------
> > >  src/gallium/drivers/r600/compute_memory_pool.h |  13 +-
> > >  2 files changed, 156 insertions(+), 53 deletions(-)
> > 
> > Hi,
> > 
> > I took a brief look at these patches and they look pretty good.  I will
> > look at them again tomorrow and then commit them if I don't see any issues.
> 

I've pushed these patches, thanks!

-Tom

> Hi,
> 
> Thanks, if you have any doubt about any of the patches just ask.
> 
> I have just finished writing a follow-up series for doing grow + defrag
> at the same time. I still have to test it, but if no problems arise I'll
> send it to the list as soon as possible.
> 
> This new series is based on the patch that I sent here:
> http://lists.freedesktop.org/archives/mesa-dev/2014-July/062923.html 
> If you think it's good, could you push it to master?
> 
> Thanks in advance!
> Bruno
> 
> > -Tom
> > 
> > > 
> > > -- 
> > > 2.0.1
> > > 
> > > _______________________________________________
> > > mesa-dev mailing list
> > > mesa-dev at lists.freedesktop.org
> > > http://lists.freedesktop.org/mailman/listinfo/mesa-dev
> 
> 
