[Mesa-dev] [RFC] r600-r800 2D tiling
Jerome Glisse
j.glisse at gmail.com
Mon Jan 16 08:21:38 PST 2012
On Mon, Jan 16, 2012 at 12:08:17PM +0000, Simon Farnsworth wrote:
> (resending due to my inability to work my e-mail client - I neither cc'd
> Jerome, nor used the correct identity, so the original appears to be held in
> moderation).
>
> On Thursday 12 January 2012, Jerome Glisse <j.glisse at gmail.com> wrote:
> > Hi,
> >
> > I don't cross-post as I am pretty sure all interested people are reading
> > this mailing list.
> >
> > Attached are kernel, libdrm, DDX, and mesa/r600g patches to enable 2D tiling
> > on r600 through cayman. I haven't yet done full regression testing, but 2D
> > tiling seems to work OK. I would like to get feedback on two things :
> >
> > - the kernel API
>
> I notice that you don't expose all the available Evergreen parameters to
> user control (TILE_SPLIT_BYTES, NUM_BANKS are both currently fixed by the
> kernel). Is this deliberate?
>
> It looks like it's leftovers from a previous attempt to force Evergreen's
> flexible 2D tiling to behave like R600's fixed-by-hardware 2D tiling.
I need to add tile split to the kernel API; num banks is not a surface parameter.
Well, it is, but it needs to be set to the same value as the global one. I think
it might only be useful in a multi-GPU case with different GPUs (but that's
just a wild guess).
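Roughly, adding the tile split could end up looking something like the sketch
below, i.e. packing it into the tiling flags word that already goes through the
GEM set_tiling ioctl. The *_EG_TILE_SPLIT_* names, shift and mask here are
placeholders I made up for illustration, not the final kernel API:

  #include <stdint.h>

  /* Hypothetical: pack an Evergreen tile split (in bytes) into the tiling
   * flags word passed to the GEM set_tiling ioctl.  The shift/mask values
   * below are assumptions for illustration only. */
  #define RADEON_TILING_EG_TILE_SPLIT_SHIFT 24
  #define RADEON_TILING_EG_TILE_SPLIT_MASK  0xf

  static uint32_t eg_tile_split_to_flags(uint32_t tile_split_bytes)
  {
      uint32_t idx = 0;

      /* hw encodes 64B..4KB as a log2 index relative to 64 bytes */
      tile_split_bytes /= 64;
      while (tile_split_bytes > 1) {
          tile_split_bytes >>= 1;
          idx++;
      }
      return (idx & RADEON_TILING_EG_TILE_SPLIT_MASK)
             << RADEON_TILING_EG_TILE_SPLIT_SHIFT;
  }

The result would simply be OR'ed with the existing RADEON_TILING_MACRO/MICRO
flags before calling set_tiling.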
>
> > - using libdrm/radeon as a common place for surface allocation
> >
> > The second question especially impacts the layering/abstraction between
> > gallium and the winsys, as it makes the libdrm/radeon_surface API a part of
> > the winsys. The DDX doesn't need as much knowledge as Mesa (pretty much the
> > whole mipmap tree is pointless to the DDX). So does anyone have a strong
> > feeling about moving the whole mipmap tree computation to this common code?
> >
> I'm in favour - it means that all the code relating to the details of how
> modern Radeons tile surfaces is in one place.
>
> I've looked at the API you introduce to handle this, and it should be very
> easy to port to a non-libdrm platform - the only element of the API that's
> currently tied to libdrm is radeon_surface_manager_new, so a new platform
> shouldn't struggle to adapt it.
I am in the process of reworking the API a bit, but it will be very close, and
only the surface manager creator will have DRM-specific code.
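To give an idea, from the winsys (or the DDX) side the usage would be roughly
like the sketch below; the field and function names follow my reading of the
attached patches and may shift a bit with the rework, so don't take them as
final:

  #include <string.h>
  #include "radeon_surface.h"

  /* Sketch: compute the layout for a simple 2D-tiled texture through the
   * common code.  Only radeon_surface_manager_new() touches the DRM fd. */
  static int alloc_2d_surface(int drm_fd, unsigned width, unsigned height,
                              unsigned bpe, struct radeon_surface *surf)
  {
      struct radeon_surface_manager *surf_man;
      int r;

      /* DRM-specific part: queries the GPU tiling config from the kernel */
      surf_man = radeon_surface_manager_new(drm_fd);
      if (!surf_man)
          return -1;

      memset(surf, 0, sizeof(*surf));
      surf->npix_x = width;
      surf->npix_y = height;
      surf->npix_z = 1;
      surf->blk_w = surf->blk_h = surf->blk_d = 1;
      surf->array_size = 1;
      surf->last_level = 0;   /* DDX case: no mipmaps, Mesa would fill the chain */
      surf->bpe = bpe;
      surf->nsamples = 1;
      surf->flags = RADEON_SURF_SET(RADEON_SURF_TYPE_2D, TYPE) |
                    RADEON_SURF_SET(RADEON_SURF_MODE_2D, MODE);

      /* fills per-level offsets/pitches plus bo size/alignment */
      r = radeon_surface_init(surf_man, surf);
      radeon_surface_manager_free(surf_man);
      return r;
  }

In real code the manager would of course live for the lifetime of the
screen/winsys rather than being created per allocation.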
> I do have one question: how are you intending to handle passing the tiling
> parameters from the DDX to Mesa for GLX_EXT_texture_from_pixmap? Right now,
> it works because the DDX uses the surface manager's defaults for tiling, as
> does Mesa; I would expect Mesa to read out the parameters as set in the
> kernel and use those.
>
> At a future date, I can envisage the DDX wanting to choose a different
> tiling layout for DRI2 buffers, or XComposite backing pixmaps (e.g. because
> someone's benchmarked it and found that choosing something beyond the bare
> minimum that meets constraints improves performance); it would be a shame if
> we couldn't do this because Mesa's not flexible enough.
We don't use DRI2 to communicate tiling info; we go through the kernel for that.
The DDX calls the set_tiling ioctl and Mesa calls get_tiling. I haven't hooked up
the Mesa side to extract the various Evergreen values yet; right now it works
because both the DDX and Mesa use the same surface allocator parameters, so they
end up with the same values for the various Evergreen fields. Again, I am working
on this; hopefully it should be completely done this week.
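For the texture_from_pixmap case the round trip is basically the sketch below
(through the libdrm_radeon bo helpers; the Evergreen field decoding on the Mesa
side is the part that is not hooked up yet, so the flag handling here is just
an assumption of how it will end up looking):

  #include <stdint.h>
  #include "radeon_bo.h"

  /* DDX side: publish the layout chosen for the pixmap's BO */
  static void ddx_publish_tiling(struct radeon_bo *bo, uint32_t pitch)
  {
      uint32_t flags = RADEON_TILING_MACRO;   /* 2D (macro) tiled */

      radeon_bo_set_tiling(bo, flags, pitch);
  }

  /* Mesa side: read the layout back when importing the BO for
   * GLX_EXT_texture_from_pixmap */
  static int mesa_read_tiling(struct radeon_bo *bo, int *is_2d_tiled,
                              uint32_t *pitch)
  {
      uint32_t flags = 0;
      int r = radeon_bo_get_tiling(bo, &flags, pitch);

      if (r == 0)
          *is_2d_tiled = !!(flags & RADEON_TILING_MACRO);
      return r;
  }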
Cheers,
Jerome