[Mesa-dev] [PATCH 2/2] st/mesa: Handle GL_MAP_INVALIDATE_BUFFER_BIT in st_bufferobj_map_range().

Younes Manton younes.m at gmail.com
Mon Jul 18 19:30:40 PDT 2011

On Mon, Jul 18, 2011 at 8:32 PM, Marek Olšák <maraeo at gmail.com> wrote:
> On Tue, Jul 19, 2011 at 12:21 AM, Jose Fonseca <jfonseca at vmware.com> wrote:
>> ----- Original Message -----
>>> We can't do try-map + create + map + dereference internally in a
>>> driver. Creating a new buffer and replacing a pointer to the old one
>>> may lead to the following issue. If a buffer pointer is replaced, it
>>> doesn't necessarily update all the states the buffer is set in in all
>>> existing contexts. Such states and contexts would still use the old
>>> buffer and wouldn't see the change until the old buffer is unbound.
>> First, I don't think this is a real issue. The OpenGL 3.3 spec, section "Shared Objects and Multiple Contexts", says:
>>  Rule 3 State Changes to the contents of shared objects are not automatically
>> propagated between contexts. If the contents of a shared object T are changed
>> in a context other than the current context, and T is already directly or
>> indirectly attached to the current context, any operations on the current
>> context involving T via those attachments are not guaranteed to use its new
>> contents.
>> And even if it were, it's straightforward to implement: other contexts need to validate the storage of bound resources on each draw call. Of course, it wouldn't be efficient to revalidate all buffers if the pipe buffer bindings didn't change, which is probably why the spec doesn't require it.
>>> I think the only correct way to implement the DISCARD flags in
>>> drivers
>>> is through a temporary (staging) resource and doing an on-gpu copy to
>>> the original one (i.e. what we do for texture transfers in rX00g).
>> Queueing a gpu copy would be an efficient way to implement MAP_INVALIDATE_RANGE_BIT without stalling, but to implement GL_MAP_INVALIDATE_BUFFER_BIT a gpu copy is overkill. The whole point of MAP_INVALIDATE_BUFFER_BIT/DISCARD_WHOLE_RESOURCE is changing the underlying storage without changing the resource/buffer-object.
>> That is:
>> struct foo_resource {
>>    struct pipe_resource base;
>>    struct kernel_buffer_object_t *bo;
>> };
>>
>>   if (flags & DISCARD_WHOLE_RESOURCE) {
>>      if (is_kernel_buffer_object_busy(resource->bo)) {
>>          bo = kernel_buffer_object_create();
>>          reference(resource->bo, bo);
>>      }
>>   }
> I implemented exactly that and it didn't work (= caused regressions).
> The current context must still be examined to see whether the bo is
> bound somewhere, and any such states must be marked dirty, so that the
> gpu can see the change.
> I don't like the fact that OpenGL makes a lot of obvious object-sharing
> cases undefined, but whatever. I wonder whether resource sharing works
> any differently within and between other APIs, like OpenGL vs OpenCL.
> Marek
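
The approach Marek describes above — orphaning the storage while also marking any bound state dirty — could be sketched roughly as follows. This is only an illustration, not actual driver code: all foo_* and kernel_* names are hypothetical stand-ins, and the "kernel" calls are stubbed so the sketch is self-contained.

```c
/* Rough sketch: orphan the storage on DISCARD_WHOLE_RESOURCE, then dirty
   any context state that still points at the resource. All names here are
   hypothetical stand-ins, not a real driver's API. */
#include <stdbool.h>
#include <stdlib.h>

struct kernel_buffer_object { int refcount; };

struct foo_resource {
    struct kernel_buffer_object *bo; /* current storage */
};

struct foo_context {
    struct foo_resource *bound_vertex_buffer; /* one binding, for brevity */
    unsigned dirty;                           /* dirty-state bitmask */
};

#define FOO_DIRTY_VERTEX_BUFFERS (1u << 0)

/* Stand-ins for the real kernel interface. */
static bool kernel_buffer_object_busy(struct kernel_buffer_object *bo)
{
    (void)bo;
    return true; /* pretend the gpu is still using it */
}

static struct kernel_buffer_object *kernel_buffer_object_create(void)
{
    struct kernel_buffer_object *bo = calloc(1, sizeof(*bo));
    bo->refcount = 1;
    return bo;
}

static void foo_discard_whole_resource(struct foo_context *ctx,
                                       struct foo_resource *res)
{
    if (kernel_buffer_object_busy(res->bo)) {
        /* Orphan: give the resource fresh storage. Releasing the old bo
           once the hardware is done with it is elided here. */
        res->bo = kernel_buffer_object_create();

        /* Marek's point: any state referencing the resource must be
           re-emitted, or the gpu keeps reading the old storage. */
        if (ctx->bound_vertex_buffer == res)
            ctx->dirty |= FOO_DIRTY_VERTEX_BUFFERS;
    }
}
```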

We didn't keep references to the BO anywhere in the context, just in the
nouveau subclasses of pipe_buffer and pipe_texture where needed. The
only place we kept BO refs was in a list associated with each command
buffer, and as each command buffer's fence came up we freed any BOs
in the list that were no longer ref'd elsewhere.
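
The per-command-buffer lifetime scheme described above might look roughly like this in outline: each command buffer keeps a list of BO references, and when its fence signals, those references are dropped, freeing any BO no longer ref'd elsewhere. The names are illustrative, not nouveau's actual code.

```c
/* Sketch of a per-command-buffer BO reference list. Each entry pins a BO
   until the command buffer's fence signals; names are illustrative only. */
#include <stdlib.h>

struct bo {
    int refcount;
};

struct bo_ref {
    struct bo *bo;
    struct bo_ref *next;
};

struct cmd_buffer {
    struct bo_ref *bos; /* BOs referenced by this command buffer */
};

static void bo_unref(struct bo *bo)
{
    if (--bo->refcount == 0)
        free(bo);
}

static void cmd_buffer_add_bo(struct cmd_buffer *cb, struct bo *bo)
{
    struct bo_ref *ref = malloc(sizeof(*ref));
    bo->refcount++; /* the list holds its own reference */
    ref->bo = bo;
    ref->next = cb->bos;
    cb->bos = ref;
}

/* Called when the command buffer's fence comes up: drop the list's
   references; a BO no longer ref'd anywhere else is freed right here. */
static void cmd_buffer_fence_signalled(struct cmd_buffer *cb)
{
    while (cb->bos) {
        struct bo_ref *ref = cb->bos;
        cb->bos = ref->next;
        bo_unref(ref->bo);
        free(ref);
    }
}
```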
