<p dir="ltr">Welcome to the community!</p>
<p dir="ltr">Some small comments.<br>
We usually write patch messages in present tense.<br>
So, "Format code in .... ".<br>
Some more comments below.<br></p>
<p dir="ltr">On Mar 25, 2016 01:24, "Rovanion Luckey" <<a href="mailto:rovanion.luckey@gmail.com">rovanion.luckey@gmail.com</a>> wrote:<br>
><br>
> This is a tiny housekeeping patch which does the following:<br>
><br>
> * Replaced tabs with three spaces.<br>
> * Formatted oneline and multiline code comments. Some doxygen<br>
> comments weren't marked as such and some code comments were marked<br>
> as doxygen comments.<br>
> * Spaces between if- and while-statements and their parenthesis.<br>
><br>
> As specified on: <a href="http://mesa3d.org/devinfo.html#style">http://mesa3d.org/devinfo.html#style</a><br>
></p>
<p dir="ltr">Nice, detailed commit message!<br>
I'm not sure, but I think you should just write<br>
"According to coding style standards" or something similar.<br>
Links may die, so having them in the commit message<br>
may not give us much in the future.</p>
<p dir="ltr">> The only interesting point would be @@ -364,14 +363,9 @@ where the<br>
> following seemingly trivial change is applied.<br>
><br>
> - boolean destroyed;<br>
> -<br>
> - destroyed = fenced_buffer_remove_locked(fenced_mgr, fenced_buf);<br>
> + boolean destroyed = fenced_buffer_remove_locked(fenced_mgr, fenced_buf);<br>
><br>
> It may be that I'm missing some of the finer points of C making this<br>
> into a semantic change instead of the only syntactic change I was<br>
> after, in which case the change should be removed. It might also be<br>
> that it should be removed from this change set either way since it<br>
> could be considered non-trivial.<br>
></p>
<p dir="ltr">I'm not sure how this works now, but I believe older<br>
versions of MSVC didn't allow initializing variables like this,<br>
which is why the declaration and the initialization were<br>
separated. I believe C89 was the issue here? The VMware guys<br>
will know. I believe the requirement was since lifted to a newer<br>
C standard (C99?), so this should be OK now.</p>
<p dir="ltr">I haven't looked too closely at the other changes.<br>
I can do that tomorrow, if no one gets to it before me.</p>
<p dir="ltr">Don't let the minor nitpicking get you down ;-)<br>
There's always the occasional nitpick when you're first adjusting<br>
to how things are done in the project.</p>
<p dir="ltr">Thanks for your first patch!</p>
<p dir="ltr">> ---<br>
> .../auxiliary/pipebuffer/pb_buffer_fenced.c | 226 +++++++++------------<br>
> 1 file changed, 97 insertions(+), 129 deletions(-)<br>
><br>
> diff --git a/src/gallium/auxiliary/pipebuffer/pb_buffer_fenced.c b/src/gallium/auxiliary/pipebuffer/pb_buffer_fenced.c<br>
> index 2678268..fbbe8d1 100644<br>
> --- a/src/gallium/auxiliary/pipebuffer/pb_buffer_fenced.c<br>
> +++ b/src/gallium/auxiliary/pipebuffer/pb_buffer_fenced.c<br>
> @@ -108,14 +108,14 @@ struct fenced_manager<br>
> */<br>
> struct fenced_buffer<br>
> {<br>
> - /*<br>
> + /**<br>
> * Immutable members.<br>
> */<br>
><br>
> struct pb_buffer base;<br>
> struct fenced_manager *mgr;<br>
><br>
> - /*<br>
> + /**<br>
> * Following members are mutable and protected by fenced_manager::mutex.<br>
> */<br>
><br>
> @@ -205,7 +205,7 @@ fenced_manager_dump_locked(struct fenced_manager *fenced_mgr)<br>
><br>
> curr = fenced_mgr->unfenced.next;<br>
> next = curr->next;<br>
> - while(curr != &fenced_mgr->unfenced) {<br>
> + while (curr != &fenced_mgr->unfenced) {<br>
> fenced_buf = LIST_ENTRY(struct fenced_buffer, curr, head);<br>
> assert(!fenced_buf->fence);<br>
> debug_printf("%10p %7u %8u %7s\n",<br>
> @@ -219,7 +219,7 @@ fenced_manager_dump_locked(struct fenced_manager *fenced_mgr)<br>
><br>
> curr = fenced_mgr->fenced.next;<br>
> next = curr->next;<br>
> - while(curr != &fenced_mgr->fenced) {<br>
> + while (curr != &fenced_mgr->fenced) {<br>
> int signaled;<br>
> fenced_buf = LIST_ENTRY(struct fenced_buffer, curr, head);<br>
> assert(fenced_buf->buffer);<br>
> @@ -340,7 +340,7 @@ fenced_buffer_finish_locked(struct fenced_manager *fenced_mgr,<br>
> assert(pipe_is_referenced(&fenced_buf->base.reference));<br>
> assert(fenced_buf->fence);<br>
><br>
> - if(fenced_buf->fence) {<br>
> + if (fenced_buf->fence) {<br>
> struct pipe_fence_handle *fence = NULL;<br>
> int finished;<br>
> boolean proceed;<br>
> @@ -355,8 +355,7 @@ fenced_buffer_finish_locked(struct fenced_manager *fenced_mgr,<br>
><br>
> assert(pipe_is_referenced(&fenced_buf->base.reference));<br>
><br>
> - /*<br>
> - * Only proceed if the fence object didn't change in the meanwhile.<br>
> + /* Only proceed if the fence object didn't change in the meanwhile.<br>
> * Otherwise assume the work has been already carried out by another<br>
> * thread that re-aquired the lock before us.<br>
> */<br>
> @@ -364,14 +363,9 @@ fenced_buffer_finish_locked(struct fenced_manager *fenced_mgr,<br>
><br>
> ops->fence_reference(ops, &fence, NULL);<br>
><br>
> - if(proceed && finished == 0) {<br>
> - /*<br>
> - * Remove from the fenced list<br>
> - */<br>
> -<br>
> - boolean destroyed;<br>
> -<br>
> - destroyed = fenced_buffer_remove_locked(fenced_mgr, fenced_buf);<br>
> + if (proceed && finished == 0) {<br>
> + /* Remove from the fenced list. */<br>
> + boolean destroyed = fenced_buffer_remove_locked(fenced_mgr, fenced_buf);<br>
><br>
> /* TODO: remove consequents buffers with the same fence? */<br>
><br>
> @@ -405,36 +399,33 @@ fenced_manager_check_signalled_locked(struct fenced_manager *fenced_mgr,<br>
><br>
> curr = fenced_mgr->fenced.next;<br>
> next = curr->next;<br>
> - while(curr != &fenced_mgr->fenced) {<br>
> + while (curr != &fenced_mgr->fenced) {<br>
> fenced_buf = LIST_ENTRY(struct fenced_buffer, curr, head);<br>
><br>
> - if(fenced_buf->fence != prev_fence) {<br>
> - int signaled;<br>
> + if (fenced_buf->fence != prev_fence) {<br>
> + int signaled;<br>
><br>
> - if (wait) {<br>
> - signaled = ops->fence_finish(ops, fenced_buf->fence, 0);<br>
> + if (wait) {<br>
> + signaled = ops->fence_finish(ops, fenced_buf->fence, 0);<br>
><br>
> - /*<br>
> - * Don't return just now. Instead preemptively check if the<br>
> - * following buffers' fences already expired, without further waits.<br>
> - */<br>
> - wait = FALSE;<br>
> - }<br>
> - else {<br>
> - signaled = ops->fence_signalled(ops, fenced_buf->fence, 0);<br>
> - }<br>
> + /* Don't return just now. Instead preemptively check if the<br>
> + * following buffers' fences already expired, without further waits.<br>
> + */<br>
> + wait = FALSE;<br>
> + } else {<br>
> + signaled = ops->fence_signalled(ops, fenced_buf->fence, 0);<br>
> + }<br>
><br>
> - if (signaled != 0) {<br>
> - return ret;<br>
> + if (signaled != 0) {<br>
> + return ret;<br>
> }<br>
><br>
> - prev_fence = fenced_buf->fence;<br>
> - }<br>
> - else {<br>
> + prev_fence = fenced_buf->fence;<br>
> + } else {<br>
> /* This buffer's fence object is identical to the previous buffer's<br>
> * fence object, so no need to check the fence again.<br>
> */<br>
> - assert(ops->fence_signalled(ops, fenced_buf->fence, 0) == 0);<br>
> + assert(ops->fence_signalled(ops, fenced_buf->fence, 0) == 0);<br>
> }<br>
><br>
> fenced_buffer_remove_locked(fenced_mgr, fenced_buf);<br>
> @@ -462,22 +453,21 @@ fenced_manager_free_gpu_storage_locked(struct fenced_manager *fenced_mgr)<br>
><br>
> curr = fenced_mgr->unfenced.next;<br>
> next = curr->next;<br>
> - while(curr != &fenced_mgr->unfenced) {<br>
> + while (curr != &fenced_mgr->unfenced) {<br>
> fenced_buf = LIST_ENTRY(struct fenced_buffer, curr, head);<br>
><br>
> - /*<br>
> - * We can only move storage if the buffer is not mapped and not<br>
> + /* We can only move storage if the buffer is not mapped and not<br>
> * validated.<br>
> */<br>
> - if(fenced_buf->buffer &&<br>
> + if (fenced_buf->buffer &&<br>
> !fenced_buf->mapcount &&<br>
> !fenced_buf->vl) {<br>
> enum pipe_error ret;<br>
><br>
> ret = fenced_buffer_create_cpu_storage_locked(fenced_mgr, fenced_buf);<br>
> - if(ret == PIPE_OK) {<br>
> + if (ret == PIPE_OK) {<br>
> ret = fenced_buffer_copy_storage_to_cpu_locked(fenced_buf);<br>
> - if(ret == PIPE_OK) {<br>
> + if (ret == PIPE_OK) {<br>
> fenced_buffer_destroy_gpu_storage_locked(fenced_buf);<br>
> return TRUE;<br>
> }<br>
> @@ -499,7 +489,7 @@ fenced_manager_free_gpu_storage_locked(struct fenced_manager *fenced_mgr)<br>
> static void<br>
> fenced_buffer_destroy_cpu_storage_locked(struct fenced_buffer *fenced_buf)<br>
> {<br>
> - if(fenced_buf->data) {<br>
> + if (fenced_buf->data) {<br>
> align_free(fenced_buf->data);<br>
> fenced_buf->data = NULL;<br>
> assert(fenced_buf->mgr->cpu_total_size >= fenced_buf->size);<br>
> @@ -516,14 +506,14 @@ fenced_buffer_create_cpu_storage_locked(struct fenced_manager *fenced_mgr,<br>
> struct fenced_buffer *fenced_buf)<br>
> {<br>
> assert(!fenced_buf->data);<br>
> - if(fenced_buf->data)<br>
> + if (fenced_buf->data)<br>
> return PIPE_OK;<br>
><br>
> if (fenced_mgr->cpu_total_size + fenced_buf->size > fenced_mgr->max_cpu_total_size)<br>
> return PIPE_ERROR_OUT_OF_MEMORY;<br>
><br>
> fenced_buf->data = align_malloc(fenced_buf->size, fenced_buf->desc.alignment);<br>
> - if(!fenced_buf->data)<br>
> + if (!fenced_buf->data)<br>
> return PIPE_ERROR_OUT_OF_MEMORY;<br>
><br>
> fenced_mgr->cpu_total_size += fenced_buf->size;<br>
> @@ -538,7 +528,7 @@ fenced_buffer_create_cpu_storage_locked(struct fenced_manager *fenced_mgr,<br>
> static void<br>
> fenced_buffer_destroy_gpu_storage_locked(struct fenced_buffer *fenced_buf)<br>
> {<br>
> - if(fenced_buf->buffer) {<br>
> + if (fenced_buf->buffer) {<br>
> pb_reference(&fenced_buf->buffer, NULL);<br>
> }<br>
> }<br>
> @@ -575,41 +565,37 @@ fenced_buffer_create_gpu_storage_locked(struct fenced_manager *fenced_mgr,<br>
> {<br>
> assert(!fenced_buf->buffer);<br>
><br>
> - /*<br>
> - * Check for signaled buffers before trying to allocate.<br>
> - */<br>
> + /* Check for signaled buffers before trying to allocate. */<br>
> fenced_manager_check_signalled_locked(fenced_mgr, FALSE);<br>
><br>
> fenced_buffer_try_create_gpu_storage_locked(fenced_mgr, fenced_buf);<br>
><br>
> - /*<br>
> - * Keep trying while there is some sort of progress:<br>
> + /* Keep trying while there is some sort of progress:<br>
> * - fences are expiring,<br>
> * - or buffers are being being swapped out from GPU memory into CPU memory.<br>
> */<br>
> - while(!fenced_buf->buffer &&<br>
> + while (!fenced_buf->buffer &&<br>
> (fenced_manager_check_signalled_locked(fenced_mgr, FALSE) ||<br>
> fenced_manager_free_gpu_storage_locked(fenced_mgr))) {<br>
> fenced_buffer_try_create_gpu_storage_locked(fenced_mgr, fenced_buf);<br>
> }<br>
><br>
> - if(!fenced_buf->buffer && wait) {<br>
> - /*<br>
> - * Same as before, but this time around, wait to free buffers if<br>
> + if (!fenced_buf->buffer && wait) {<br>
> + /* Same as before, but this time around, wait to free buffers if<br>
> * necessary.<br>
> */<br>
> - while(!fenced_buf->buffer &&<br>
> + while (!fenced_buf->buffer &&<br>
> (fenced_manager_check_signalled_locked(fenced_mgr, TRUE) ||<br>
> fenced_manager_free_gpu_storage_locked(fenced_mgr))) {<br>
> fenced_buffer_try_create_gpu_storage_locked(fenced_mgr, fenced_buf);<br>
> }<br>
> }<br>
><br>
> - if(!fenced_buf->buffer) {<br>
> - if(0)<br>
> + if (!fenced_buf->buffer) {<br>
> + if (0)<br>
> fenced_manager_dump_locked(fenced_mgr);<br>
><br>
> - /* give up */<br>
> + /* Give up. */<br>
> return PIPE_ERROR_OUT_OF_MEMORY;<br>
> }<br>
><br>
> @@ -686,18 +672,16 @@ fenced_buffer_map(struct pb_buffer *buf,<br>
><br>
> assert(!(flags & PB_USAGE_GPU_READ_WRITE));<br>
><br>
> - /*<br>
> - * Serialize writes.<br>
> - */<br>
> - while((fenced_buf->flags & PB_USAGE_GPU_WRITE) ||<br>
> + /* Serialize writes. */<br>
> + while ((fenced_buf->flags & PB_USAGE_GPU_WRITE) ||<br>
> ((fenced_buf->flags & PB_USAGE_GPU_READ) &&<br>
> (flags & PB_USAGE_CPU_WRITE))) {<br>
><br>
> - /*<br>
> - * Don't wait for the GPU to finish accessing it, if blocking is forbidden.<br>
> + /* Don't wait for the GPU to finish accessing it,<br>
> + * if blocking is forbidden.<br>
> */<br>
> - if((flags & PB_USAGE_DONTBLOCK) &&<br>
> - ops->fence_signalled(ops, fenced_buf->fence, 0) != 0) {<br>
> + if ((flags & PB_USAGE_DONTBLOCK) &&<br>
> + ops->fence_signalled(ops, fenced_buf->fence, 0) != 0) {<br>
> goto done;<br>
> }<br>
><br>
> @@ -705,17 +689,15 @@ fenced_buffer_map(struct pb_buffer *buf,<br>
> break;<br>
> }<br>
><br>
> - /*<br>
> - * Wait for the GPU to finish accessing. This will release and re-acquire<br>
> + /* Wait for the GPU to finish accessing. This will release and re-acquire<br>
> * the mutex, so all copies of mutable state must be discarded.<br>
> */<br>
> fenced_buffer_finish_locked(fenced_mgr, fenced_buf);<br>
> }<br>
><br>
> - if(fenced_buf->buffer) {<br>
> + if (fenced_buf->buffer) {<br>
> map = pb_map(fenced_buf->buffer, flags, flush_ctx);<br>
> - }<br>
> - else {<br>
> + } else {<br>
> assert(fenced_buf->data);<br>
> map = fenced_buf->data;<br>
> }<br>
> @@ -725,7 +707,7 @@ fenced_buffer_map(struct pb_buffer *buf,<br>
> fenced_buf->flags |= flags & PB_USAGE_CPU_READ_WRITE;<br>
> }<br>
><br>
> -done:<br>
> + done:<br>
> pipe_mutex_unlock(fenced_mgr->mutex);<br>
><br>
> return map;<br>
> @@ -741,12 +723,12 @@ fenced_buffer_unmap(struct pb_buffer *buf)<br>
> pipe_mutex_lock(fenced_mgr->mutex);<br>
><br>
> assert(fenced_buf->mapcount);<br>
> - if(fenced_buf->mapcount) {<br>
> + if (fenced_buf->mapcount) {<br>
> if (fenced_buf->buffer)<br>
> pb_unmap(fenced_buf->buffer);<br>
> --fenced_buf->mapcount;<br>
> - if(!fenced_buf->mapcount)<br>
> - fenced_buf->flags &= ~PB_USAGE_CPU_READ_WRITE;<br>
> + if (!fenced_buf->mapcount)<br>
> + fenced_buf->flags &= ~PB_USAGE_CPU_READ_WRITE;<br>
> }<br>
><br>
> pipe_mutex_unlock(fenced_mgr->mutex);<br>
> @@ -765,7 +747,7 @@ fenced_buffer_validate(struct pb_buffer *buf,<br>
> pipe_mutex_lock(fenced_mgr->mutex);<br>
><br>
> if (!vl) {<br>
> - /* invalidate */<br>
> + /* Invalidate. */<br>
> fenced_buf->vl = NULL;<br>
> fenced_buf->validation_flags = 0;<br>
> ret = PIPE_OK;<br>
> @@ -776,40 +758,37 @@ fenced_buffer_validate(struct pb_buffer *buf,<br>
> assert(!(flags & ~PB_USAGE_GPU_READ_WRITE));<br>
> flags &= PB_USAGE_GPU_READ_WRITE;<br>
><br>
> - /* Buffer cannot be validated in two different lists */<br>
> - if(fenced_buf->vl && fenced_buf->vl != vl) {<br>
> + /* Buffer cannot be validated in two different lists. */<br>
> + if (fenced_buf->vl && fenced_buf->vl != vl) {<br>
> ret = PIPE_ERROR_RETRY;<br>
> goto done;<br>
> }<br>
><br>
> - if(fenced_buf->vl == vl &&<br>
> + if (fenced_buf->vl == vl &&<br>
> (fenced_buf->validation_flags & flags) == flags) {<br>
> - /* Nothing to do -- buffer already validated */<br>
> + /* Nothing to do -- buffer already validated. */<br>
> ret = PIPE_OK;<br>
> goto done;<br>
> }<br>
><br>
> - /*<br>
> - * Create and update GPU storage.<br>
> - */<br>
> - if(!fenced_buf->buffer) {<br>
> + /* Create and update GPU storage. */<br>
> + if (!fenced_buf->buffer) {<br>
> assert(!fenced_buf->mapcount);<br>
><br>
> ret = fenced_buffer_create_gpu_storage_locked(fenced_mgr, fenced_buf, TRUE);<br>
> - if(ret != PIPE_OK) {<br>
> + if (ret != PIPE_OK) {<br>
> goto done;<br>
> }<br>
><br>
> ret = fenced_buffer_copy_storage_to_gpu_locked(fenced_buf);<br>
> - if(ret != PIPE_OK) {<br>
> + if (ret != PIPE_OK) {<br>
> fenced_buffer_destroy_gpu_storage_locked(fenced_buf);<br>
> goto done;<br>
> }<br>
><br>
> - if(fenced_buf->mapcount) {<br>
> + if (fenced_buf->mapcount) {<br>
> debug_printf("warning: validating a buffer while it is still mapped\n");<br>
> - }<br>
> - else {<br>
> + } else {<br>
> fenced_buffer_destroy_cpu_storage_locked(fenced_buf);<br>
> }<br>
> }<br>
> @@ -821,7 +800,7 @@ fenced_buffer_validate(struct pb_buffer *buf,<br>
> fenced_buf->vl = vl;<br>
> fenced_buf->validation_flags |= flags;<br>
><br>
> -done:<br>
> + done:<br>
> pipe_mutex_unlock(fenced_mgr->mutex);<br>
><br>
> return ret;<br>
> @@ -841,13 +820,12 @@ fenced_buffer_fence(struct pb_buffer *buf,<br>
> assert(pipe_is_referenced(&fenced_buf->base.reference));<br>
> assert(fenced_buf->buffer);<br>
><br>
> - if(fence != fenced_buf->fence) {<br>
> + if (fence != fenced_buf->fence) {<br>
> assert(fenced_buf->vl);<br>
> assert(fenced_buf->validation_flags);<br>
><br>
> if (fenced_buf->fence) {<br>
> - boolean destroyed;<br>
> - destroyed = fenced_buffer_remove_locked(fenced_mgr, fenced_buf);<br>
> + boolean destroyed = fenced_buffer_remove_locked(fenced_mgr, fenced_buf);<br>
> assert(!destroyed);<br>
> }<br>
> if (fence) {<br>
> @@ -876,16 +854,15 @@ fenced_buffer_get_base_buffer(struct pb_buffer *buf,<br>
><br>
> pipe_mutex_lock(fenced_mgr->mutex);<br>
><br>
> - /*<br>
> - * This should only be called when the buffer is validated. Typically<br>
> + /* This should only be called when the buffer is validated. Typically<br>
> * when processing relocations.<br>
> */<br>
> assert(fenced_buf->vl);<br>
> assert(fenced_buf->buffer);<br>
><br>
> - if(fenced_buf->buffer)<br>
> + if (fenced_buf->buffer) {<br>
> pb_get_base_buffer(fenced_buf->buffer, base_buf, offset);<br>
> - else {<br>
> + } else {<br>
> *base_buf = buf;<br>
> *offset = 0;<br>
> }<br>
> @@ -896,12 +873,12 @@ fenced_buffer_get_base_buffer(struct pb_buffer *buf,<br>
><br>
> static const struct pb_vtbl<br>
> fenced_buffer_vtbl = {<br>
> - fenced_buffer_destroy,<br>
> - fenced_buffer_map,<br>
> - fenced_buffer_unmap,<br>
> - fenced_buffer_validate,<br>
> - fenced_buffer_fence,<br>
> - fenced_buffer_get_base_buffer<br>
> + fenced_buffer_destroy,<br>
> + fenced_buffer_map,<br>
> + fenced_buffer_unmap,<br>
> + fenced_buffer_validate,<br>
> + fenced_buffer_fence,<br>
> + fenced_buffer_get_base_buffer<br>
> };<br>
><br>
><br>
> @@ -917,12 +894,11 @@ fenced_bufmgr_create_buffer(struct pb_manager *mgr,<br>
> struct fenced_buffer *fenced_buf;<br>
> enum pipe_error ret;<br>
><br>
> - /*<br>
> - * Don't stall the GPU, waste time evicting buffers, or waste memory<br>
> + /* Don't stall the GPU, waste time evicting buffers, or waste memory<br>
> * trying to create a buffer that will most likely never fit into the<br>
> * graphics aperture.<br>
> */<br>
> - if(size > fenced_mgr->max_buffer_size) {<br>
> + if (size > fenced_mgr->max_buffer_size) {<br>
> goto no_buffer;<br>
> }<br>
><br>
> @@ -942,29 +918,21 @@ fenced_bufmgr_create_buffer(struct pb_manager *mgr,<br>
><br>
> pipe_mutex_lock(fenced_mgr->mutex);<br>
><br>
> - /*<br>
> - * Try to create GPU storage without stalling,<br>
> - */<br>
> + /* Try to create GPU storage without stalling. */<br>
> ret = fenced_buffer_create_gpu_storage_locked(fenced_mgr, fenced_buf, FALSE);<br>
><br>
> - /*<br>
> - * Attempt to use CPU memory to avoid stalling the GPU.<br>
> - */<br>
> - if(ret != PIPE_OK) {<br>
> + /* Attempt to use CPU memory to avoid stalling the GPU. */<br>
> + if (ret != PIPE_OK) {<br>
> ret = fenced_buffer_create_cpu_storage_locked(fenced_mgr, fenced_buf);<br>
> }<br>
><br>
> - /*<br>
> - * Create GPU storage, waiting for some to be available.<br>
> - */<br>
> - if(ret != PIPE_OK) {<br>
> + /* Create GPU storage, waiting for some to be available. */<br>
> + if (ret != PIPE_OK) {<br>
> ret = fenced_buffer_create_gpu_storage_locked(fenced_mgr, fenced_buf, TRUE);<br>
> }<br>
><br>
> - /*<br>
> - * Give up.<br>
> - */<br>
> - if(ret != PIPE_OK) {<br>
> + /* Give up. */<br>
> + if (ret != PIPE_OK) {<br>
> goto no_storage;<br>
> }<br>
><br>
> @@ -976,10 +944,10 @@ fenced_bufmgr_create_buffer(struct pb_manager *mgr,<br>
><br>
> return &fenced_buf->base;<br>
><br>
> -no_storage:<br>
> + no_storage:<br>
> pipe_mutex_unlock(fenced_mgr->mutex);<br>
> FREE(fenced_buf);<br>
> -no_buffer:<br>
> + no_buffer:<br>
> return NULL;<br>
> }<br>
><br>
> @@ -990,12 +958,12 @@ fenced_bufmgr_flush(struct pb_manager *mgr)<br>
> struct fenced_manager *fenced_mgr = fenced_manager(mgr);<br>
><br>
> pipe_mutex_lock(fenced_mgr->mutex);<br>
> - while(fenced_manager_check_signalled_locked(fenced_mgr, TRUE))<br>
> + while (fenced_manager_check_signalled_locked(fenced_mgr, TRUE))<br>
> ;<br>
> pipe_mutex_unlock(fenced_mgr->mutex);<br>
><br>
> assert(fenced_mgr->provider->flush);<br>
> - if(fenced_mgr->provider->flush)<br>
> + if (fenced_mgr->provider->flush)<br>
> fenced_mgr->provider->flush(fenced_mgr->provider);<br>
> }<br>
><br>
> @@ -1007,25 +975,25 @@ fenced_bufmgr_destroy(struct pb_manager *mgr)<br>
><br>
> pipe_mutex_lock(fenced_mgr->mutex);<br>
><br>
> - /* Wait on outstanding fences */<br>
> + /* Wait on outstanding fences. */<br>
> while (fenced_mgr->num_fenced) {<br>
> pipe_mutex_unlock(fenced_mgr->mutex);<br>
> #if defined(PIPE_OS_LINUX) || defined(PIPE_OS_BSD) || defined(PIPE_OS_SOLARIS)<br>
> sched_yield();<br>
> #endif<br>
> pipe_mutex_lock(fenced_mgr->mutex);<br>
> - while(fenced_manager_check_signalled_locked(fenced_mgr, TRUE))<br>
> + while (fenced_manager_check_signalled_locked(fenced_mgr, TRUE))<br>
> ;<br>
> }<br>
><br>
> #ifdef DEBUG<br>
> - /*assert(!fenced_mgr->num_unfenced);*/<br>
> + /* assert(!fenced_mgr->num_unfenced); */<br>
> #endif<br>
><br>
> pipe_mutex_unlock(fenced_mgr->mutex);<br>
> pipe_mutex_destroy(fenced_mgr->mutex);<br>
><br>
> - if(fenced_mgr->provider)<br>
> + if (fenced_mgr->provider)<br>
> fenced_mgr->provider->destroy(fenced_mgr->provider);<br>
><br>
> fenced_mgr->ops->destroy(fenced_mgr->ops);<br>
> --<br>
> 2.7.3<br>
> _______________________________________________<br>
> mesa-dev mailing list<br>
> <a href="mailto:mesa-dev@lists.freedesktop.org">mesa-dev@lists.freedesktop.org</a><br>
> <a href="https://lists.freedesktop.org/mailman/listinfo/mesa-dev">https://lists.freedesktop.org/mailman/listinfo/mesa-dev</a><br>
</p>