<div dir="ltr"><div>To identity a performance problem I would look from at it from the following perspective.</div><div><br></div><div>SSE implementation:</div><div> 1. Alignment loop [alignment to 16 byte boundary] (max 3 pixels)<br></div><div> 2. Inner loop (4+ pixels)</div><div> 3. Trailing loop (max 3 pixels).</div><div><br></div><div> Worst case: 6 pixels avoiding inner loop completely.<br></div><div> Minimum number of pixels to enter inner loop: 4.</div><div><br></div><div>AVX2 implementation:</div><div> 1. Alignment loop [alignment to 32 byte boundary] (max 7 pixels)<br></div><div><div> 2. Inner loop (8+ pixels)</div><div> 3. Trailing loop (max 7 pixels).</div><div><br></div><div> Worst case: 14 pixels avoiding inner loop completely.<br></div><div><div> Minimum number of pixels to enter inner loop: 8.</div><br class="gmail-Apple-interchange-newline"></div><div>Based on these observations I would suggest doing this smarter and introducing a 4 pixel case that would be able to speed-up spans that would normally not reach inner loop at all or that would spend too much time in single pixel loops. The reason is simple - single pixel loop would probably be very comparable to inner loop in terms of performance because of tight dependency and impossibility to execute anything in single-pixel case in parallel.</div><div><br></div><div>Decreasing the array size gradually in the initial benchmark to some small width like 4 would definitely reveal all the problems with the current implementation. I think minimum value of 400 pixels is just too high for any reasonable conclusion about performance of such optimization, because 2D is usually about smaller fills/blits most of the time.</div><div><br></div></div><div>- Petr</div><div><div><br></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Jan 23, 2019 at 3:40 PM Devulapalli, Raghuveer <<a href="mailto:raghuveer.devulapalli@intel.com">raghuveer.devulapalli@intel.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hmm, I wonder why none of the cairo perf traces don’t show any improvement when the low level benchmark in PIXMAN shows 1.8x improvement. I would like to investigate that further to know what's happening (might need your help in doing so). Please do include the avx2_composite_over_reverse_n_8888 patch. If you know specific functions where avx2 is likely to benefit, I am happy to work with you to add those functionalities in avx2. <br>

For the ones I did submit, would you like me to fix the patch which adds the AVX2 infrastructure and submit that again? And do you still want the patch which splits the SSE2 helper functions into a separate file?

Thanks for your help.

-----Original Message-----
From: Matt Turner [mailto:mattst88@gmail.com]
Sent: Monday, January 21, 2019 5:46 PM
To: Devulapalli, Raghuveer <raghuveer.devulapalli@intel.com>
Cc: pixman@lists.freedesktop.org
Subject: Re: [Pixman] [PATCH 3/3] Rev2 of patch: AVX2 versions of OVER and ROVER operators.

On Wed, Jan 16, 2019 at 4:57 PM Raghuveer Devulapalli <raghuveer.devulapalli@intel.com> wrote:
>
> From: raghuveer devulapalli <raghuveer.devulapalli@intel.com>
>
> These were found to be up to 1.8 times faster (depending on the array
> size) than the corresponding SSE2 version. The AVX2 and SSE2 versions
> were benchmarked on an Intel(R) Core(TM) i5-6260U CPU @ 1.80GHz, by
> measuring how many TSC cycles each of the avx2_combine_over_u and
> sse2_combine_over_u functions took to run for various array sizes. For
> the purpose of benchmarking, turbo was disabled and the intel_pstate
> governor was set to performance to avoid variance in CPU frequencies
> across multiple runs.
>
> | Array size | #cycles SSE2 | #cycles AVX2 |
> --------------------------------------------
> |        400 |        53966 |        32800 |
> |        800 |       107595 |        62595 |
> |       1600 |       214810 |       122482 |
> |       3200 |       429748 |       241971 |
> |       6400 |       859070 |       481076 |
>
> I also ran lowlevel-blt-bench for the OVER_8888_8888 operation, and it
> shows a 1.55x-1.79x improvement over SSE2. Here are the details:
>
> AVX2: OVER_8888_8888 = L1:2136.35 L2:2109.46 M:1751.99 ( 60.90%)
> SSE2: OVER_8888_8888 = L1:1188.91 L2:1190.63 M:1128.32 ( 40.31%)
>
> The AVX2 implementation uses the SSE2 version for manipulating pixels
> that are not 32 byte aligned. The helper functions from pixman-sse2.h
> are re-used for this purpose.

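(As an aside, the TSC measurement described above presumably boils down to a harness along these lines. This is only an illustrative sketch with placeholder names; inside pixman the avx2/sse2 combine functions are static and also take imp and op, as in the patch below, so a real harness would have to live in the same translation unit or go through lowlevel-blt-bench.)

/* Illustrative only: average TSC cycles for one call of a combine-style
 * function over a w-pixel array.  combine_func_t is a placeholder type,
 * not the actual pixman combine signature.  */
#include <stdint.h>
#include <x86intrin.h>          /* __rdtsc() */

typedef void (*combine_func_t) (uint32_t *pd, const uint32_t *ps, int w);

static uint64_t
measure_cycles (combine_func_t f, uint32_t *dst, const uint32_t *src,
                int w, int iters)
{
    uint64_t start, end;
    int i;

    f (dst, src, w);            /* warm up caches and branch predictors */

    start = __rdtsc ();
    for (i = 0; i < iters; i++)
        f (dst, src, w);
    end = __rdtsc ();

    return (end - start) / iters;
}
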
I still cannot measure any performance improvement with cairo-traces. If we're not improving performance in any real world application, then I don't think it's worth adding a significant amount of code.

As I told you in person and in private mail, I suspect that you're more likely to see real performance improvements in operations that are more compute-heavy, like bilinear filtering. You could probably use AVX2's gather instructions in the bilinear code as well. Filling out the avx2_iters array would also be a good place to start, since those functions execute when we do not have a specific fast-path for an operation (which will be the case for AVX2).
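
(To make the gather idea concrete: with AVX2 one can fetch eight a8r8g8b8 texels at arbitrary offsets in a single instruction. The sketch below is purely illustrative; the names are made up and it is not code from pixman's bilinear path.)

/* Hypothetical: gather eight 32-bit pixels whose indices (x + y * stride)
 * were computed by the bilinear setup.  scale = 4 because each pixel is
 * 4 bytes and 'offsets' holds pixel indices, not byte offsets.  */
#include <stdint.h>
#include <immintrin.h>

static inline __m256i
gather_8_pixels (const uint32_t *bits, __m256i offsets)
{
    return _mm256_i32gather_epi32 ((const int *) bits, offsets, 4);
}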

I sense that you want to check this off your todo list and move on. If that's the case, we can include the avx2_composite_over_reverse_n_8888 function I wrote (and will send as a separate patch) to confirm that using AVX2 is capable of giving a performance improvement in some cairo traces.

> ---
> pixman/pixman-avx2.c | 431
> ++++++++++++++++++++++++++++++++++++++++++-
> 1 file changed, 430 insertions(+), 1 deletion(-)
>
> diff --git a/pixman/pixman-avx2.c b/pixman/pixman-avx2.c index
> d860d67..faef552 100644
> --- a/pixman/pixman-avx2.c
> +++ b/pixman/pixman-avx2.c
> @@ -6,13 +6,439 @@
> #include "pixman-private.h"
> #include "pixman-combine32.h"
> #include "pixman-inlines.h"
> +#include "pixman-sse2.h"
>
> +#define MASK_0080_AVX2 _mm256_set1_epi16(0x0080) #define
> +MASK_00FF_AVX2 _mm256_set1_epi16(0x00ff) #define MASK_0101_AVX2
> +_mm256_set1_epi16(0x0101)
> +
> +static force_inline __m256i
> +load_256_aligned (__m256i* src)
> +{
> + return _mm256_load_si256(src);
> +}
> +
> +static force_inline void
> +negate_2x256 (__m256i data_lo,
> + __m256i data_hi,
> + __m256i* neg_lo,
> + __m256i* neg_hi)
> +{
> + *neg_lo = _mm256_xor_si256 (data_lo, MASK_00FF_AVX2);
> + *neg_hi = _mm256_xor_si256 (data_hi, MASK_00FF_AVX2); }
> +
> +static force_inline __m256i
> +pack_2x256_256 (__m256i lo, __m256i hi) {
> + return _mm256_packus_epi16 (lo, hi); }
> +

Stray space

> +static force_inline void
> +pix_multiply_2x256 (__m256i* data_lo,
> + __m256i* data_hi,
> + __m256i* alpha_lo,
> + __m256i* alpha_hi,
> + __m256i* ret_lo,
> + __m256i* ret_hi)
> +{
> + __m256i lo, hi;
> +
> + lo = _mm256_mullo_epi16 (*data_lo, *alpha_lo);
> + hi = _mm256_mullo_epi16 (*data_hi, *alpha_hi);
> + lo = _mm256_adds_epu16 (lo, MASK_0080_AVX2);
> + hi = _mm256_adds_epu16 (hi, MASK_0080_AVX2);
> + *ret_lo = _mm256_mulhi_epu16 (lo, MASK_0101_AVX2);
> + *ret_hi = _mm256_mulhi_epu16 (hi, MASK_0101_AVX2); }
> +

Stray space

> +static force_inline void
> +over_2x256 (__m256i* src_lo,
> + __m256i* src_hi,
> + __m256i* alpha_lo,
> + __m256i* alpha_hi,
> + __m256i* dst_lo,
> + __m256i* dst_hi)
> +{
> + __m256i t1, t2;
> +
> + negate_2x256 (*alpha_lo, *alpha_hi, &t1, &t2);
> +
> + pix_multiply_2x256 (dst_lo, dst_hi, &t1, &t2, dst_lo, dst_hi);
> +
> + *dst_lo = _mm256_adds_epu8 (*src_lo, *dst_lo);
> + *dst_hi = _mm256_adds_epu8 (*src_hi, *dst_hi); }
> +
> +static force_inline void
> +expand_alpha_2x256 (__m256i data_lo,
> + __m256i data_hi,
> + __m256i* alpha_lo,
> + __m256i* alpha_hi) {
> + __m256i lo, hi;
> +
> + lo = _mm256_shufflelo_epi16 (data_lo, _MM_SHUFFLE (3, 3, 3, 3));
> + hi = _mm256_shufflelo_epi16 (data_hi, _MM_SHUFFLE (3, 3, 3, 3));
> +
> + *alpha_lo = _mm256_shufflehi_epi16 (lo, _MM_SHUFFLE (3, 3, 3, 3));
> + *alpha_hi = _mm256_shufflehi_epi16 (hi, _MM_SHUFFLE (3, 3, 3,
> +3)); }
> +
> +static force_inline void
> +unpack_256_2x256 (__m256i data, __m256i* data_lo, __m256i* data_hi) {
> + *data_lo = _mm256_unpacklo_epi8 (data, _mm256_setzero_si256 ());
> + *data_hi = _mm256_unpackhi_epi8 (data, _mm256_setzero_si256 ());
> +}
> +
> +/* save 4 pixels on a 16-byte boundary aligned address */ static
> +force_inline void save_256_aligned (__m256i* dst,
> + __m256i data)
> +{
> + _mm256_store_si256 (dst, data);
> +}
> +
> +static force_inline int
> +is_opaque_256 (__m256i x)
> +{
> + __m256i ffs = _mm256_cmpeq_epi8 (x, x);
> +
> + return (_mm256_movemask_epi8
> + (_mm256_cmpeq_epi8 (x, ffs)) & 0x88888888) == 0x88888888;
> +}
> +
> +static force_inline int
> +is_zero_256 (__m256i x)
> +{
> + return _mm256_movemask_epi8 (
> + _mm256_cmpeq_epi8 (x, _mm256_setzero_si256 ())) == 0xffffffff;
> +}
> +
> +static force_inline int
> +is_transparent_256 (__m256i x)
> +{
> + return (_mm256_movemask_epi8 (
> + _mm256_cmpeq_epi8 (x, _mm256_setzero_si256 ())) & 0x88888888)
> + == 0x88888888;
> +}
> +
> +

Extra newline

> +/* load 4 pixels from a unaligned address */ static force_inline
> +__m256i load_256_unaligned (const __m256i* src) {
> + return _mm256_loadu_si256 (src);
> +}
> +
> +static force_inline __m256i
> +combine8 (const __m256i *ps, const __m256i *pm) {
> + __m256i ymm_src_lo, ymm_src_hi;
> + __m256i ymm_msk_lo, ymm_msk_hi;
> + __m256i s;
> +
> + if (pm)
> + {
> + ymm_msk_lo = load_256_unaligned (pm);
> +
> + if (is_transparent_256 (ymm_msk_lo))
> + return _mm256_setzero_si256 ();
> + }
> +
> + s = load_256_unaligned (ps);
> +
> + if (pm)
> + {
> + unpack_256_2x256 (s, &ymm_src_lo, &ymm_src_hi);
> + unpack_256_2x256 (ymm_msk_lo, &ymm_msk_lo, &ymm_msk_hi);
> +
> + expand_alpha_2x256 (ymm_msk_lo, ymm_msk_hi, &ymm_msk_lo,
> + &ymm_msk_hi);
> +
> + pix_multiply_2x256 (&ymm_src_lo, &ymm_src_hi,
> + &ymm_msk_lo, &ymm_msk_hi,
> + &ymm_src_lo, &ymm_src_hi);
> +
> + s = pack_2x256_256 (ymm_src_lo, ymm_src_hi);
> + }
> +
> + return s;
> +}
> +
> +static force_inline void
> +core_combine_over_u_avx2_mask (uint32_t * pd,
> + const uint32_t* ps,
> + const uint32_t* pm,
> + int w)
> +{
> + uint32_t s, d;
> + while (w && ((uintptr_t)pd & 31))
> + {
> + d = *pd;
> + s = combine1 (ps, pm);
> +
> + if (s)
> + *pd = core_combine_over_u_pixel_sse2 (s, d);
> + pd++;
> + ps++;
> + pm++;
> + w--;
> + }
> +

Is the alignment loop here actually important for performance? As far as I know, unaligned loads are cheap on all CPUs with AVX2. We might be able to do unaligned loads/stores in the loop below and get into the vectorized code sooner.
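
For the unmasked variant that could look roughly like the sketch below (untested, reusing the helper names from the patch; save_256_unaligned stands in as a hypothetical wrapper around _mm256_storeu_si256, and the existing single-pixel tail loop for w < 8 stays as it is):

    /* Sketch: no alignment prologue, unaligned loads/stores in the main loop.
     * save_256_unaligned () is hypothetical (a _mm256_storeu_si256 wrapper). */
    while (w >= 8)
    {
        __m256i src = load_256_unaligned ((__m256i *)ps);

        if (!is_zero_256 (src))
        {
            if (is_opaque_256 (src))
            {
                save_256_unaligned ((__m256i *)pd, src);
            }
            else
            {
                __m256i dst = load_256_unaligned ((__m256i *)pd);
                __m256i src_lo, src_hi, dst_lo, dst_hi, alpha_lo, alpha_hi;

                unpack_256_2x256 (src, &src_lo, &src_hi);
                unpack_256_2x256 (dst, &dst_lo, &dst_hi);
                expand_alpha_2x256 (src_lo, src_hi, &alpha_lo, &alpha_hi);
                over_2x256 (&src_lo, &src_hi, &alpha_lo, &alpha_hi,
                            &dst_lo, &dst_hi);

                save_256_unaligned ((__m256i *)pd,
                                    pack_2x256_256 (dst_lo, dst_hi));
            }
        }

        ps += 8;
        pd += 8;
        w -= 8;
    }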

> + /*
> + * dst is 32 byte aligned, and w >=8 means the next 256 bits
> + * contain relevant data
> + */
> +

Stray whitespace, and */ is not aligned

> + while (w >= 8)
> + {
> + __m256i mask = load_256_unaligned ((__m256i *)pm);
> +
> + if (!is_zero_256 (mask))
> + {
> + __m256i src;
> + __m256i src_hi, src_lo;
> + __m256i mask_hi, mask_lo;
> + __m256i alpha_hi, alpha_lo;
> +
> + src = load_256_unaligned ((__m256i *)ps);
> +
> + if (is_opaque_256 (_mm256_and_si256 (src, mask)))
> + {
> + save_256_aligned ((__m256i *)pd, src);
> + }
> + else
> + {
> + __m256i dst = load_256_aligned ((__m256i *)pd);
> + __m256i dst_hi, dst_lo;
> +
> + unpack_256_2x256 (mask, &mask_lo, &mask_hi);
> + unpack_256_2x256 (src, &src_lo, &src_hi);
> +
> + expand_alpha_2x256 (mask_lo, mask_hi, &mask_lo, &mask_hi);
> + pix_multiply_2x256 (&src_lo, &src_hi,
> + &mask_lo, &mask_hi,
> + &src_lo, &src_hi);
> +

Stray spaces again

> + unpack_256_2x256 (dst, &dst_lo, &dst_hi);
> + expand_alpha_2x256 (src_lo, src_hi,
> + &alpha_lo, &alpha_hi);
> +
> + over_2x256 (&src_lo, &src_hi, &alpha_lo, &alpha_hi,
> + &dst_lo, &dst_hi);
> +
> + save_256_aligned (
> + (__m256i *)pd,
> + pack_2x256_256 (dst_lo, dst_hi));
> + }
> + }
> + pm += 8;
> + ps += 8;
> + pd += 8;
> + w -= 8;
> + }
> +
> + while (w)
> + {
> + d = *pd;
> + s = combine1 (ps, pm);
> +
> + if (s)
> + *pd = core_combine_over_u_pixel_sse2 (s, d);
> + pd++;
> + ps++;
> + pm++;
> + w--;
> + }
> +}
> +
> +static force_inline void
> +core_combine_over_u_avx2_no_mask (uint32_t * pd,
> + const uint32_t* ps,
> + int w)
> +{
> + uint32_t s, d;
> +
> + /* Align dst on a 16-byte boundary */
> + while (w && ((uintptr_t)pd & 31))
> + {
> + d = *pd;
> + s = *ps;
> +
> + if (s)
> + *pd = core_combine_over_u_pixel_sse2 (s, d);
> + pd++;
> + ps++;
> + w--;
> + }
> +
> + while (w >= 8)
> + {
> + __m256i src;
> + __m256i src_hi, src_lo, dst_hi, dst_lo;
> + __m256i alpha_hi, alpha_lo;
> +
> + src = load_256_unaligned ((__m256i *)ps);
> +
> + if (!is_zero_256 (src))
> + {
> + if (is_opaque_256 (src))
> + {
> + save_256_aligned ((__m256i *)pd, src);
> + }
> + else
> + {
> + __m256i dst = load_256_aligned ((__m256i *)pd);
> +
> + unpack_256_2x256 (src, &src_lo, &src_hi);
> + unpack_256_2x256 (dst, &dst_lo, &dst_hi);
> +
> + expand_alpha_2x256 (src_lo, src_hi,
> + &alpha_lo, &alpha_hi);
> + over_2x256 (&src_lo, &src_hi, &alpha_lo, &alpha_hi,
> + &dst_lo, &dst_hi);
> +
> + save_256_aligned (
> + (__m256i *)pd,
> + pack_2x256_256 (dst_lo, dst_hi));
> + }
> + }
> +
> + ps += 8;
> + pd += 8;
> + w -= 8;
> + }

Should have a blank line here

> + while (w)
> + {
> + d = *pd;
> + s = *ps;
> +
> + if (s)
> + *pd = core_combine_over_u_pixel_sse2 (s, d);
> + pd++;
> + ps++;
> + w--;
> + }
> +}
> +
> +static force_inline void
> +avx2_combine_over_u (pixman_implementation_t *imp,
> + pixman_op_t op,
> + uint32_t * pd,
> + const uint32_t * ps,
> + const uint32_t * pm,
> + int w)
> +{
> + if (pm)
> + core_combine_over_u_avx2_mask (pd, ps, pm, w);
> + else
> + core_combine_over_u_avx2_no_mask (pd, ps, w); }
> +
> +static void
> +avx2_combine_over_reverse_u (pixman_implementation_t *imp,
> + pixman_op_t op,
> + uint32_t * pd,
> + const uint32_t * ps,
> + const uint32_t * pm,
> + int w)
> +{
> + uint32_t s, d;
> +
> + __m256i ymm_dst_lo, ymm_dst_hi;
> + __m256i ymm_src_lo, ymm_src_hi;
> + __m256i ymm_alpha_lo, ymm_alpha_hi;
> +
> + /* Align dst on a 16-byte boundary */
> + while (w &&
> + ((uintptr_t)pd & 31))
> + {
> + d = *pd;
> + s = combine1 (ps, pm);
> +
> + *pd++ = core_combine_over_u_pixel_sse2 (d, s);
> + w--;
> + ps++;
> + if (pm)
> + pm++;
> + }
> +
> + while (w >= 8)
> + {
> + ymm_src_hi = combine8 ((__m256i*)ps, (__m256i*)pm);
> + ymm_dst_hi = load_256_aligned ((__m256i*) pd);
> +
> + unpack_256_2x256 (ymm_src_hi, &ymm_src_lo, &ymm_src_hi);
> + unpack_256_2x256 (ymm_dst_hi, &ymm_dst_lo, &ymm_dst_hi);
> +
> + expand_alpha_2x256 (ymm_dst_lo, ymm_dst_hi,
> + &ymm_alpha_lo, &ymm_alpha_hi);
> +
> + over_2x256 (&ymm_dst_lo, &ymm_dst_hi,
> + &ymm_alpha_lo, &ymm_alpha_hi,
> + &ymm_src_lo, &ymm_src_hi);
> +
> + /* rebuid the 4 pixel data and save*/
> + save_256_aligned ((__m256i*)pd,
> + pack_2x256_256 (ymm_src_lo, ymm_src_hi));
> +
> + w -= 8;
> + ps += 8;
> + pd += 8;
> +
> + if (pm)
> + pm += 8;
> + }
> +
> + while (w)
> + {
> + d = *pd;
> + s = combine1 (ps, pm);
> +
> + *pd++ = core_combine_over_u_pixel_sse2 (d, s);
> + ps++;
> + w--;
> + if (pm)
> + pm++;
> + }
> +}
> +
> +static void
> +avx2_composite_over_8888_8888 (pixman_implementation_t *imp,
> + pixman_composite_info_t *info) {
> + PIXMAN_COMPOSITE_ARGS (info);
> + int dst_stride, src_stride;
> + uint32_t *dst_line, *dst;
> + uint32_t *src_line, *src;
> +
> + PIXMAN_IMAGE_GET_LINE (
> + dest_image, dest_x, dest_y, uint32_t, dst_stride, dst_line, 1);
> + PIXMAN_IMAGE_GET_LINE (
> + src_image, src_x, src_y, uint32_t, src_stride, src_line, 1);
> +
> + dst = dst_line;
> + src = src_line;
> +
> + while (height--)
> + {
> + avx2_combine_over_u (imp, op, dst, src, NULL, width);
> +
> + dst += dst_stride;
> + src += src_stride;
> + }
> +}

Should have a blank line here

> static const pixman_fast_path_t avx2_fast_paths[] = {
> + PIXMAN_STD_FAST_PATH (OVER, a8r8g8b8, null, a8r8g8b8, avx2_composite_over_8888_8888),
> + PIXMAN_STD_FAST_PATH (OVER, a8r8g8b8, null, x8r8g8b8, avx2_composite_over_8888_8888),
> + PIXMAN_STD_FAST_PATH (OVER, a8b8g8r8, null, a8b8g8r8, avx2_composite_over_8888_8888),
> + PIXMAN_STD_FAST_PATH (OVER, a8b8g8r8, null, x8b8g8r8,
> + avx2_composite_over_8888_8888),
> { PIXMAN_OP_NONE },
> };
>
> -static const pixman_iter_info_t avx2_iters[] =
> +static const pixman_iter_info_t avx2_iters[] =

There was stray whitespace added in the earlier commit here, and now removed. Please just remove it from the original commit.

> {
> { PIXMAN_null },
> };
> @@ -26,6 +452,9 @@ _pixman_implementation_create_avx2 (pixman_implementation_t *fallback)
> pixman_implementation_t *imp = _pixman_implementation_create
> (fallback, avx2_fast_paths);
>
> /* Set up function pointers */
> + imp->combine_32[PIXMAN_OP_OVER] = avx2_combine_over_u;
> + imp->combine_32[PIXMAN_OP_OVER_REVERSE] =
> + avx2_combine_over_reverse_u;
> +

More stray spaces


> imp->iter_info = avx2_iters;
>
> return imp;
> --
> 2.17.1
>
> _______________________________________________
> Pixman mailing list
> Pixman@lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/pixman
_______________________________________________
Pixman mailing list
Pixman@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/pixman