<br><br>
<div class="gmail_quote">2011/4/4 Siarhei Siamashka <span dir="ltr"><a href="mailto:siarhei.siamashka@gmail.com">siarhei.siamashka@gmail.com</a></span><br>
<blockquote style="BORDER-LEFT: #ccc 1px solid; MARGIN: 0px 0px 0px 0.8ex; PADDING-LEFT: 1ex" class="gmail_quote">
<div class="im"> </div>Sorry, I was a little bit sick recently and dropped out for a while.<br>Hopefully everything can get back on track now.</blockquote>
<div> </div>
<div>Oh... Are you OK now? Take care, nothing is more important than health.</div>
<div>Anyway, welcome back :-)</div>
<div> </div>
<blockquote style="BORDER-LEFT: #ccc 1px solid; MARGIN: 0px 0px 0px 0.8ex; PADDING-LEFT: 1ex" class="gmail_quote">Doing aligned writes does not actually provide a really significant<br>improvement, but it's surely measurable and<br>
cumulative with other optimizations.<br></blockquote>
<div>I didn't test the performance of unaligned writes on ARM cores.</div>
<div>(I just simply thought about the destination alignment hint of the vst instructions.)</div>
<div>But on Intel Core2 Quad CPUs, an aligned-write memset32 (for solid filling) using rep stosd hits the theoretical memory bandwidth limit.</div>
<div>I think the x86 rep stos/movs instructions can give the CPU more chance to control burst write synchronization (even faster than the SSE2 one).</div>
<div>Aligned writes are somewhat HW-dependent, so I think it has to be carefully checked whether there is an actual performance gain, in spite of the alignment overhead, for scanlines of various lengths.</div>
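<div> </div>
<div>For illustration, a minimal sketch (not actual pixman code; DST, W and CLR are made-up register aliases for this example) of the align-then-fill pattern with the vst1 :128 alignment hint; the head/tail loops are where the alignment overhead for short scanlines comes from:</div>
<pre>
    .syntax unified
    .arm
    .text

DST .req r0                 /* destination pointer      */
W   .req r1                 /* width in pixels          */
CLR .req r2                 /* solid 32-bit color value */

    .global fill32_aligned_sketch
fill32_aligned_sketch:
    vdup.32     q0, CLR             /* replicate color across q0     */
0:  cmp         W, #4
    blt         2f                  /* too short for the bulk loop   */
    tst         DST, #15
    beq         1f
    str         CLR, [DST], #4      /* head: single stores (up to 3  */
    sub         W, W, #1            /* pixels) until 16-byte aligned */
    b           0b
1:  vst1.32     {q0}, [DST, :128]!  /* bulk: 4 pixels per store with */
    sub         W, W, #4            /* the :128 alignment hint       */
    cmp         W, #4
    bge         1b
2:  cmp         W, #0
    ble         3f
    str         CLR, [DST], #4      /* tail: remaining 0-3 pixels    */
    sub         W, W, #1
    b           2b
3:  bx          lr
</pre>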
<div> </div>
<blockquote style="BORDER-LEFT: #ccc 1px solid; MARGIN: 0px 0px 0px 0.8ex; PADDING-LEFT: 1ex" class="gmail_quote">
<div class="im"><br>+/* Combine function for operator OVER */<br>+.macro bilinear_combine_over dst_fmt, numpix<br></div>+ vpush { q4, q5 }<br><br>It's better to do vpush/vpop at the function entry/exit and not in the<br>
inner loop.<br></blockquote>
<div>According to the ARM EABI calling convention, only registers q4-q7 need to be preserved.</div>
<div>I didn't know about that at the time,</div>
<div>so I used some other registers to avoid the stack push/pop.</div>
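<div> </div>
<div>For reference, a minimal sketch of the structure you suggest (placeholder function and register names): per the AAPCS, only q4-q7 (d8-d15) are callee-saved, so they are pushed once at entry and popped once at exit rather than in the inner loop:</div>
<pre>
    .syntax unified
    .arm
    .text

    .global combine_sketch
combine_sketch:
    vpush       {q4-q7}         /* save callee-saved NEON regs once  */
0:
    /* ... per-pixel work may freely clobber q4-q7 here ... */
    subs        r1, r1, #1      /* r1 = remaining pixels (made up)   */
    bne         0b
    vpop        {q4-q7}         /* restore once at exit              */
    bx          lr
</pre>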
<div> </div>
<blockquote style="BORDER-LEFT: #ccc 1px solid; MARGIN: 0px 0px 0px 0.8ex; PADDING-LEFT: 1ex" class="gmail_quote">+/* Destination pixel load functions for bilinear_combine_XXXX */<br>+.macro bilinear_load_dst_8888 numpix<br>
+.if numpix == 4<br>+ pld [OUT, #64]<br><br>It's better to have the same N pixels ahead prefetch distance for both<br>source and destination.<br><br>The whole prefetch stuff works in the following way. Using PLD<br>
instruction, you poke some address in memory which is going to be<br>accessed soon, and then you need to keep CPU busy doing something else<br>for hundred(s) of cycles before the data is fetched into cache. So<br>it's not something like "let's prefetch one cache line ahead" because<br>
such prefetch distance can be easily too small. Optimal prefetch<br>distance depends on how heavy are the computations done in the<br>function per loop iteration. So for unscaled fast paths or nearest<br>scaling we need to prefetch really far ahead. For more CPU heavy<br>
bilinear interpolation, prefetch distance can be a lot smaller. But<br>the rough estimation is the following: if we prefetch N pixels ahead,<br>and we use X cycles per pixel, then memory controller has N * X cycles<br>time to fetch the needed data into cache before we attempt to access<br>
this data. Of course everything gets messy when memory bandwidth is<br>the bottleneck and we are waiting for memory anyway, also there is<br>cache line granularity to consider, etc. But increasing prefetch<br>distance in an experimental way until it stops providing performance<br>
improvement still works well in practice :) Still the point is that we<br>select some prefetch distance, and it is N pixels ahead, synchronized<br>for both source and destination fetches.<br></blockquote>
<div>I've done various experiments with the PLD instruction.</div>
<div>I removed the cache preload from the NEON fast path functions and then benchmarked; there was no difference in performance.</div>
<div>I tested some other NEON functions (like memcpy) in a similar way, but saw no difference at all.</div>
<div>As far as I know, Cortex-A8 has a preload engine (or maybe not, depending on the SoC integration?), but PLD is just a hint to the HW.</div>
<div>So it is implementation-dependent, right?</div>
<div>I need to investigate this further and to acquire various targets with different SoCs.</div>
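<div> </div>
<div>For example, something along the lines of your description could look like the sketch below (SRC, DST, W and PF_DIST are made-up names; the distance would have to be tuned experimentally per core), keeping the same prefetch distance for source and destination:</div>
<pre>
    .syntax unified
    .arm
    .text

SRC .req r1
DST .req r0
W   .req r2                 /* width in pixels (multiple of 4 here)  */
.set PF_DIST, 256           /* N pixels ahead, in bytes (N * 4)      */

    .global copy_prefetch_sketch
copy_prefetch_sketch:
0:  pld         [SRC, #PF_DIST]     /* poke src N pixels ahead...    */
    pld         [DST, #PF_DIST]     /* ...and dst at the same N      */
    vld1.32     {q0}, [SRC]!        /* the work here gives the       */
    vst1.32     {q0}, [DST]!        /* memory controller N * X       */
    subs        W, W, #4            /* cycles to fill the cache      */
    bgt         0b
    bx          lr
</pre>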
<div> </div></div>
<div>Thanks for your kind explanation about prefetching.<br clear="all">I've learned a lot of knowledge and ways of thinking from you and your code :-)</div>
<div>I really appreciate that.</div>
<div> </div>
<div>I've done some additional work on overlapped blit functions and on the bilinear filter with an A8 mask for the OVER and ADD operators.</div>
<div>(the schedule here is tight...)</div>
<div>I have some issues here (related to memory access patterns).</div>
<div>Maybe we can discuss this in my next posting.</div>
<div><br>-- <br>Best Regards,</div>
<div>Taekyun Kim</div><br>