On Wed, Jan 27, 2021 at 12:21 PM Daniel Mentz <danielmentz@google.com> wrote:
On Fri, Jan 22, 2021 at 7:47 PM John Stultz <john.stultz@linaro.org> wrote:
+static int system_heap_clear_pages(struct page **pages, int num,
+                                   pgprot_t pgprot)
+{
+        void *addr = vmap(pages, num, VM_MAP, pgprot);
+
+        if (!addr)
+                return -ENOMEM;
+        memset(addr, 0, PAGE_SIZE * num);
+        vunmap(addr);
+        return 0;
+}
I thought that vmap/vunmap are expensive, and I am wondering if there's a faster way that avoids vmap.
How about lifting this code from lib/iov_iter.c:

static void memzero_page(struct page *page, size_t offset, size_t len)
{
        char *addr = kmap_atomic(page);
        memset(addr + offset, 0, len);
        kunmap_atomic(addr);
}
Or what about lifting that code from the old ion_cma_heap.c:

        if (PageHighMem(pages)) {
                unsigned long nr_clear_pages = nr_pages;
                struct page *page = pages;

                while (nr_clear_pages > 0) {
                        void *vaddr = kmap_atomic(page);

                        memset(vaddr, 0, PAGE_SIZE);
                        kunmap_atomic(vaddr);
                        page++;
                        nr_clear_pages--;
                }
        } else {
                memset(page_address(pages), 0, size);
        }
Though, that last memset only works because a CMA allocation is physically contiguous, so for the system heap's non-contiguous pages it would probably need to always do the kmap_atomic for each page, right?
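For illustration, a minimal sketch of that always-kmap_atomic variant, adapted for this reply rather than taken from the patch (note the pgprot argument falls away, since kmap_atomic doesn't take one):

static int system_heap_clear_pages(struct page **pages, int num)
{
        int i;

        /*
         * Map and zero one page at a time; this handles highmem and
         * non-contiguous pages alike, at the cost of a kmap/kunmap
         * pair per page.
         */
        for (i = 0; i < num; i++) {
                void *vaddr = kmap_atomic(pages[i]);

                memset(vaddr, 0, PAGE_SIZE);
                kunmap_atomic(vaddr);
        }
        return 0;
}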
I'm still a little worried whether this is right, as the current vmap-based implementation comes from the old ion_heap_sglist_zero logic, which similarly tried to batch the vmaps 32 pages at a time, presumably to amortize the mapping cost. But I'll give it a try.
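For reference, that batching pattern looks roughly like the following. This is a from-memory sketch of the ion_heap_sglist_zero idea over a page array; the clear_pages_batched() name and NUM_CLEAR_PAGES constant are illustrative, not the actual ion code:

#define NUM_CLEAR_PAGES 32

/*
 * Zero 'count' pages, vmap'ing up to NUM_CLEAR_PAGES at a time so
 * the cost of each vmap()/vunmap() is spread across a batch.
 */
static int clear_pages_batched(struct page **pages, int count, pgprot_t pgprot)
{
        int done = 0;

        while (done < count) {
                int n = min(count - done, NUM_CLEAR_PAGES);
                void *addr = vmap(pages + done, n, VM_MAP, pgprot);

                if (!addr)
                        return -ENOMEM;
                memset(addr, 0, PAGE_SIZE * n);
                vunmap(addr);
                done += n;
        }
        return 0;
}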
thanks
-john