[PATCH v5 1/3] drm/shmem: add support for per object caching flags.
Thomas Hellström (VMware)
thomas_os at shipmail.org
Thu Feb 27 08:10:59 UTC 2020
On 2/27/20 8:53 AM, Gerd Hoffmann wrote:
> Hi,
>
>>> + if (!shmem->map_cached)
>>> + prot = pgprot_writecombine(prot);
>>> shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
>>> - VM_MAP, pgprot_writecombine(PAGE_KERNEL));
>>> + VM_MAP, prot);
>>
>> Wouldn't a vmap with pgprot_writecombine() create conflicting mappings with
>> the linear kernel map, which is not write-combined?
> I think so, yes.
>
>> Or do you change the linear kernel map of the shmem pages somewhere?
> Haven't seen anything doing so while browsing the code.
>
>> vmap bypasses at least the
>> x86 PAT core mapping consistency check, and this could potentially cause
>> spuriously overwritten memory.
> Well, I don't think the linear kernel map is ever used to access the
> shmem gem objects. So while this isn't exactly clean, it shouldn't
> cause problems in practice.
>
> Suggestions how to fix that?
>
So this has historically caused problems, since the linear kernel map
can be accessed by speculative prefetching even if it's never used
explicitly. Some processors, like the AMD Athlon, would actually even
write back the prefetched cache contents without ever having used them.
Also, the linear kernel map could already be cached somewhere because
of the page's previous usage (hibernation, for example?).
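For reference, the way to keep the mappings consistent on x86 would be
to transition the linear map before setting up the write-combined vmap,
roughly like ttm does with its page pools. Untested sketch only (the
helper name is made up, and set_pages_array_wc() / set_pages_array_wb()
are x86-only, so this would need arch guards):

	#include <linux/vmalloc.h>
	#include <asm/set_memory.h>

	/* Make the linear kernel map agree with the WC alias. */
	static void *shmem_vmap_wc(struct page **pages,
				   unsigned int num_pages)
	{
		void *vaddr;

		/* Transition the linear map to write-combined first, so
		 * the vmap below doesn't create a conflicting alias. */
		if (set_pages_array_wc(pages, num_pages))
			return NULL;

		vaddr = vmap(pages, num_pages, VM_MAP,
			     pgprot_writecombine(PAGE_KERNEL));
		if (!vaddr)
			set_pages_array_wb(pages, num_pages);
		return vaddr;
	}

The pages would also need to go back to write-back with
set_pages_array_wb() before being released to shmem again, which is
exactly the sort of bookkeeping that made ttm grow its page pools.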
Leaving the alias in place might be safe for some integrated graphics,
where the driver maintainers can guarantee it's safe on all processors
used with that driver, but then IMO the write-combining should be moved
out to those drivers. Other drivers needing write-combine shouldn't
really be using shmem.
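(With the per-object flag from this patch, a driver that knows cached
mappings are fine on its hardware would then opt out of write-combine
at the call site; map_cached is the field from the quoted hunk, the
rest is a made-up example with error handling shortened:)

	struct drm_gem_shmem_object *shmem;

	shmem = drm_gem_shmem_create(dev, size);
	if (IS_ERR(shmem))
		return PTR_ERR(shmem);

	/* Cached kernel mapping, matching the linear map: no alias. */
	shmem->map_cached = true;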
So again, to fix the regression, could we revert 0be895893607f
("drm/shmem: switch shmem helper to &drm_gem_object_funcs.mmap") or does
that have other implications?
/Thomas