[PATCH] drm/vmwgfx: switch from ioremap_cache to memremap

Dan Williams dan.j.williams at intel.com
Tue Oct 13 09:35:12 PDT 2015


On Mon, Oct 12, 2015 at 10:18 PM, Thomas Hellstrom
<thellstrom at vmware.com> wrote:
> Hi!
>
> On 10/13/2015 12:35 AM, Dan Williams wrote:
>> Per commit 2e586a7e017a ("drm/vmwgfx: Map the fifo as cached"), the
>> driver expects the fifo registers to be cacheable.  In preparation
>> for deprecating ioremap_cache(), convert its usage in vmwgfx to
>> memremap().
>>
>> Cc: David Airlie <airlied at linux.ie>
>> Cc: Thomas Hellstrom <thellstrom at vmware.com>
>> Cc: Sinclair Yeh <syeh at vmware.com>
>> Cc: dri-devel at lists.freedesktop.org
>> Signed-off-by: Dan Williams <dan.j.williams at intel.com>
>
> While I have nothing against the conversion, what's stopping the
> compiler from reordering writes on a generic architecture, and from
> caching and reordering reads on x86 in particular? At the very least
> it looks to me like the memory accesses to the memremap'd memory need
> to be encapsulated within READ_ONCE and WRITE_ONCE.

Hmm, currently the code uses ioread32()/iowrite32(), which only perform
volatile accesses.  READ_ONCE()/WRITE_ONCE() likewise compile down to
volatile accesses, so they keep the compiler from caching, eliding, or
reordering those accesses relative to one another, but nothing more.
So I'm assuming all you need is the "no compiler re-ordering"
guarantee, but relying on that alone still seems fragile compared to
explicit fencing where the ordering matters.
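
To make that concrete, here is a minimal sketch of what the accessors
could look like after the conversion (the helper names and the
fifo_mem parameter are illustrative, not the driver's actual API):

#include <linux/compiler.h>     /* READ_ONCE / WRITE_ONCE */
#include <linux/types.h>

/*
 * Sketch only: fifo_mem is the plain (non-__iomem) pointer returned
 * by memremap(), and reg is a u32 index into the fifo.
 */
static inline u32 vmw_fifo_read(u32 *fifo_mem, u32 reg)
{
        /* volatile access: the compiler may not cache or fuse it */
        return READ_ONCE(fifo_mem[reg]);
}

static inline void vmw_fifo_write(u32 *fifo_mem, u32 reg, u32 val)
{
        /* volatile access: not reorderable vs. other *_ONCE accesses */
        WRITE_ONCE(fifo_mem[reg], val);
}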

If ordering needs to be guaranteed with respect to the agent on the
other side of the fifo, that can only be asserted via real barriers
(mb(), wmb()), which the driver already appears to use in the relevant
places.  ioread32() and iowrite32() as-is don't help with that case
either.
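
In other words, wherever the device must observe writes in a given
order, something like the following is needed no matter how the
individual accesses are annotated (a sketch: fifo_publish is a made-up
helper, and offsets are simplified to u32 indices with no wrap
handling; SVGA_FIFO_NEXT_CMD is the fifo's next-command slot):

/* wmb() comes from <asm/barrier.h> */
static void fifo_publish(u32 *fifo_mem, u32 next, u32 cmd)
{
        WRITE_ONCE(fifo_mem[next], cmd);        /* write the payload */

        /*
         * Make the payload globally visible before advancing the
         * pointer the device polls, so it never reads stale data.
         */
        wmb();

        WRITE_ONCE(fifo_mem[SVGA_FIFO_NEXT_CMD], next + 1);
}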

> How is this handled in the other conversions?

As far as I can see, the vmwgfx conversion is unique among the
ioremap_cache() users in implementing a device fifo in cached memory,
so the question doesn't arise in the other conversions.
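
For reference, the mechanical core of the conversion looks roughly
like this (a sketch; I'm assuming the fifo start and size live in
mmio_start/mmio_size, and error handling is omitted):

#include <linux/io.h>   /* memremap(), memunmap(), MEMREMAP_WB */

/* Before: a cached __iomem mapping, used via ioread32()/iowrite32() */
fifo_mem = ioremap_cache(dev_priv->mmio_start, dev_priv->mmio_size);
...
iounmap(fifo_mem);

/* After: a plain write-back mapping, used like ordinary memory */
fifo_mem = memremap(dev_priv->mmio_start, dev_priv->mmio_size,
                    MEMREMAP_WB);
...
memunmap(fifo_mem);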

