drm/radeon: "ring test failed" on PA-RISC Linux

Alex Ivanov gnidorah at p0n4ik.tk
Wed Sep 25 11:51:04 PDT 2013


> You'd want to add the flush to r100_ring_test() in r100.c.
> radeon_cp.c is for the old UMS support.


Thanks for the code! I'll try asap.

25.09.2013, 21:28, "Konrad Rzeszutek Wilk" <konrad.wilk at oracle.com>:
> I took a look at the arch/parisc/kernel/pci-dma.c and I see that
> is mostly a flat platform. That is bus addresses == physical addresses.
> Unless it is a pcxl or pcxl2 CPU type (huh?) - if it is one of those,
> then any calls to dma_alloc_coherent will map memory out of a pool.
> In essence it will look like a SWIOTLB bounce buffer.

** PARISC 1.1 Dynamic DMA mapping support.
** This implementation is for PA-RISC platforms that do not support
** I/O TLBs (aka DMA address translation hardware).

That's very old. PA-RISC 2.0 came into the game circa 1996.
PA-RISC 1.1 is 32-bit only, and I'm not even sure whether those machines
had a PCI bus.

Only old boxes (PA7200 CPU and lower) cannot use dma_alloc_coherent()
(and are forced to do syncs, IIRC). That's not our case.
And the PA-RISC configs have 'Discontiguous Memory' chosen.

> But interestingly enough there are a lot of 'flush_kernel_dcache_range'
> calls for every DMA operation. And I think you need to do a
> dma_sync_for_cpu call in radeon_test_writeback for it to
> use the flush_kernel_dcache_range. I don't know what
> flush_kernel_dcache_range does though, so I could be wrong.

D-cache is a CPU cache (if that's what they meant).
It seems to be at the L1 level; there is an I-cache at the same level.

> That means you can ignore the little code below I wrote and
> see about doing something like this:
> diff --git a/drivers/gpu/drm/radeon/radeon_cp.c b/drivers/gpu/drm/radeon/radeon_cp.c
> index 3cae2bb..9e5923d 100644
> --- a/drivers/gpu/drm/radeon/radeon_cp.c
> +++ b/drivers/gpu/drm/radeon/radeon_cp.c
> @@ -876,6 +876,7 @@ static void radeon_test_writeback(drm_radeon_private_t * dev_priv)
>          RADEON_WRITE(RADEON_SCRATCH_REG1, 0xdeadbeef);
> + flush_kernel_dcache_range(dev_priv->ring_rptr, PAGE_SIZE);
>          for (tmp = 0; tmp < dev_priv->usec_timeout; tmp++) {
>                  u32 val;
> But that is probably a shot in the dark. I have no clue what the flush_..
> is doing.
> [edit: And then I noticed sba_iommu.c, which is a complete IOMMU driver
> where bus and physical addresses are different. sigh. What type of machine
> is this? Does it have the IOMMU in it?]

That's our case.
Yes, recent IA64 and PA-RISC machines have an SBA IOMMU device. PCI I/O
seems to go through it. There is a note about my chipset in sba_iommu.c:

/* We are just "encouraging" 32-bit DMA masks here since we can
 * never allow IOMMU bypass unless we add special support for ZX1. */

And it is indeed right. When I tried to bypass the hw IOMMU like the ia64
code does, it led to faults from the drivers which do DMA (like Fusion MPT SCSI).

>>                  void *va = bus_to_virt(gtt->ttm.dma_address[i]);
>>                  if ((phys_addr_t) va != virt_to_bus(va)) {
> You are missing a translation here (you were comparing the virtual address
> to the bus address). I was thinking something along this:

Yes, this confused me. I translated your suggestion too literally :\

>                 unsigned int pfn = page_to_pfn(ttm->pages[i]);
>                 dma_addr_t bus =  gtt->ttm.dma_address[i];
>                 void *va_bus, *va, *va_pfn;
>                 if ((pfn << PAGE_SHIFT) != bus)
>                         printk("Bus 0x%lx != PFN 0x%lx\n", bus, pfn << PAGE_SHIFT);
>                         /* OK, that means bus addresses are different */
>                 va_bus = bus_to_virt(gtt->ttm.dma_address[i]);
>                 va_pfn = __va(pfn << PAGE_SHIFT);
>                 if (!virt_addr_valid(va_bus))
>                         printk("va_bus (0x%lx) not good!\n", va_bus);
>                 if (!virt_addr_valid(va_pfn))
>                         printk("va_pfn (0x%lx) not good!\n", va_pfn);
>                 /* We got VA for both bus -> va, and pfn -> va. Should be the
>                    same if bus and physical addresses are on the same namespace. */
>                 if (va_bus != va_pfn)
>                         printk("va bus:%lx != va pfn: %lx\n", va_bus, va_pfn);
>                 /* Now that we have bus -> pa -> va (va_bus) try to go va_bus -> bus address.
>                    The bus address should be the same */
>                 if (gtt->ttm.dma_address[i] != virt_to_bus(va_bus))
>                         printk("bus->pa->va:%lx != bus->pa->va->ba: %lx\n", gtt->ttm.dma_address[i], virt_to_bus(va_bus));
>>                       DRM_INFO("MISMATCH: %p != %p\n", va, (void *) virt_to_bus(va));
>>                       /*DRM_INFO("CONTENTS: %x\n", *((uint32_t *)va));*/ // Leads to a Kernel Fault
> That is odd. I would have thought it would be usable.
>>                       ...
>>                  }
>>  I'm getting the output:
>>  [drm] MISMATCH: 0000000080280000 != 0000000040280000
> In theory that means the bus address that is programmed in (gtt->dma_address[i])
> is 0000000040280000 (which is what virt_to_bus(va) should have resolved itself to).

It should resolve properly. I had a sanity check of virt_to_bus(va) == gtt->ttm.dma_address[i].

> That you can't get access to 'va' (0000000080280000) is odd. One way to try to
> access it is to do:
>         va = __va(page_to_pfn(ttm->pages[i]) << PAGE_SHIFT);
>         DRM_INFO("CONTENTS: %x\n", *((uint32_t *)va));
> As that would get it via the page -> va.
