drm/radeon: "ring test failed" on PA-RISC Linux
Alex Ivanov
gnidorah at p0n4ik.tk
Thu Sep 26 01:39:03 PDT 2013
Let's go further.
25.09.2013, 22:58, "Alex Ivanov" <gnidorah at p0n4ik.tk>:
> 25.09.2013, 21:28, "Konrad Rzeszutek Wilk" <konrad.wilk at oracle.com>:
>> I took a look at arch/parisc/kernel/pci-dma.c and I see that
>> it is mostly a flat platform, that is, bus addresses == physical addresses.
>> Unless it is a pcxl or pcxl2 CPU type (huh?) - if it is,
>> then any calls to dma_alloc_coherent will map memory out of a pool.
>> In essence it will look like a SWIOTLB bounce buffer.
> arch/parisc/kernel/pci-dma.c:
> ** PARISC 1.1 Dynamic DMA mapping support.
> ** This implementation is for PA-RISC platforms that do not support
> ** I/O TLBs (aka DMA address translation hardware).
>
> That's very old. PA-RISC 2.0 came into the game circa 1996.
> PA-RISC 1.1 is 32-bit only, and I'm not even sure whether those machines
> had a PCI bus.
>
> Only old boxes (PA7200 CPUs and lower) cannot use dma_alloc_coherent()
> (and are forced to do syncs, IIRC). That's not our case.
> And the PA-RISC configs have 'Discontiguous Memory' chosen.
>> But interestingly enough there are a lot of 'flush_kernel_dcache_range'
>> calls for every DMA operation.
>> And I think you need to do a
>> dma_sync_for_cpu call in radeon_test_writeback for it to
>> use flush_kernel_dcache_range.
I was correct regarding syncs.
In our case (SBA IOMMU) dma_sync* calls are no-ops:
sba_iommu.c:
static struct hppa_dma_ops sba_ops = {
...
.dma_sync_single_for_cpu = NULL,
.dma_sync_single_for_device = NULL,
.dma_sync_sg_for_cpu = NULL,
.dma_sync_sg_for_device = NULL,
};
dma-mapping.h:
dma_cache_sync(struct device *dev, void *vaddr, size_t size,
enum dma_data_direction direction)
{
if(hppa_dma_ops->dma_sync_single_for_cpu)
flush_kernel_dcache_range((unsigned long)vaddr, size);
}
So I'll skip doing the flush_kernel_dcache_range().
>> I don't know what
>> flush_kernel_dcache_range does, though, so I could be wrong.
> The D-cache is a CPU cache (if that's what they meant).
> It seems to be L1-level; there is an I-cache at the same level.
>> You are missing a translation here (you were comparing the virtual address
>> to the bus address). I was thinking something along this:
> Yes, this confused me. I've translated your suggestion literally :\
>> unsigned int pfn = page_to_pfn(ttm->pages[i]);
>> dma_addr_t bus = gtt->ttm.dma_address[i];
>> void *va_bus, *va, *va_pfn;
>>
>> if ((pfn << PAGE_SHIFT) != bus)
>> printk("Bus 0x%lx != PFN 0x%lx\n", bus, pfn << PAGE_SHIFT); /* OK, that means
>> bus addresses are different */
>>
>> va_bus = bus_to_virt(gtt->ttm.dma_address[i]);
>> va_pfn = __va(pfn << PAGE_SHIFT);
>>
>> if (!virt_addr_valid(va_bus))
>> printk("va_bus (0x%lx) not good!\n", va_bus);
>> if (!virt_addr_valid(va_pfn))
>> printk("va_pfn (0x%lx) not good!\n", va_pfn);
>>
>> /* We got VA for both bus -> va, and pfn -> va. Should be the
>> same if bus and physical addresses are on the same namespace. */
>> if (va_bus != va_pfn)
>> printk("va bus:%lx != va pfn: %lx\n", va_bus, va_pfn);
>>
>> /* Now that we have bus -> pa -> va (va_bus) try to go va_bus -> bus address.
>> The bus address should be the same */
>> if (gtt->ttm.dma_address[i] != virt_to_bus(va_bus))
>> printk("bus->pa->va:%lx != bus->pa->va->ba: %lx\n", gtt->ttm.dma_address[i], virt_to_bus(va_bus));
Ok, slightly modified:

struct page *page = ttm->pages[i];
unsigned long pfn = page_to_pfn(page);
dma_addr_t bus = gtt->ttm.dma_address[i];
void *va_bus, *va_pfn;

BUG_ON(!pfn_valid(pfn));
/* BUG_ON(!page_mapping(page)); */	/* Leads to a kernel BUG */

/* Avoid floodage */
if (i % 100 == 0) {
	if ((pfn << PAGE_SHIFT) != bus)
		/* OK, that means bus addresses are different */
		printk("Bus 0x%lx != PFN 0x%lx\n", bus, pfn << PAGE_SHIFT);

	va_bus = bus_to_virt(bus);
	va_pfn = __va(pfn << PAGE_SHIFT);

	if (!virt_addr_valid(va_bus))
		printk("va_bus (0x%lx) not good!\n", va_bus);
	if (!virt_addr_valid(va_pfn))
		printk("va_pfn (0x%lx) not good!\n", va_pfn);

	/* We got a VA for both bus -> va and pfn -> va. They should be the
	 * same if bus and physical addresses are in the same namespace. */
	if (va_bus != va_pfn)
		printk("va bus: %lx != va pfn: %lx\n", va_bus, va_pfn);

	/* Now that we have bus -> pa -> va (va_bus), try to go va_bus ->
	 * bus address. The bus address should be the same. */
	if (bus != virt_to_bus(va_bus))
		printk("bus->pa->va: %lx != bus->pa->va->ba: %lx\n", bus, virt_to_bus(va_bus));
}
Output:
Bus 0x40280000 != PFN 0x3e92d000
va_bus (0x80280000) not good!
va bus: 80280000 != va pfn: 7e92d000
Bus 0x40281000 != PFN 0x3e930000
va_bus (0x80281000) not good!
va bus: 80281000 != va pfn: 7e930000
...
va_bus is invalid. That's the reason I was getting a kernel fault when trying to read data through it.
>>> DRM_INFO("MISMATCH: %p != %p\n", va, (void *) virt_to_bus(va));
>>> /*DRM_INFO("CONTENTS: %x\n", *((uint32_t *)va));*/ // Leads to a Kernel Fault
>> That is odd. I would have thought it would be usable.
>>> ...
>>> }
>>>
>>> I'm getting the output:
>>>
>>> [drm] MISMATCH: 0000000080280000 != 0000000040280000
>> In theory that means the bus address that is programmed in (gtt->dma_address[i])
>> is 0000000040280000 (which is what virt_to_bus(va) should have resolved itself to).
> It should resolve properly. I had a sanity check of virt_to_bus(va) == gtt->ttm.dma_address[i].
>> That you can't get access to 'va' (0000000080280000) is odd. One way to try to
>> access it is to do:
>>
>> va = __va(page_to_pfn(ttm->pages[i]) << PAGE_SHIFT);
>> DRM_INFO("CONTENTS: %x\n", *((uint32_t *)va));
>>
>> As that would get it via the page -> va.
This way I get CONTENTS: 0.