[PATCH] drm/radeon/nouveau: fix build regression on alpha due to Xen changes.
Konrad Rzeszutek Wilk
konrad.wilk at oracle.com
Mon May 9 13:37:28 PDT 2011
On Mon, May 09, 2011 at 12:24:04PM +1000, Dave Airlie wrote:
> From: Dave Airlie <airlied at redhat.com>
>
> The Xen changes were using DMA_ERROR_CODE, which isn't defined on a few
> platforms; however, we reverted the Xen patch that caused us to try to
> use this code path earlier in the 2.6.39 cycle, so for now let's just force
> the code to never take this path and allow it to build again on alpha.
>
> The proper long-term answer is probably to store whether the dma_addr has
> been assigned alongside the dma_addr itself in the higher-level code,
> though I think Thomas wanted to rewrite most of this properly anyway.
<nods> Yes, just need to find the time :-)
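For what it's worth, a minimal userspace sketch of that long-term idea (all names here are hypothetical, not the actual TTM/radeon structures): keep an explicit validity flag next to each dma_addr instead of comparing against a sentinel like DMA_ERROR_CODE, which isn't defined on every architecture.

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t dma_addr_t;

/* Hypothetical per-page entry: the flag records whether the DMA API
 * actually assigned this address, so no sentinel value is needed. */
struct gart_page {
	dma_addr_t dma_addr;
	bool dma_mapped;
};

/* Record an address produced by a successful DMA mapping. */
static void gart_page_set(struct gart_page *p, dma_addr_t addr)
{
	p->dma_addr = addr;
	p->dma_mapped = true;
}

/* Would replace the "dma_addr[i] != DMA_ERROR_CODE" style test. */
static bool gart_page_is_mapped(const struct gart_page *p)
{
	return p->dma_mapped;
}
```

With that, the bind loop tests the flag rather than the address value, so it builds the same way on alpha as everywhere else.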
>
> Signed-off-by: Dave Airlie <airlied at redhat.com>
> Cc: Konrad Rzeszutek Wilk <konrad.wilk at oracle.com>
Acked-by: Konrad Rzeszutek Wilk <konrad.wilk at oracle.com>
Thanks for sending this patch out!
> ---
> drivers/gpu/drm/nouveau/nouveau_sgdma.c | 3 ++-
> drivers/gpu/drm/radeon/radeon_gart.c | 6 +++---
> 2 files changed, 5 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/gpu/drm/nouveau/nouveau_sgdma.c b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
> index 4bce801..c77111e 100644
> --- a/drivers/gpu/drm/nouveau/nouveau_sgdma.c
> +++ b/drivers/gpu/drm/nouveau/nouveau_sgdma.c
> @@ -42,7 +42,8 @@ nouveau_sgdma_populate(struct ttm_backend *be, unsigned long num_pages,
>
> nvbe->nr_pages = 0;
> while (num_pages--) {
> - if (dma_addrs[nvbe->nr_pages] != DMA_ERROR_CODE) {
> + /* this code path isn't called and is incorrect anyways */
> + if (0) { /*dma_addrs[nvbe->nr_pages] != DMA_ERROR_CODE)*/
> nvbe->pages[nvbe->nr_pages] =
> dma_addrs[nvbe->nr_pages];
> nvbe->ttm_alloced[nvbe->nr_pages] = true;
> diff --git a/drivers/gpu/drm/radeon/radeon_gart.c b/drivers/gpu/drm/radeon/radeon_gart.c
> index 8a955bb..a533f52 100644
> --- a/drivers/gpu/drm/radeon/radeon_gart.c
> +++ b/drivers/gpu/drm/radeon/radeon_gart.c
> @@ -181,9 +181,9 @@ int radeon_gart_bind(struct radeon_device *rdev, unsigned offset,
> p = t / (PAGE_SIZE / RADEON_GPU_PAGE_SIZE);
>
> for (i = 0; i < pages; i++, p++) {
> - /* On TTM path, we only use the DMA API if TTM_PAGE_FLAG_DMA32
> - * is requested. */
> - if (dma_addr[i] != DMA_ERROR_CODE) {
> + /* we reverted the patch using dma_addr in TTM for now but this
> + * code stops building on alpha so just comment it out for now */
> + if (0) { /*dma_addr[i] != DMA_ERROR_CODE) */
> rdev->gart.ttm_alloced[p] = true;
> rdev->gart.pages_addr[p] = dma_addr[i];
> } else {
> --
> 1.7.1