[mipsel+rs780e] Occasionally "GPU lockup" after resuming from suspend.

Jerome Glisse j.glisse at gmail.com
Wed Feb 15 07:53:04 PST 2012


On Wed, Feb 15, 2012 at 05:32:35PM +0800, Chen Jie wrote:
> Hi,
> 
> Status update about the problem 'Occasionally "GPU lockup" after
> resuming from suspend.'
> 
> First, this can happen when the system returns from either STR (suspend
> to RAM) or STD (suspend to disk, a.k.a. hibernation).
> When returning from STD, the initialization process is almost the same
> as a normal boot.
> Standby is OK; it is similar to STR, except that standby does not power
> down the CPU, GPU, etc.
> 
> We've dumped and compared the registers, and found something:
> CP_STAT
> normal value: 0x00000000
> value when this problem occurred: 0x802100C1 or 0x802300C1
> 
> CP_ME_CNTL
> normal value: 0x000000FF
> value when this problem occurred: always 0x200000FF in our test
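
A minimal sketch of how such a dump could be taken from inside the radeon
driver, assuming the R6xx/RS780 offsets 0x8680 for CP_STAT and 0x86D8 for
CP_ME_CNTL and the driver's RREG32() accessor; the helper itself is
hypothetical:

/* Hypothetical debug helper: print the two CP registers so the "good"
 * and "locked up" values can be compared across a suspend/resume cycle.
 * Register offsets are assumed from the R6xx register map. */
#define CP_STAT_REG     0x8680
#define CP_ME_CNTL_REG  0x86D8

static void dump_cp_regs(struct radeon_device *rdev)
{
	dev_info(rdev->dev, "CP_STAT    = 0x%08X\n", RREG32(CP_STAT_REG));
	dev_info(rdev->dev, "CP_ME_CNTL = 0x%08X\n", RREG32(CP_ME_CNTL_REG));
}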
> 
> Questions:
> According to the manual,
> CP_STAT = 0x802100C1 means
> 	CSF_RING_BUSY(bit 0):
> 		The Ring fetcher still has command buffer data to fetch, or the PFP
> still has data left to process from the reorder queue.
> 	CSF_BUSY(bit 6):
> 		The input FIFOs have command buffers to fetch, or one or more of the
> fetchers are busy, or the arbiter has a request to send to the MIU.
> 	MIU_RDREQ_BUSY(bit 7):
> 		The read path logic inside the MIU is busy.
> 	MEQ_BUSY(bit 16):
> 		The PFP-to-ME queue has valid data in it.
> 	SURFACE_SYNC_BUSY(bit 21):
> 		The Surface Sync unit is busy.
> 	CP_BUSY(bit 31):
> 		Any block in the CP is busy.
> What does this suggest?
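
As a rough cross-check of that decoding, here is a small standalone
program that prints which of the bits listed above are set in a given
CP_STAT value (the table only contains the bits quoted in this mail):

#include <stdio.h>
#include <stdlib.h>

/* Decode a CP_STAT value using the bit meanings listed above. */
static const struct { int bit; const char *name; } cp_stat_bits[] = {
	{  0, "CSF_RING_BUSY" },
	{  6, "CSF_BUSY" },
	{  7, "MIU_RDREQ_BUSY" },
	{ 16, "MEQ_BUSY" },
	{ 21, "SURFACE_SYNC_BUSY" },
	{ 31, "CP_BUSY" },
};

int main(int argc, char **argv)
{
	unsigned long v = strtoul(argc > 1 ? argv[1] : "0x802100C1", NULL, 16);
	unsigned int i;

	printf("CP_STAT = 0x%08lX\n", v);
	for (i = 0; i < sizeof(cp_stat_bits) / sizeof(cp_stat_bits[0]); i++)
		if (v & (1UL << cp_stat_bits[i].bit))
			printf("  bit %2d: %s\n",
			       cp_stat_bits[i].bit, cp_stat_bits[i].name);
	return 0;
}

For 0x802100C1 this prints exactly the six bits listed above, i.e. the
ring fetcher, the CSF, the MIU read path, the PFP-to-ME queue, the
surface sync unit and the overall CP busy bit all still report busy.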
> 
> What does it mean if bit 29 of CP_ME_CNTL is set?
> 
> BTW, how does the dummy page work in GART?
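
On the dummy page: as far as I understand it, the driver allocates one
scratch system page at init time and points every GART entry that is not
bound to a real buffer at that page, so a stray GPU access through an
unbound GART address lands in harmless memory instead of random RAM.
A toy model of the idea (made-up names, not the driver's real helpers):

#include <stdint.h>
#include <stddef.h>

/* Toy model of a GART page table: each entry holds the DMA address the
 * GPU reads through for that GART page.  In the real driver the table
 * lives in GPU-visible memory and addresses come from the DMA API. */
#define GART_NUM_ENTRIES 4096

static uint64_t gart_table[GART_NUM_ENTRIES];

/* One scratch page, allocated once at init; unbound entries point here. */
static uint64_t dummy_page_dma_addr = 0x1000; /* placeholder address */

static void gart_unbind_range(size_t first, size_t count)
{
	size_t i;

	for (i = 0; i < count && first + i < GART_NUM_ENTRIES; i++)
		gart_table[first + i] = dummy_page_dma_addr;
}

If the GART table ends up with wrong data after resume, entries could
point at neither a real page nor the dummy page, which would fit the CP
getting stuck on a memory fetch.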
> 
> 
> Regards,
> -- Chen Jie

To me it looks like the CP is trying to fetch memory but the
GPU memory controller fails to fulfill the CP's requests. Did you
check the PCI configuration before and after (i.e. when things
don't work)? My best guess is that PCI bus mastering is not
working properly, or that the PCIE GPU GART table has wrong data.

Maybe one needs to drop bus mastering and re-enable it to
work around some bug...
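
A minimal sketch of that check/workaround using the generic PCI helpers,
assuming it is called somewhere in the radeon resume path (the function
name is made up):

#include <linux/pci.h>

/* Check whether bus mastering survived the resume and, as a blunt
 * workaround, toggle it off and on again. */
static void check_and_kick_bus_master(struct pci_dev *pdev)
{
	u16 cmd;

	pci_read_config_word(pdev, PCI_COMMAND, &cmd);
	if (!(cmd & PCI_COMMAND_MASTER))
		dev_warn(&pdev->dev, "bus mastering disabled after resume\n");

	/* Drop and re-enable bus mastering as suggested above. */
	pci_clear_master(pdev);
	pci_set_master(pdev);
}

From userspace the same bit can be inspected with "lspci -vvv" (the
BusMaster flag on the Control line) before and after a resume.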

Cheers,
Jerome

