[PATCH] drm/amdgpu: stop disabling the scheduler during HW fini
Nirmoy
nirmodas at amd.com
Tue Feb 25 13:38:23 UTC 2020
Hi Christian,
I tested with amdgpu_test, which also exercises a GPU reset via its
deadlock_tests. The reset was fine and I could run amdgpu_test multiple times.
dmesg:
Feb 25 14:32:20 brihaspati kernel: [drm:gfx_v9_0_priv_reg_irq [amdgpu]]
*ERROR* Illegal register access in command stream
Feb 25 14:32:20 brihaspati kernel: [drm:amdgpu_job_timedout [amdgpu]]
*ERROR* ring gfx timeout, signaled seq=290, emitted seq=291
Feb 25 14:32:20 brihaspati kernel: [drm:amdgpu_job_timedout [amdgpu]]
*ERROR* Process information: process amdgpu_test pid 2401 thread
amdgpu_test pid 2401
Feb 25 14:32:20 brihaspati kernel: amdgpu 0000:09:00.0: GPU reset begin!
Feb 25 14:32:21 brihaspati kernel: amdgpu 0000:09:00.0: GPU BACO reset
Feb 25 14:32:21 brihaspati kernel: amdgpu 0000:09:00.0: GPU reset
succeeded, trying to resume
Feb 25 14:32:21 brihaspati kernel: [drm] PCIE GART of 512M enabled
(table at 0x000000F400900000).
Feb 25 14:32:21 brihaspati kernel: [drm] VRAM is lost due to GPU reset!
Feb 25 14:32:21 brihaspati kernel: [drm] PSP is resuming...
Feb 25 14:32:22 brihaspati kernel: [drm] reserve 0x400000 from
0xf5fe800000 for PSP TMR
Feb 25 14:32:22 brihaspati kernel: [drm] kiq ring mec 2 pipe 1 q 0
Feb 25 14:32:22 brihaspati kernel: [drm] UVD and UVD ENC initialized
successfully.
Feb 25 14:32:22 brihaspati kernel: [drm] VCE initialized successfully.
Feb 25 14:32:22 brihaspati kernel: amdgpu 0000:09:00.0: ring gfx uses VM
inv eng 0 on hub 0
Feb 25 14:32:22 brihaspati kernel: amdgpu 0000:09:00.0: ring comp_1.0.0
uses VM inv eng 1 on hub 0
Feb 25 14:32:22 brihaspati kernel: amdgpu 0000:09:00.0: ring comp_1.1.0
uses VM inv eng 4 on hub 0
Feb 25 14:32:22 brihaspati kernel: amdgpu 0000:09:00.0: ring comp_1.2.0
uses VM inv eng 5 on hub 0
Feb 25 14:32:22 brihaspati kernel: amdgpu 0000:09:00.0: ring comp_1.3.0
uses VM inv eng 6 on hub 0
Feb 25 14:32:22 brihaspati kernel: amdgpu 0000:09:00.0: ring comp_1.0.1
uses VM inv eng 7 on hub 0
Feb 25 14:32:22 brihaspati kernel: amdgpu 0000:09:00.0: ring comp_1.1.1
uses VM inv eng 8 on hub 0
Feb 25 14:32:22 brihaspati kernel: amdgpu 0000:09:00.0: ring comp_1.2.1
uses VM inv eng 9 on hub 0
Feb 25 14:32:22 brihaspati kernel: amdgpu 0000:09:00.0: ring comp_1.3.1
uses VM inv eng 10 on hub 0
Feb 25 14:32:22 brihaspati kernel: amdgpu 0000:09:00.0: ring kiq_2.1.0
uses VM inv eng 11 on hub 0
Feb 25 14:32:22 brihaspati kernel: amdgpu 0000:09:00.0: ring sdma0 uses
VM inv eng 0 on hub 1
Feb 25 14:32:22 brihaspati kernel: amdgpu 0000:09:00.0: ring page0 uses
VM inv eng 1 on hub 1
Feb 25 14:32:22 brihaspati kernel: amdgpu 0000:09:00.0: ring sdma1 uses
VM inv eng 4 on hub 1
Feb 25 14:32:22 brihaspati kernel: amdgpu 0000:09:00.0: ring page1 uses
VM inv eng 5 on hub 1
Feb 25 14:32:22 brihaspati kernel: amdgpu 0000:09:00.0: ring uvd_0 uses
VM inv eng 6 on hub 1
Feb 25 14:32:22 brihaspati kernel: amdgpu 0000:09:00.0: ring uvd_enc_0.0
uses VM inv eng 7 on hub 1
Feb 25 14:32:22 brihaspati kernel: amdgpu 0000:09:00.0: ring uvd_enc_0.1
uses VM inv eng 8 on hub 1
Feb 25 14:32:22 brihaspati kernel: amdgpu 0000:09:00.0: ring vce0 uses
VM inv eng 9 on hub 1
Feb 25 14:32:22 brihaspati kernel: amdgpu 0000:09:00.0: ring vce1 uses
VM inv eng 10 on hub 1
Feb 25 14:32:22 brihaspati kernel: amdgpu 0000:09:00.0: ring vce2 uses
VM inv eng 11 on hub 1
Feb 25 14:32:22 brihaspati kernel: [drm] ECC is not present.
Feb 25 14:32:22 brihaspati kernel: [drm] SRAM ECC is not present.
Feb 25 14:32:22 brihaspati kernel: [drm] recover vram bo from shadow start
Feb 25 14:32:22 brihaspati kernel: [drm] recover vram bo from shadow done
Feb 25 14:32:22 brihaspati kernel: [drm] Skip scheduling IBs!
Feb 25 14:32:22 brihaspati kernel: amdgpu 0000:09:00.0: GPU reset(2)
succeeded!
Feb 25 14:32:22 brihaspati kernel: gmc_v9_0_process_interrupt: 45
callbacks suppressed
Feb 25 14:32:22 brihaspati kernel: amdgpu 0000:09:00.0: [gfxhub0] retry
page fault (src_id:0 ring:0 vmid:4 pasid:32769, for process amdgpu_test
pid 2401 thread amdgpu_test pid 2401)
Feb 25 14:32:22 brihaspati kernel: amdgpu 0000:09:00.0: in page
starting at address 0x00000000deadb000 from client 27
Feb 25 14:32:22 brihaspati kernel: amdgpu 0000:09:00.0:
VM_L2_PROTECTION_FAULT_STATUS:0x00440C51
Feb 25 14:32:22 brihaspati kernel: amdgpu 0000:09:00.0: MORE_FAULTS: 0x1
Feb 25 14:32:22 brihaspati kernel: amdgpu 0000:09:00.0: WALKER_ERROR: 0x0
Feb 25 14:32:22 brihaspati kernel: amdgpu 0000:09:00.0:
PERMISSION_FAULTS: 0x5
Feb 25 14:32:22 brihaspati kernel: amdgpu 0000:09:00.0: MAPPING_ERROR: 0x0
Feb 25 14:32:22 brihaspati kernel: amdgpu 0000:09:00.0: RW: 0x1
Feb 25 14:32:23 brihaspati systemd[1]:
NetworkManager-dispatcher.service: Succeeded.
Feb 25 14:32:24 brihaspati nscd[1255]: 1255 checking for monitored file
`/etc/services': No such file or directory
Feb 25 14:32:32 brihaspati PackageKit[2092]: daemon quit
Feb 25 14:32:32 brihaspati systemd[1]: packagekit.service: Succeeded.
Feb 25 14:32:39 brihaspati nscd[1255]: 1255 checking for monitored file
`/etc/services': No such file or directory
Feb 25 14:32:43 brihaspati systemd[1]: systemd-localed.service: Succeeded.
Feb 25 14:32:43 brihaspati systemd[1]: systemd-hostnamed.service: Succeeded.
Feb 25 14:32:48 brihaspati kernel: [drm:gfx_v9_0_priv_reg_irq [amdgpu]]
*ERROR* Illegal register access in command stream
Feb 25 14:32:48 brihaspati kernel: [drm:amdgpu_job_timedout [amdgpu]]
*ERROR* ring gfx timeout, signaled seq=537, emitted seq=538
Feb 25 14:32:48 brihaspati kernel: [drm:amdgpu_job_timedout [amdgpu]]
*ERROR* Process information: process amdgpu_test pid 2444 thread
amdgpu_test pid 2444
Feb 25 14:32:48 brihaspati kernel: amdgpu 0000:09:00.0: GPU reset begin!
Feb 25 14:32:49 brihaspati kernel: amdgpu 0000:09:00.0: GPU BACO reset
Feb 25 14:32:49 brihaspati kernel: amdgpu 0000:09:00.0: GPU reset
succeeded, trying to resume
Feb 25 14:32:49 brihaspati kernel: [drm] PCIE GART of 512M enabled
(table at 0x000000F400900000).
Feb 25 14:32:49 brihaspati kernel: [drm] VRAM is lost due to GPU reset!
Feb 25 14:32:49 brihaspati kernel: [drm] PSP is resuming...
Feb 25 14:32:50 brihaspati kernel: [drm] reserve 0x400000 from
0xf5fe800000 for PSP TMR
Feb 25 14:32:50 brihaspati kernel: [drm] kiq ring mec 2 pipe 1 q 0
Feb 25 14:32:50 brihaspati kernel: [drm] UVD and UVD ENC initialized
successfully.
Feb 25 14:32:50 brihaspati kernel: [drm] VCE initialized successfully.
Feb 25 14:32:50 brihaspati kernel: amdgpu 0000:09:00.0: ring gfx uses VM
inv eng 0 on hub 0
Feb 25 14:32:50 brihaspati kernel: amdgpu 0000:09:00.0: ring comp_1.0.0
uses VM inv eng 1 on hub 0
Feb 25 14:32:50 brihaspati kernel: amdgpu 0000:09:00.0: ring comp_1.1.0
uses VM inv eng 4 on hub 0
Feb 25 14:32:50 brihaspati kernel: amdgpu 0000:09:00.0: ring comp_1.2.0
uses VM inv eng 5 on hub 0
Feb 25 14:32:50 brihaspati kernel: amdgpu 0000:09:00.0: ring comp_1.3.0
uses VM inv eng 6 on hub 0
Feb 25 14:32:50 brihaspati kernel: amdgpu 0000:09:00.0: ring comp_1.0.1
uses VM inv eng 7 on hub 0
Feb 25 14:32:50 brihaspati kernel: amdgpu 0000:09:00.0: ring comp_1.1.1
uses VM inv eng 8 on hub 0
Feb 25 14:32:50 brihaspati kernel: amdgpu 0000:09:00.0: ring comp_1.2.1
uses VM inv eng 9 on hub 0
Feb 25 14:32:50 brihaspati kernel: amdgpu 0000:09:00.0: ring comp_1.3.1
uses VM inv eng 10 on hub 0
Feb 25 14:32:50 brihaspati kernel: amdgpu 0000:09:00.0: ring kiq_2.1.0
uses VM inv eng 11 on hub 0
Feb 25 14:32:50 brihaspati kernel: amdgpu 0000:09:00.0: ring sdma0 uses
VM inv eng 0 on hub 1
Feb 25 14:32:50 brihaspati kernel: amdgpu 0000:09:00.0: ring page0 uses
VM inv eng 1 on hub 1
Feb 25 14:32:50 brihaspati kernel: amdgpu 0000:09:00.0: ring sdma1 uses
VM inv eng 4 on hub 1
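As a sanity check on the rationale, here is my rough understanding of the
only common-ring places that still own the sched.ready flag after this
patch, matching cases 1 and 3 in the commit message quoted below. This is
a sketch paraphrased from memory rather than copied from amdgpu_ring.c,
so the exact bodies may differ; the KIQ case (2) stays in the per-IP
cp_compute_enable paths, as the diff shows.

/* Rough sketch from memory; see amdgpu_ring.c for the real code. */
int amdgpu_ring_test_helper(struct amdgpu_ring *ring)
{
        struct amdgpu_device *adev = ring->adev;
        int r;

        r = amdgpu_ring_test_ring(ring);
        if (r)
                DRM_DEV_ERROR(adev->dev, "ring %s test failed (%d)\n",
                              ring->name, r);

        /* Case 1: only stop feeding the ring when the HW really does
         * not come back after a reset. */
        ring->sched.ready = !r;
        return r;
}

void amdgpu_ring_fini(struct amdgpu_ring *ring)
{
        /* Case 3: driver unload, the scheduler is torn down for good. */
        ring->sched.ready = false;

        /* ... existing cleanup of rptr/wptr objects and the ring buffer ... */
}

So the front-end keeps submitting across a plain HW stop, and only the
genuinely dead cases are fenced off, which fits what I see in the log above.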
On 2/25/20 2:26 PM, Christian König wrote:
> Am 25.02.20 um 14:16 schrieb Nirmoy:
>> Acked-by: Nirmoy Das <nirmoy.das at amd.com>
>
> Could you test it as well? I only did a quick round of smoke tests,
> but somebody should probably run a gpu reset test as well.
>
> Thanks in advance,
> Christian.
>
>>
>> On 2/25/20 2:07 PM, Christian König wrote:
>>> When we stop the HW, for example for a GPU reset, we should not stop
>>> the front-end scheduler. Otherwise we run into intermittent failures
>>> during command submission.
>>>
>>> The scheduler should only be stopped in very few cases:
>>> 1. We can't get the hardware working in ring or IB test after a GPU
>>> reset.
>>> 2. The KIQ scheduler is not used in the front-end and should be
>>> disabled during GPU reset.
>>> 3. In amdgpu_ring_fini() when the driver unloads.
>>>
>>> Signed-off-by: Christian König <christian.koenig at amd.com>
>>> ---
>>> drivers/gpu/drm/amd/amdgpu/cik_sdma.c | 2 --
>>> drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c | 8 --------
>>> drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c | 5 -----
>>> drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c | 25 +++++++++----------------
>>> drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c | 7 -------
>>> drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 9 ---------
>>> drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c | 3 ---
>>> drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c | 2 --
>>> drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c | 2 --
>>> drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c | 4 ----
>>> drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c | 3 ---
>>> drivers/gpu/drm/amd/amdgpu/si_dma.c | 1 -
>>> drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c | 3 ---
>>> drivers/gpu/drm/amd/amdgpu/uvd_v5_0.c | 3 ---
>>> drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c | 3 ---
>>> drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c | 7 -------
>>> drivers/gpu/drm/amd/amdgpu/vce_v4_0.c | 4 ----
>>> drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c | 3 ---
>>> drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c | 9 ---------
>>> drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c | 11 +----------
>>> 20 files changed, 10 insertions(+), 104 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
>>> b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
>>> index 4274ccf765de..cb3b3a0a1348 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/cik_sdma.c
>>> @@ -320,8 +320,6 @@ static void cik_sdma_gfx_stop(struct
>>> amdgpu_device *adev)
>>> WREG32(mmSDMA0_GFX_RB_CNTL + sdma_offsets[i], rb_cntl);
>>> WREG32(mmSDMA0_GFX_IB_CNTL + sdma_offsets[i], 0);
>>> }
>>> - sdma0->sched.ready = false;
>>> - sdma1->sched.ready = false;
>>> }
>>> /**
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
>>> b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
>>> index 7b6158320400..36ce67ce4800 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v10_0.c
>>> @@ -2391,10 +2391,6 @@ static int gfx_v10_0_cp_gfx_enable(struct
>>> amdgpu_device *adev, bool enable)
>>> tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, ME_HALT, enable ? 0 : 1);
>>> tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, PFP_HALT, enable ? 0 : 1);
>>> tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, CE_HALT, enable ? 0 : 1);
>>> - if (!enable) {
>>> - for (i = 0; i < adev->gfx.num_gfx_rings; i++)
>>> - adev->gfx.gfx_ring[i].sched.ready = false;
>>> - }
>>> WREG32_SOC15(GC, 0, mmCP_ME_CNTL, tmp);
>>> for (i = 0; i < adev->usec_timeout; i++) {
>>> @@ -2869,16 +2865,12 @@ static int gfx_v10_0_cp_gfx_resume(struct
>>> amdgpu_device *adev)
>>> static void gfx_v10_0_cp_compute_enable(struct amdgpu_device
>>> *adev, bool enable)
>>> {
>>> - int i;
>>> -
>>> if (enable) {
>>> WREG32_SOC15(GC, 0, mmCP_MEC_CNTL, 0);
>>> } else {
>>> WREG32_SOC15(GC, 0, mmCP_MEC_CNTL,
>>> (CP_MEC_CNTL__MEC_ME1_HALT_MASK |
>>> CP_MEC_CNTL__MEC_ME2_HALT_MASK));
>>> - for (i = 0; i < adev->gfx.num_compute_rings; i++)
>>> - adev->gfx.compute_ring[i].sched.ready = false;
>>> adev->gfx.kiq.ring.sched.ready = false;
>>> }
>>> udelay(50);
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
>>> b/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
>>> index 31f44d05e606..e462a099dbda 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v6_0.c
>>> @@ -1950,7 +1950,6 @@ static int gfx_v6_0_ring_test_ib(struct
>>> amdgpu_ring *ring, long timeout)
>>> static void gfx_v6_0_cp_gfx_enable(struct amdgpu_device *adev,
>>> bool enable)
>>> {
>>> - int i;
>>> if (enable) {
>>> WREG32(mmCP_ME_CNTL, 0);
>>> } else {
>>> @@ -1958,10 +1957,6 @@ static void gfx_v6_0_cp_gfx_enable(struct
>>> amdgpu_device *adev, bool enable)
>>> CP_ME_CNTL__PFP_HALT_MASK |
>>> CP_ME_CNTL__CE_HALT_MASK));
>>> WREG32(mmSCRATCH_UMSK, 0);
>>> - for (i = 0; i < adev->gfx.num_gfx_rings; i++)
>>> - adev->gfx.gfx_ring[i].sched.ready = false;
>>> - for (i = 0; i < adev->gfx.num_compute_rings; i++)
>>> - adev->gfx.compute_ring[i].sched.ready = false;
>>> }
>>> udelay(50);
>>> }
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
>>> b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
>>> index 8f20a5dd44fe..9bc8673c83ac 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v7_0.c
>>> @@ -2431,15 +2431,12 @@ static int gfx_v7_0_ring_test_ib(struct
>>> amdgpu_ring *ring, long timeout)
>>> */
>>> static void gfx_v7_0_cp_gfx_enable(struct amdgpu_device *adev,
>>> bool enable)
>>> {
>>> - int i;
>>> -
>>> - if (enable) {
>>> + if (enable)
>>> WREG32(mmCP_ME_CNTL, 0);
>>> - } else {
>>> - WREG32(mmCP_ME_CNTL, (CP_ME_CNTL__ME_HALT_MASK |
>>> CP_ME_CNTL__PFP_HALT_MASK | CP_ME_CNTL__CE_HALT_MASK));
>>> - for (i = 0; i < adev->gfx.num_gfx_rings; i++)
>>> - adev->gfx.gfx_ring[i].sched.ready = false;
>>> - }
>>> + else
>>> + WREG32(mmCP_ME_CNTL, (CP_ME_CNTL__ME_HALT_MASK |
>>> + CP_ME_CNTL__PFP_HALT_MASK |
>>> + CP_ME_CNTL__CE_HALT_MASK));
>>> udelay(50);
>>> }
>>> @@ -2700,15 +2697,11 @@ static void
>>> gfx_v7_0_ring_set_wptr_compute(struct amdgpu_ring *ring)
>>> */
>>> static void gfx_v7_0_cp_compute_enable(struct amdgpu_device *adev,
>>> bool enable)
>>> {
>>> - int i;
>>> -
>>> - if (enable) {
>>> + if (enable)
>>> WREG32(mmCP_MEC_CNTL, 0);
>>> - } else {
>>> - WREG32(mmCP_MEC_CNTL, (CP_MEC_CNTL__MEC_ME1_HALT_MASK |
>>> CP_MEC_CNTL__MEC_ME2_HALT_MASK));
>>> - for (i = 0; i < adev->gfx.num_compute_rings; i++)
>>> - adev->gfx.compute_ring[i].sched.ready = false;
>>> - }
>>> + else
>>> + WREG32(mmCP_MEC_CNTL, (CP_MEC_CNTL__MEC_ME1_HALT_MASK |
>>> + CP_MEC_CNTL__MEC_ME2_HALT_MASK));
>>> udelay(50);
>>> }
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
>>> b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
>>> index fa245973de12..7b6b03c02754 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
>>> @@ -4111,7 +4111,6 @@ static int gfx_v8_0_rlc_resume(struct
>>> amdgpu_device *adev)
>>> static void gfx_v8_0_cp_gfx_enable(struct amdgpu_device *adev,
>>> bool enable)
>>> {
>>> - int i;
>>> u32 tmp = RREG32(mmCP_ME_CNTL);
>>> if (enable) {
>>> @@ -4122,8 +4121,6 @@ static void gfx_v8_0_cp_gfx_enable(struct
>>> amdgpu_device *adev, bool enable)
>>> tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, ME_HALT, 1);
>>> tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, PFP_HALT, 1);
>>> tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, CE_HALT, 1);
>>> - for (i = 0; i < adev->gfx.num_gfx_rings; i++)
>>> - adev->gfx.gfx_ring[i].sched.ready = false;
>>> }
>>> WREG32(mmCP_ME_CNTL, tmp);
>>> udelay(50);
>>> @@ -4311,14 +4308,10 @@ static int gfx_v8_0_cp_gfx_resume(struct
>>> amdgpu_device *adev)
>>> static void gfx_v8_0_cp_compute_enable(struct amdgpu_device
>>> *adev, bool enable)
>>> {
>>> - int i;
>>> -
>>> if (enable) {
>>> WREG32(mmCP_MEC_CNTL, 0);
>>> } else {
>>> WREG32(mmCP_MEC_CNTL, (CP_MEC_CNTL__MEC_ME1_HALT_MASK |
>>> CP_MEC_CNTL__MEC_ME2_HALT_MASK));
>>> - for (i = 0; i < adev->gfx.num_compute_rings; i++)
>>> - adev->gfx.compute_ring[i].sched.ready = false;
>>> adev->gfx.kiq.ring.sched.ready = false;
>>> }
>>> udelay(50);
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
>>> b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
>>> index 1c7a16b91686..a2f9882bd9b4 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c
>>> @@ -3034,16 +3034,11 @@ static int gfx_v9_0_rlc_resume(struct
>>> amdgpu_device *adev)
>>> static void gfx_v9_0_cp_gfx_enable(struct amdgpu_device *adev,
>>> bool enable)
>>> {
>>> - int i;
>>> u32 tmp = RREG32_SOC15(GC, 0, mmCP_ME_CNTL);
>>> tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, ME_HALT, enable ? 0 : 1);
>>> tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, PFP_HALT, enable ? 0 : 1);
>>> tmp = REG_SET_FIELD(tmp, CP_ME_CNTL, CE_HALT, enable ? 0 : 1);
>>> - if (!enable) {
>>> - for (i = 0; i < adev->gfx.num_gfx_rings; i++)
>>> - adev->gfx.gfx_ring[i].sched.ready = false;
>>> - }
>>> WREG32_SOC15_RLC(GC, 0, mmCP_ME_CNTL, tmp);
>>> udelay(50);
>>> }
>>> @@ -3239,15 +3234,11 @@ static int gfx_v9_0_cp_gfx_resume(struct
>>> amdgpu_device *adev)
>>> static void gfx_v9_0_cp_compute_enable(struct amdgpu_device
>>> *adev, bool enable)
>>> {
>>> - int i;
>>> -
>>> if (enable) {
>>> WREG32_SOC15_RLC(GC, 0, mmCP_MEC_CNTL, 0);
>>> } else {
>>> WREG32_SOC15_RLC(GC, 0, mmCP_MEC_CNTL,
>>> (CP_MEC_CNTL__MEC_ME1_HALT_MASK |
>>> CP_MEC_CNTL__MEC_ME2_HALT_MASK));
>>> - for (i = 0; i < adev->gfx.num_compute_rings; i++)
>>> - adev->gfx.compute_ring[i].sched.ready = false;
>>> adev->gfx.kiq.ring.sched.ready = false;
>>> }
>>> udelay(50);
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
>>> b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
>>> index ff2e6e1ccde7..471710a42a0c 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/jpeg_v2_0.c
>>> @@ -169,14 +169,11 @@ static int jpeg_v2_0_hw_init(void *handle)
>>> static int jpeg_v2_0_hw_fini(void *handle)
>>> {
>>> struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> - struct amdgpu_ring *ring = &adev->jpeg.inst->ring_dec;
>>> if (adev->jpeg.cur_state != AMD_PG_STATE_GATE &&
>>> RREG32_SOC15(JPEG, 0, mmUVD_JRBC_STATUS))
>>> jpeg_v2_0_set_powergating_state(adev, AMD_PG_STATE_GATE);
>>> - ring->sched.ready = false;
>>> -
>>> return 0;
>>> }
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
>>> b/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
>>> index fd7fa6082563..05b79aced6e8 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v2_4.c
>>> @@ -355,8 +355,6 @@ static void sdma_v2_4_gfx_stop(struct
>>> amdgpu_device *adev)
>>> ib_cntl = REG_SET_FIELD(ib_cntl, SDMA0_GFX_IB_CNTL,
>>> IB_ENABLE, 0);
>>> WREG32(mmSDMA0_GFX_IB_CNTL + sdma_offsets[i], ib_cntl);
>>> }
>>> - sdma0->sched.ready = false;
>>> - sdma1->sched.ready = false;
>>> }
>>> /**
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
>>> b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
>>> index 4a8a7f0f3a9c..1448d9beb7a8 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
>>> @@ -529,8 +529,6 @@ static void sdma_v3_0_gfx_stop(struct
>>> amdgpu_device *adev)
>>> ib_cntl = REG_SET_FIELD(ib_cntl, SDMA0_GFX_IB_CNTL,
>>> IB_ENABLE, 0);
>>> WREG32(mmSDMA0_GFX_IB_CNTL + sdma_offsets[i], ib_cntl);
>>> }
>>> - sdma0->sched.ready = false;
>>> - sdma1->sched.ready = false;
>>> }
>>> /**
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
>>> b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
>>> index 7cea4513c303..0c6eb65f96f3 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v4_0.c
>>> @@ -923,8 +923,6 @@ static void sdma_v4_0_gfx_stop(struct
>>> amdgpu_device *adev)
>>> ib_cntl = RREG32_SDMA(i, mmSDMA0_GFX_IB_CNTL);
>>> ib_cntl = REG_SET_FIELD(ib_cntl, SDMA0_GFX_IB_CNTL,
>>> IB_ENABLE, 0);
>>> WREG32_SDMA(i, mmSDMA0_GFX_IB_CNTL, ib_cntl);
>>> -
>>> - sdma[i]->sched.ready = false;
>>> }
>>> }
>>> @@ -971,8 +969,6 @@ static void sdma_v4_0_page_stop(struct
>>> amdgpu_device *adev)
>>> ib_cntl = REG_SET_FIELD(ib_cntl, SDMA0_PAGE_IB_CNTL,
>>> IB_ENABLE, 0);
>>> WREG32_SDMA(i, mmSDMA0_PAGE_IB_CNTL, ib_cntl);
>>> -
>>> - sdma[i]->sched.ready = false;
>>> }
>>> }
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
>>> b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
>>> index 7ee603db8c57..5af66a24b0a2 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/sdma_v5_0.c
>>> @@ -502,9 +502,6 @@ static void sdma_v5_0_gfx_stop(struct
>>> amdgpu_device *adev)
>>> ib_cntl = REG_SET_FIELD(ib_cntl, SDMA0_GFX_IB_CNTL,
>>> IB_ENABLE, 0);
>>> WREG32(sdma_v5_0_get_reg_offset(adev, i,
>>> mmSDMA0_GFX_IB_CNTL), ib_cntl);
>>> }
>>> -
>>> - sdma0->sched.ready = false;
>>> - sdma1->sched.ready = false;
>>> }
>>> /**
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/si_dma.c
>>> b/drivers/gpu/drm/amd/amdgpu/si_dma.c
>>> index 7f64d73043cf..a8548678c37d 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/si_dma.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/si_dma.c
>>> @@ -124,7 +124,6 @@ static void si_dma_stop(struct amdgpu_device *adev)
>>> if (adev->mman.buffer_funcs_ring == ring)
>>> amdgpu_ttm_set_buffer_funcs_status(adev, false);
>>> - ring->sched.ready = false;
>>> }
>>> }
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c
>>> b/drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c
>>> index 82abd8e728ab..957e14e2c155 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/uvd_v4_2.c
>>> @@ -210,13 +210,10 @@ static int uvd_v4_2_hw_init(void *handle)
>>> static int uvd_v4_2_hw_fini(void *handle)
>>> {
>>> struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> - struct amdgpu_ring *ring = &adev->uvd.inst->ring;
>>> if (RREG32(mmUVD_STATUS) != 0)
>>> uvd_v4_2_stop(adev);
>>> - ring->sched.ready = false;
>>> -
>>> return 0;
>>> }
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v5_0.c
>>> b/drivers/gpu/drm/amd/amdgpu/uvd_v5_0.c
>>> index 0fa8aae2d78e..2aad6689823b 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/uvd_v5_0.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/uvd_v5_0.c
>>> @@ -208,13 +208,10 @@ static int uvd_v5_0_hw_init(void *handle)
>>> static int uvd_v5_0_hw_fini(void *handle)
>>> {
>>> struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> - struct amdgpu_ring *ring = &adev->uvd.inst->ring;
>>> if (RREG32(mmUVD_STATUS) != 0)
>>> uvd_v5_0_stop(adev);
>>> - ring->sched.ready = false;
>>> -
>>> return 0;
>>> }
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
>>> b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
>>> index e0aadcaf6c8b..a9d06ec5d09a 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/uvd_v6_0.c
>>> @@ -535,13 +535,10 @@ static int uvd_v6_0_hw_init(void *handle)
>>> static int uvd_v6_0_hw_fini(void *handle)
>>> {
>>> struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> - struct amdgpu_ring *ring = &adev->uvd.inst->ring;
>>> if (RREG32(mmUVD_STATUS) != 0)
>>> uvd_v6_0_stop(adev);
>>> - ring->sched.ready = false;
>>> -
>>> return 0;
>>> }
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
>>> b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
>>> index 0995378d8263..af3b1c9d3377 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/uvd_v7_0.c
>>> @@ -598,7 +598,6 @@ static int uvd_v7_0_hw_init(void *handle)
>>> static int uvd_v7_0_hw_fini(void *handle)
>>> {
>>> struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> - int i;
>>> if (!amdgpu_sriov_vf(adev))
>>> uvd_v7_0_stop(adev);
>>> @@ -607,12 +606,6 @@ static int uvd_v7_0_hw_fini(void *handle)
>>> DRM_DEBUG("For SRIOV client, shouldn't do anything.\n");
>>> }
>>> - for (i = 0; i < adev->uvd.num_uvd_inst; ++i) {
>>> - if (adev->uvd.harvest_config & (1 << i))
>>> - continue;
>>> - adev->uvd.inst[i].ring.sched.ready = false;
>>> - }
>>> -
>>> return 0;
>>> }
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
>>> b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
>>> index 3fd102efb7af..5e986dea4645 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/vce_v4_0.c
>>> @@ -539,7 +539,6 @@ static int vce_v4_0_hw_init(void *handle)
>>> static int vce_v4_0_hw_fini(void *handle)
>>> {
>>> struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> - int i;
>>> if (!amdgpu_sriov_vf(adev)) {
>>> /* vce_v4_0_wait_for_idle(handle); */
>>> @@ -549,9 +548,6 @@ static int vce_v4_0_hw_fini(void *handle)
>>> DRM_DEBUG("For SRIOV client, shouldn't do anything.\n");
>>> }
>>> - for (i = 0; i < adev->vce.num_rings; i++)
>>> - adev->vce.ring[i].sched.ready = false;
>>> -
>>> return 0;
>>> }
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
>>> b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
>>> index 71f61afdc655..df92c4e1efaa 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v1_0.c
>>> @@ -227,14 +227,11 @@ static int vcn_v1_0_hw_init(void *handle)
>>> static int vcn_v1_0_hw_fini(void *handle)
>>> {
>>> struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> - struct amdgpu_ring *ring = &adev->vcn.inst->ring_dec;
>>> if ((adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG) ||
>>> RREG32_SOC15(VCN, 0, mmUVD_STATUS))
>>> vcn_v1_0_set_powergating_state(adev, AMD_PG_STATE_GATE);
>>> - ring->sched.ready = false;
>>> -
>>> return 0;
>>> }
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
>>> b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
>>> index c387c81f8695..37508277cbdf 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_0.c
>>> @@ -232,21 +232,12 @@ static int vcn_v2_0_hw_init(void *handle)
>>> static int vcn_v2_0_hw_fini(void *handle)
>>> {
>>> struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> - struct amdgpu_ring *ring = &adev->vcn.inst->ring_dec;
>>> - int i;
>>> if ((adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG) ||
>>> (adev->vcn.cur_state != AMD_PG_STATE_GATE &&
>>> RREG32_SOC15(VCN, 0, mmUVD_STATUS)))
>>> vcn_v2_0_set_powergating_state(adev, AMD_PG_STATE_GATE);
>>> - ring->sched.ready = false;
>>> -
>>> - for (i = 0; i < adev->vcn.num_enc_rings; ++i) {
>>> - ring = &adev->vcn.inst->ring_enc[i];
>>> - ring->sched.ready = false;
>>> - }
>>> -
>>> return 0;
>>> }
>>> diff --git a/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
>>> b/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
>>> index 2d64ba1adf99..90a1994857db 100644
>>> --- a/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
>>> +++ b/drivers/gpu/drm/amd/amdgpu/vcn_v2_5.c
>>> @@ -307,25 +307,16 @@ static int vcn_v2_5_hw_init(void *handle)
>>> static int vcn_v2_5_hw_fini(void *handle)
>>> {
>>> struct amdgpu_device *adev = (struct amdgpu_device *)handle;
>>> - struct amdgpu_ring *ring;
>>> - int i, j;
>>> + int i;
>>> for (i = 0; i < adev->vcn.num_vcn_inst; ++i) {
>>> if (adev->vcn.harvest_config & (1 << i))
>>> continue;
>>> - ring = &adev->vcn.inst[i].ring_dec;
>>> if ((adev->pg_flags & AMD_PG_SUPPORT_VCN_DPG) ||
>>> (adev->vcn.cur_state != AMD_PG_STATE_GATE &&
>>> RREG32_SOC15(VCN, i, mmUVD_STATUS)))
>>> vcn_v2_5_set_powergating_state(adev, AMD_PG_STATE_GATE);
>>> -
>>> - ring->sched.ready = false;
>>> -
>>> - for (j = 0; j < adev->vcn.num_enc_rings; ++j) {
>>> - ring = &adev->vcn.inst[i].ring_enc[j];
>>> - ring->sched.ready = false;
>>> - }
>>> }
>>> return 0;
>> _______________________________________________
>> amd-gfx mailing list
>> amd-gfx at lists.freedesktop.org
>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx
>>
>