[Intel-gfx] ✗ Fi.CI.BAT: failure for series starting with [1/2] drm/i915/guc: Don't enable GuC when vGPU is active

Tvrtko Ursulin tvrtko.ursulin at linux.intel.com
Wed Jan 17 11:18:15 UTC 2018


On 17/01/2018 02:36, Du, Changbin wrote:
> On Tue, Jan 16, 2018 at 11:17:39AM +0100, Michal Wajdeczko wrote:
>> On Tue, 16 Jan 2018 10:53:47 +0100, Joonas Lahtinen
>> <joonas.lahtinen at linux.intel.com> wrote:
>>
>>> On Mon, 2018-01-15 at 13:10 +0200, Tomi Sarvela wrote:
>>>> On 15/01/18 12:28, Zhenyu Wang wrote:
>>>>> On 2018.01.15 12:07:28 +0200, Joonas Lahtinen wrote:
>>>>>> On Fri, 2018-01-12 at 14:08 +0800, Du, Changbin wrote:
>>>>>>> On Fri, Jan 12, 2018 at 11:32:30AM +0530, Sagar Arun Kamble wrote:
>>>>>>>> Does skl-gvtdvm not have vGPU active?
>>>>>>>>
>>>>>>>> It has flag X86_FEATURE_HYPERVISOR set, however that flag might
>>>>>>>> be set on the host too, so we rely on intel_vgpu_active().
>>>>>>>>
>>>>>>>
>>>>>>> Do you mean flag X86_FEATURE_HYPERVISOR is set on the host, too?
>>>>>>> This is weird, since this flag indicates the OS is running on a
>>>>>>> hypervisor.
>>>>>>
>>>>>> + CI folks and Zhenyu
>>>>>>
>>>>>> Somehow, magically, the virtual machine seems to start skipping
>>>>>> all the tests when GuC is disabled?
>>>>>>
>>>>>> Has somebody actually validated that the test results are valid
>>>>>> for the virtual machine? Or is this a one-off CI quirk?
>>>>>
>>>>> Are these tests really run in VM with GVT-g enabled on host?
>>>>
>>>> These tests are run on a VM running on GVT-d (as the name implies), not GVT-g.
>>>
>>> I still don't understand how explicitly disabling GuC could make all
>>> the tests skip on a machine that didn't use GuC to begin with. There
>>> must be something wrong in the initialization code.
>>>
>>> By my logic, that intel_vgpu_active() check should not trigger in
>>> GVT-d (because we don't have a virtual GPU, we have the real deal,
>>> just without stolen memory etc.), so I'm a bit baffled.
>>
>> True. This intel_vgpu_active() check added by Sagar is not active in
>> these scenarios, so GuC stays enabled on that platform (the default
>> from auto):
>>
>> -	param(int, enable_guc, 0) \
>> +	param(int, enable_guc, -1) \
>>
>> [drm:intel_uc_sanitize_options [i915]] enable_guc=3 (submission:yes huc:yes)
>>
>> but since the i915_memcpy_from_wc() check still fails when running
>> under a hypervisor (introduced by "drm/i915: Do not enable movntdqa
>> optimization in hypervisor guest"), initialization of the GuC log fails:
>>
>> WARN_ON(!i915_memcpy_from_wc(((void *)0), ((void *)0), 0))
>> WARNING: CPU: 0 PID: 228 at drivers/gpu/drm/i915/intel_guc_log.c:527
>> intel_guc_log_create
>>
>> and that is treated as a driver load error (as we no longer support a
>> silent fallback from GuC to execlists if GuC was selected via the
>> auto (-1) or explicit load (1) modparam option).
>>
>> On the other mail thread there was a proposal to make the GuC log
>> optional when running under a hypervisor and disable it, but in my
>> opinion that is not a solution, just a short-term fix: we want to keep
>> the GuC log enabled, since it works as-is under other hypervisors.
>>
>> Michal
> 
> To enable GuC logging in a hypervisor guest, I think the correct solution
> is to fall back to memcpy() when i915_has_memcpy_from_wc() reports the
> optimization is unavailable. At least for KVM, this change is needed to
> support GPU passthrough.

Perhaps a stupid question - but we are not talking about a scheme where
multiple guests could read the same GuC log, are we?

Regards,

Tvrtko


