[PATCH v1 3/4] Drivers: hv: Always reserve framebuffer region for Gen1 VMs

Michael Kelley (LINUX) mikelley at microsoft.com
Tue Aug 23 14:03:07 UTC 2022


From: Vitaly Kuznetsov <vkuznets at redhat.com> Sent: Thursday, August 18, 2022 7:25 AM
> 
> vmbus_reserve_fb() tries to reserve the framebuffer region iff
> screen_info.lfb_base is set. Gen2 VMs seem to have it set by the EFI fb,
> but on Gen1 VMs it is observed to be zero.

FWIW, in a Gen1 VM, whether screen_info.lfb_base is set depends on what
grub sets up, which in turn seems to depend on the gfxpayload= setting
in grub.cfg and on the particular grub version.  There are cases where
it is observed to be zero, but in our experiments it is not zero in
all cases.

In a Gen2 VM, there's an edge case where the frame buffer has been
moved, and a kexec() kernel may see the moved location instead of
what was set by EFI.  See
https://lore.kernel.org/all/20201014092429.1415040-1-kasong@redhat.com/

I think these points may be worth recording in the commit message here
so that there's an accurate record for the future.  The Hyper-V and grub
idiosyncrasies make this a very tricky area.

> In fact, we do not need to
> rely on some other video driver setting it correctly, as Gen1 VMs have
> a dedicated PCI device to look at. Both the Hyper-V DRM and Hyper-V FB
> drivers already get the framebuffer base from this PCI device, so the
> VMBus driver can do the same trick.
> 
> Check for the presence of the legacy PCI video device and reserve the
> whole region for the framebuffer.
> 
> Signed-off-by: Vitaly Kuznetsov <vkuznets at redhat.com>
> ---
>  drivers/hv/vmbus_drv.c | 47 +++++++++++++++++++++++++++++-------------
>  1 file changed, 33 insertions(+), 14 deletions(-)
> 
> diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
> index 547ae334e5cd..6edaeefa2c3c 100644
> --- a/drivers/hv/vmbus_drv.c
> +++ b/drivers/hv/vmbus_drv.c
> @@ -35,6 +35,7 @@
>  #include <linux/kernel.h>
>  #include <linux/syscore_ops.h>
>  #include <linux/dma-map-ops.h>
> +#include <linux/pci.h>
>  #include <clocksource/hyperv_timer.h>
>  #include "hyperv_vmbus.h"
> 
> @@ -2258,26 +2259,44 @@ static int vmbus_acpi_remove(struct acpi_device *device)
> 
>  static void vmbus_reserve_fb(void)
>  {
> -	int size;
> +	resource_size_t start = 0, size;
> +	struct pci_dev *pdev;
> +
> +	if (efi_enabled(EFI_BOOT)) {
> +		/* Gen2 VM: get FB base from EFI framebuffer */
> +		start = screen_info.lfb_base;
> +		size = max_t(__u32, screen_info.lfb_size, 0x800000);
> +	} else {
> +		/* Gen1 VM: get FB base from PCI */
> +		pdev = pci_get_device(PCI_VENDOR_ID_MICROSOFT,
> +				      PCI_DEVICE_ID_HYPERV_VIDEO, NULL);
> +		if (!pdev)
> +			return;
> +
> +		if (!(pdev->resource[0].flags & IORESOURCE_MEM))
> +			return;

Doesn't this error exit need a pci_dev_put(pdev)?  Or maybe reverse
the test like this, so that the later check for !start does the error exit:

		if (pdev->resource[0].flags & IORESOURCE_MEM) {
			start = pci_resource_start(pdev, 0);
			size = pci_resource_len(pdev, 0);
		}
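
Or, if keeping the early return is preferred, drop the reference before
returning.  Untested sketch, using only what's already in the patch:

		if (!(pdev->resource[0].flags & IORESOURCE_MEM)) {
			pci_dev_put(pdev);
			return;
		}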

> +
> +		start = pci_resource_start(pdev, 0);
> +		size = pci_resource_len(pdev, 0);
> +
> +		/*
> +		 * Release the PCI device so hyperv_drm or hyperv_fb driver can
> +		 * grab it later.
> +		 */
> +		pci_dev_put(pdev);
> +	}
> +
> +	if (!start)
> +		return;
> +
>  	/*
>  	 * Make a claim for the frame buffer in the resource tree under the
>  	 * first node, which will be the one below 4GB.  The length seems to
>  	 * be underreported, particularly in a Generation 1 VM.  So start out
>  	 * reserving a larger area and make it smaller until it succeeds.
>  	 */
> -
> -	if (screen_info.lfb_base) {
> -		if (efi_enabled(EFI_BOOT))
> -			size = max_t(__u32, screen_info.lfb_size, 0x800000);
> -		else
> -			size = max_t(__u32, screen_info.lfb_size, 0x4000000);
> -
> -		for (; !fb_mmio && (size >= 0x100000); size >>= 1) {
> -			fb_mmio = __request_region(hyperv_mmio,
> -						   screen_info.lfb_base, size,
> -						   fb_mmio_name, 0);
> -		}
> -	}
> +	for (; !fb_mmio && (size >= 0x100000); size >>= 1)
> +		fb_mmio = __request_region(hyperv_mmio, start, size, fb_mmio_name, 0);
> }
> 
>  /**
> --
> 2.37.1

