[PATCH v2 1/2] drm/xe/pf: Create a link between PF and VF devices
Piotr Piórkowski
piotr.piorkowski at intel.com
Fri Feb 21 08:25:37 UTC 2025
Satyanarayana K V P <satyanarayana.k.v.p at intel.com> wrote on Fri [2025-Feb-21 11:27:21 +0530]:
> When both PF and VF devices are enabled on the host, they
> resume simultaneously during system resume.
>
> However, the PF must finish provisioning the VFs before any
> VF can successfully resume.
>
> Establish a parent-child device link between the PF and VF
> devices to ensure the correct order of resumption.
>
> v2:
> - Added a helper function to get VF pci_dev.
> - Updated xe_sriov_notice() to xe_sriov_warn() if vf pci_dev
> is not found.
>
> Signed-off-by: Satyanarayana K V P <satyanarayana.k.v.p at intel.com>
> Cc: Michał Wajdeczko <michal.wajdeczko at intel.com>
> Cc: Michał Winiarski <michal.winiarski at intel.com>
> Cc: Piotr Piórkowski <piotr.piorkowski at intel.com>
> ---
> drivers/gpu/drm/xe/xe_pci_sriov.c | 44 +++++++++++++++++++++++++++++++
> drivers/gpu/drm/xe/xe_pci_sriov.h | 6 +++++
> 2 files changed, 50 insertions(+)
>
> diff --git a/drivers/gpu/drm/xe/xe_pci_sriov.c b/drivers/gpu/drm/xe/xe_pci_sriov.c
> index aaceee748287..6d2e87d1c6ba 100644
> --- a/drivers/gpu/drm/xe/xe_pci_sriov.c
> +++ b/drivers/gpu/drm/xe/xe_pci_sriov.c
> @@ -62,6 +62,48 @@ static void pf_reset_vfs(struct xe_device *xe, unsigned int num_vfs)
> xe_gt_sriov_pf_control_trigger_flr(gt, n);
> }
>
> +struct pci_dev *xe_pci_pf_get_vf_dev(struct pci_dev *pdev, unsigned int vf_id)
Perhaps it would be good to add a kernel-doc comment to this function.
IMO, it is generally good practice to document functions that we make
available outside the source file.
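Something along these lines, for example (just a sketch, exact wording up to you):

/**
 * xe_pci_pf_get_vf_dev() - Get the pci_dev of a given VF.
 * @pdev: the PF &pci_dev
 * @vf_id: the VF number (zero-based)
 *
 * This function is intended to be called on the PF only.
 *
 * Return: the VF &pci_dev on success, or NULL if not found.
 *         The caller must release the reference with pci_dev_put().
 */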
> +{
> + /* caller must use pci_dev_put() */
> + return pci_get_domain_bus_and_slot(pci_domain_nr(pdev->bus),
> + pdev->bus->number,
> + pci_iov_virtfn_devfn(pdev, vf_id));
> +}
> +
> +static void pf_link_vfs(struct xe_device *xe, int num_vfs)
> +{
> + struct pci_dev *pdev_pf = to_pci_dev(xe->drm.dev);
> + struct device_link *link;
> + struct pci_dev *pdev_vf;
> + unsigned int n;
> +
> + for (n = 1; n <= num_vfs; n++) {
> + pdev_vf = xe_pci_pf_get_vf_dev(pdev_pf, n - 1);
> +
> + if (!pdev_vf) {
> + xe_sriov_warn(xe, "Can't link PF and VF%d due to missing pci dev..\n", n);
NIT: I would change the log a bit: "Cannot link PF and VF%d due to missing VF PCI dev\n"
> + continue;
> + }
> +
> + /*
> + * When both PF and VF devices are enabled on the host, they
> + * resume in parallel during system resume.
> + *
> + * But the PF has to complete VF provisioning first to allow
> + * any VF to resume successfully.
> + *
> + * Create a parent-child device link between the PF and VF
> + * devices that will enforce the correct resume order.
> + */
> + link = device_link_add(&pdev_vf->dev, &pdev_pf->dev,
> + DL_FLAG_AUTOREMOVE_CONSUMER);
> + if (!link)
> + xe_sriov_notice(xe, "Failed linking PF and VF%u\n", n);
> +
> + pci_dev_put(pdev_vf);
> + }
> +}
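BTW, since DL_FLAG_AUTOREMOVE_CONSUMER is used, the links should be dropped
automatically when the VF devices are removed, so pf_disable_vfs() should not
need explicit cleanup. If an explicit counterpart were ever wanted, I imagine
something like this would do (untested sketch, pf_unlink_vfs() is a
hypothetical name, mirroring pf_link_vfs() above):

static void pf_unlink_vfs(struct xe_device *xe, int num_vfs)
{
	struct pci_dev *pdev_pf = to_pci_dev(xe->drm.dev);
	struct pci_dev *pdev_vf;
	unsigned int n;

	for (n = 1; n <= num_vfs; n++) {
		pdev_vf = xe_pci_pf_get_vf_dev(pdev_pf, n - 1);
		if (!pdev_vf)
			continue;

		/* drop the consumer-supplier link created in pf_link_vfs() */
		device_link_remove(&pdev_vf->dev, &pdev_pf->dev);
		pci_dev_put(pdev_vf);
	}
}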
> +
> static int pf_enable_vfs(struct xe_device *xe, int num_vfs)
> {
> struct pci_dev *pdev = to_pci_dev(xe->drm.dev);
> @@ -92,6 +134,8 @@ static int pf_enable_vfs(struct xe_device *xe, int num_vfs)
> if (err < 0)
> goto failed;
>
> + pf_link_vfs(xe, num_vfs);
> +
> xe_sriov_info(xe, "Enabled %u of %u VF%s\n",
> num_vfs, total_vfs, str_plural(total_vfs));
> return num_vfs;
> diff --git a/drivers/gpu/drm/xe/xe_pci_sriov.h b/drivers/gpu/drm/xe/xe_pci_sriov.h
> index c76dd0d90495..f66b68a25b20 100644
> --- a/drivers/gpu/drm/xe/xe_pci_sriov.h
> +++ b/drivers/gpu/drm/xe/xe_pci_sriov.h
> @@ -10,11 +10,17 @@ struct pci_dev;
>
> #ifdef CONFIG_PCI_IOV
> int xe_pci_sriov_configure(struct pci_dev *pdev, int num_vfs);
> +struct pci_dev *xe_pci_pf_get_vf_dev(struct pci_dev *pdev, unsigned int vf_id);
> #else
> static inline int xe_pci_sriov_configure(struct pci_dev *pdev, int num_vfs)
> {
> return 0;
> }
> +
> +static inline struct pci_dev *xe_pci_pf_get_vf_dev(struct pci_dev *pdev, unsigned int vf_id)
> +{
> + return NULL;
> +}
> #endif
Other than that, it looks good to me:
Reviewed-by: Piotr Piorkowski <piotr.piorkowski at intel.com>
>
> #endif
> --
> 2.35.3
>