[patch 23/30] net/mlx5: Use effective interrupt affinity

Thomas Gleixner tglx at linutronix.de
Thu Dec 10 19:25:59 UTC 2020


Using the interrupt affinity mask to check locality does not work
reliably on architectures which support effective affinity masks.

The affinity mask is either the system-wide default or set by user space,
but the architecture can, or even must, reduce it to the effective set.
Checking the affinity mask itself therefore does not tell which CPUs the
interrupt actually targets.
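
For illustration, a minimal sketch (not part of the patch) comparing the
two masks; the helper name show_irq_masks() and the pr_debug() output are
hypothetical, and a valid Linux IRQ number is assumed:

#include <linux/cpumask.h>
#include <linux/interrupt.h>
#include <linux/printk.h>

/* Hypothetical debug helper: on architectures with effective affinity
 * (e.g. x86 with single-target delivery), the effective mask is a
 * subset of the requested affinity mask, so only the effective mask
 * identifies the CPUs the interrupt can actually hit.
 */
static void show_irq_masks(unsigned int irq)
{
	const struct cpumask *aff = irq_get_affinity_mask(irq);
	const struct cpumask *eff = irq_get_effective_affinity_mask(irq);

	pr_debug("irq %u: affinity %*pbl, effective %*pbl\n",
		 irq, cpumask_pr_args(aff), cpumask_pr_args(eff));
}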

Signed-off-by: Thomas Gleixner <tglx at linutronix.de>
Cc: Saeed Mahameed <saeedm at nvidia.com>
Cc: Leon Romanovsky <leon at kernel.org>
Cc: "David S. Miller" <davem at davemloft.net>
Cc: Jakub Kicinski <kuba at kernel.org>
Cc: netdev at vger.kernel.org
Cc: linux-rdma at vger.kernel.org
---
 drivers/net/ethernet/mellanox/mlx5/core/en_main.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -1998,7 +1998,7 @@ static int mlx5e_open_channel(struct mlx
 	c->num_tc   = params->num_tc;
 	c->xdp      = !!params->xdp_prog;
 	c->stats    = &priv->channel_stats[ix].ch;
-	c->aff_mask = irq_get_affinity_mask(irq);
+	c->aff_mask = irq_get_effective_affinity_mask(irq);
 	c->lag_port = mlx5e_enumerate_lag_port(priv->mdev, ix);
 
 	netif_napi_add(netdev, &c->napi, mlx5e_napi_poll, 64);
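
For context on what the change affects: the driver uses c->aff_mask in
its NAPI poll path to detect whether the softirq still runs local to the
interrupt. A minimal sketch of that kind of locality check, following the
shape of the mlx5 helper (a reconstruction, not quoted verbatim from the
driver):

/* Sketch of the consumer side: with this patch, c->aff_mask points at
 * the effective affinity mask, so the test below reflects the CPUs the
 * interrupt can actually target. Runs in softirq context, hence
 * smp_processor_id() is safe here.
 */
static bool mlx5e_channel_no_affinity_change(struct mlx5e_channel *c)
{
	return cpumask_test_cpu(smp_processor_id(), c->aff_mask);
}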


