[PATCH 2/2] drm/i915: Use Transparent Hugepages when IOMMU is enabled
Daniel Vetter
daniel at ffwll.ch
Tue Sep 7 08:42:49 UTC 2021
On Fri, Sep 03, 2021 at 01:47:52PM +0100, Tvrtko Ursulin wrote:
>
> On 29/07/2021 15:06, Daniel Vetter wrote:
> > On Thu, Jul 29, 2021 at 3:34 PM Tvrtko Ursulin
> > <tvrtko.ursulin at linux.intel.com> wrote:
> > >
> > > From: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
> > >
> > > Usage of Transparent Hugepages was disabled in 9987da4b5dcf
> > > ("drm/i915: Disable THP until we have a GPU read BW W/A"), but since it
> > > appears the majority of performance regressions reported with an enabled
> > > IOMMU can be almost eliminated by turning them on, let's just do that.
> > >
> > > To err on the side of safety we keep the current default in cases where
> > > the IOMMU is not active, and only when it is do we default to the
> > > "huge=within_size" mode. Although there would probably be wins to
> > > enabling them throughout, that would require more extensive testing
> > > across benchmarks and platforms.
> > >
> > > With the patch and IOMMU enabled my local testing on a small Skylake part
> > > shows OglVSTangent regression being reduced from ~14% (IOMMU on versus
> > > IOMMU off) to ~2% (same comparison but with THP on).
> > >
> > > v2:
> > > * Add Kconfig dependency to transparent hugepages and some help text.
> > > * Move to helper for easier handling of kernel build options.
> > >
> > > v3:
> > > * Drop Kconfig. (Daniel)
> > >
> > > References: b901bb89324a ("drm/i915/gemfs: enable THP")
> > > References: 9987da4b5dcf ("drm/i915: Disable THP until we have a GPU read BW W/A")
> > > References: https://gitlab.freedesktop.org/drm/intel/-/issues/430
> > > Co-developed-by: Chris Wilson <chris at chris-wilson.co.uk>
> > > Signed-off-by: Chris Wilson <chris at chris-wilson.co.uk>
> > > Cc: Joonas Lahtinen <joonas.lahtinen at linux.intel.com>
> > > Cc: Matthew Auld <matthew.auld at intel.com>
> > > Cc: Eero Tamminen <eero.t.tamminen at intel.com>
> > > Cc: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
> > > Cc: Rodrigo Vivi <rodrigo.vivi at intel.com>
> > > Cc: Daniel Vetter <daniel at ffwll.ch>
> > > Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin at intel.com>
> > > Reviewed-by: Rodrigo Vivi <rodrigo.vivi at intel.com> # v1
> >
> > On both patches: Acked-by: Daniel Vetter <daniel.vetter at ffwll.ch>
>
> Eero's testing results at
> https://gitlab.freedesktop.org/drm/intel/-/issues/430 are looking good;
> they seem to show this to be a net win for at least Gen9 and Gen12
> platforms.
>
> Is the ack enough to merge in this case, or should I look for an r-b as well?
Since you're back to de facto v1 with the 2nd patch I think you have a full
r-b already. So more than enough, I think.
Please do record the relative perf numbers from Eero in that issue in the
commit message so that we have that on the git log record too. It's easier
to find there than following the link and finding the right comment in the
issue.
Thanks, Daniel
>
> Regards,
>
> Tvrtko
>
> > > ---
> > > drivers/gpu/drm/i915/gem/i915_gemfs.c | 22 +++++++++++++++++++---
> > > 1 file changed, 19 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/i915/gem/i915_gemfs.c b/drivers/gpu/drm/i915/gem/i915_gemfs.c
> > > index 5e6e8c91ab38..dbdbdc344d87 100644
> > > --- a/drivers/gpu/drm/i915/gem/i915_gemfs.c
> > > +++ b/drivers/gpu/drm/i915/gem/i915_gemfs.c
> > > @@ -6,7 +6,6 @@
> > >
> > > #include <linux/fs.h>
> > > #include <linux/mount.h>
> > > -#include <linux/pagemap.h>
> > >
> > > #include "i915_drv.h"
> > > #include "i915_gemfs.h"
> > > @@ -15,6 +14,7 @@ int i915_gemfs_init(struct drm_i915_private *i915)
> > > {
> > > struct file_system_type *type;
> > > struct vfsmount *gemfs;
> > > + char *opts;
> > >
> > > type = get_fs_type("tmpfs");
> > > if (!type)
> > > @@ -26,10 +26,26 @@ int i915_gemfs_init(struct drm_i915_private *i915)
> > > *
> > > * One example, although it is probably better with a per-file
> > > * control, is selecting huge page allocations ("huge=within_size").
> > > - * Currently unused due to bandwidth issues (slow reads) on Broadwell+.
> > > + * However, we only do so to offset the overhead of iommu lookups
> > > + * due to bandwidth issues (slow reads) on Broadwell+.
> > > */
> > >
> > > - gemfs = kern_mount(type);
> > > + opts = NULL;
> > > + if (intel_vtd_active()) {
> > > + if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
> > > + static char huge_opt[] = "huge=within_size"; /* r/w */
> > > +
> > > + opts = huge_opt;
> > > + drm_info(&i915->drm,
> > > + "Transparent Hugepage mode '%s'\n",
> > > + opts);
> > > + } else {
> > > + drm_notice(&i915->drm,
> > > + "Transparent Hugepage support is recommended for optimal performance when IOMMU is enabled!\n");
> > > + }
> > > + }
> > > +
> > > + gemfs = vfs_kern_mount(type, SB_KERNMOUNT, type->name, opts);
> > > if (IS_ERR(gemfs))
> > > return PTR_ERR(gemfs);
> > >
> > > --
> > > 2.30.2
> > >
> >
> >
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch