[PATCH] drm/xe: Map both mem.kernel_bb_pool and usm.bb_pool
Matthew Brost
matthew.brost at intel.com
Fri Feb 2 17:53:35 UTC 2024
On Fri, Feb 02, 2024 at 09:58:32AM -0700, Summers, Stuart wrote:
> On Thu, 2024-02-01 at 19:34 -0800, Matthew Brost wrote:
> > For integrated devices we need to map both mem.kernel_bb_pool and
> > usm.bb_pool to be able to run batches from both pools.
> >
> > Fixes: a682b6a42d4d ("drm/xe: Support device page faults on integrated platforms")
> > Signed-off-by: Matthew Brost <matthew.brost at intel.com>
> > ---
> > drivers/gpu/drm/xe/xe_gt.c | 5 ++++-
> > drivers/gpu/drm/xe/xe_migrate.c | 23 ++++++++++++++++++-----
> > 2 files changed, 22 insertions(+), 6 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/xe/xe_gt.c b/drivers/gpu/drm/xe/xe_gt.c
> > index 675a2927a19e..295cba1c688f 100644
> > --- a/drivers/gpu/drm/xe/xe_gt.c
> > +++ b/drivers/gpu/drm/xe/xe_gt.c
> > @@ -456,7 +456,10 @@ static int all_fw_domain_init(struct xe_gt *gt)
> > * USM has its only SA pool to non-block behind user operations
> > */
> > if (gt_to_xe(gt)->info.has_usm) {
> > - gt->usm.bb_pool = xe_sa_bo_manager_init(gt_to_tile(gt), SZ_1M, 16);
> > + struct xe_device *xe = gt_to_xe(gt);
> > +
> > + gt->usm.bb_pool = xe_sa_bo_manager_init(gt_to_tile(gt),
> > + IS_DGFX(xe) ? SZ_1M : SZ_512, 16);
>
> Would it be better to use a modparam for this size/offset? What if we
> decide to change it at some point?
>
That doesn't seem necessary.
> Also is this supposed to be SZ_512K?
>
Yep, good catch. I didn't have an LNL to test this on.
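i.e. a sketch of how the corrected hunk would read with SZ_512K (untested
on LNL, as noted):

	if (gt_to_xe(gt)->info.has_usm) {
		struct xe_device *xe = gt_to_xe(gt);

		/* 512K suballocator pool on integrated, 1M on discrete */
		gt->usm.bb_pool = xe_sa_bo_manager_init(gt_to_tile(gt),
							IS_DGFX(xe) ? SZ_1M : SZ_512K, 16);
		if (IS_ERR(gt->usm.bb_pool)) {
			err = PTR_ERR(gt->usm.bb_pool);
			goto err_force_wake;
		}
	}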
> > if (IS_ERR(gt->usm.bb_pool)) {
> > err = PTR_ERR(gt->usm.bb_pool);
> > goto err_force_wake;
> > diff --git a/drivers/gpu/drm/xe/xe_migrate.c b/drivers/gpu/drm/xe/xe_migrate.c
> > index 9ab004871f9a..7465f8d14028 100644
> > --- a/drivers/gpu/drm/xe/xe_migrate.c
> > +++ b/drivers/gpu/drm/xe/xe_migrate.c
> > @@ -180,11 +180,6 @@ static int xe_migrate_prepare_vm(struct xe_tile *tile, struct xe_migrate *m,
> > if (!IS_DGFX(xe)) {
> > /* Write out batch too */
> > m->batch_base_ofs = NUM_PT_SLOTS * XE_PAGE_SIZE;
> > - if (xe->info.has_usm) {
> > - batch = tile->primary_gt->usm.bb_pool->bo;
> > - m->usm_batch_base_ofs = m->batch_base_ofs;
> > - }
> > -
> > for (i = 0; i < batch->size;
> > i += vm->flags & XE_VM_FLAG_64K ? XE_64K_PAGE_SIZE :
> > XE_PAGE_SIZE) {
> > @@ -195,6 +190,24 @@ static int xe_migrate_prepare_vm(struct xe_tile *tile, struct xe_migrate *m,
> > entry);
> > level++;
> > }
> > + if (xe->info.has_usm) {
> > + xe_tile_assert(tile, batch->size == SZ_1M);
> > +
> > + batch = tile->primary_gt->usm.bb_pool->bo;
> > + m->usm_batch_base_ofs = m->batch_base_ofs + SZ_1M;
>
> Same here.
>
Yep.
> > + xe_tile_assert(tile, batch->size == SZ_512);
>
> And here.
>
Yep.
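So the fixed-up chunk in xe_migrate_prepare_vm() would look roughly like
this (just a sketch, assuming mem.kernel_bb_pool stays at 1M so the USM
pool is still mapped immediately after it):

	if (xe->info.has_usm) {
		/* mem.kernel_bb_pool (1M) was mapped just above */
		xe_tile_assert(tile, batch->size == SZ_1M);

		/* map usm.bb_pool (512K on integrated) right after it */
		batch = tile->primary_gt->usm.bb_pool->bo;
		m->usm_batch_base_ofs = m->batch_base_ofs + SZ_1M;
		xe_tile_assert(tile, batch->size == SZ_512K);

		/* then the same PTE write loop as above */
	}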
Matt
> Thanks,
> Stuart
>
> > +
> > + for (i = 0; i < batch->size;
> > + i += vm->flags & XE_VM_FLAG_64K ? XE_64K_PAGE_SIZE :
> > + XE_PAGE_SIZE) {
> > + entry = vm->pt_ops->pte_encode_bo(batch, i,
> > + pat_index, 0);
> > +
> > + xe_map_wr(xe, &bo->vmap, map_ofs + level * 8, u64,
> > + entry);
> > + level++;
> > + }
> > + }
> > } else {
> > u64 batch_addr = xe_bo_addr(batch, 0, XE_PAGE_SIZE);
> >
>