[PATCH] drm/amdgpu: To get gds, gws and oa from adev->gds.
Deng, Emily
Emily.Deng at amd.com
Tue May 29 09:06:06 UTC 2018
Thanks Christian, I will modify the patch as you suggested.
Best Wishes,
Emily Deng
> -----Original Message-----
> From: Christian König [mailto:ckoenig.leichtzumerken at gmail.com]
> Sent: Tuesday, May 29, 2018 5:01 PM
> To: Deng, Emily <Emily.Deng at amd.com>; amd-gfx at lists.freedesktop.org
> Subject: Re: [PATCH] drm/amdgpu: To get gds, gws and oa from adev->gds.
>
> Am 29.05.2018 um 10:06 schrieb Emily Deng:
> > Now that the per-VM BO feature is enabled, the user mode driver
> > generally won't supply a bo_list; in that case gds_base, gds_size,
> > gws_base, gws_size, oa_base and oa_size won't be set.
>
> Good catch, a few minor notes below.
>
> >
> > Signed-off-by: Emily Deng <Emily.Deng at amd.com>
> > ---
> > drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c | 44 ++++++++++++++++++++--------------
> > 1 file changed, 26 insertions(+), 18 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > index e1756b6..49be8b65 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_cs.c
> > @@ -515,14 +515,17 @@ static int amdgpu_cs_list_validate(struct amdgpu_cs_parser *p,
> > return 0;
> > }
> >
> > -static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
> > - union drm_amdgpu_cs *cs)
> > +static int amdgpu_cs_parser_bos(struct amdgpu_device *adev,
> > +				struct amdgpu_cs_parser *p, union drm_amdgpu_cs *cs)
>
> No need for this change, adev should be available as p->adev as well.
>
> > {
> > struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
> > struct amdgpu_bo_list_entry *e;
> > struct list_head duplicates;
> > unsigned i, tries = 10;
> > int r;
> > + struct amdgpu_bo *gds;
> > + struct amdgpu_bo *gws;
> > + struct amdgpu_bo *oa;
>
> Reverse xmas tree order please, e.g. "int r" should be the last one.
>
> Apart from that it looks good to me,
> Christian.
>
> >
> > INIT_LIST_HEAD(&p->validated);
> >
> > @@ -652,10 +655,11 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
> >
> > amdgpu_cs_report_moved_bytes(p->adev, p->bytes_moved,
> > p->bytes_moved_vis);
> > +
> > if (p->bo_list) {
> > - struct amdgpu_bo *gds = p->bo_list->gds_obj;
> > - struct amdgpu_bo *gws = p->bo_list->gws_obj;
> > - struct amdgpu_bo *oa = p->bo_list->oa_obj;
> > + gds = p->bo_list->gds_obj;
> > + gws = p->bo_list->gws_obj;
> > + oa = p->bo_list->oa_obj;
> > struct amdgpu_vm *vm = &fpriv->vm;
> > unsigned i;
> >
> > @@ -664,19 +668,23 @@ static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
> >
> > 		p->bo_list->array[i].bo_va = amdgpu_vm_bo_find(vm, bo);
> > }
> > + } else {
> > + gds = adev->gds.gds_gfx_bo;
> > + gws = adev->gds.gws_gfx_bo;
> > + oa = adev->gds.oa_gfx_bo;
> > + }
> >
> > - if (gds) {
> > - p->job->gds_base = amdgpu_bo_gpu_offset(gds);
> > - p->job->gds_size = amdgpu_bo_size(gds);
> > - }
> > - if (gws) {
> > - p->job->gws_base = amdgpu_bo_gpu_offset(gws);
> > - p->job->gws_size = amdgpu_bo_size(gws);
> > - }
> > - if (oa) {
> > - p->job->oa_base = amdgpu_bo_gpu_offset(oa);
> > - p->job->oa_size = amdgpu_bo_size(oa);
> > - }
> > + if (gds) {
> > + p->job->gds_base = amdgpu_bo_gpu_offset(gds);
> > + p->job->gds_size = amdgpu_bo_size(gds);
> > + }
> > + if (gws) {
> > + p->job->gws_base = amdgpu_bo_gpu_offset(gws);
> > + p->job->gws_size = amdgpu_bo_size(gws);
> > + }
> > + if (oa) {
> > + p->job->oa_base = amdgpu_bo_gpu_offset(oa);
> > + p->job->oa_size = amdgpu_bo_size(oa);
> > }
> >
> > if (!r && p->uf_entry.robj) {
> > @@ -1233,7 +1241,7 @@ int amdgpu_cs_ioctl(struct drm_device *dev, void *data, struct drm_file *filp)
> > if (r)
> > goto out;
> >
> > - r = amdgpu_cs_parser_bos(&parser, data);
> > + r = amdgpu_cs_parser_bos(adev, &parser, data);
> > if (r) {
> > if (r == -ENOMEM)
> > 			DRM_ERROR("Not enough memory for command submission!\n");
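Taking both review comments together, a v2 could keep the original function signature (reaching the device through p->adev, as Christian notes) and declare the new locals in reverse xmas tree order with "int r" last. A rough sketch of what the revised declarations and fallback branch might look like (not the final v2 patch; the adev->gds field names are taken from the patch above):

```c
static int amdgpu_cs_parser_bos(struct amdgpu_cs_parser *p,
				union drm_amdgpu_cs *cs)
{
	/* Reverse xmas tree: longest declaration first, "int r" last. */
	struct amdgpu_fpriv *fpriv = p->filp->driver_priv;
	struct amdgpu_bo_list_entry *e;
	struct list_head duplicates;
	unsigned i, tries = 10;
	struct amdgpu_bo *gds;
	struct amdgpu_bo *gws;
	struct amdgpu_bo *oa;
	int r;

	/* ... validation unchanged ... */

	if (p->bo_list) {
		gds = p->bo_list->gds_obj;
		gws = p->bo_list->gws_obj;
		oa = p->bo_list->oa_obj;
		/* ... bo_va lookup loop unchanged ... */
	} else {
		/* No bo_list (per-VM BO path): fall back to the
		 * device-global GDS/GWS/OA BOs via p->adev, so no
		 * extra adev parameter is needed. */
		gds = p->adev->gds.gds_gfx_bo;
		gws = p->adev->gds.gws_gfx_bo;
		oa = p->adev->gds.oa_gfx_bo;
	}
	/* ... gds/gws/oa job setup unchanged ... */
}
```

With this shape, the amdgpu_cs_ioctl() call site also stays untouched, since amdgpu_cs_parser_bos() keeps its two-argument signature.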