<div class="moz-cite-prefix">amdgpu_bo_create() doesn't necessarily
allocate anything, it just creates the BO structure.<br>
<br>
The backing memory for GTT and CPU domain is only allocated on
first use, only VRAM is allocated directly.<br>
<br>
So just call amdgpu_bo_create() with AMDGPU_GEM_DOMAIN_CPU and
then the pin with AMDGPU_GEM_DOMAIN_VRAM and your desired offset.<br>
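
Something like the sketch below (untested and written from memory; the exact amdgpu_bo_create() argument list varies between kernel versions, and adev, size and offset are placeholders for whatever the GIM hands you):

	struct amdgpu_bo *bo;
	u64 gpu_addr;
	int r;

	/* Only creates the BO structure with a CPU placement;
	 * nothing is allocated in VRAM (or GTT) at this point. */
	r = amdgpu_bo_create(adev, size, PAGE_SIZE, true,
			     AMDGPU_GEM_DOMAIN_CPU, 0,
			     NULL, NULL, 0, &bo);
	if (r)
		return r;

	r = amdgpu_bo_reserve(bo, false);
	if (r)
		goto error_unref;

	/* Restrict the placement window to [offset, offset + size) so the
	 * VRAM backing store ends up at the requested offset. */
	r = amdgpu_bo_pin_restricted(bo, AMDGPU_GEM_DOMAIN_VRAM,
				     offset, offset + size, &gpu_addr);
	amdgpu_bo_unreserve(bo);
	if (r)
		goto error_unref;

	return 0;

error_unref:
	amdgpu_bo_unref(&bo);
	return r;

That way the offset restriction is only applied at pin time, and there is no earlier VRAM allocation that would need to be moved around first.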

Regards,
Christian.

On 13.09.2017 11:14, Liu, Monk wrote:
<blockquote type="cite"
cite="mid:BLUPR12MB0449555D451CD4F6061AB3FB846E0@BLUPR12MB0449.namprd12.prod.outlook.com">
<meta http-equiv="Content-Type" content="text/html;
charset=windows-1252">
<meta name="Generator" content="Microsoft Exchange Server">
<!-- converted from text -->
<style><!-- .EmailQuote { margin-left: 1pt; padding-left: 4pt; border-left: #800000 2px solid; } --></style>
<meta content="text/html; charset=UTF-8">
<style type="text/css" style="">
<!--
p
{margin-top:0;
margin-bottom:0}
-->
</style>
<div dir="ltr">
<div id="x_divtagdefaultwrapper" dir="ltr"
style="font-size:12pt; color:#000000;
font-family:Calibri,Helvetica,sans-serif">
> SR-IOV needs to reserve memory at an offset set by the GIM/hypervisor side, but I'm not sure how to do that properly. Currently we call bo_create to allocate a VRAM BO and then call pin_restricted with "offset" as the "min" parameter and "offset + size" as "max".
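>
> Roughly like this (pseudo code from memory, not the exact call site; reserve/unreserve and error handling omitted, and adev/size/offset stand in for our actual variables):
>
>	struct amdgpu_bo *bo;
>	u64 gpu_addr;
>	int r;
>
>	/* allocates a real VRAM BO at whatever offset TTM picks */
>	r = amdgpu_bo_create(adev, size, PAGE_SIZE, true,
>			     AMDGPU_GEM_DOMAIN_VRAM, 0,
>			     NULL, NULL, 0, &bo);
>
>	/* then try to force it to the GIM-provided offset */
>	r = amdgpu_bo_pin_restricted(bo, AMDGPU_GEM_DOMAIN_VRAM,
>				     offset, offset + size, &gpu_addr);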
>
> Frankly speaking, that approach feels odd to me (unless the new offset happens to equal the original offset from bo_create).
>
> Because the original GPU offset (from bo_create) is different from the new "offset" provided by GIM, what will TTM/DRM do with the range <original offset, new offset> after we pin the BO to <new offset, new offset + size>?
>
> BR Monk
>
> ________________________________
> From: amd-gfx <amd-gfx-bounces@lists.freedesktop.org> on behalf of Deucher, Alexander <Alexander.Deucher@amd.com>
> Sent: Tuesday, September 12, 2017 11:59:35 PM
> To: 'Christian König'; amd-gfx@lists.freedesktop.org
> Subject: RE: [PATCH 4/5] drm/amdgpu: cleanup amdgpu_bo_pin_restricted
>
<div class="PlainText">> -----Original Message-----<br>
> From: amd-gfx [<a
href="mailto:amd-gfx-bounces@lists.freedesktop.org"
moz-do-not-send="true">mailto:amd-gfx-bounces@lists.freedesktop.org</a>]
On Behalf<br>
> Of Christian König<br>
> Sent: Tuesday, September 12, 2017 5:09 AM<br>
> To: <a class="moz-txt-link-abbreviated" href="mailto:amd-gfx@lists.freedesktop.org">amd-gfx@lists.freedesktop.org</a><br>
> Subject: [PATCH 4/5] drm/amdgpu: cleanup
amdgpu_bo_pin_restricted<br>
> <br>
> From: Christian König <a class="moz-txt-link-rfc2396E" href="mailto:christian.koenig@amd.com"><christian.koenig@amd.com></a><br>
> <br>
> Nobody is using the min/max interface any more.<br>
> <br>
> Signed-off-by: Christian König
<a class="moz-txt-link-rfc2396E" href="mailto:christian.koenig@amd.com"><christian.koenig@amd.com></a><br>
>
> I'm not sure it's a good idea to get rid of this. I can see a need to reserve memory at specific offsets in memory. Specifically I think SR-IOV will be placing structures in memory to communicate configuration details from the host to the guest. Also, we should be reserving the vbios scratch area, but we don't currently.
>
> Alex
>
>> ---
>>  drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 39 +++++-------------------------
>>  drivers/gpu/drm/amd/amdgpu/amdgpu_object.h |  3 ---
>>  2 files changed, 6 insertions(+), 36 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>> index 726a662..8a8add3 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
>> @@ -629,20 +629,15 @@ void amdgpu_bo_unref(struct amdgpu_bo **bo)
>>  	*bo = NULL;
>>  }
>>
>> -int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 domain,
>> -			     u64 min_offset, u64 max_offset,
>> -			     u64 *gpu_addr)
>> +int amdgpu_bo_pin(struct amdgpu_bo *bo, u32 domain, u64 *gpu_addr)
>>  {
>>  	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
>> +	unsigned lpfn;
>>  	int r, i;
>> -	unsigned fpfn, lpfn;
>>
>>  	if (amdgpu_ttm_tt_get_usermm(bo->tbo.ttm))
>>  		return -EPERM;
>>
>> -	if (WARN_ON_ONCE(min_offset > max_offset))
>> -		return -EINVAL;
>> -
>>  	/* A shared bo cannot be migrated to VRAM */
>>  	if (bo->prime_shared_count && (domain == AMDGPU_GEM_DOMAIN_VRAM))
>>  		return -EINVAL;
>> @@ -657,12 +652,6 @@ int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 domain,
>>  		if (gpu_addr)
>>  			*gpu_addr = amdgpu_bo_gpu_offset(bo);
>>
>> -		if (max_offset != 0) {
>> -			u64 domain_start = bo->tbo.bdev->man[mem_type].gpu_offset;
>> -			WARN_ON_ONCE(max_offset <
>> -				     (amdgpu_bo_gpu_offset(bo) - domain_start));
>> -		}
>> -
>>  		return 0;
>>  	}
>>
>> @@ -671,23 +660,12 @@ int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 domain,
>>  	for (i = 0; i < bo->placement.num_placement; i++) {
>>  		/* force to pin into visible video ram */
>>  		if ((bo->placements[i].flags & TTM_PL_FLAG_VRAM) &&
>> -		    !(bo->flags & AMDGPU_GEM_CREATE_NO_CPU_ACCESS) &&
>> -		    (!max_offset || max_offset >
>> -		     adev->mc.visible_vram_size)) {
>> -			if (WARN_ON_ONCE(min_offset >
>> -					 adev->mc.visible_vram_size))
>> -				return -EINVAL;
>> -			fpfn = min_offset >> PAGE_SHIFT;
>> +		    !(bo->flags & AMDGPU_GEM_CREATE_NO_CPU_ACCESS)) {
>>  			lpfn = adev->mc.visible_vram_size >> PAGE_SHIFT;
>> -		} else {
>> -			fpfn = min_offset >> PAGE_SHIFT;
>> -			lpfn = max_offset >> PAGE_SHIFT;
>> +			if (!bo->placements[i].lpfn ||
>> +			    (lpfn && lpfn < bo->placements[i].lpfn))
>> +				bo->placements[i].lpfn = lpfn;
>>  		}
>> -		if (fpfn > bo->placements[i].fpfn)
>> -			bo->placements[i].fpfn = fpfn;
>> -		if (!bo->placements[i].lpfn ||
>> -		    (lpfn && lpfn < bo->placements[i].lpfn))
>> -			bo->placements[i].lpfn = lpfn;
>>  		bo->placements[i].flags |= TTM_PL_FLAG_NO_EVICT;
>>  	}
>>
>> @@ -718,11 +696,6 @@ int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 domain,
>>  	return r;
>>  }
>>
>> -int amdgpu_bo_pin(struct amdgpu_bo *bo, u32 domain, u64 *gpu_addr)
>> -{
>> -	return amdgpu_bo_pin_restricted(bo, domain, 0, 0, gpu_addr);
>> -}
>> -
>>  int amdgpu_bo_unpin(struct amdgpu_bo *bo)
>>  {
>>  	struct amdgpu_device *adev = amdgpu_ttm_adev(bo->tbo.bdev);
>> diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
>> index 39b6bf6..4b2c042 100644
>> --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
>> +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
>> @@ -211,9 +211,6 @@ void amdgpu_bo_kunmap(struct amdgpu_bo *bo);
>>  struct amdgpu_bo *amdgpu_bo_ref(struct amdgpu_bo *bo);
>>  void amdgpu_bo_unref(struct amdgpu_bo **bo);
>>  int amdgpu_bo_pin(struct amdgpu_bo *bo, u32 domain, u64 *gpu_addr);
>> -int amdgpu_bo_pin_restricted(struct amdgpu_bo *bo, u32 domain,
>> -			     u64 min_offset, u64 max_offset,
>> -			     u64 *gpu_addr);
>>  int amdgpu_bo_unpin(struct amdgpu_bo *bo);
>>  int amdgpu_bo_evict_vram(struct amdgpu_device *adev);
>>  int amdgpu_bo_init(struct amdgpu_device *adev);
>> --
>> 2.7.4