From fziglio at redhat.com Thu Oct 1 08:44:21 2020 From: fziglio at redhat.com (Frediano Ziglio) Date: Thu, 1 Oct 2020 04:44:21 -0400 (EDT) Subject: [Spice-devel] ANNOUNCE spice-protocol 0.14.3 release In-Reply-To: <1326575912.2741201.1601541831030.JavaMail.zimbra@redhat.com> Message-ID: <1371827209.2741725.1601541861379.JavaMail.zimbra@redhat.com> Hey everyone, I just cut a new small release. If you find any bugs or regressions, please report them in our issue tracker: https://gitlab.freedesktop.org/spice/spice-protocol/-/issues. See also https://gitlab.freedesktop.org/spice/spice-protocol/-/tags/v0.14.3. Major changes in 0.14.3 ======================= * Add VD_AGENT_CLIPBOARD_FILE_LIST to support copy/paste of files with WebDAV support * Add support for side mouse buttons * Add a MonitorsMM field to VDAgentMonitorsConfig allowing to pass physical monitor dimension https://gitlab.freedesktop.org/spice/spice-protocol/-/releases/v0.14.3 Kind Regards, Frediano From fziglio at redhat.com Thu Oct 1 08:46:53 2020 From: fziglio at redhat.com (Frediano Ziglio) Date: Thu, 1 Oct 2020 04:46:53 -0400 (EDT) Subject: [Spice-devel] ANNOUNCE spice-protocol 0.14.3 release In-Reply-To: <1371827209.2741725.1601541861379.JavaMail.zimbra@redhat.com> References: <1371827209.2741725.1601541861379.JavaMail.zimbra@redhat.com> Message-ID: <867107685.2741927.1601542013213.JavaMail.zimbra@redhat.com> Thanks to Victor Toso for the support for this release. Frediano > > Hey everyone, > > I just cut a new small release. > If you find any bugs or regressions, please report them in our issue > tracker: https://gitlab.freedesktop.org/spice/spice-protocol/-/issues. > See also https://gitlab.freedesktop.org/spice/spice-protocol/-/tags/v0.14.3. > > Major changes in 0.14.3 > ======================= > > * Add VD_AGENT_CLIPBOARD_FILE_LIST to support copy/paste of files with > WebDAV support > * Add support for side mouse buttons > * Add a MonitorsMM field to VDAgentMonitorsConfig allowing to pass > physical monitor dimension > > > https://gitlab.freedesktop.org/spice/spice-protocol/-/releases/v0.14.3 > > Kind Regards, > Frediano From daniel at ffwll.ch Fri Oct 2 09:48:00 2020 From: daniel at ffwll.ch (Daniel Vetter) Date: Fri, 2 Oct 2020 11:48:00 +0200 Subject: [Spice-devel] [PATCH v3 1/7] drm/vram-helper: Remove invariant parameters from internal kmap function In-Reply-To: <20200929151437.19717-2-tzimmermann@suse.de> References: <20200929151437.19717-1-tzimmermann@suse.de> <20200929151437.19717-2-tzimmermann@suse.de> Message-ID: <20201002094800.GG438822@phenom.ffwll.local> On Tue, Sep 29, 2020 at 05:14:31PM +0200, Thomas Zimmermann wrote: > The parameters map and is_iomem are always of the same value. Removed them > to prepares the function for conversion to struct dma_buf_map. 
> > Signed-off-by: Thomas Zimmermann Reviewed-by: Daniel Vetter > --- > drivers/gpu/drm/drm_gem_vram_helper.c | 17 ++++++----------- > 1 file changed, 6 insertions(+), 11 deletions(-) > > diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c > index 3fe4b326e18e..256b346664f2 100644 > --- a/drivers/gpu/drm/drm_gem_vram_helper.c > +++ b/drivers/gpu/drm/drm_gem_vram_helper.c > @@ -382,16 +382,16 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo) > } > EXPORT_SYMBOL(drm_gem_vram_unpin); > > -static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo, > - bool map, bool *is_iomem) > +static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo) > { > int ret; > struct ttm_bo_kmap_obj *kmap = &gbo->kmap; > + bool is_iomem; > > if (gbo->kmap_use_count > 0) > goto out; > > - if (kmap->virtual || !map) > + if (kmap->virtual) > goto out; > > ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap); > @@ -399,15 +399,10 @@ static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo, > return ERR_PTR(ret); > > out: > - if (!kmap->virtual) { > - if (is_iomem) > - *is_iomem = false; > + if (!kmap->virtual) > return NULL; /* not mapped; don't increment ref */ > - } > ++gbo->kmap_use_count; > - if (is_iomem) > - return ttm_kmap_obj_virtual(kmap, is_iomem); > - return kmap->virtual; > + return ttm_kmap_obj_virtual(kmap, &is_iomem); > } > > static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo) > @@ -452,7 +447,7 @@ void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo) > ret = drm_gem_vram_pin_locked(gbo, 0); > if (ret) > goto err_ttm_bo_unreserve; > - base = drm_gem_vram_kmap_locked(gbo, true, NULL); > + base = drm_gem_vram_kmap_locked(gbo); > if (IS_ERR(base)) { > ret = PTR_ERR(base); > goto err_drm_gem_vram_unpin_locked; > -- > 2.28.0 > -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch From daniel at ffwll.ch Fri Oct 2 09:58:30 2020 From: daniel at ffwll.ch (Daniel Vetter) Date: Fri, 2 Oct 2020 11:58:30 +0200 Subject: [Spice-devel] [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion In-Reply-To: References: <20200929151437.19717-1-tzimmermann@suse.de> <20200929151437.19717-3-tzimmermann@suse.de> <8fad0114-064a-4ed5-c21d-d1b4294de0a1@amd.com> <2614314a-81f7-4722-c400-68d90e48e09a@suse.de> <8a84f62b-33f3-f44c-52af-c859a0e0d1fb@gmail.com> <07972ada-9135-3743-a86b-487f610c509f@suse.de> <20200930094712.GW438822@phenom.ffwll.local> <8479d0aa-3826-4f37-0109-55daca515793@amd.com> Message-ID: <20201002095830.GH438822@phenom.ffwll.local> On Wed, Sep 30, 2020 at 02:51:46PM +0200, Daniel Vetter wrote: > On Wed, Sep 30, 2020 at 2:34 PM Christian K?nig > wrote: > > > > Am 30.09.20 um 11:47 schrieb Daniel Vetter: > > > On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian K?nig wrote: > > >> Am 30.09.20 um 10:19 schrieb Thomas Zimmermann: > > >>> Hi > > >>> > > >>> Am 30.09.20 um 10:05 schrieb Christian K?nig: > > >>>> Am 29.09.20 um 19:49 schrieb Thomas Zimmermann: > > >>>>> Hi Christian > > >>>>> > > >>>>> Am 29.09.20 um 17:35 schrieb Christian K?nig: > > >>>>>> Am 29.09.20 um 17:14 schrieb Thomas Zimmermann: > > >>>>>>> The new helper ttm_kmap_obj_to_dma_buf() extracts address and location > > >>>>>>> from and instance of TTM's kmap_obj and initializes struct dma_buf_map > > >>>>>>> with these values. Helpful for TTM-based drivers. > > >>>>>> We could completely drop that if we use the same structure inside TTM as > > >>>>>> well. 
> > >>>>>> > > >>>>>> Additional to that which driver is going to use this? > > >>>>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will > > >>>>> retrieve the pointer via this function. > > >>>>> > > >>>>> I do want to see all that being more tightly integrated into TTM, but > > >>>>> not in this series. This one is about fixing the bochs-on-sparc64 > > >>>>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list. > > >>>> I should have asked which driver you try to fix here :) > > >>>> > > >>>> In this case just keep the function inside bochs and only fix it there. > > >>>> > > >>>> All other drivers can be fixed when we generally pump this through TTM. > > >>> Did you take a look at patch 3? This function will be used by VRAM > > >>> helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we > > >>> have to duplicate the functionality in each if these drivers. Bochs > > >>> itself uses VRAM helpers and doesn't touch the function directly. > > >> Ah, ok can we have that then only in the VRAM helpers? > > >> > > >> Alternative you could go ahead and use dma_buf_map in ttm_bo_kmap_obj > > >> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK. > > >> > > >> What I want to avoid is to have another conversion function in TTM because > > >> what happens here is that we already convert from ttm_bus_placement to > > >> ttm_bo_kmap_obj and then to dma_buf_map. > > > Hm I'm not really seeing how that helps with a gradual conversion of > > > everything over to dma_buf_map and assorted helpers for access? There's > > > too many places in ttm drivers where is_iomem and related stuff is used to > > > be able to convert it all in one go. An intermediate state with a bunch of > > > conversions seems fairly unavoidable to me. > > > > Fair enough. I would just have started bottom up and not top down. > > > > Anyway feel free to go ahead with this approach as long as we can remove > > the new function again when we clean that stuff up for good. > > Yeah I guess bottom up would make more sense as a refactoring. But the > main motivation to land this here is to fix the __mmio vs normal > memory confusion in the fbdev emulation helpers for sparc (and > anything else that needs this). Hence the top down approach for > rolling this out. Ok I started reviewing this a bit more in-depth, and I think this is a bit too much of a de-tour. Looking through all the callers of ttm_bo_kmap almost everyone maps the entire object. Only vmwgfx uses to map less than that. Also, everyone just immediately follows up with converting that full object map into a pointer. So I think what we really want here is: - new function int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); _vmap name since that's consistent with both dma_buf functions and what's usually used to implement this. Outside of the ttm world kmap usually just means single-page mappings using kmap() or it's iomem sibling io_mapping_map* so rather confusing name for a function which usually is just used to set up a vmap of the entire buffer. - a helper which can be used for the drm_gem_object_funcs vmap/vunmap functions for all ttm drivers. We should be able to make this fully generic because a) we now have dma_buf_map and b) drm_gem_object is embedded in the ttm_bo, so we can upcast for everyone who's both a ttm and gem driver. This is maybe a good follow-up, since it should allow us to ditch quite a bit of the vram helper code for this more generic stuff. 
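To make the first item a bit more concrete, here is a rough sketch of what
such a ttm_bo_vmap() could look like, built only on the existing
ttm_bo_kmap()/ttm_kmap_obj_virtual() calls and assuming the caller has
already reserved and pinned the BO; the name, error codes and bookkeeping
are illustrative, not a final API:

int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
{
        struct ttm_bo_kmap_obj kmap;
        bool is_iomem;
        void *vaddr;
        int ret;

        /* nearly every caller maps the whole object, so always do that */
        ret = ttm_bo_kmap(bo, 0, bo->num_pages, &kmap);
        if (ret)
                return ret;

        vaddr = ttm_kmap_obj_virtual(&kmap, &is_iomem);
        if (!vaddr) {
                /* nothing was mapped; don't hand out a stale pointer */
                dma_buf_map_clear(map);
                return -ENOMEM;
        }

        if (is_iomem)
                dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr);
        else
                dma_buf_map_set_vaddr(map, vaddr);

        /*
         * A real version would also have to keep kmap around (or be able to
         * reconstruct it) so that a matching ttm_bo_vunmap() can call
         * ttm_bo_kunmap() later.
         */
        return 0;
}

The generic GEM helper from the second item would then just upcast from
drm_gem_object to ttm_buffer_object and forward to this.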
I also might have missed some special-cases here, but from a quick look everything just pins the buffer to the current location and that's it. Also this obviously requires Christian's generic ttm_bo_pin rework first. - roll the above out to drivers. Christian/Thomas, thoughts on this? I think for the immediate need of rolling this out for vram helpers and fbdev code we should be able to do this, but just postpone the driver wide roll-out for now. Cheers, Daniel > -Daniel > > > > > Christian. > > > > > -Daniel > > > > > >> Thanks, > > >> Christian. > > >> > > >>> Best regards > > >>> Thomas > > >>> > > >>>> Regards, > > >>>> Christian. > > >>>> > > >>>>> Best regards > > >>>>> Thomas > > >>>>> > > >>>>>> Regards, > > >>>>>> Christian. > > >>>>>> > > >>>>>>> Signed-off-by: Thomas Zimmermann > > >>>>>>> --- > > >>>>>>> include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++ > > >>>>>>> include/linux/dma-buf-map.h | 20 ++++++++++++++++++++ > > >>>>>>> 2 files changed, 44 insertions(+) > > >>>>>>> > > >>>>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h > > >>>>>>> index c96a25d571c8..62d89f05a801 100644 > > >>>>>>> --- a/include/drm/ttm/ttm_bo_api.h > > >>>>>>> +++ b/include/drm/ttm/ttm_bo_api.h > > >>>>>>> @@ -34,6 +34,7 @@ > > >>>>>>> #include > > >>>>>>> #include > > >>>>>>> #include > > >>>>>>> +#include > > >>>>>>> #include > > >>>>>>> #include > > >>>>>>> #include > > >>>>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct > > >>>>>>> ttm_bo_kmap_obj *map, > > >>>>>>> return map->virtual; > > >>>>>>> } > > >>>>>>> +/** > > >>>>>>> + * ttm_kmap_obj_to_dma_buf_map > > >>>>>>> + * > > >>>>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap. > > >>>>>>> + * @map: Returns the mapping as struct dma_buf_map > > >>>>>>> + * > > >>>>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory > > >>>>>>> + * is not mapped, the returned mapping is initialized to NULL. > > >>>>>>> + */ > > >>>>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj > > >>>>>>> *kmap, > > >>>>>>> + struct dma_buf_map *map) > > >>>>>>> +{ > > >>>>>>> + bool is_iomem; > > >>>>>>> + void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem); > > >>>>>>> + > > >>>>>>> + if (!vaddr) > > >>>>>>> + dma_buf_map_clear(map); > > >>>>>>> + else if (is_iomem) > > >>>>>>> + dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr); > > >>>>>>> + else > > >>>>>>> + dma_buf_map_set_vaddr(map, vaddr); > > >>>>>>> +} > > >>>>>>> + > > >>>>>>> /** > > >>>>>>> * ttm_bo_kmap > > >>>>>>> * > > >>>>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h > > >>>>>>> index fd1aba545fdf..2e8bbecb5091 100644 > > >>>>>>> --- a/include/linux/dma-buf-map.h > > >>>>>>> +++ b/include/linux/dma-buf-map.h > > >>>>>>> @@ -45,6 +45,12 @@ > > >>>>>>> * > > >>>>>>> * dma_buf_map_set_vaddr(&map. 0xdeadbeaf); > > >>>>>>> * > > >>>>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem(). > > >>>>>>> + * > > >>>>>>> + * .. code-block:: c > > >>>>>>> + * > > >>>>>>> + * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); > > >>>>>>> + * > > >>>>>>> * Test if a mapping is valid with either dma_buf_map_is_set() or > > >>>>>>> * dma_buf_map_is_null(). 
> > >>>>>>> * > > >>>>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct > > >>>>>>> dma_buf_map *map, void *vaddr) > > >>>>>>> map->is_iomem = false; > > >>>>>>> } > > >>>>>>> +/** > > >>>>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to > > >>>>>>> an address in I/O memory > > >>>>>>> + * @map: The dma-buf mapping structure > > >>>>>>> + * @vaddr_iomem: An I/O-memory address > > >>>>>>> + * > > >>>>>>> + * Sets the address and the I/O-memory flag. > > >>>>>>> + */ > > >>>>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map, > > >>>>>>> + void __iomem *vaddr_iomem) > > >>>>>>> +{ > > >>>>>>> + map->vaddr_iomem = vaddr_iomem; > > >>>>>>> + map->is_iomem = true; > > >>>>>>> +} > > >>>>>>> + > > >>>>>>> /** > > >>>>>>> * dma_buf_map_is_equal - Compares two dma-buf mapping structures > > >>>>>>> for equality > > >>>>>>> * @lhs: The dma-buf mapping structure > > >>>>>> _______________________________________________ > > >>>>>> dri-devel mailing list > > >>>>>> dri-devel at lists.freedesktop.org > > >>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Fdri-devel&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=HdHOA%2F1VcIX%2F7YtfYTiAqYEvw7Ag%2FS%2BxS5VwJKOv5y0%3D&reserved=0 > > >>>>> _______________________________________________ > > >>>>> amd-gfx mailing list > > >>>>> amd-gfx at lists.freedesktop.org > > >>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Famd-gfx&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=H%2B5HKCsTrksRV2EyEiFGSTyS79jsWCmJimSMoJYusx8%3D&reserved=0 > > >>>> _______________________________________________ > > >>>> dri-devel mailing list > > >>>> dri-devel at lists.freedesktop.org > > >>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Fdri-devel&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=HdHOA%2F1VcIX%2F7YtfYTiAqYEvw7Ag%2FS%2BxS5VwJKOv5y0%3D&reserved=0 > > >>>> > > >>> _______________________________________________ > > >>> amd-gfx mailing list > > >>> amd-gfx at lists.freedesktop.org > > >>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Famd-gfx&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=H%2B5HKCsTrksRV2EyEiFGSTyS79jsWCmJimSMoJYusx8%3D&reserved=0 > > > > > -- > Daniel Vetter > Software Engineer, Intel Corporation > http://blog.ffwll.ch -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch From ckoenig.leichtzumerken at gmail.com Fri Oct 2 11:30:20 2020 From: ckoenig.leichtzumerken at gmail.com (=?UTF-8?Q?Christian_K=c3=b6nig?=) Date: Fri, 2 Oct 2020 13:30:20 +0200 Subject: [Spice-devel] [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion In-Reply-To: <20201002095830.GH438822@phenom.ffwll.local> References: <20200929151437.19717-1-tzimmermann@suse.de> <20200929151437.19717-3-tzimmermann@suse.de> <8fad0114-064a-4ed5-c21d-d1b4294de0a1@amd.com> <2614314a-81f7-4722-c400-68d90e48e09a@suse.de> 
<8a84f62b-33f3-f44c-52af-c859a0e0d1fb@gmail.com> <07972ada-9135-3743-a86b-487f610c509f@suse.de> <20200930094712.GW438822@phenom.ffwll.local> <8479d0aa-3826-4f37-0109-55daca515793@amd.com> <20201002095830.GH438822@phenom.ffwll.local> Message-ID: Am 02.10.20 um 11:58 schrieb Daniel Vetter: > On Wed, Sep 30, 2020 at 02:51:46PM +0200, Daniel Vetter wrote: >> On Wed, Sep 30, 2020 at 2:34 PM Christian K?nig >> wrote: >>> Am 30.09.20 um 11:47 schrieb Daniel Vetter: >>>> On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian K?nig wrote: >>>>> Am 30.09.20 um 10:19 schrieb Thomas Zimmermann: >>>>>> Hi >>>>>> >>>>>> Am 30.09.20 um 10:05 schrieb Christian K?nig: >>>>>>> Am 29.09.20 um 19:49 schrieb Thomas Zimmermann: >>>>>>>> Hi Christian >>>>>>>> >>>>>>>> Am 29.09.20 um 17:35 schrieb Christian K?nig: >>>>>>>>> Am 29.09.20 um 17:14 schrieb Thomas Zimmermann: >>>>>>>>>> The new helper ttm_kmap_obj_to_dma_buf() extracts address and location >>>>>>>>>> from and instance of TTM's kmap_obj and initializes struct dma_buf_map >>>>>>>>>> with these values. Helpful for TTM-based drivers. >>>>>>>>> We could completely drop that if we use the same structure inside TTM as >>>>>>>>> well. >>>>>>>>> >>>>>>>>> Additional to that which driver is going to use this? >>>>>>>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will >>>>>>>> retrieve the pointer via this function. >>>>>>>> >>>>>>>> I do want to see all that being more tightly integrated into TTM, but >>>>>>>> not in this series. This one is about fixing the bochs-on-sparc64 >>>>>>>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list. >>>>>>> I should have asked which driver you try to fix here :) >>>>>>> >>>>>>> In this case just keep the function inside bochs and only fix it there. >>>>>>> >>>>>>> All other drivers can be fixed when we generally pump this through TTM. >>>>>> Did you take a look at patch 3? This function will be used by VRAM >>>>>> helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we >>>>>> have to duplicate the functionality in each if these drivers. Bochs >>>>>> itself uses VRAM helpers and doesn't touch the function directly. >>>>> Ah, ok can we have that then only in the VRAM helpers? >>>>> >>>>> Alternative you could go ahead and use dma_buf_map in ttm_bo_kmap_obj >>>>> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK. >>>>> >>>>> What I want to avoid is to have another conversion function in TTM because >>>>> what happens here is that we already convert from ttm_bus_placement to >>>>> ttm_bo_kmap_obj and then to dma_buf_map. >>>> Hm I'm not really seeing how that helps with a gradual conversion of >>>> everything over to dma_buf_map and assorted helpers for access? There's >>>> too many places in ttm drivers where is_iomem and related stuff is used to >>>> be able to convert it all in one go. An intermediate state with a bunch of >>>> conversions seems fairly unavoidable to me. >>> Fair enough. I would just have started bottom up and not top down. >>> >>> Anyway feel free to go ahead with this approach as long as we can remove >>> the new function again when we clean that stuff up for good. >> Yeah I guess bottom up would make more sense as a refactoring. But the >> main motivation to land this here is to fix the __mmio vs normal >> memory confusion in the fbdev emulation helpers for sparc (and >> anything else that needs this). Hence the top down approach for >> rolling this out. > Ok I started reviewing this a bit more in-depth, and I think this is a bit > too much of a de-tour. 
> > Looking through all the callers of ttm_bo_kmap almost everyone maps the > entire object. Only vmwgfx uses to map less than that. Also, everyone just > immediately follows up with converting that full object map into a > pointer. > > So I think what we really want here is: > - new function > > int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); > > _vmap name since that's consistent with both dma_buf functions and > what's usually used to implement this. Outside of the ttm world kmap > usually just means single-page mappings using kmap() or it's iomem > sibling io_mapping_map* so rather confusing name for a function which > usually is just used to set up a vmap of the entire buffer. > > - a helper which can be used for the drm_gem_object_funcs vmap/vunmap > functions for all ttm drivers. We should be able to make this fully > generic because a) we now have dma_buf_map and b) drm_gem_object is > embedded in the ttm_bo, so we can upcast for everyone who's both a ttm > and gem driver. > > This is maybe a good follow-up, since it should allow us to ditch quite > a bit of the vram helper code for this more generic stuff. I also might > have missed some special-cases here, but from a quick look everything > just pins the buffer to the current location and that's it. > > Also this obviously requires Christian's generic ttm_bo_pin rework > first. > > - roll the above out to drivers. > > Christian/Thomas, thoughts on this? Calling this vmap instead of kmap certainly makes sense. Not 100% sure about the generic helpers, but it sounds like this should indeed look rather clean in the end. Christian. > > I think for the immediate need of rolling this out for vram helpers and > fbdev code we should be able to do this, but just postpone the driver wide > roll-out for now. > > Cheers, Daniel > >> -Daniel >> >>> Christian. >>> >>>> -Daniel >>>> >>>>> Thanks, >>>>> Christian. >>>>> >>>>>> Best regards >>>>>> Thomas >>>>>> >>>>>>> Regards, >>>>>>> Christian. >>>>>>> >>>>>>>> Best regards >>>>>>>> Thomas >>>>>>>> >>>>>>>>> Regards, >>>>>>>>> Christian. >>>>>>>>> >>>>>>>>>> Signed-off-by: Thomas Zimmermann >>>>>>>>>> --- >>>>>>>>>> include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++ >>>>>>>>>> include/linux/dma-buf-map.h | 20 ++++++++++++++++++++ >>>>>>>>>> 2 files changed, 44 insertions(+) >>>>>>>>>> >>>>>>>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h >>>>>>>>>> index c96a25d571c8..62d89f05a801 100644 >>>>>>>>>> --- a/include/drm/ttm/ttm_bo_api.h >>>>>>>>>> +++ b/include/drm/ttm/ttm_bo_api.h >>>>>>>>>> @@ -34,6 +34,7 @@ >>>>>>>>>> #include >>>>>>>>>> #include >>>>>>>>>> #include >>>>>>>>>> +#include >>>>>>>>>> #include >>>>>>>>>> #include >>>>>>>>>> #include >>>>>>>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct >>>>>>>>>> ttm_bo_kmap_obj *map, >>>>>>>>>> return map->virtual; >>>>>>>>>> } >>>>>>>>>> +/** >>>>>>>>>> + * ttm_kmap_obj_to_dma_buf_map >>>>>>>>>> + * >>>>>>>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap. >>>>>>>>>> + * @map: Returns the mapping as struct dma_buf_map >>>>>>>>>> + * >>>>>>>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory >>>>>>>>>> + * is not mapped, the returned mapping is initialized to NULL. 
>>>>>>>>>> + */ >>>>>>>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj >>>>>>>>>> *kmap, >>>>>>>>>> + struct dma_buf_map *map) >>>>>>>>>> +{ >>>>>>>>>> + bool is_iomem; >>>>>>>>>> + void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem); >>>>>>>>>> + >>>>>>>>>> + if (!vaddr) >>>>>>>>>> + dma_buf_map_clear(map); >>>>>>>>>> + else if (is_iomem) >>>>>>>>>> + dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr); >>>>>>>>>> + else >>>>>>>>>> + dma_buf_map_set_vaddr(map, vaddr); >>>>>>>>>> +} >>>>>>>>>> + >>>>>>>>>> /** >>>>>>>>>> * ttm_bo_kmap >>>>>>>>>> * >>>>>>>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h >>>>>>>>>> index fd1aba545fdf..2e8bbecb5091 100644 >>>>>>>>>> --- a/include/linux/dma-buf-map.h >>>>>>>>>> +++ b/include/linux/dma-buf-map.h >>>>>>>>>> @@ -45,6 +45,12 @@ >>>>>>>>>> * >>>>>>>>>> * dma_buf_map_set_vaddr(&map. 0xdeadbeaf); >>>>>>>>>> * >>>>>>>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem(). >>>>>>>>>> + * >>>>>>>>>> + * .. code-block:: c >>>>>>>>>> + * >>>>>>>>>> + * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); >>>>>>>>>> + * >>>>>>>>>> * Test if a mapping is valid with either dma_buf_map_is_set() or >>>>>>>>>> * dma_buf_map_is_null(). >>>>>>>>>> * >>>>>>>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct >>>>>>>>>> dma_buf_map *map, void *vaddr) >>>>>>>>>> map->is_iomem = false; >>>>>>>>>> } >>>>>>>>>> +/** >>>>>>>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to >>>>>>>>>> an address in I/O memory >>>>>>>>>> + * @map: The dma-buf mapping structure >>>>>>>>>> + * @vaddr_iomem: An I/O-memory address >>>>>>>>>> + * >>>>>>>>>> + * Sets the address and the I/O-memory flag. >>>>>>>>>> + */ >>>>>>>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map, >>>>>>>>>> + void __iomem *vaddr_iomem) >>>>>>>>>> +{ >>>>>>>>>> + map->vaddr_iomem = vaddr_iomem; >>>>>>>>>> + map->is_iomem = true; >>>>>>>>>> +} >>>>>>>>>> + >>>>>>>>>> /** >>>>>>>>>> * dma_buf_map_is_equal - Compares two dma-buf mapping structures >>>>>>>>>> for equality >>>>>>>>>> * @lhs: The dma-buf mapping structure >>>>>>>>> _______________________________________________ >>>>>>>>> dri-devel mailing list >>>>>>>>> dri-devel at lists.freedesktop.org >>>>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Fdri-devel&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=HdHOA%2F1VcIX%2F7YtfYTiAqYEvw7Ag%2FS%2BxS5VwJKOv5y0%3D&reserved=0 >>>>>>>> _______________________________________________ >>>>>>>> amd-gfx mailing list >>>>>>>> amd-gfx at lists.freedesktop.org >>>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Famd-gfx&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=H%2B5HKCsTrksRV2EyEiFGSTyS79jsWCmJimSMoJYusx8%3D&reserved=0 >>>>>>> _______________________________________________ >>>>>>> dri-devel mailing list >>>>>>> dri-devel at lists.freedesktop.org >>>>>>> 
https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Fdri-devel&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=HdHOA%2F1VcIX%2F7YtfYTiAqYEvw7Ag%2FS%2BxS5VwJKOv5y0%3D&reserved=0 >>>>>>> >>>>>> _______________________________________________ >>>>>> amd-gfx mailing list >>>>>> amd-gfx at lists.freedesktop.org >>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Famd-gfx&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=H%2B5HKCsTrksRV2EyEiFGSTyS79jsWCmJimSMoJYusx8%3D&reserved=0 >> >> -- >> Daniel Vetter >> Software Engineer, Intel Corporation >> http://blog.ffwll.ch From daniel at ffwll.ch Fri Oct 2 13:04:40 2020 From: daniel at ffwll.ch (Daniel Vetter) Date: Fri, 2 Oct 2020 15:04:40 +0200 Subject: [Spice-devel] [PATCH v3 4/7] drm/gem: Update internal GEM vmap/vunmap interfaces to use struct dma_buf_map In-Reply-To: <20200929151437.19717-5-tzimmermann@suse.de> References: <20200929151437.19717-1-tzimmermann@suse.de> <20200929151437.19717-5-tzimmermann@suse.de> Message-ID: <20201002130440.GK438822@phenom.ffwll.local> On Tue, Sep 29, 2020 at 05:14:34PM +0200, Thomas Zimmermann wrote: > GEM's vmap and vunmap interfaces now wrap memory pointers in struct > dma_buf_map. > > Signed-off-by: Thomas Zimmermann > --- > drivers/gpu/drm/drm_client.c | 18 +++++++++++------- > drivers/gpu/drm/drm_gem.c | 28 ++++++++++++++-------------- > drivers/gpu/drm/drm_internal.h | 5 +++-- > drivers/gpu/drm/drm_prime.c | 14 ++++---------- > 4 files changed, 32 insertions(+), 33 deletions(-) > > diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c > index 495f47d23d87..ac0082bed966 100644 > --- a/drivers/gpu/drm/drm_client.c > +++ b/drivers/gpu/drm/drm_client.c > @@ -3,6 +3,7 @@ > * Copyright 2018 Noralf Tr?nnes > */ > > +#include > #include > #include > #include > @@ -304,7 +305,8 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u > */ > void *drm_client_buffer_vmap(struct drm_client_buffer *buffer) > { > - void *vaddr; > + struct dma_buf_map map; > + int ret; > > if (buffer->vaddr) > return buffer->vaddr; > @@ -317,13 +319,13 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer) > * fd_install step out of the driver backend hooks, to make that > * final step optional for internal users. 
> */ > - vaddr = drm_gem_vmap(buffer->gem); > - if (IS_ERR(vaddr)) > - return vaddr; > + ret = drm_gem_vmap(buffer->gem, &map); > + if (ret) > + return ERR_PTR(ret); > > - buffer->vaddr = vaddr; > + buffer->vaddr = map.vaddr; > > - return vaddr; > + return map.vaddr; > } > EXPORT_SYMBOL(drm_client_buffer_vmap); > > @@ -337,7 +339,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap); > */ > void drm_client_buffer_vunmap(struct drm_client_buffer *buffer) > { > - drm_gem_vunmap(buffer->gem, buffer->vaddr); > + struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr); > + > + drm_gem_vunmap(buffer->gem, &map); > buffer->vaddr = NULL; > } > EXPORT_SYMBOL(drm_client_buffer_vunmap); > diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c > index 0c4a66dea5c2..f2b2f37d41c4 100644 > --- a/drivers/gpu/drm/drm_gem.c > +++ b/drivers/gpu/drm/drm_gem.c > @@ -1205,32 +1205,32 @@ void drm_gem_unpin(struct drm_gem_object *obj) > obj->funcs->unpin(obj); > } > > -void *drm_gem_vmap(struct drm_gem_object *obj) > +int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > - struct dma_buf_map map; > int ret; > > - if (!obj->funcs->vmap) { > - return ERR_PTR(-EOPNOTSUPP); > + if (!obj->funcs->vmap) > + return -EOPNOTSUPP; > > - ret = obj->funcs->vmap(obj, &map); > + ret = obj->funcs->vmap(obj, map); > if (ret) > - return ERR_PTR(ret); > - else if (dma_buf_map_is_null(&map)) > - return ERR_PTR(-ENOMEM); > + return ret; > + else if (dma_buf_map_is_null(map)) > + return -ENOMEM; > > - return map.vaddr; > + return 0; > } > > -void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr) > +void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr); > - > - if (!vaddr) > + if (dma_buf_map_is_null(map)) > return; > > if (obj->funcs->vunmap) > - obj->funcs->vunmap(obj, &map); > + obj->funcs->vunmap(obj, map); > + > + /* Always set the mapping to NULL. Callers may rely on this. */ > + dma_buf_map_clear(map); > } > > /** > diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h > index b65865c630b0..58832d75a9bd 100644 > --- a/drivers/gpu/drm/drm_internal.h > +++ b/drivers/gpu/drm/drm_internal.h > @@ -33,6 +33,7 @@ > > struct dentry; > struct dma_buf; > +struct dma_buf_map; > struct drm_connector; > struct drm_crtc; > struct drm_framebuffer; > @@ -187,8 +188,8 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent, > > int drm_gem_pin(struct drm_gem_object *obj); > void drm_gem_unpin(struct drm_gem_object *obj); > -void *drm_gem_vmap(struct drm_gem_object *obj); > -void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr); > +int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); > +void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map); > > /* drm_debugfs.c drm_debugfs_crc.c */ > #if defined(CONFIG_DEBUG_FS) > diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c > index 89e2a2496734..cb8fbeeb731b 100644 > --- a/drivers/gpu/drm/drm_prime.c > +++ b/drivers/gpu/drm/drm_prime.c > @@ -667,21 +667,15 @@ EXPORT_SYMBOL(drm_gem_unmap_dma_buf); > * > * Sets up a kernel virtual mapping. This can be used as the &dma_buf_ops.vmap > * callback. Calls into &drm_gem_object_funcs.vmap for device specific handling. > + * The kernel virtual address is returned in map. > * > - * Returns the kernel virtual address or NULL on failure. > + * Returns 0 on success or a negative errno code otherwise. 
> */ > int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map) > { > struct drm_gem_object *obj = dma_buf->priv; > - void *vaddr; > > - vaddr = drm_gem_vmap(obj); > - if (IS_ERR(vaddr)) > - return PTR_ERR(vaddr); > - > - dma_buf_map_set_vaddr(map, vaddr); > - > - return 0; > + return drm_gem_vmap(obj, map); > } > EXPORT_SYMBOL(drm_gem_dmabuf_vmap); > > @@ -697,7 +691,7 @@ void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct dma_buf_map *map) > { > struct drm_gem_object *obj = dma_buf->priv; > > - drm_gem_vunmap(obj, map->vaddr); > + drm_gem_vunmap(obj, map); > } > EXPORT_SYMBOL(drm_gem_dmabuf_vunmap); Some of the transitional stuff disappearing! Reviewed-by: Daniel Vetter > > -- > 2.28.0 > -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch From daniel at ffwll.ch Fri Oct 2 13:02:42 2020 From: daniel at ffwll.ch (Daniel Vetter) Date: Fri, 2 Oct 2020 15:02:42 +0200 Subject: [Spice-devel] [PATCH v3 3/7] drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends In-Reply-To: <20200929151437.19717-4-tzimmermann@suse.de> References: <20200929151437.19717-1-tzimmermann@suse.de> <20200929151437.19717-4-tzimmermann@suse.de> Message-ID: <20201002130242.GJ438822@phenom.ffwll.local> On Tue, Sep 29, 2020 at 05:14:33PM +0200, Thomas Zimmermann wrote: > This patch replaces the vmap/vunmap's use of raw pointers in GEM object > functions with instances of struct dma_buf_map. GEM backends are > converted as well. > > For most GEM backends, this simply change the returned type. GEM VRAM > helpers are also updated to indicate whether the returned framebuffer > address is in system or I/O memory. > > Signed-off-by: Thomas Zimmermann > --- > drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 14 ++-- > drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h | 4 +- > drivers/gpu/drm/ast/ast_cursor.c | 29 +++---- > drivers/gpu/drm/ast/ast_drv.h | 7 +- > drivers/gpu/drm/drm_gem.c | 22 ++--- > drivers/gpu/drm/drm_gem_cma_helper.c | 14 ++-- > drivers/gpu/drm/drm_gem_shmem_helper.c | 48 ++++++----- > drivers/gpu/drm/drm_gem_vram_helper.c | 90 +++++++++++---------- > drivers/gpu/drm/etnaviv/etnaviv_drv.h | 4 +- > drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 11 ++- > drivers/gpu/drm/exynos/exynos_drm_gem.c | 6 +- > drivers/gpu/drm/exynos/exynos_drm_gem.h | 4 +- > drivers/gpu/drm/lima/lima_gem.c | 6 +- > drivers/gpu/drm/lima/lima_sched.c | 11 ++- > drivers/gpu/drm/mgag200/mgag200_mode.c | 12 +-- > drivers/gpu/drm/nouveau/nouveau_gem.h | 4 +- > drivers/gpu/drm/nouveau/nouveau_prime.c | 9 ++- > drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 14 ++-- > drivers/gpu/drm/qxl/qxl_display.c | 13 +-- > drivers/gpu/drm/qxl/qxl_draw.c | 16 ++-- > drivers/gpu/drm/qxl/qxl_drv.h | 8 +- > drivers/gpu/drm/qxl/qxl_object.c | 23 +++--- > drivers/gpu/drm/qxl/qxl_object.h | 2 +- > drivers/gpu/drm/qxl/qxl_prime.c | 12 +-- > drivers/gpu/drm/radeon/radeon_gem.c | 4 +- > drivers/gpu/drm/radeon/radeon_prime.c | 9 ++- > drivers/gpu/drm/rockchip/rockchip_drm_gem.c | 22 +++-- > drivers/gpu/drm/rockchip/rockchip_drm_gem.h | 4 +- > drivers/gpu/drm/tiny/cirrus.c | 10 ++- > drivers/gpu/drm/tiny/gm12u320.c | 10 ++- > drivers/gpu/drm/udl/udl_modeset.c | 8 +- > drivers/gpu/drm/vboxvideo/vbox_mode.c | 11 ++- > drivers/gpu/drm/vc4/vc4_bo.c | 6 +- > drivers/gpu/drm/vc4/vc4_drv.h | 2 +- > drivers/gpu/drm/vgem/vgem_drv.c | 16 ++-- > drivers/gpu/drm/xen/xen_drm_front_gem.c | 18 +++-- > drivers/gpu/drm/xen/xen_drm_front_gem.h | 6 +- > include/drm/drm_gem.h | 5 +- > include/drm/drm_gem_cma_helper.h | 4 +- > 
include/drm/drm_gem_shmem_helper.h | 4 +- > include/drm/drm_gem_vram_helper.h | 4 +- > 41 files changed, 304 insertions(+), 222 deletions(-) > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c > index 5b465ab774d1..de7d0cfe1b93 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c > @@ -44,13 +44,14 @@ > /** > * amdgpu_gem_prime_vmap - &dma_buf_ops.vmap implementation > * @obj: GEM BO > + * @map: The virtual address of the mapping. > * > * Sets up an in-kernel virtual mapping of the BO's memory. > * > * Returns: > - * The virtual address of the mapping or an error pointer. > + * 0 on success, or a negative errno code otherwise. > */ > -void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj) > +int amdgpu_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj); > int ret; > @@ -58,19 +59,20 @@ void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj) > ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, > &bo->dma_buf_vmap); > if (ret) > - return ERR_PTR(ret); > + return ret; > + ttm_kmap_obj_to_dma_buf_map(&bo->dma_buf_vmap, map); I guess with the ttm_bo_vmap idea all the ttm changes here will look a bit different. > > - return bo->dma_buf_vmap.virtual; > + return 0; > } > > /** > * amdgpu_gem_prime_vunmap - &dma_buf_ops.vunmap implementation > * @obj: GEM BO > - * @vaddr: Virtual address (unused) > + * @map: Virtual address (unused) > * > * Tears down the in-kernel virtual mapping of the BO's memory. > */ > -void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) > +void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj); > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h > index 2c5c84a06bb9..622642793064 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h > @@ -31,8 +31,8 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev, > struct dma_buf *dma_buf); > bool amdgpu_dmabuf_is_xgmi_accessible(struct amdgpu_device *adev, > struct amdgpu_bo *bo); > -void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj); > -void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); > +int amdgpu_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); > +void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map); > int amdgpu_gem_prime_mmap(struct drm_gem_object *obj, > struct vm_area_struct *vma); > > diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c > index e0f4613918ad..459a3774e4e1 100644 > --- a/drivers/gpu/drm/ast/ast_cursor.c > +++ b/drivers/gpu/drm/ast/ast_cursor.c > @@ -39,7 +39,7 @@ static void ast_cursor_fini(struct ast_private *ast) > > for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) { > gbo = ast->cursor.gbo[i]; > - drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]); > + drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]); > drm_gem_vram_unpin(gbo); > drm_gem_vram_put(gbo); > } > @@ -60,7 +60,7 @@ int ast_cursor_init(struct ast_private *ast) > struct drm_device *dev = &ast->base; > size_t size, i; > struct drm_gem_vram_object *gbo; > - void __iomem *vaddr; > + struct dma_buf_map map; > int ret; > > size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE); > @@ -77,16 +77,15 @@ int ast_cursor_init(struct ast_private *ast) > 
drm_gem_vram_put(gbo); > goto err_drm_gem_vram_put; > } > - vaddr = drm_gem_vram_vmap(gbo); > - if (IS_ERR(vaddr)) { > - ret = PTR_ERR(vaddr); > + ret = drm_gem_vram_vmap(gbo, &map); > + if (ret) { > drm_gem_vram_unpin(gbo); > drm_gem_vram_put(gbo); > goto err_drm_gem_vram_put; > } > > ast->cursor.gbo[i] = gbo; > - ast->cursor.vaddr[i] = vaddr; > + ast->cursor.map[i] = map; > } > > return drmm_add_action_or_reset(dev, ast_cursor_release, NULL); > @@ -95,7 +94,7 @@ int ast_cursor_init(struct ast_private *ast) > while (i) { > --i; > gbo = ast->cursor.gbo[i]; > - drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]); > + drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]); > drm_gem_vram_unpin(gbo); > drm_gem_vram_put(gbo); > } > @@ -170,8 +169,8 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb) > { > struct drm_device *dev = &ast->base; > struct drm_gem_vram_object *gbo; > + struct dma_buf_map map; > int ret; > - void *src; > void __iomem *dst; > > if (drm_WARN_ON_ONCE(dev, fb->width > AST_MAX_HWC_WIDTH) || > @@ -183,18 +182,16 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb) > ret = drm_gem_vram_pin(gbo, 0); > if (ret) > return ret; > - src = drm_gem_vram_vmap(gbo); > - if (IS_ERR(src)) { > - ret = PTR_ERR(src); > + ret = drm_gem_vram_vmap(gbo, &map); > + if (ret) > goto err_drm_gem_vram_unpin; > - } > > - dst = ast->cursor.vaddr[ast->cursor.next_index]; > + dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem; > > /* do data transfer to cursor BO */ > - update_cursor_image(dst, src, fb->width, fb->height); > + update_cursor_image(dst, map.vaddr, fb->width, fb->height); I don't think digging around in the pointer is a good idea, imo this should get a /* TODO: Use mapping abstraction properly */ or similar. Same for all the other usage for map.vaddr added to drivers below (the stuff in helpers that the next patches will change again I think you can leave as-is, it'll go away). I'm also wondering whether we should prefix all members of struct dma_buf_map with _ to make it clear they shouldn't be touched, so map._vaddr and map._is_iomem. Also todo.rst entry for all these, there's a lot from looking throught this patch. 
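One possible shape for such an accessor, purely as an illustration (the name
and placement are made up here, this helper is not part of the series), would
be a small copy wrapper next to the other dma_buf_map helpers, so drivers
never have to look at is_iomem or the vaddr members themselves:

static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst,
                                         const void *src, size_t len)
{
        /* hide the system-memory vs. I/O-memory distinction from callers */
        if (dst->is_iomem)
                memcpy_toio(dst->vaddr_iomem, src, len);
        else
                memcpy(dst->vaddr, src, len);
}

Call sites like the cursor blit above would then go through a helper of that
kind (or a format-converting equivalent) instead of dereferencing map.vaddr
directly.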
> > - drm_gem_vram_vunmap(gbo, src); > + drm_gem_vram_vunmap(gbo, &map); > drm_gem_vram_unpin(gbo); > > return 0; > @@ -257,7 +254,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y, > u8 __iomem *sig; > u8 jreg; > > - dst = ast->cursor.vaddr[ast->cursor.next_index]; > + dst = ast->cursor.map[ast->cursor.next_index].vaddr; > > sig = dst + AST_HWC_SIZE; > writel(x, sig + AST_HWC_SIGNATURE_X); > diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h > index 467049ca8430..f963141dd851 100644 > --- a/drivers/gpu/drm/ast/ast_drv.h > +++ b/drivers/gpu/drm/ast/ast_drv.h > @@ -28,10 +28,11 @@ > #ifndef __AST_DRV_H__ > #define __AST_DRV_H__ > > -#include > -#include > +#include > #include > #include > +#include > +#include > > #include > #include > @@ -131,7 +132,7 @@ struct ast_private { > > struct { > struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM]; > - void __iomem *vaddr[AST_DEFAULT_HWC_NUM]; > + struct dma_buf_map map[AST_DEFAULT_HWC_NUM]; > unsigned int next_index; > } cursor; > > diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c > index 1da67d34e55d..0c4a66dea5c2 100644 > --- a/drivers/gpu/drm/drm_gem.c > +++ b/drivers/gpu/drm/drm_gem.c > @@ -1207,26 +1207,30 @@ void drm_gem_unpin(struct drm_gem_object *obj) > > void *drm_gem_vmap(struct drm_gem_object *obj) > { > - void *vaddr; > + struct dma_buf_map map; > + int ret; > > - if (obj->funcs->vmap) > - vaddr = obj->funcs->vmap(obj); > - else > - vaddr = ERR_PTR(-EOPNOTSUPP); > + if (!obj->funcs->vmap) { > + return ERR_PTR(-EOPNOTSUPP); > > - if (!vaddr) > - vaddr = ERR_PTR(-ENOMEM); > + ret = obj->funcs->vmap(obj, &map); > + if (ret) > + return ERR_PTR(ret); > + else if (dma_buf_map_is_null(&map)) > + return ERR_PTR(-ENOMEM); > > - return vaddr; > + return map.vaddr; > } > > void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr) > { > + struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr); > + > if (!vaddr) > return; > > if (obj->funcs->vunmap) > - obj->funcs->vunmap(obj, vaddr); > + obj->funcs->vunmap(obj, &map); > } > > /** > diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c > index 2165633c9b9e..e87cd36518d3 100644 > --- a/drivers/gpu/drm/drm_gem_cma_helper.c > +++ b/drivers/gpu/drm/drm_gem_cma_helper.c > @@ -519,6 +519,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap); > * drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual > * address space > * @obj: GEM object > + * @map: Returns the kernel virtual address of the CMA GEM object's backing > + * store. > * > * This function maps a buffer exported via DRM PRIME into the kernel's > * virtual address space. Since the CMA buffers are already mapped into the > @@ -527,13 +529,15 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap); > * driver's &drm_gem_object_funcs.vmap callback. > * > * Returns: > - * The kernel virtual address of the CMA GEM object's backing store. > + * 0 on success, or a negative error code otherwise. 
> */ > -void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj) > +int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj); > > - return cma_obj->vaddr; > + dma_buf_map_set_vaddr(map, cma_obj->vaddr); > + > + return 0; > } > EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap); > > @@ -541,14 +545,14 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap); > * drm_gem_cma_prime_vunmap - unmap a CMA GEM object from the kernel's virtual > * address space > * @obj: GEM object > - * @vaddr: kernel virtual address where the CMA GEM object was mapped > + * @map: Kernel virtual address where the CMA GEM object was mapped > * > * This function removes a buffer exported via DRM PRIME from the kernel's > * virtual address space. This is a no-op because CMA buffers cannot be > * unmapped from kernel space. Drivers using the CMA helpers should set this > * as their &drm_gem_object_funcs.vunmap callback. > */ > -void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr) > +void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > /* Nothing to do */ > } > diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c > index fb11df7aced5..5553f58f68f3 100644 > --- a/drivers/gpu/drm/drm_gem_shmem_helper.c > +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c > @@ -258,19 +258,25 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj) > } > EXPORT_SYMBOL(drm_gem_shmem_unpin); > > -static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem) > +static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map) > { > struct drm_gem_object *obj = &shmem->base; > - struct dma_buf_map map; > int ret = 0; > > - if (shmem->vmap_use_count++ > 0) > - return shmem->vaddr; > + if (shmem->vmap_use_count++ > 0) { > + dma_buf_map_set_vaddr(map, shmem->vaddr); > + return 0; > + } > > if (obj->import_attach) { > - ret = dma_buf_vmap(obj->import_attach->dmabuf, &map); > - if (!ret) > - shmem->vaddr = map.vaddr; > + ret = dma_buf_vmap(obj->import_attach->dmabuf, map); > + if (!ret) { > + if (WARN_ON(map->is_iomem)) { > + ret = -EIO; > + goto err_put_pages; > + } > + shmem->vaddr = map->vaddr; > + } > } else { > pgprot_t prot = PAGE_KERNEL; > > @@ -284,6 +290,8 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem) > VM_MAP, prot); > if (!shmem->vaddr) > ret = -ENOMEM; > + else > + dma_buf_map_set_vaddr(map, shmem->vaddr); > } > > if (ret) { > @@ -291,7 +299,7 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem) > goto err_put_pages; > } > > - return shmem->vaddr; > + return 0; > > err_put_pages: > if (!obj->import_attach) > @@ -299,12 +307,14 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem) > err_zero_use: > shmem->vmap_use_count = 0; > > - return ERR_PTR(ret); > + return ret; > } > > /* > * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object > * @shmem: shmem GEM object > + * @map: Returns the kernel virtual address of the SHMEM GEM object's backing > + * store. > * > * This function makes sure that a contiguous kernel virtual address mapping > * exists for the buffer backing the shmem GEM object. > @@ -318,26 +328,25 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem) > * Returns: > * 0 on success or a negative error code on failure. 
> */ > -void *drm_gem_shmem_vmap(struct drm_gem_object *obj) > +int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); > - void *vaddr; > int ret; > > ret = mutex_lock_interruptible(&shmem->vmap_lock); > if (ret) > - return ERR_PTR(ret); > - vaddr = drm_gem_shmem_vmap_locked(shmem); > + return ret; > + ret = drm_gem_shmem_vmap_locked(shmem, map); > mutex_unlock(&shmem->vmap_lock); > > - return vaddr; > + return ret; > } > EXPORT_SYMBOL(drm_gem_shmem_vmap); > > -static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem) > +static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem, > + struct dma_buf_map *map) > { > struct drm_gem_object *obj = &shmem->base; > - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(shmem->vaddr); > > if (WARN_ON_ONCE(!shmem->vmap_use_count)) > return; > @@ -346,7 +355,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem) > return; > > if (obj->import_attach) > - dma_buf_vunmap(obj->import_attach->dmabuf, &map); > + dma_buf_vunmap(obj->import_attach->dmabuf, map); > else > vunmap(shmem->vaddr); > > @@ -357,6 +366,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem) > /* > * drm_gem_shmem_vunmap - Unmap a virtual mapping fo a shmem GEM object > * @shmem: shmem GEM object > + * @map: Kernel virtual address where the SHMEM GEM object was mapped > * > * This function cleans up a kernel virtual address mapping acquired by > * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to > @@ -366,12 +376,12 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem) > * also be called by drivers directly, in which case it will hide the > * differences between dma-buf imported and natively allocated objects. 
> */ > -void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr) > +void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); > > mutex_lock(&shmem->vmap_lock); > - drm_gem_shmem_vunmap_locked(shmem); > + drm_gem_shmem_vunmap_locked(shmem, map); > mutex_unlock(&shmem->vmap_lock); > } > EXPORT_SYMBOL(drm_gem_shmem_vunmap); > diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c > index 256b346664f2..6a5b932e0d06 100644 > --- a/drivers/gpu/drm/drm_gem_vram_helper.c > +++ b/drivers/gpu/drm/drm_gem_vram_helper.c > @@ -1,5 +1,6 @@ > // SPDX-License-Identifier: GPL-2.0-or-later > > +#include > #include > > #include > @@ -382,11 +383,11 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo) > } > EXPORT_SYMBOL(drm_gem_vram_unpin); > > -static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo) > +static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo, > + struct dma_buf_map *map) > { > int ret; > struct ttm_bo_kmap_obj *kmap = &gbo->kmap; > - bool is_iomem; > > if (gbo->kmap_use_count > 0) > goto out; > @@ -396,17 +397,30 @@ static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo) > > ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap); > if (ret) > - return ERR_PTR(ret); > + return ret; > > out: > - if (!kmap->virtual) > - return NULL; /* not mapped; don't increment ref */ > + if (!kmap->virtual) { > + dma_buf_map_clear(map); > + return 0; /* not mapped; don't increment ref */ > + } > ++gbo->kmap_use_count; > - return ttm_kmap_obj_virtual(kmap, &is_iomem); > + ttm_kmap_obj_to_dma_buf_map(kmap, map); > + return 0; > } > > -static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo) > +static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo, > + struct dma_buf_map *map) > { > + struct drm_device *dev = gbo->bo.base.dev; > + struct ttm_bo_kmap_obj *kmap = &gbo->kmap; > + struct dma_buf_map kmap_map; > + > + ttm_kmap_obj_to_dma_buf_map(kmap, &kmap_map); > + > + if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&kmap_map, map))) > + return; /* BUG: map not mapped from this BO */ > + > if (WARN_ON_ONCE(!gbo->kmap_use_count)) > return; > if (--gbo->kmap_use_count > 0) > @@ -423,7 +437,9 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo) > /** > * drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address > * space > - * @gbo: The GEM VRAM object to map > + * @gbo: The GEM VRAM object to map > + * @map: Returns the kernel virtual address of the VRAM GEM object's backing > + * store. > * > * The vmap function pins a GEM VRAM object to its current location, either > * system or video memory, and maps its buffer into kernel address space. > @@ -432,48 +448,44 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo) > * unmap and unpin the GEM VRAM object. > * > * Returns: > - * The buffer's virtual address on success, or > - * an ERR_PTR()-encoded error code otherwise. > + * 0 on success, or a negative error code otherwise. 
> */ > -void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo) > +int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map) > { > int ret; > - void *base; > > ret = ttm_bo_reserve(&gbo->bo, true, false, NULL); > if (ret) > - return ERR_PTR(ret); > + return ret; > > ret = drm_gem_vram_pin_locked(gbo, 0); > if (ret) > goto err_ttm_bo_unreserve; > - base = drm_gem_vram_kmap_locked(gbo); > - if (IS_ERR(base)) { > - ret = PTR_ERR(base); > + ret = drm_gem_vram_kmap_locked(gbo, map); > + if (ret) > goto err_drm_gem_vram_unpin_locked; > - } > > ttm_bo_unreserve(&gbo->bo); > > - return base; > + return 0; > > err_drm_gem_vram_unpin_locked: > drm_gem_vram_unpin_locked(gbo); > err_ttm_bo_unreserve: > ttm_bo_unreserve(&gbo->bo); > - return ERR_PTR(ret); > + return ret; > } > EXPORT_SYMBOL(drm_gem_vram_vmap); > > /** > * drm_gem_vram_vunmap() - Unmaps and unpins a GEM VRAM object > - * @gbo: The GEM VRAM object to unmap > - * @vaddr: The mapping's base address as returned by drm_gem_vram_vmap() > + * @gbo: The GEM VRAM object to unmap > + * @map: Kernel virtual address where the VRAM GEM object was mapped > * > * A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See > * the documentation for drm_gem_vram_vmap() for more information. > */ > -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr) > +void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map) > { > int ret; > > @@ -481,7 +493,7 @@ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr) > if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret)) > return; > > - drm_gem_vram_kunmap_locked(gbo); > + drm_gem_vram_kunmap_locked(gbo, map); > drm_gem_vram_unpin_locked(gbo); > > ttm_bo_unreserve(&gbo->bo); > @@ -829,37 +841,33 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem) > } > > /** > - * drm_gem_vram_object_vmap() - \ > - Implements &struct drm_gem_object_funcs.vmap > - * @gem: The GEM object to map > + * drm_gem_vram_object_vmap() - > + * Implements &struct drm_gem_object_funcs.vmap > + * @gem: The GEM object to map > + * @map: Returns the kernel virtual address of the VRAM GEM object's backing > + * store. > * > * Returns: > - * The buffers virtual address on success, or > - * NULL otherwise. > + * 0 on success, or a negative error code otherwise. 
> */ > -static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem) > +static int drm_gem_vram_object_vmap(struct drm_gem_object *gem, struct dma_buf_map *map) > { > struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem); > - void *base; > > - base = drm_gem_vram_vmap(gbo); > - if (IS_ERR(base)) > - return NULL; > - return base; > + return drm_gem_vram_vmap(gbo, map); > } > > /** > - * drm_gem_vram_object_vunmap() - \ > - Implements &struct drm_gem_object_funcs.vunmap > - * @gem: The GEM object to unmap > - * @vaddr: The mapping's base address > + * drm_gem_vram_object_vunmap() - > + * Implements &struct drm_gem_object_funcs.vunmap > + * @gem: The GEM object to unmap > + * @map: Kernel virtual address where the VRAM GEM object was mapped > */ > -static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, > - void *vaddr) > +static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, struct dma_buf_map *map) > { > struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem); > > - drm_gem_vram_vunmap(gbo, vaddr); > + drm_gem_vram_vunmap(gbo, map); > } > > /* > diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h > index 914f0867ff71..3d1eb8065fce 100644 > --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h > +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h > @@ -51,8 +51,8 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data, > int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma); > int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset); > struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj); > -void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj); > -void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); > +int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); > +void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map); > int etnaviv_gem_prime_mmap(struct drm_gem_object *obj, > struct vm_area_struct *vma); > struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev, > diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c > index 135fbff6fecf..36c03e287e29 100644 > --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c > +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c > @@ -22,12 +22,17 @@ struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj) > return drm_prime_pages_to_sg(obj->dev, etnaviv_obj->pages, npages); > } > > -void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj) > +int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > - return etnaviv_gem_vmap(obj); > + void *vaddr = etnaviv_gem_vmap(obj); > + if (!vaddr) > + return -ENOMEM; > + dma_buf_map_set_vaddr(map, vaddr); > + > + return 0; > } > > -void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) > +void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > /* TODO msm_gem_vunmap() */ > } > diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c > index e7a6eb96f692..2c74e06669fa 100644 > --- a/drivers/gpu/drm/exynos/exynos_drm_gem.c > +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c > @@ -471,12 +471,12 @@ exynos_drm_gem_prime_import_sg_table(struct drm_device *dev, > return &exynos_gem->base; > } > > -void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj) > +int exynos_drm_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) 
> { > - return NULL; > + return -ENOMEM; > } > > -void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) > +void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > /* Nothing to do */ > } Might want to just start out with a patch to delete these. We don't keep dummy functions around generally. > diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.h b/drivers/gpu/drm/exynos/exynos_drm_gem.h > index 74e926abeff0..ecfd048fd91d 100644 > --- a/drivers/gpu/drm/exynos/exynos_drm_gem.h > +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.h > @@ -107,8 +107,8 @@ struct drm_gem_object * > exynos_drm_gem_prime_import_sg_table(struct drm_device *dev, > struct dma_buf_attachment *attach, > struct sg_table *sgt); > -void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj); > -void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); > +int exynos_drm_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); > +void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map); > int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj, > struct vm_area_struct *vma); > > diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c > index 11223fe348df..832e5280a6ed 100644 > --- a/drivers/gpu/drm/lima/lima_gem.c > +++ b/drivers/gpu/drm/lima/lima_gem.c > @@ -182,14 +182,14 @@ static int lima_gem_pin(struct drm_gem_object *obj) > return drm_gem_shmem_pin(obj); > } > > -static void *lima_gem_vmap(struct drm_gem_object *obj) > +static int lima_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > struct lima_bo *bo = to_lima_bo(obj); > > if (bo->heap_size) > - return ERR_PTR(-EINVAL); > + return -EINVAL; > > - return drm_gem_shmem_vmap(obj); > + return drm_gem_shmem_vmap(obj, map); > } > > static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma) > diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c > index dc6df9e9a40d..a070a85f8f36 100644 > --- a/drivers/gpu/drm/lima/lima_sched.c > +++ b/drivers/gpu/drm/lima/lima_sched.c > @@ -1,6 +1,7 @@ > // SPDX-License-Identifier: GPL-2.0 OR MIT > /* Copyright 2017-2019 Qiang Yu */ > > +#include > #include > #include > #include > @@ -303,6 +304,8 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task) > struct lima_dump_chunk_buffer *buffer_chunk; > u32 size, task_size, mem_size; > int i; > + struct dma_buf_map map; > + int ret; > > mutex_lock(&dev->error_task_list_lock); > > @@ -388,15 +391,15 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task) > } else { > buffer_chunk->size = lima_bo_size(bo); > > - data = drm_gem_shmem_vmap(&bo->base.base); > - if (IS_ERR_OR_NULL(data)) { > + ret = drm_gem_shmem_vmap(&bo->base.base, &map); > + if (ret) { > kvfree(et); > goto out; > } > > - memcpy(buffer_chunk + 1, data, buffer_chunk->size); > + memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size); > > - drm_gem_shmem_vunmap(&bo->base.base, data); > + drm_gem_shmem_vunmap(&bo->base.base, &map); > } > > buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size; > diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c > index 38672f9e5c4f..ae4c8cb33fae 100644 > --- a/drivers/gpu/drm/mgag200/mgag200_mode.c > +++ b/drivers/gpu/drm/mgag200/mgag200_mode.c > @@ -9,6 +9,7 @@ > */ > > #include > +#include > > #include > #include > @@ -1556,15 +1557,16 @@ mgag200_handle_damage(struct mga_device *mdev, struct 
drm_framebuffer *fb, > struct drm_rect *clip) > { > struct drm_device *dev = &mdev->base; > - void *vmap; > + struct dma_buf_map map; > + int ret; > > - vmap = drm_gem_shmem_vmap(fb->obj[0]); > - if (drm_WARN_ON(dev, !vmap)) > + ret = drm_gem_shmem_vmap(fb->obj[0], &map); > + if (drm_WARN_ON(dev, ret)) > return; /* BUG: SHMEM BO should always be vmapped */ > > - drm_fb_memcpy_dstclip(mdev->vram, vmap, fb, clip); > + drm_fb_memcpy_dstclip(mdev->vram, map.vaddr, fb, clip); > > - drm_gem_shmem_vunmap(fb->obj[0], vmap); > + drm_gem_shmem_vunmap(fb->obj[0], &map); > > /* Always scanout image at VRAM offset 0 */ > mgag200_set_startadd(mdev, (u32)0); > diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h > index b35c180322e2..e780b6b1763d 100644 > --- a/drivers/gpu/drm/nouveau/nouveau_gem.h > +++ b/drivers/gpu/drm/nouveau/nouveau_gem.h > @@ -37,7 +37,7 @@ extern void nouveau_gem_prime_unpin(struct drm_gem_object *); > extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *); > extern struct drm_gem_object *nouveau_gem_prime_import_sg_table( > struct drm_device *, struct dma_buf_attachment *, struct sg_table *); > -extern void *nouveau_gem_prime_vmap(struct drm_gem_object *); > -extern void nouveau_gem_prime_vunmap(struct drm_gem_object *, void *); > +extern int nouveau_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); > +extern void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map); > > #endif > diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c > index a8264aebf3d4..75e973a5675a 100644 > --- a/drivers/gpu/drm/nouveau/nouveau_prime.c > +++ b/drivers/gpu/drm/nouveau/nouveau_prime.c > @@ -35,7 +35,7 @@ struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *obj) > return drm_prime_pages_to_sg(obj->dev, nvbo->bo.ttm->pages, npages); > } > > -void *nouveau_gem_prime_vmap(struct drm_gem_object *obj) > +int nouveau_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > struct nouveau_bo *nvbo = nouveau_gem_object(obj); > int ret; > @@ -43,12 +43,13 @@ void *nouveau_gem_prime_vmap(struct drm_gem_object *obj) > ret = ttm_bo_kmap(&nvbo->bo, 0, nvbo->bo.num_pages, > &nvbo->dma_buf_vmap); > if (ret) > - return ERR_PTR(ret); > + return ret; > + ttm_kmap_obj_to_dma_buf_map(&nvbo->dma_buf_vmap, map); > > - return nvbo->dma_buf_vmap.virtual; > + return 0; > } > > -void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) > +void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > struct nouveau_bo *nvbo = nouveau_gem_object(obj); > > diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c > index fdbc8d949135..5ab03d605f57 100644 > --- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c > +++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c > @@ -5,6 +5,7 @@ > #include > #include > #include > +#include > #include > #include > #include > @@ -72,6 +73,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev, > { > struct panfrost_file_priv *user = file_priv->driver_priv; > struct panfrost_perfcnt *perfcnt = pfdev->perfcnt; > + struct dma_buf_map map; > struct drm_gem_shmem_object *bo; > u32 cfg, as; > int ret; > @@ -103,11 +105,10 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev, > goto err_close_bo; > } > > - perfcnt->buf = drm_gem_shmem_vmap(&bo->base); > - if (IS_ERR(perfcnt->buf)) { > - ret = 
PTR_ERR(perfcnt->buf); > + ret = drm_gem_shmem_vmap(&bo->base, &map); > + if (ret) > goto err_put_mapping; > - } > + perfcnt->buf = map.vaddr; > > /* > * Invalidate the cache and clear the counters to start from a fresh > @@ -163,7 +164,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev, > return 0; > > err_vunmap: > - drm_gem_shmem_vunmap(&bo->base, perfcnt->buf); > + drm_gem_shmem_vunmap(&bo->base, &map); > err_put_mapping: > panfrost_gem_mapping_put(perfcnt->mapping); > err_close_bo: > @@ -180,6 +181,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev, > { > struct panfrost_file_priv *user = file_priv->driver_priv; > struct panfrost_perfcnt *perfcnt = pfdev->perfcnt; > + struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(perfcnt->buf); > > if (user != perfcnt->user) > return -EINVAL; > @@ -192,7 +194,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev, > GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF)); > > perfcnt->user = NULL; > - drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, perfcnt->buf); > + drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, &map); > perfcnt->buf = NULL; > panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv); > panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu); > diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c > index 6063f3a15329..ed0d22fa0161 100644 > --- a/drivers/gpu/drm/qxl/qxl_display.c > +++ b/drivers/gpu/drm/qxl/qxl_display.c > @@ -25,6 +25,7 @@ > > #include > #include > +#include > > #include > #include > @@ -581,7 +582,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane, > struct drm_gem_object *obj; > struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL; > int ret; > - void *user_ptr; > + struct dma_buf_map user_map; > + struct dma_buf_map cursor_map; > int size = 64*64*4; > > ret = qxl_alloc_release_reserved(qdev, sizeof(*cmd), > @@ -595,7 +597,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane, > user_bo = gem_to_qxl_bo(obj); > > /* pinning is done in the prepare/cleanup framevbuffer */ > - ret = qxl_bo_kmap(user_bo, &user_ptr); > + ret = qxl_bo_kmap(user_bo, &user_map); > if (ret) > goto out_free_release; > > @@ -613,7 +615,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane, > if (ret) > goto out_unpin; > > - ret = qxl_bo_kmap(cursor_bo, (void **)&cursor); > + ret = qxl_bo_kmap(cursor_bo, &cursor_map); > if (ret) > goto out_backoff; > > @@ -627,7 +629,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane, > cursor->chunk.next_chunk = 0; > cursor->chunk.prev_chunk = 0; > cursor->chunk.data_size = size; > - memcpy(cursor->chunk.data, user_ptr, size); > + memcpy(cursor->chunk.data, user_map.vaddr, size); > qxl_bo_kunmap(cursor_bo); > qxl_bo_kunmap(user_bo); > > @@ -1138,6 +1140,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev) > { > int ret; > struct drm_gem_object *gobj; > + struct dma_buf_map map; > int monitors_config_size = sizeof(struct qxl_monitors_config) + > qxl_num_crtc * sizeof(struct qxl_head); > > @@ -1154,7 +1157,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev) > if (ret) > return ret; > > - qxl_bo_kmap(qdev->monitors_config_bo, NULL); > + qxl_bo_kmap(qdev->monitors_config_bo, &map); > > qdev->monitors_config = qdev->monitors_config_bo->kptr; > qdev->ram_header->monitors_config = > diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c > index 3599db096973..1bf4f465ecf4 100644 > --- 
a/drivers/gpu/drm/qxl/qxl_draw.c > +++ b/drivers/gpu/drm/qxl/qxl_draw.c > @@ -20,6 +20,8 @@ > * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. > */ > > +#include > + > #include > > #include "qxl_drv.h" > @@ -42,13 +44,15 @@ static struct qxl_rect *drawable_set_clipping(struct qxl_device *qdev, > unsigned int num_clips, > struct qxl_bo *clips_bo) > { > + struct dma_buf_map map; > struct qxl_clip_rects *dev_clips; > int ret; > > - ret = qxl_bo_kmap(clips_bo, (void **)&dev_clips); > - if (ret) { > + ret = qxl_bo_kmap(clips_bo, &map); > + if (ret) > return NULL; > - } > + > + dev_clips = map.vaddr; > dev_clips->num_rects = num_clips; > dev_clips->chunk.next_chunk = 0; > dev_clips->chunk.prev_chunk = 0; > @@ -142,7 +146,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev, > int stride = fb->pitches[0]; > /* depth is not actually interesting, we don't mask with it */ > int depth = fb->format->cpp[0] * 8; > - uint8_t *surface_base; > + struct dma_buf_map surface_map; > struct qxl_release *release; > struct qxl_bo *clips_bo; > struct qxl_drm_image *dimage; > @@ -197,11 +201,11 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev, > if (ret) > goto out_release_backoff; > > - ret = qxl_bo_kmap(bo, (void **)&surface_base); > + ret = qxl_bo_kmap(bo, &surface_map); > if (ret) > goto out_release_backoff; > > - ret = qxl_image_init(qdev, release, dimage, surface_base, > + ret = qxl_image_init(qdev, release, dimage, surface_map.vaddr, > left - dumb_shadow_offset, > top, width, height, depth, stride); > qxl_bo_kunmap(bo); > diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h > index 3602e8b34189..a9e9da4f4605 100644 > --- a/drivers/gpu/drm/qxl/qxl_drv.h > +++ b/drivers/gpu/drm/qxl/qxl_drv.h > @@ -50,6 +50,8 @@ > > #include "qxl_dev.h" > > +struct dma_buf_map; > + > #define DRIVER_AUTHOR "Dave Airlie" > > #define DRIVER_NAME "qxl" > @@ -335,7 +337,6 @@ int qxl_gem_object_open(struct drm_gem_object *obj, struct drm_file *file_priv); > void qxl_gem_object_close(struct drm_gem_object *obj, > struct drm_file *file_priv); > void qxl_bo_force_delete(struct qxl_device *qdev); > -int qxl_bo_kmap(struct qxl_bo *bo, void **ptr); > > /* qxl_dumb.c */ > int qxl_mode_dumb_create(struct drm_file *file_priv, > @@ -445,8 +446,9 @@ struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj); > struct drm_gem_object *qxl_gem_prime_import_sg_table( > struct drm_device *dev, struct dma_buf_attachment *attach, > struct sg_table *sgt); > -void *qxl_gem_prime_vmap(struct drm_gem_object *obj); > -void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); > +int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); > +void qxl_gem_prime_vunmap(struct drm_gem_object *obj, > + struct dma_buf_map *map); > int qxl_gem_prime_mmap(struct drm_gem_object *obj, > struct vm_area_struct *vma); > > diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c > index d3635e3e3267..2d8ae3b10b1c 100644 > --- a/drivers/gpu/drm/qxl/qxl_object.c > +++ b/drivers/gpu/drm/qxl/qxl_object.c > @@ -23,10 +23,12 @@ > * Alon Levy > */ > > +#include > +#include > + > #include "qxl_drv.h" > #include "qxl_object.h" > > -#include > static void qxl_ttm_bo_destroy(struct ttm_buffer_object *tbo) > { > struct qxl_bo *bo; > @@ -150,24 +152,22 @@ int qxl_bo_create(struct qxl_device *qdev, > return 0; > } > > -int qxl_bo_kmap(struct qxl_bo *bo, void **ptr) > +int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map) > { > - bool is_iomem; > int r; > > if 
(bo->kptr) { > - if (ptr) > - *ptr = bo->kptr; > bo->map_count++; > - return 0; > + goto out; > } > r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap); > if (r) > return r; > - bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem); > - if (ptr) > - *ptr = bo->kptr; > bo->map_count = 1; > + bo->kptr = bo->kmap.virtual; > + > +out: > + ttm_kmap_obj_to_dma_buf_map(&bo->kmap, map); > return 0; > } > > @@ -178,6 +178,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, > void *rptr; > int ret; > struct io_mapping *map; > + struct dma_buf_map bo_map; > > if (bo->tbo.mem.mem_type == TTM_PL_VRAM) > map = qdev->vram_mapping; > @@ -194,11 +195,11 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, Uh, this fallback is wild. Not exactly sure this is a good idea or anything, but also it's here already :-) > return rptr; > } > > - ret = qxl_bo_kmap(bo, &rptr); > + ret = qxl_bo_kmap(bo, &bo_map); > if (ret) > return NULL; > > - rptr += page_offset * PAGE_SIZE; > + rptr = bo_map.vaddr + page_offset * PAGE_SIZE; > return rptr; > } > > diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h > index 09a5c818324d..ebf24c9d2bf2 100644 > --- a/drivers/gpu/drm/qxl/qxl_object.h > +++ b/drivers/gpu/drm/qxl/qxl_object.h > @@ -63,7 +63,7 @@ extern int qxl_bo_create(struct qxl_device *qdev, > bool kernel, bool pinned, u32 domain, > struct qxl_surface *surf, > struct qxl_bo **bo_ptr); > -extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr); > +extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map); > extern void qxl_bo_kunmap(struct qxl_bo *bo); > void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset); > void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map); > diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c > index 7d3816fca5a8..4aa949799446 100644 > --- a/drivers/gpu/drm/qxl/qxl_prime.c > +++ b/drivers/gpu/drm/qxl/qxl_prime.c > @@ -54,20 +54,20 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table( > return ERR_PTR(-ENOSYS); > } > > -void *qxl_gem_prime_vmap(struct drm_gem_object *obj) > +int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > struct qxl_bo *bo = gem_to_qxl_bo(obj); > - void *ptr; > int ret; > > - ret = qxl_bo_kmap(bo, &ptr); > + ret = qxl_bo_kmap(bo, map); > if (ret < 0) > - return ERR_PTR(ret); > + return ret; > > - return ptr; > + return 0; > } > > -void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) > +void qxl_gem_prime_vunmap(struct drm_gem_object *obj, > + struct dma_buf_map *map) > { > struct qxl_bo *bo = gem_to_qxl_bo(obj); > > diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c > index 0ccd7213e41f..ac51517bdfcd 100644 > --- a/drivers/gpu/drm/radeon/radeon_gem.c > +++ b/drivers/gpu/drm/radeon/radeon_gem.c > @@ -40,8 +40,8 @@ struct dma_buf *radeon_gem_prime_export(struct drm_gem_object *gobj, > struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj); > int radeon_gem_prime_pin(struct drm_gem_object *obj); > void radeon_gem_prime_unpin(struct drm_gem_object *obj); > -void *radeon_gem_prime_vmap(struct drm_gem_object *obj); > -void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); > +int radeon_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); > +void radeon_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map); > > static const struct drm_gem_object_funcs radeon_gem_object_funcs; > > diff 
--git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c > index b9de0e51c0be..a1a358de5448 100644 > --- a/drivers/gpu/drm/radeon/radeon_prime.c > +++ b/drivers/gpu/drm/radeon/radeon_prime.c > @@ -39,7 +39,7 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj) > return drm_prime_pages_to_sg(obj->dev, bo->tbo.ttm->pages, npages); > } > > -void *radeon_gem_prime_vmap(struct drm_gem_object *obj) > +int radeon_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > struct radeon_bo *bo = gem_to_radeon_bo(obj); > int ret; > @@ -47,12 +47,13 @@ void *radeon_gem_prime_vmap(struct drm_gem_object *obj) > ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, > &bo->dma_buf_vmap); > if (ret) > - return ERR_PTR(ret); > + return ret; > + ttm_kmap_obj_to_dma_buf_map(&bo->dma_buf_vmap, map); > > - return bo->dma_buf_vmap.virtual; > + return 0; > } > > -void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) > +void radeon_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > struct radeon_bo *bo = gem_to_radeon_bo(obj); > > diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c > index 7d5ebb10323b..7971f57436dd 100644 > --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c > +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c > @@ -532,26 +532,32 @@ rockchip_gem_prime_import_sg_table(struct drm_device *drm, > return ERR_PTR(ret); > } > > -void *rockchip_gem_prime_vmap(struct drm_gem_object *obj) > +int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj); > > - if (rk_obj->pages) > - return vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP, > - pgprot_writecombine(PAGE_KERNEL)); > + if (rk_obj->pages) { > + void *vaddr = vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP, > + pgprot_writecombine(PAGE_KERNEL)); > + if (!vaddr) > + return -ENOMEM; > + dma_buf_map_set_vaddr(map, vaddr); > + return 0; > + } > > if (rk_obj->dma_attrs & DMA_ATTR_NO_KERNEL_MAPPING) > - return NULL; > + return -ENOMEM; > + dma_buf_map_set_vaddr(map, rk_obj->kvaddr); > > - return rk_obj->kvaddr; > + return 0; > } > > -void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) > +void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj); > > if (rk_obj->pages) { > - vunmap(vaddr); > + vunmap(map->vaddr); > return; > } > > diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h > index 7ffc541bea07..5a70a56cd406 100644 > --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h > +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h > @@ -31,8 +31,8 @@ struct drm_gem_object * > rockchip_gem_prime_import_sg_table(struct drm_device *dev, > struct dma_buf_attachment *attach, > struct sg_table *sg); > -void *rockchip_gem_prime_vmap(struct drm_gem_object *obj); > -void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); > +int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); > +void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map); > > /* drm driver mmap file operations */ > int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma); > diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c > index 744a8e337e41..6dc013f4b236 100644 > --- a/drivers/gpu/drm/tiny/cirrus.c > +++ 
b/drivers/gpu/drm/tiny/cirrus.c > @@ -17,6 +17,7 @@ > */ > > #include > +#include > #include > #include > > @@ -314,6 +315,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb, > struct drm_rect *rect) > { > struct cirrus_device *cirrus = to_cirrus(fb->dev); > + struct dma_buf_map map; > void *vmap; > int idx, ret; > > @@ -321,10 +323,10 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb, > if (!drm_dev_enter(&cirrus->dev, &idx)) > goto out; > > - ret = -ENOMEM; > - vmap = drm_gem_shmem_vmap(fb->obj[0]); > - if (!vmap) > + ret = drm_gem_shmem_vmap(fb->obj[0], &map); > + if (ret) > goto out_dev_exit; > + vmap = map.vaddr; > > if (cirrus->cpp == fb->format->cpp[0]) > drm_fb_memcpy_dstclip(cirrus->vram, > @@ -343,7 +345,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb, > else > WARN_ON_ONCE("cpp mismatch"); > > - drm_gem_shmem_vunmap(fb->obj[0], vmap); > + drm_gem_shmem_vunmap(fb->obj[0], &map); > ret = 0; > > out_dev_exit: > diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c > index cc397671f689..5865027a1667 100644 > --- a/drivers/gpu/drm/tiny/gm12u320.c > +++ b/drivers/gpu/drm/tiny/gm12u320.c > @@ -248,6 +248,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320) > { > int block, dst_offset, len, remain, ret, x1, x2, y1, y2; > struct drm_framebuffer *fb; > + struct dma_buf_map map; > void *vaddr; > u8 *src; > > @@ -262,11 +263,12 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320) > y1 = gm12u320->fb_update.rect.y1; > y2 = gm12u320->fb_update.rect.y2; > > - vaddr = drm_gem_shmem_vmap(fb->obj[0]); > - if (IS_ERR(vaddr)) { > - GM12U320_ERR("failed to vmap fb: %ld\n", PTR_ERR(vaddr)); > + ret = drm_gem_shmem_vmap(fb->obj[0], &map); > + if (ret) { > + GM12U320_ERR("failed to vmap fb: %d\n", ret); > goto put_fb; > } > + vaddr = map.vaddr; > > if (fb->obj[0]->import_attach) { > ret = dma_buf_begin_cpu_access( > @@ -318,7 +320,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320) > GM12U320_ERR("dma_buf_end_cpu_access err: %d\n", ret); > } > vunmap: > - drm_gem_shmem_vunmap(fb->obj[0], vaddr); > + drm_gem_shmem_vunmap(fb->obj[0], &map); > put_fb: > drm_framebuffer_put(fb); > gm12u320->fb_update.fb = NULL; > diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c > index fef43f4e3bac..9c8ace1aa647 100644 > --- a/drivers/gpu/drm/udl/udl_modeset.c > +++ b/drivers/gpu/drm/udl/udl_modeset.c > @@ -276,6 +276,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y, > struct urb *urb; > struct drm_rect clip; > int log_bpp; > + struct dma_buf_map map; > void *vaddr; > > ret = udl_log_cpp(fb->format->cpp[0]); > @@ -296,11 +297,12 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y, > return ret; > } > > - vaddr = drm_gem_shmem_vmap(fb->obj[0]); > - if (IS_ERR(vaddr)) { > + ret = drm_gem_shmem_vmap(fb->obj[0], &map); > + if (ret) { > DRM_ERROR("failed to vmap fb\n"); > goto out_dma_buf_end_cpu_access; > } > + vaddr = map.vaddr; > > urb = udl_get_urb(dev); > if (!urb) > @@ -333,7 +335,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y, > ret = 0; > > out_drm_gem_shmem_vunmap: > - drm_gem_shmem_vunmap(fb->obj[0], vaddr); > + drm_gem_shmem_vunmap(fb->obj[0], &map); > out_dma_buf_end_cpu_access: > if (import_attach) { > tmp_ret = dma_buf_end_cpu_access(import_attach->dmabuf, > diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c > index 
4fcc0a542b8a..6040b9ec747f 100644 > --- a/drivers/gpu/drm/vboxvideo/vbox_mode.c > +++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c > @@ -9,6 +9,8 @@ > * Michael Thayer * Hans de Goede > */ > + > +#include > #include > > #include > @@ -384,6 +386,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane, > u32 height = plane->state->crtc_h; > size_t data_size, mask_size; > u32 flags; > + struct dma_buf_map map; > + int ret; > u8 *src; > > /* > @@ -397,8 +401,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane, > > vbox_crtc->cursor_enabled = true; > > - src = drm_gem_vram_vmap(gbo); > - if (IS_ERR(src)) { > + ret = drm_gem_vram_vmap(gbo, &map); > + if (ret) { > /* > * BUG: we should have pinned the BO in prepare_fb(). > */ > @@ -406,6 +410,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane, > DRM_WARN("Could not map cursor bo, skipping update\n"); > return; > } I don't think digging around in the pointer is a good idea, imo this should get a /* FIXME: Use mapping abstraction properly */ or similar. > + src = map.vaddr; > > /* > * The mask must be calculated based on the alpha > @@ -416,7 +421,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane, > data_size = width * height * 4 + mask_size; > > copy_cursor_image(src, vbox->cursor_data, width, height, mask_size); > - drm_gem_vram_vunmap(gbo, src); > + drm_gem_vram_vunmap(gbo, &map); > > flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE | > VBOX_MOUSE_POINTER_ALPHA; > diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c > index f432278173cd..250266fb437e 100644 > --- a/drivers/gpu/drm/vc4/vc4_bo.c > +++ b/drivers/gpu/drm/vc4/vc4_bo.c > @@ -786,16 +786,16 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma) > return drm_gem_cma_prime_mmap(obj, vma); > } > > -void *vc4_prime_vmap(struct drm_gem_object *obj) > +int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > struct vc4_bo *bo = to_vc4_bo(obj); > > if (bo->validated_shader) { > DRM_DEBUG("mmaping of shader BOs not allowed.\n"); > - return ERR_PTR(-EINVAL); > + return -EINVAL; > } > > - return drm_gem_cma_prime_vmap(obj); > + return drm_gem_cma_prime_vmap(obj, map); > } > > struct drm_gem_object * > diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h > index a22478a35199..6af453c84777 100644 > --- a/drivers/gpu/drm/vc4/vc4_drv.h > +++ b/drivers/gpu/drm/vc4/vc4_drv.h > @@ -804,7 +804,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); > struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev, > struct dma_buf_attachment *attach, > struct sg_table *sgt); > -void *vc4_prime_vmap(struct drm_gem_object *obj); > +int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); > int vc4_bo_cache_init(struct drm_device *dev); > void vc4_bo_cache_destroy(struct drm_device *dev); > int vc4_bo_inc_usecnt(struct vc4_bo *bo); > diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c > index fa54a6d1403d..b2aa26e1e4a2 100644 > --- a/drivers/gpu/drm/vgem/vgem_drv.c > +++ b/drivers/gpu/drm/vgem/vgem_drv.c > @@ -361,24 +361,30 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev, > return &obj->base; > } > > -static void *vgem_prime_vmap(struct drm_gem_object *obj) > +static int vgem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > struct drm_vgem_gem_object *bo = to_vgem_bo(obj); > long n_pages = obj->size >> PAGE_SHIFT; > struct 
page **pages; > + void *vaddr; > > pages = vgem_pin_pages(bo); > if (IS_ERR(pages)) > - return NULL; > + return PTR_ERR(pages); > + > + vaddr = vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL)); > + if (!vaddr) > + return -ENOMEM; > + dma_buf_map_set_vaddr(map, vaddr); > > - return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL)); > + return 0; > } > > -static void vgem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) > +static void vgem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > struct drm_vgem_gem_object *bo = to_vgem_bo(obj); > > - vunmap(vaddr); > + vunmap(map->vaddr); > vgem_unpin_pages(bo); > } > > diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c > index 4f34ef34ba60..74db5a840bed 100644 > --- a/drivers/gpu/drm/xen/xen_drm_front_gem.c > +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c > @@ -290,22 +290,28 @@ int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma) > return gem_mmap_obj(xen_obj, vma); > } > > -void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj) > +int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, struct dma_buf_map *map) > { > struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj); > + void *vaddr; > > if (!xen_obj->pages) > - return NULL; > + return -ENOMEM; > > /* Please see comment in gem_mmap_obj on mapping and attributes. */ > - return vmap(xen_obj->pages, xen_obj->num_pages, > - VM_MAP, PAGE_KERNEL); > + vaddr = vmap(xen_obj->pages, xen_obj->num_pages, > + VM_MAP, PAGE_KERNEL); > + if (!vaddr) > + return -ENOMEM; > + dma_buf_map_set_vaddr(map, vaddr); > + > + return 0; > } > > void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj, > - void *vaddr) > + struct dma_buf_map *map) > { > - vunmap(vaddr); > + vunmap(map->vaddr); > } > > int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj, > diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h > index a39675fa31b2..a4e67d0a149c 100644 > --- a/drivers/gpu/drm/xen/xen_drm_front_gem.h > +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h > @@ -12,6 +12,7 @@ > #define __XEN_DRM_FRONT_GEM_H > > struct dma_buf_attachment; > +struct dma_buf_map; > struct drm_device; > struct drm_gem_object; > struct file; > @@ -34,10 +35,11 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj); > > int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma); > > -void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj); > +int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, > + struct dma_buf_map *map); > > void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj, > - void *vaddr); > + struct dma_buf_map *map); > > int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj, > struct vm_area_struct *vma); > diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h > index c38dd35da00b..5e6daa1c982f 100644 > --- a/include/drm/drm_gem.h > +++ b/include/drm/drm_gem.h > @@ -39,6 +39,7 @@ > > #include > > +struct dma_buf_map; > struct drm_gem_object; > > /** > @@ -138,7 +139,7 @@ struct drm_gem_object_funcs { > * > * This callback is optional. > */ > - void *(*vmap)(struct drm_gem_object *obj); > + int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map); > > /** > * @vunmap: > @@ -148,7 +149,7 @@ struct drm_gem_object_funcs { > * > * This callback is optional. 
> */ > - void (*vunmap)(struct drm_gem_object *obj, void *vaddr); > + void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map); > > /** > * @mmap: > diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h > index 2bfa2502607a..34a7f72879c5 100644 > --- a/include/drm/drm_gem_cma_helper.h > +++ b/include/drm/drm_gem_cma_helper.h > @@ -103,8 +103,8 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev, > struct sg_table *sgt); > int drm_gem_cma_prime_mmap(struct drm_gem_object *obj, > struct vm_area_struct *vma); > -void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj); > -void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr); > +int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); > +void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map); > > struct drm_gem_object * > drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size); > diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h > index 5381f0c8cf6f..3449a0353fe0 100644 > --- a/include/drm/drm_gem_shmem_helper.h > +++ b/include/drm/drm_gem_shmem_helper.h > @@ -113,8 +113,8 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem); > void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem); > int drm_gem_shmem_pin(struct drm_gem_object *obj); > void drm_gem_shmem_unpin(struct drm_gem_object *obj); > -void *drm_gem_shmem_vmap(struct drm_gem_object *obj); > -void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr); > +int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); > +void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map); > > int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv); > > diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h > index 128f88174d32..0c43b8f17ee9 100644 > --- a/include/drm/drm_gem_vram_helper.h > +++ b/include/drm/drm_gem_vram_helper.h > @@ -97,8 +97,8 @@ u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo); > s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo); > int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag); > int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo); > -void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo); > -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr); > +int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map); > +void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map); > > int drm_gem_vram_fill_create_dumb(struct drm_file *file, > struct drm_device *dev, > -- > 2.28.0 Bit a big patch, I can't think of a way to split it up either. -Daniel -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch From daniel at ffwll.ch Fri Oct 2 13:05:39 2020 From: daniel at ffwll.ch (Daniel Vetter) Date: Fri, 2 Oct 2020 15:05:39 +0200 Subject: [Spice-devel] [PATCH v3 5/7] drm/gem: Store client buffer mappings as struct dma_buf_map In-Reply-To: <20200929151437.19717-6-tzimmermann@suse.de> References: <20200929151437.19717-1-tzimmermann@suse.de> <20200929151437.19717-6-tzimmermann@suse.de> Message-ID: <20201002130539.GL438822@phenom.ffwll.local> On Tue, Sep 29, 2020 at 05:14:35PM +0200, Thomas Zimmermann wrote: > Kernel DRM clients now store their framebuffer address in an instance > of struct dma_buf_map. Depending on the buffer's location, the address > refers to system or I/O memory. 
> > Callers of drm_client_buffer_vmap() receive a copy of the value in > the call's supplied arguments. It can be accessed and modified with > dma_buf_map interfaces. > > Signed-off-by: Thomas Zimmermann > --- > drivers/gpu/drm/drm_client.c | 34 +++++++++++++++++++-------------- > drivers/gpu/drm/drm_fb_helper.c | 23 +++++++++++++--------- > include/drm/drm_client.h | 7 ++++--- > 3 files changed, 38 insertions(+), 26 deletions(-) > > diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c > index ac0082bed966..fe573acf1067 100644 > --- a/drivers/gpu/drm/drm_client.c > +++ b/drivers/gpu/drm/drm_client.c > @@ -235,7 +235,7 @@ static void drm_client_buffer_delete(struct drm_client_buffer *buffer) > { > struct drm_device *dev = buffer->client->dev; > > - drm_gem_vunmap(buffer->gem, buffer->vaddr); > + drm_gem_vunmap(buffer->gem, &buffer->map); > > if (buffer->gem) > drm_gem_object_put(buffer->gem); > @@ -291,25 +291,31 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u > /** > * drm_client_buffer_vmap - Map DRM client buffer into address space > * @buffer: DRM client buffer > + * @map_copy: Returns the mapped memory's address > * > * This function maps a client buffer into kernel address space. If the > - * buffer is already mapped, it returns the mapping's address. > + * buffer is already mapped, it returns the existing mapping's address. > * > * Client buffer mappings are not ref'counted. Each call to > * drm_client_buffer_vmap() should be followed by a call to > * drm_client_buffer_vunmap(); or the client buffer should be mapped > * throughout its lifetime. > * > + * The returned address is a copy of the internal value. In contrast to > + * other vmap interfaces, you don't need it for the client's vunmap > + * function. So you can modify it at will during blit and draw operations. > + * > * Returns: > - * The mapped memory's address > + * 0 on success, or a negative errno code otherwise. > */ > -void *drm_client_buffer_vmap(struct drm_client_buffer *buffer) > +int > +drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map_copy) > { > - struct dma_buf_map map; > + struct dma_buf_map *map = &buffer->map; > int ret; > > - if (buffer->vaddr) > - return buffer->vaddr; > + if (dma_buf_map_is_set(map)) > + goto out; > > /* > * FIXME: The dependency on GEM here isn't required, we could > @@ -319,13 +325,14 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer) > * fd_install step out of the driver backend hooks, to make that > * final step optional for internal users. 
> */ > - ret = drm_gem_vmap(buffer->gem, &map); > + ret = drm_gem_vmap(buffer->gem, map); > if (ret) > - return ERR_PTR(ret); > + return ret; > > - buffer->vaddr = map.vaddr; > +out: > + *map_copy = *map; > > - return map.vaddr; > + return 0; > } > EXPORT_SYMBOL(drm_client_buffer_vmap); > > @@ -339,10 +346,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap); > */ > void drm_client_buffer_vunmap(struct drm_client_buffer *buffer) > { > - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr); > + struct dma_buf_map *map = &buffer->map; > > - drm_gem_vunmap(buffer->gem, &map); > - buffer->vaddr = NULL; > + drm_gem_vunmap(buffer->gem, map); > } > EXPORT_SYMBOL(drm_client_buffer_vunmap); > > diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c > index 8697554ccd41..343a292f2c7c 100644 > --- a/drivers/gpu/drm/drm_fb_helper.c > +++ b/drivers/gpu/drm/drm_fb_helper.c > @@ -394,7 +394,7 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper, > unsigned int cpp = fb->format->cpp[0]; > size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp; > void *src = fb_helper->fbdev->screen_buffer + offset; > - void *dst = fb_helper->buffer->vaddr + offset; > + void *dst = fb_helper->buffer->map.vaddr + offset; > size_t len = (clip->x2 - clip->x1) * cpp; > unsigned int y; > > @@ -416,7 +416,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work) > struct drm_clip_rect *clip = &helper->dirty_clip; > struct drm_clip_rect clip_copy; > unsigned long flags; > - void *vaddr; > + struct dma_buf_map map; > + int ret; > > spin_lock_irqsave(&helper->dirty_lock, flags); > clip_copy = *clip; > @@ -429,8 +430,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work) > > /* Generic fbdev uses a shadow buffer */ > if (helper->buffer) { > - vaddr = drm_client_buffer_vmap(helper->buffer); > - if (IS_ERR(vaddr)) > + ret = drm_client_buffer_vmap(helper->buffer, &map); > + if (ret) > return; > drm_fb_helper_dirty_blit_real(helper, &clip_copy); > } > @@ -2076,7 +2077,8 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper, > struct drm_framebuffer *fb; > struct fb_info *fbi; > u32 format; > - void *vaddr; > + struct dma_buf_map map; > + int ret; > > drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n", > sizes->surface_width, sizes->surface_height, > @@ -2112,11 +2114,14 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper, > fb_deferred_io_init(fbi); > } else { > /* buffer is mapped for HW framebuffer */ > - vaddr = drm_client_buffer_vmap(fb_helper->buffer); > - if (IS_ERR(vaddr)) > - return PTR_ERR(vaddr); > + ret = drm_client_buffer_vmap(fb_helper->buffer, &map); > + if (ret) > + return ret; > + if (map.is_iomem) > + fbi->screen_base = map.vaddr_iomem; > + else > + fbi->screen_buffer = map.vaddr; > > - fbi->screen_buffer = vaddr; > /* Shamelessly leak the physical address to user-space */ > #if IS_ENABLED(CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM) > if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0) > diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h > index 7aaea665bfc2..f07f2fb02e75 100644 > --- a/include/drm/drm_client.h > +++ b/include/drm/drm_client.h > @@ -3,6 +3,7 @@ > #ifndef _DRM_CLIENT_H_ > #define _DRM_CLIENT_H_ > > +#include > #include > #include > #include > @@ -141,9 +142,9 @@ struct drm_client_buffer { > struct drm_gem_object *gem; > > /** > - * @vaddr: Virtual address for the buffer > + * @map: Virtual address for the buffer > */ > - void *vaddr; > + struct dma_buf_map map; > > /** > * 
@fb: DRM framebuffer > @@ -155,7 +156,7 @@ struct drm_client_buffer * > drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format); > void drm_client_framebuffer_delete(struct drm_client_buffer *buffer); > int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect); > -void *drm_client_buffer_vmap(struct drm_client_buffer *buffer); > +int drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map); > void drm_client_buffer_vunmap(struct drm_client_buffer *buffer); > > int drm_client_modeset_create(struct drm_client_dev *client); Reviewed-by: Daniel Vetter > -- > 2.28.0 > -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch From daniel at ffwll.ch Fri Oct 2 12:21:38 2020 From: daniel at ffwll.ch (Daniel Vetter) Date: Fri, 2 Oct 2020 14:21:38 +0200 Subject: [Spice-devel] [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion In-Reply-To: References: <20200929151437.19717-1-tzimmermann@suse.de> <20200929151437.19717-3-tzimmermann@suse.de> <8fad0114-064a-4ed5-c21d-d1b4294de0a1@amd.com> <2614314a-81f7-4722-c400-68d90e48e09a@suse.de> <8a84f62b-33f3-f44c-52af-c859a0e0d1fb@gmail.com> <07972ada-9135-3743-a86b-487f610c509f@suse.de> <20200930094712.GW438822@phenom.ffwll.local> <8479d0aa-3826-4f37-0109-55daca515793@amd.com> <20201002095830.GH438822@phenom.ffwll.local> Message-ID: On Fri, Oct 2, 2020 at 1:30 PM Christian K?nig wrote: > > Am 02.10.20 um 11:58 schrieb Daniel Vetter: > > On Wed, Sep 30, 2020 at 02:51:46PM +0200, Daniel Vetter wrote: > >> On Wed, Sep 30, 2020 at 2:34 PM Christian K?nig > >> wrote: > >>> Am 30.09.20 um 11:47 schrieb Daniel Vetter: > >>>> On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian K?nig wrote: > >>>>> Am 30.09.20 um 10:19 schrieb Thomas Zimmermann: > >>>>>> Hi > >>>>>> > >>>>>> Am 30.09.20 um 10:05 schrieb Christian K?nig: > >>>>>>> Am 29.09.20 um 19:49 schrieb Thomas Zimmermann: > >>>>>>>> Hi Christian > >>>>>>>> > >>>>>>>> Am 29.09.20 um 17:35 schrieb Christian K?nig: > >>>>>>>>> Am 29.09.20 um 17:14 schrieb Thomas Zimmermann: > >>>>>>>>>> The new helper ttm_kmap_obj_to_dma_buf() extracts address and location > >>>>>>>>>> from and instance of TTM's kmap_obj and initializes struct dma_buf_map > >>>>>>>>>> with these values. Helpful for TTM-based drivers. > >>>>>>>>> We could completely drop that if we use the same structure inside TTM as > >>>>>>>>> well. > >>>>>>>>> > >>>>>>>>> Additional to that which driver is going to use this? > >>>>>>>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will > >>>>>>>> retrieve the pointer via this function. > >>>>>>>> > >>>>>>>> I do want to see all that being more tightly integrated into TTM, but > >>>>>>>> not in this series. This one is about fixing the bochs-on-sparc64 > >>>>>>>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list. > >>>>>>> I should have asked which driver you try to fix here :) > >>>>>>> > >>>>>>> In this case just keep the function inside bochs and only fix it there. > >>>>>>> > >>>>>>> All other drivers can be fixed when we generally pump this through TTM. > >>>>>> Did you take a look at patch 3? This function will be used by VRAM > >>>>>> helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we > >>>>>> have to duplicate the functionality in each if these drivers. Bochs > >>>>>> itself uses VRAM helpers and doesn't touch the function directly. > >>>>> Ah, ok can we have that then only in the VRAM helpers? 
> >>>>> > >>>>> Alternative you could go ahead and use dma_buf_map in ttm_bo_kmap_obj > >>>>> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK. > >>>>> > >>>>> What I want to avoid is to have another conversion function in TTM because > >>>>> what happens here is that we already convert from ttm_bus_placement to > >>>>> ttm_bo_kmap_obj and then to dma_buf_map. > >>>> Hm I'm not really seeing how that helps with a gradual conversion of > >>>> everything over to dma_buf_map and assorted helpers for access? There's > >>>> too many places in ttm drivers where is_iomem and related stuff is used to > >>>> be able to convert it all in one go. An intermediate state with a bunch of > >>>> conversions seems fairly unavoidable to me. > >>> Fair enough. I would just have started bottom up and not top down. > >>> > >>> Anyway feel free to go ahead with this approach as long as we can remove > >>> the new function again when we clean that stuff up for good. > >> Yeah I guess bottom up would make more sense as a refactoring. But the > >> main motivation to land this here is to fix the __mmio vs normal > >> memory confusion in the fbdev emulation helpers for sparc (and > >> anything else that needs this). Hence the top down approach for > >> rolling this out. > > Ok I started reviewing this a bit more in-depth, and I think this is a bit > > too much of a de-tour. > > > > Looking through all the callers of ttm_bo_kmap almost everyone maps the > > entire object. Only vmwgfx uses to map less than that. Also, everyone just > > immediately follows up with converting that full object map into a > > pointer. > > > > So I think what we really want here is: > > - new function > > > > int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); > > > > _vmap name since that's consistent with both dma_buf functions and > > what's usually used to implement this. Outside of the ttm world kmap > > usually just means single-page mappings using kmap() or it's iomem > > sibling io_mapping_map* so rather confusing name for a function which > > usually is just used to set up a vmap of the entire buffer. > > > > - a helper which can be used for the drm_gem_object_funcs vmap/vunmap > > functions for all ttm drivers. We should be able to make this fully > > generic because a) we now have dma_buf_map and b) drm_gem_object is > > embedded in the ttm_bo, so we can upcast for everyone who's both a ttm > > and gem driver. > > > > This is maybe a good follow-up, since it should allow us to ditch quite > > a bit of the vram helper code for this more generic stuff. I also might > > have missed some special-cases here, but from a quick look everything > > just pins the buffer to the current location and that's it. > > > > Also this obviously requires Christian's generic ttm_bo_pin rework > > first. > > > > - roll the above out to drivers. > > > > Christian/Thomas, thoughts on this? > > Calling this vmap instead of kmap certainly makes sense. > > Not 100% sure about the generic helpers, but it sounds like this should > indeed look rather clean in the end. Yeah generic helper is probably better left for a later step, after we've rolled out ttm_bo_vmap out everywhere. -Daniel > > Christian. > > > > > I think for the immediate need of rolling this out for vram helpers and > > fbdev code we should be able to do this, but just postpone the driver wide > > roll-out for now. > > > > Cheers, Daniel > > > >> -Daniel > >> > >>> Christian. > >>> > >>>> -Daniel > >>>> > >>>>> Thanks, > >>>>> Christian. 
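In rough code, the proposal above amounts to something like the sketch below. Only the name ttm_bo_vmap() is taken from the discussion; the bodies, the drm_gem_ttm_vmap() wrapper and its placement are assumptions rather than code from this series, and a real version would also have to pin the BO and keep the ttm_bo_kmap_obj around so the matching vunmap can undo the mapping.

static int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map)
{
	struct ttm_bo_kmap_obj kmap;
	int ret;

	/* map the whole object, as almost all callers of ttm_bo_kmap() do */
	ret = ttm_bo_kmap(bo, 0, bo->num_pages, &kmap);
	if (ret)
		return ret;

	/* translate TTM's (vaddr, is_iomem) pair into a struct dma_buf_map */
	ttm_kmap_obj_to_dma_buf_map(&kmap, map);

	return 0;
}

/*
 * Generic &drm_gem_object_funcs.vmap for drivers whose GEM object is
 * embedded in the TTM BO; the upcast works because struct
 * ttm_buffer_object embeds struct drm_gem_object as its .base member.
 */
static int drm_gem_ttm_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
	struct ttm_buffer_object *bo =
		container_of(obj, struct ttm_buffer_object, base);

	return ttm_bo_vmap(bo, map);
}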
> >>>>> > >>>>>> Best regards > >>>>>> Thomas > >>>>>> > >>>>>>> Regards, > >>>>>>> Christian. > >>>>>>> > >>>>>>>> Best regards > >>>>>>>> Thomas > >>>>>>>> > >>>>>>>>> Regards, > >>>>>>>>> Christian. > >>>>>>>>> > >>>>>>>>>> Signed-off-by: Thomas Zimmermann > >>>>>>>>>> --- > >>>>>>>>>> include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++ > >>>>>>>>>> include/linux/dma-buf-map.h | 20 ++++++++++++++++++++ > >>>>>>>>>> 2 files changed, 44 insertions(+) > >>>>>>>>>> > >>>>>>>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h > >>>>>>>>>> index c96a25d571c8..62d89f05a801 100644 > >>>>>>>>>> --- a/include/drm/ttm/ttm_bo_api.h > >>>>>>>>>> +++ b/include/drm/ttm/ttm_bo_api.h > >>>>>>>>>> @@ -34,6 +34,7 @@ > >>>>>>>>>> #include > >>>>>>>>>> #include > >>>>>>>>>> #include > >>>>>>>>>> +#include > >>>>>>>>>> #include > >>>>>>>>>> #include > >>>>>>>>>> #include > >>>>>>>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct > >>>>>>>>>> ttm_bo_kmap_obj *map, > >>>>>>>>>> return map->virtual; > >>>>>>>>>> } > >>>>>>>>>> +/** > >>>>>>>>>> + * ttm_kmap_obj_to_dma_buf_map > >>>>>>>>>> + * > >>>>>>>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap. > >>>>>>>>>> + * @map: Returns the mapping as struct dma_buf_map > >>>>>>>>>> + * > >>>>>>>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory > >>>>>>>>>> + * is not mapped, the returned mapping is initialized to NULL. > >>>>>>>>>> + */ > >>>>>>>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj > >>>>>>>>>> *kmap, > >>>>>>>>>> + struct dma_buf_map *map) > >>>>>>>>>> +{ > >>>>>>>>>> + bool is_iomem; > >>>>>>>>>> + void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem); > >>>>>>>>>> + > >>>>>>>>>> + if (!vaddr) > >>>>>>>>>> + dma_buf_map_clear(map); > >>>>>>>>>> + else if (is_iomem) > >>>>>>>>>> + dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr); > >>>>>>>>>> + else > >>>>>>>>>> + dma_buf_map_set_vaddr(map, vaddr); > >>>>>>>>>> +} > >>>>>>>>>> + > >>>>>>>>>> /** > >>>>>>>>>> * ttm_bo_kmap > >>>>>>>>>> * > >>>>>>>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h > >>>>>>>>>> index fd1aba545fdf..2e8bbecb5091 100644 > >>>>>>>>>> --- a/include/linux/dma-buf-map.h > >>>>>>>>>> +++ b/include/linux/dma-buf-map.h > >>>>>>>>>> @@ -45,6 +45,12 @@ > >>>>>>>>>> * > >>>>>>>>>> * dma_buf_map_set_vaddr(&map. 0xdeadbeaf); > >>>>>>>>>> * > >>>>>>>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem(). > >>>>>>>>>> + * > >>>>>>>>>> + * .. code-block:: c > >>>>>>>>>> + * > >>>>>>>>>> + * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); > >>>>>>>>>> + * > >>>>>>>>>> * Test if a mapping is valid with either dma_buf_map_is_set() or > >>>>>>>>>> * dma_buf_map_is_null(). > >>>>>>>>>> * > >>>>>>>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct > >>>>>>>>>> dma_buf_map *map, void *vaddr) > >>>>>>>>>> map->is_iomem = false; > >>>>>>>>>> } > >>>>>>>>>> +/** > >>>>>>>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to > >>>>>>>>>> an address in I/O memory > >>>>>>>>>> + * @map: The dma-buf mapping structure > >>>>>>>>>> + * @vaddr_iomem: An I/O-memory address > >>>>>>>>>> + * > >>>>>>>>>> + * Sets the address and the I/O-memory flag. 
> >>>>>>>>>> + */ > >>>>>>>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map, > >>>>>>>>>> + void __iomem *vaddr_iomem) > >>>>>>>>>> +{ > >>>>>>>>>> + map->vaddr_iomem = vaddr_iomem; > >>>>>>>>>> + map->is_iomem = true; > >>>>>>>>>> +} > >>>>>>>>>> + > >>>>>>>>>> /** > >>>>>>>>>> * dma_buf_map_is_equal - Compares two dma-buf mapping structures > >>>>>>>>>> for equality > >>>>>>>>>> * @lhs: The dma-buf mapping structure > >>>>>>>>> _______________________________________________ > >>>>>>>>> dri-devel mailing list > >>>>>>>>> dri-devel at lists.freedesktop.org > >>>>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Fdri-devel&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=HdHOA%2F1VcIX%2F7YtfYTiAqYEvw7Ag%2FS%2BxS5VwJKOv5y0%3D&reserved=0 > >>>>>>>> _______________________________________________ > >>>>>>>> amd-gfx mailing list > >>>>>>>> amd-gfx at lists.freedesktop.org > >>>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Famd-gfx&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=H%2B5HKCsTrksRV2EyEiFGSTyS79jsWCmJimSMoJYusx8%3D&reserved=0 > >>>>>>> _______________________________________________ > >>>>>>> dri-devel mailing list > >>>>>>> dri-devel at lists.freedesktop.org > >>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Fdri-devel&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=HdHOA%2F1VcIX%2F7YtfYTiAqYEvw7Ag%2FS%2BxS5VwJKOv5y0%3D&reserved=0 > >>>>>>> > >>>>>> _______________________________________________ > >>>>>> amd-gfx mailing list > >>>>>> amd-gfx at lists.freedesktop.org > >>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Famd-gfx&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=H%2B5HKCsTrksRV2EyEiFGSTyS79jsWCmJimSMoJYusx8%3D&reserved=0 > >> > >> -- > >> Daniel Vetter > >> Software Engineer, Intel Corporation > >> http://blog.ffwll.ch > -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch From daniel at ffwll.ch Fri Oct 2 18:05:00 2020 From: daniel at ffwll.ch (Daniel Vetter) Date: Fri, 2 Oct 2020 20:05:00 +0200 Subject: [Spice-devel] [PATCH v3 6/7] drm/fb_helper: Support framebuffers in I/O memory In-Reply-To: <20200929151437.19717-7-tzimmermann@suse.de> References: <20200929151437.19717-1-tzimmermann@suse.de> <20200929151437.19717-7-tzimmermann@suse.de> Message-ID: <20201002180500.GM438822@phenom.ffwll.local> On Tue, Sep 29, 2020 at 05:14:36PM +0200, Thomas Zimmermann wrote: > At least sparc64 requires I/O-specific access to framebuffers. This > patch updates the fbdev console accordingly. > > For drivers with direct access to the framebuffer memory, the callback > functions in struct fb_ops test for the type of memory and call the rsp > fb_sys_ of fb_cfb_ functions. > > For drivers that employ a shadow buffer, fbdev's blit function retrieves > the framebuffer address as struct dma_buf_map, and uses dma_buf_map > interfaces to access the buffer. 
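Concretely, "uses dma_buf_map interfaces" means the blit no longer picks between memcpy() and memcpy_toio() itself. A minimal sketch of the idea follows; the blit_line() wrapper is illustrative only, while dma_buf_map_incr() and dma_buf_map_memcpy_to() are the helpers the patch itself uses further down in drm_fb_helper_dirty_blit_real().

static void blit_line(struct dma_buf_map dst, const void *src,
		      size_t offset, size_t len)
{
	/*
	 * The map is passed by value, so advancing this copy does not
	 * disturb the caller's mapping.
	 */
	dma_buf_map_incr(&dst, offset);        /* seek to the clip's first byte */
	dma_buf_map_memcpy_to(&dst, src, len); /* memcpy() or memcpy_toio(), as needed */
}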
> > The bochs driver on sparc64 uses a workaround to flag the framebuffer as > I/O memory and avoid a HW exception. With the introduction of struct > dma_buf_map, this is not required any longer. The patch removes the rsp > code from both, bochs and fbdev. > > Signed-off-by: Thomas Zimmermann > --- > drivers/gpu/drm/bochs/bochs_kms.c | 1 - > drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++-- > include/drm/drm_mode_config.h | 12 -- > include/linux/dma-buf-map.h | 72 ++++++++-- > 4 files changed, 265 insertions(+), 37 deletions(-) > > diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c > index 13d0d04c4457..853081d186d5 100644 > --- a/drivers/gpu/drm/bochs/bochs_kms.c > +++ b/drivers/gpu/drm/bochs/bochs_kms.c > @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs) > bochs->dev->mode_config.preferred_depth = 24; > bochs->dev->mode_config.prefer_shadow = 0; > bochs->dev->mode_config.prefer_shadow_fbdev = 1; > - bochs->dev->mode_config.fbdev_use_iomem = true; > bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true; > > bochs->dev->mode_config.funcs = &bochs_mode_funcs; > diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c > index 343a292f2c7c..f345a314a437 100644 > --- a/drivers/gpu/drm/drm_fb_helper.c > +++ b/drivers/gpu/drm/drm_fb_helper.c > @@ -388,24 +388,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work) > } > > static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper, > - struct drm_clip_rect *clip) > + struct drm_clip_rect *clip, > + struct dma_buf_map *dst) > { > struct drm_framebuffer *fb = fb_helper->fb; > unsigned int cpp = fb->format->cpp[0]; > size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp; > void *src = fb_helper->fbdev->screen_buffer + offset; > - void *dst = fb_helper->buffer->map.vaddr + offset; > size_t len = (clip->x2 - clip->x1) * cpp; > unsigned int y; > > - for (y = clip->y1; y < clip->y2; y++) { > - if (!fb_helper->dev->mode_config.fbdev_use_iomem) > - memcpy(dst, src, len); > - else > - memcpy_toio((void __iomem *)dst, src, len); > + dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */ > > + for (y = clip->y1; y < clip->y2; y++) { > + dma_buf_map_memcpy_to(dst, src, len); > + dma_buf_map_incr(dst, fb->pitches[0]); > src += fb->pitches[0]; > - dst += fb->pitches[0]; > } > } > > @@ -433,8 +431,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work) > ret = drm_client_buffer_vmap(helper->buffer, &map); > if (ret) > return; > - drm_fb_helper_dirty_blit_real(helper, &clip_copy); > + drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map); > } > + > if (helper->fb->funcs->dirty) > helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, > &clip_copy, 1); > @@ -771,6 +770,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info, > } > EXPORT_SYMBOL(drm_fb_helper_sys_imageblit); > > +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf, > + size_t count, loff_t *ppos) > +{ > + unsigned long p = *ppos; > + u8 *dst; > + u8 __iomem *src; > + int c, err = 0; > + unsigned long total_size; > + unsigned long alloc_size; > + ssize_t ret = 0; > + > + if (info->state != FBINFO_STATE_RUNNING) > + return -EPERM; > + > + total_size = info->screen_size; > + > + if (total_size == 0) > + total_size = info->fix.smem_len; > + > + if (p >= total_size) > + return 0; > + > + if (count >= total_size) > + count = total_size; > + > + if (count + p > total_size) > + count = total_size - p; > + > + src = 
(u8 __iomem *)(info->screen_base + p); > + > + alloc_size = min(count, PAGE_SIZE); > + > + dst = kmalloc(alloc_size, GFP_KERNEL); > + if (!dst) > + return -ENOMEM; > + > + while (count) { > + c = min(count, alloc_size); > + > + memcpy_fromio(dst, src, c); > + if (copy_to_user(buf, dst, c)) { > + err = -EFAULT; > + break; > + } > + > + src += c; > + *ppos += c; > + buf += c; > + ret += c; > + count -= c; > + } > + > + kfree(dst); > + > + if (err) > + return err; > + > + return ret; > +} > + > +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf, > + size_t count, loff_t *ppos) > +{ > + unsigned long p = *ppos; > + u8 *src; > + u8 __iomem *dst; > + int c, err = 0; > + unsigned long total_size; > + unsigned long alloc_size; > + ssize_t ret = 0; > + > + if (info->state != FBINFO_STATE_RUNNING) > + return -EPERM; > + > + total_size = info->screen_size; > + > + if (total_size == 0) > + total_size = info->fix.smem_len; > + > + if (p > total_size) > + return -EFBIG; > + > + if (count > total_size) { > + err = -EFBIG; > + count = total_size; > + } > + > + if (count + p > total_size) { > + /* > + * The framebuffer is too small. We do the > + * copy operation, but return an error code > + * afterwards. Taken from fbdev. > + */ > + if (!err) > + err = -ENOSPC; > + count = total_size - p; > + } > + > + alloc_size = min(count, PAGE_SIZE); > + > + src = kmalloc(alloc_size, GFP_KERNEL); > + if (!src) > + return -ENOMEM; > + > + dst = (u8 __iomem *)(info->screen_base + p); > + > + while (count) { > + c = min(count, alloc_size); > + > + if (copy_from_user(src, buf, c)) { > + err = -EFAULT; > + break; > + } > + memcpy_toio(dst, src, c); > + > + dst += c; > + *ppos += c; > + buf += c; > + ret += c; > + count -= c; > + } > + > + kfree(src); > + > + if (err) > + return err; > + > + return ret; > +} > + > /** > * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect > * @info: fbdev registered by the helper > @@ -2043,6 +2172,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma) > return -ENODEV; > } > > +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf, > + size_t count, loff_t *ppos) > +{ > + struct drm_fb_helper *fb_helper = info->par; > + struct drm_client_buffer *buffer = fb_helper->buffer; > + > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > + return drm_fb_helper_sys_read(info, buf, count, ppos); > + else > + return drm_fb_helper_cfb_read(info, buf, count, ppos); > +} > + > +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf, > + size_t count, loff_t *ppos) > +{ > + struct drm_fb_helper *fb_helper = info->par; > + struct drm_client_buffer *buffer = fb_helper->buffer; > + > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > + return drm_fb_helper_sys_write(info, buf, count, ppos); > + else > + return drm_fb_helper_cfb_write(info, buf, count, ppos); > +} > + > +static void drm_fbdev_fb_fillrect(struct fb_info *info, > + const struct fb_fillrect *rect) > +{ > + struct drm_fb_helper *fb_helper = info->par; > + struct drm_client_buffer *buffer = fb_helper->buffer; > + > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > + drm_fb_helper_sys_fillrect(info, rect); > + else > + drm_fb_helper_cfb_fillrect(info, rect); > +} > + > +static void drm_fbdev_fb_copyarea(struct fb_info *info, > + const struct fb_copyarea *area) > +{ > + struct drm_fb_helper *fb_helper = info->par; > + struct drm_client_buffer *buffer = fb_helper->buffer; > + > + if 
(drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > + drm_fb_helper_sys_copyarea(info, area); > + else > + drm_fb_helper_cfb_copyarea(info, area); > +} > + > +static void drm_fbdev_fb_imageblit(struct fb_info *info, > + const struct fb_image *image) > +{ > + struct drm_fb_helper *fb_helper = info->par; > + struct drm_client_buffer *buffer = fb_helper->buffer; > + > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > + drm_fb_helper_sys_imageblit(info, image); > + else > + drm_fb_helper_cfb_imageblit(info, image); > +} I think a todo to make the new generic functions the real ones, and drivers not using the sys/cfb ones anymore would be a good addition. > + > static const struct fb_ops drm_fbdev_fb_ops = { > .owner = THIS_MODULE, > DRM_FB_HELPER_DEFAULT_OPS, > @@ -2050,11 +2239,11 @@ static const struct fb_ops drm_fbdev_fb_ops = { > .fb_release = drm_fbdev_fb_release, > .fb_destroy = drm_fbdev_fb_destroy, > .fb_mmap = drm_fbdev_fb_mmap, > - .fb_read = drm_fb_helper_sys_read, > - .fb_write = drm_fb_helper_sys_write, > - .fb_fillrect = drm_fb_helper_sys_fillrect, > - .fb_copyarea = drm_fb_helper_sys_copyarea, > - .fb_imageblit = drm_fb_helper_sys_imageblit, > + .fb_read = drm_fbdev_fb_read, > + .fb_write = drm_fbdev_fb_write, > + .fb_fillrect = drm_fbdev_fb_fillrect, > + .fb_copyarea = drm_fbdev_fb_copyarea, > + .fb_imageblit = drm_fbdev_fb_imageblit, > }; > > static struct fb_deferred_io drm_fbdev_defio = { > diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h > index 5ffbb4ed5b35..ab424ddd7665 100644 > --- a/include/drm/drm_mode_config.h > +++ b/include/drm/drm_mode_config.h > @@ -877,18 +877,6 @@ struct drm_mode_config { > */ > bool prefer_shadow_fbdev; > > - /** > - * @fbdev_use_iomem: > - * > - * Set to true if framebuffer reside in iomem. > - * When set to true memcpy_toio() is used when copying the framebuffer in > - * drm_fb_helper.drm_fb_helper_dirty_blit_real(). > - * > - * FIXME: This should be replaced with a per-mapping is_iomem > - * flag (like ttm does), and then used everywhere in fbdev code. > - */ > - bool fbdev_use_iomem; > - > /** > * @quirk_addfb_prefer_xbgr_30bpp: > * > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h I think the below should be split out as a prep patch. > index 2e8bbecb5091..6ca0f304dda2 100644 > --- a/include/linux/dma-buf-map.h > +++ b/include/linux/dma-buf-map.h > @@ -32,6 +32,14 @@ > * accessing the buffer. Use the returned instance and the helper functions > * to access the buffer's memory in the correct way. > * > + * The type :c:type:`struct dma_buf_map ` and its helpers are > + * actually independent from the dma-buf infrastructure. When sharing buffers > + * among devices, drivers have to know the location of the memory to access > + * the buffers in a safe way. :c:type:`struct dma_buf_map ` > + * solves this problem for dma-buf and its users. If other drivers or > + * sub-systems require similar functionality, the type could be generalized > + * and moved to a more prominent header file. > + * > * Open-coding access to :c:type:`struct dma_buf_map ` is > * considered bad style. Rather then accessing its fields directly, use one > * of the provided helper functions, or implement your own. For example, > @@ -51,6 +59,14 @@ > * > * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); > * > + * Instances of struct dma_buf_map do not have to be cleaned up, but > + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings > + * always refer to system memory. > + * > + * .. 
code-block:: c > + * > + * dma_buf_map_clear(&map); > + * > * Test if a mapping is valid with either dma_buf_map_is_set() or > * dma_buf_map_is_null(). > * > @@ -73,17 +89,19 @@ > * if (dma_buf_map_is_equal(&sys_map, &io_map)) > * // always false > * > - * Instances of struct dma_buf_map do not have to be cleaned up, but > - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings > - * always refer to system memory. > + * A set up instance of struct dma_buf_map can be used to access or manipulate > + * the buffer memory. Depending on the location of the memory, the provided > + * helpers will pick the correct operations. Data can be copied into the memory > + * with dma_buf_map_memcpy_to(). The address can be manipulated with > + * dma_buf_map_incr(). > * > - * The type :c:type:`struct dma_buf_map ` and its helpers are > - * actually independent from the dma-buf infrastructure. When sharing buffers > - * among devices, drivers have to know the location of the memory to access > - * the buffers in a safe way. :c:type:`struct dma_buf_map ` > - * solves this problem for dma-buf and its users. If other drivers or > - * sub-systems require similar functionality, the type could be generalized > - * and moved to a more prominent header file. > + * .. code-block:: c > + * > + * const void *src = ...; // source buffer > + * size_t len = ...; // length of src > + * > + * dma_buf_map_memcpy_to(&map, src, len); > + * dma_buf_map_incr(&map, len); // go to first byte after the memcpy > */ > > /** > @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map) > } > } > > +/** > + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping > + * @dst: The dma-buf mapping structure > + * @src: The source buffer > + * @len: The number of byte in src > + * > + * Copies data into a dma-buf mapping. The source buffer is in system > + * memory. Depending on the buffer's location, the helper picks the correct > + * method of accessing the memory. > + */ > +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len) > +{ > + if (dst->is_iomem) > + memcpy_toio(dst->vaddr_iomem, src, len); > + else > + memcpy(dst->vaddr, src, len); > +} > + > +/** > + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping > + * @map: The dma-buf mapping structure > + * @incr: The number of bytes to increment > + * > + * Increments the address stored in a dma-buf mapping. Depending on the > + * buffer's location, the correct value will be updated. > + */ > +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr) > +{ > + if (map->is_iomem) > + map->vaddr_iomem += incr; > + else > + map->vaddr += incr; > +} > + > #endif /* __DMA_BUF_MAP_H__ */ > -- > 2.28.0 > -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch From daniel at ffwll.ch Fri Oct 2 18:45:54 2020 From: daniel at ffwll.ch (Daniel Vetter) Date: Fri, 2 Oct 2020 20:45:54 +0200 Subject: [Spice-devel] [PATCH v3 7/7] drm/todo: Update entries around struct dma_buf_map In-Reply-To: <20200929151437.19717-8-tzimmermann@suse.de> References: <20200929151437.19717-1-tzimmermann@suse.de> <20200929151437.19717-8-tzimmermann@suse.de> Message-ID: <20201002184554.GN438822@phenom.ffwll.local> On Tue, Sep 29, 2020 at 05:14:37PM +0200, Thomas Zimmermann wrote: > Instances of struct dma_buf_map should be useful throughout DRM's > memory management code. Furthermore, several drivers can now use > generic fbdev emulation. 
> > Signed-off-by: Thomas Zimmermann Acked-by: Daniel Vetter > --- > Documentation/gpu/todo.rst | 24 ++++++++++++++++++++++-- > 1 file changed, 22 insertions(+), 2 deletions(-) > > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst > index 3751ac976c3e..023626c1837b 100644 > --- a/Documentation/gpu/todo.rst > +++ b/Documentation/gpu/todo.rst > @@ -197,8 +197,10 @@ Convert drivers to use drm_fbdev_generic_setup() > ------------------------------------------------ > > Most drivers can use drm_fbdev_generic_setup(). Driver have to implement > -atomic modesetting and GEM vmap support. Current generic fbdev emulation > -expects the framebuffer in system memory (or system-like memory). > +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation > +expected the framebuffer in system memory or system-like memory. By employing > +struct dma_buf_map, drivers with frambuffers in I/O memory can be supported > +as well. > > Contact: Maintainer of the driver you plan to convert > > @@ -446,6 +448,24 @@ Contact: Ville Syrj?l?, Daniel Vetter > > Level: Intermediate > > +Use struct dma_buf_map throughout codebase > +------------------------------------------ > + > +Pointers to shared device memory are stored in struct dma_buf_map. Each > +instance knows whether it refers to system or I/O memory. Most of the DRM-wide > +interface have been converted to use struct dma_buf_map, but implementations > +often still use raw pointers. > + > +The task is to use struct dma_buf_map where it makes sense. > + > +* Memory managers should use struct dma_buf_map for dma-buf-imported buffers. > +* TTM might benefit from using struct dma_buf_map internally. > +* Framebuffer copying and blitting helpers should operate on struct dma_buf_map. > + > +Contact: Thomas Zimmermann , Christian K?nig, Daniel Vetter > + > +Level: Intermediate > + > > Core refactorings > ================= > -- > 2.28.0 > -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch From daniel at ffwll.ch Fri Oct 2 18:44:52 2020 From: daniel at ffwll.ch (Daniel Vetter) Date: Fri, 2 Oct 2020 20:44:52 +0200 Subject: [Spice-devel] [PATCH v3 6/7] drm/fb_helper: Support framebuffers in I/O memory In-Reply-To: <20201002180500.GM438822@phenom.ffwll.local> References: <20200929151437.19717-1-tzimmermann@suse.de> <20200929151437.19717-7-tzimmermann@suse.de> <20201002180500.GM438822@phenom.ffwll.local> Message-ID: On Fri, Oct 2, 2020 at 8:05 PM Daniel Vetter wrote: > > On Tue, Sep 29, 2020 at 05:14:36PM +0200, Thomas Zimmermann wrote: > > At least sparc64 requires I/O-specific access to framebuffers. This > > patch updates the fbdev console accordingly. > > > > For drivers with direct access to the framebuffer memory, the callback > > functions in struct fb_ops test for the type of memory and call the rsp > > fb_sys_ of fb_cfb_ functions. > > > > For drivers that employ a shadow buffer, fbdev's blit function retrieves > > the framebuffer address as struct dma_buf_map, and uses dma_buf_map > > interfaces to access the buffer. > > > > The bochs driver on sparc64 uses a workaround to flag the framebuffer as > > I/O memory and avoid a HW exception. With the introduction of struct > > dma_buf_map, this is not required any longer. The patch removes the rsp > > code from both, bochs and fbdev. > > > > Signed-off-by: Thomas Zimmermann Argh, I accidentally hit send before finishing this ... 
> > --- > > drivers/gpu/drm/bochs/bochs_kms.c | 1 - > > drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++-- > > include/drm/drm_mode_config.h | 12 -- > > include/linux/dma-buf-map.h | 72 ++++++++-- > > 4 files changed, 265 insertions(+), 37 deletions(-) > > > > diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c > > index 13d0d04c4457..853081d186d5 100644 > > --- a/drivers/gpu/drm/bochs/bochs_kms.c > > +++ b/drivers/gpu/drm/bochs/bochs_kms.c > > @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs) > > bochs->dev->mode_config.preferred_depth = 24; > > bochs->dev->mode_config.prefer_shadow = 0; > > bochs->dev->mode_config.prefer_shadow_fbdev = 1; > > - bochs->dev->mode_config.fbdev_use_iomem = true; > > bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true; > > > > bochs->dev->mode_config.funcs = &bochs_mode_funcs; > > diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c > > index 343a292f2c7c..f345a314a437 100644 > > --- a/drivers/gpu/drm/drm_fb_helper.c > > +++ b/drivers/gpu/drm/drm_fb_helper.c > > @@ -388,24 +388,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work) > > } > > > > static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper, > > - struct drm_clip_rect *clip) > > + struct drm_clip_rect *clip, > > + struct dma_buf_map *dst) > > { > > struct drm_framebuffer *fb = fb_helper->fb; > > unsigned int cpp = fb->format->cpp[0]; > > size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp; > > void *src = fb_helper->fbdev->screen_buffer + offset; > > - void *dst = fb_helper->buffer->map.vaddr + offset; > > size_t len = (clip->x2 - clip->x1) * cpp; > > unsigned int y; > > > > - for (y = clip->y1; y < clip->y2; y++) { > > - if (!fb_helper->dev->mode_config.fbdev_use_iomem) > > - memcpy(dst, src, len); > > - else > > - memcpy_toio((void __iomem *)dst, src, len); > > + dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */ > > > > + for (y = clip->y1; y < clip->y2; y++) { > > + dma_buf_map_memcpy_to(dst, src, len); > > + dma_buf_map_incr(dst, fb->pitches[0]); > > src += fb->pitches[0]; > > - dst += fb->pitches[0]; > > } > > } > > > > @@ -433,8 +431,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work) > > ret = drm_client_buffer_vmap(helper->buffer, &map); > > if (ret) > > return; > > - drm_fb_helper_dirty_blit_real(helper, &clip_copy); > > + drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map); > > } > > + > > if (helper->fb->funcs->dirty) > > helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, > > &clip_copy, 1); > > @@ -771,6 +770,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info, > > } > > EXPORT_SYMBOL(drm_fb_helper_sys_imageblit); > > > > +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf, > > + size_t count, loff_t *ppos) > > +{ > > + unsigned long p = *ppos; > > + u8 *dst; > > + u8 __iomem *src; > > + int c, err = 0; > > + unsigned long total_size; > > + unsigned long alloc_size; > > + ssize_t ret = 0; > > + > > + if (info->state != FBINFO_STATE_RUNNING) > > + return -EPERM; > > + > > + total_size = info->screen_size; > > + > > + if (total_size == 0) > > + total_size = info->fix.smem_len; > > + > > + if (p >= total_size) > > + return 0; > > + > > + if (count >= total_size) > > + count = total_size; > > + > > + if (count + p > total_size) > > + count = total_size - p; > > + > > + src = (u8 __iomem *)(info->screen_base + p); > > + > > + alloc_size = min(count, PAGE_SIZE); > > + > > + 
dst = kmalloc(alloc_size, GFP_KERNEL); > > + if (!dst) > > + return -ENOMEM; > > + > > + while (count) { > > + c = min(count, alloc_size); > > + > > + memcpy_fromio(dst, src, c); > > + if (copy_to_user(buf, dst, c)) { > > + err = -EFAULT; > > + break; > > + } > > + > > + src += c; > > + *ppos += c; > > + buf += c; > > + ret += c; > > + count -= c; > > + } > > + > > + kfree(dst); > > + > > + if (err) > > + return err; > > + > > + return ret; > > +} > > + > > +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf, > > + size_t count, loff_t *ppos) > > +{ > > + unsigned long p = *ppos; > > + u8 *src; > > + u8 __iomem *dst; > > + int c, err = 0; > > + unsigned long total_size; > > + unsigned long alloc_size; > > + ssize_t ret = 0; > > + > > + if (info->state != FBINFO_STATE_RUNNING) > > + return -EPERM; > > + > > + total_size = info->screen_size; > > + > > + if (total_size == 0) > > + total_size = info->fix.smem_len; > > + > > + if (p > total_size) > > + return -EFBIG; > > + > > + if (count > total_size) { > > + err = -EFBIG; > > + count = total_size; > > + } > > + > > + if (count + p > total_size) { > > + /* > > + * The framebuffer is too small. We do the > > + * copy operation, but return an error code > > + * afterwards. Taken from fbdev. > > + */ > > + if (!err) > > + err = -ENOSPC; > > + count = total_size - p; > > + } > > + > > + alloc_size = min(count, PAGE_SIZE); > > + > > + src = kmalloc(alloc_size, GFP_KERNEL); > > + if (!src) > > + return -ENOMEM; > > + > > + dst = (u8 __iomem *)(info->screen_base + p); > > + > > + while (count) { > > + c = min(count, alloc_size); > > + > > + if (copy_from_user(src, buf, c)) { > > + err = -EFAULT; > > + break; > > + } > > + memcpy_toio(dst, src, c); > > + > > + dst += c; > > + *ppos += c; > > + buf += c; > > + ret += c; > > + count -= c; > > + } > > + > > + kfree(src); > > + > > + if (err) > > + return err; > > + > > + return ret; > > +} The duplication is a bit annoying here, but can't really be avoided. I do think though we should maybe go a bit further, and have drm implementations of this stuff instead of following fbdev concepts as closely as possible. So here roughly: - if we have a shadow fb, construct a dma_buf_map for that, otherwise take the one from the driver - have a full generic implementation using that one directly (and checking size limits against the underlying gem buffer) - ideally also with some testcases in the fbdev testcase we have (very bare-bones right now) in igt But I'm not really sure whether that's worth all the trouble. 
It's just that the fbdev-ness here in this copied code sticks out a lot :-) > > + > > /** > > * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect > > * @info: fbdev registered by the helper > > @@ -2043,6 +2172,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma) > > return -ENODEV; > > } > > > > +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf, > > + size_t count, loff_t *ppos) > > +{ > > + struct drm_fb_helper *fb_helper = info->par; > > + struct drm_client_buffer *buffer = fb_helper->buffer; > > + > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > > + return drm_fb_helper_sys_read(info, buf, count, ppos); > > + else > > + return drm_fb_helper_cfb_read(info, buf, count, ppos); > > +} > > + > > +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf, > > + size_t count, loff_t *ppos) > > +{ > > + struct drm_fb_helper *fb_helper = info->par; > > + struct drm_client_buffer *buffer = fb_helper->buffer; > > + > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > > + return drm_fb_helper_sys_write(info, buf, count, ppos); > > + else > > + return drm_fb_helper_cfb_write(info, buf, count, ppos); > > +} > > + > > +static void drm_fbdev_fb_fillrect(struct fb_info *info, > > + const struct fb_fillrect *rect) > > +{ > > + struct drm_fb_helper *fb_helper = info->par; > > + struct drm_client_buffer *buffer = fb_helper->buffer; > > + > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > > + drm_fb_helper_sys_fillrect(info, rect); > > + else > > + drm_fb_helper_cfb_fillrect(info, rect); > > +} > > + > > +static void drm_fbdev_fb_copyarea(struct fb_info *info, > > + const struct fb_copyarea *area) > > +{ > > + struct drm_fb_helper *fb_helper = info->par; > > + struct drm_client_buffer *buffer = fb_helper->buffer; > > + > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > > + drm_fb_helper_sys_copyarea(info, area); > > + else > > + drm_fb_helper_cfb_copyarea(info, area); > > +} > > + > > +static void drm_fbdev_fb_imageblit(struct fb_info *info, > > + const struct fb_image *image) > > +{ > > + struct drm_fb_helper *fb_helper = info->par; > > + struct drm_client_buffer *buffer = fb_helper->buffer; > > + > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > > + drm_fb_helper_sys_imageblit(info, image); > > + else > > + drm_fb_helper_cfb_imageblit(info, image); > > +} I think a todo.rst entry to make the new generic functions the real ones, and drivers not using the sys/cfb ones anymore would be a good addition. It's kinda covered by the move to the generic helpers, but maybe we can convert a few more drivers over to these here. Would also allow us to maybe flatten the code a bit and use more of the dma_buf_map stuff directly (instead of reusing crusty fbdev code written 20 years ago or so). 
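To make that a bit more concrete, a rough sketch of such a flattened path could look like the code below. This is only an illustration, not part of the series: it assumes a dma_buf_map_set_vaddr() counterpart to dma_buf_map_set_vaddr_iomem() for system memory, and drm_fbdev_fb_dst()/drm_fbdev_fb_copy_range() are made-up names.

static struct dma_buf_map drm_fbdev_fb_dst(struct fb_info *info)
{
	struct drm_fb_helper *fb_helper = info->par;
	struct dma_buf_map dst;

	/* Shadow buffers live in system memory; otherwise reuse the client
	 * buffer's vmap, which already records whether it is I/O memory. */
	if (drm_fbdev_use_shadow_fb(fb_helper))
		dma_buf_map_set_vaddr(&dst, fb_helper->fbdev->screen_buffer);
	else
		dst = fb_helper->buffer->map;

	return dst;
}

static void drm_fbdev_fb_copy_range(struct fb_info *info, const void *src,
				    size_t offset, size_t len)
{
	struct dma_buf_map dst = drm_fbdev_fb_dst(info);

	dma_buf_map_incr(&dst, offset);		/* seek to the destination offset */
	dma_buf_map_memcpy_to(&dst, src, len);	/* picks memcpy() vs. memcpy_toio() */
}

With something along these lines the fb_ops callbacks would no longer have to branch into the sys_/cfb_ variants at every call site.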
> > + > > static const struct fb_ops drm_fbdev_fb_ops = { > > .owner = THIS_MODULE, > > DRM_FB_HELPER_DEFAULT_OPS, > > @@ -2050,11 +2239,11 @@ static const struct fb_ops drm_fbdev_fb_ops = { > > .fb_release = drm_fbdev_fb_release, > > .fb_destroy = drm_fbdev_fb_destroy, > > .fb_mmap = drm_fbdev_fb_mmap, > > - .fb_read = drm_fb_helper_sys_read, > > - .fb_write = drm_fb_helper_sys_write, > > - .fb_fillrect = drm_fb_helper_sys_fillrect, > > - .fb_copyarea = drm_fb_helper_sys_copyarea, > > - .fb_imageblit = drm_fb_helper_sys_imageblit, > > + .fb_read = drm_fbdev_fb_read, > > + .fb_write = drm_fbdev_fb_write, > > + .fb_fillrect = drm_fbdev_fb_fillrect, > > + .fb_copyarea = drm_fbdev_fb_copyarea, > > + .fb_imageblit = drm_fbdev_fb_imageblit, > > }; > > > > static struct fb_deferred_io drm_fbdev_defio = { > > diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h > > index 5ffbb4ed5b35..ab424ddd7665 100644 > > --- a/include/drm/drm_mode_config.h > > +++ b/include/drm/drm_mode_config.h > > @@ -877,18 +877,6 @@ struct drm_mode_config { > > */ > > bool prefer_shadow_fbdev; > > > > - /** > > - * @fbdev_use_iomem: > > - * > > - * Set to true if framebuffer reside in iomem. > > - * When set to true memcpy_toio() is used when copying the framebuffer in > > - * drm_fb_helper.drm_fb_helper_dirty_blit_real(). > > - * > > - * FIXME: This should be replaced with a per-mapping is_iomem > > - * flag (like ttm does), and then used everywhere in fbdev code. > > - */ > > - bool fbdev_use_iomem; > > - > > /** > > * @quirk_addfb_prefer_xbgr_30bpp: > > * > > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h I think the below should be split out as a prep patch. > > index 2e8bbecb5091..6ca0f304dda2 100644 > > --- a/include/linux/dma-buf-map.h > > +++ b/include/linux/dma-buf-map.h > > @@ -32,6 +32,14 @@ > > * accessing the buffer. Use the returned instance and the helper functions > > * to access the buffer's memory in the correct way. > > * > > + * The type :c:type:`struct dma_buf_map ` and its helpers are > > + * actually independent from the dma-buf infrastructure. When sharing buffers > > + * among devices, drivers have to know the location of the memory to access > > + * the buffers in a safe way. :c:type:`struct dma_buf_map ` > > + * solves this problem for dma-buf and its users. If other drivers or > > + * sub-systems require similar functionality, the type could be generalized > > + * and moved to a more prominent header file. > > + * > > * Open-coding access to :c:type:`struct dma_buf_map ` is > > * considered bad style. Rather then accessing its fields directly, use one > > * of the provided helper functions, or implement your own. For example, > > @@ -51,6 +59,14 @@ > > * > > * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); > > * > > + * Instances of struct dma_buf_map do not have to be cleaned up, but > > + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings > > + * always refer to system memory. > > + * > > + * .. code-block:: c > > + * > > + * dma_buf_map_clear(&map); > > + * > > * Test if a mapping is valid with either dma_buf_map_is_set() or > > * dma_buf_map_is_null(). > > * > > @@ -73,17 +89,19 @@ > > * if (dma_buf_map_is_equal(&sys_map, &io_map)) > > * // always false > > * > > - * Instances of struct dma_buf_map do not have to be cleaned up, but > > - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings > > - * always refer to system memory. 
> > + * A set up instance of struct dma_buf_map can be used to access or manipulate > > + * the buffer memory. Depending on the location of the memory, the provided > > + * helpers will pick the correct operations. Data can be copied into the memory > > + * with dma_buf_map_memcpy_to(). The address can be manipulated with > > + * dma_buf_map_incr(). > > * > > - * The type :c:type:`struct dma_buf_map ` and its helpers are > > - * actually independent from the dma-buf infrastructure. When sharing buffers > > - * among devices, drivers have to know the location of the memory to access > > - * the buffers in a safe way. :c:type:`struct dma_buf_map ` > > - * solves this problem for dma-buf and its users. If other drivers or > > - * sub-systems require similar functionality, the type could be generalized > > - * and moved to a more prominent header file. > > + * .. code-block:: c > > + * > > + * const void *src = ...; // source buffer > > + * size_t len = ...; // length of src > > + * > > + * dma_buf_map_memcpy_to(&map, src, len); > > + * dma_buf_map_incr(&map, len); // go to first byte after the memcpy > > */ > > > > /** > > @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map) > > } > > } > > > > +/** > > + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping > > + * @dst: The dma-buf mapping structure > > + * @src: The source buffer > > + * @len: The number of byte in src > > + * > > + * Copies data into a dma-buf mapping. The source buffer is in system > > + * memory. Depending on the buffer's location, the helper picks the correct > > + * method of accessing the memory. > > + */ > > +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len) > > +{ > > + if (dst->is_iomem) > > + memcpy_toio(dst->vaddr_iomem, src, len); > > + else > > + memcpy(dst->vaddr, src, len); > > +} > > + > > +/** > > + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping > > + * @map: The dma-buf mapping structure > > + * @incr: The number of bytes to increment > > + * > > + * Increments the address stored in a dma-buf mapping. Depending on the > > + * buffer's location, the correct value will be updated. > > + */ > > +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr) > > +{ > > + if (map->is_iomem) > > + map->vaddr_iomem += incr; > > + else > > + map->vaddr += incr; > > +} > > + > > #endif /* __DMA_BUF_MAP_H__ */ > > -- > > 2.28.0 Aside from the details I think looks all reasonable. -Daniel -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch From fziglio at redhat.com Tue Oct 6 12:08:55 2020 From: fziglio at redhat.com (Frediano Ziglio) Date: Tue, 6 Oct 2020 13:08:55 +0100 Subject: [Spice-devel] [PATCH spice-common 0/6] Multiple buffer overflow vulnerabilities in QUIC decoding code Message-ID: <20201006120901.17027-1-fziglio@redhat.com> From: Frediano Ziglio The patches on this series are addressing CVE-2020-14355. Multiple buffer overflow vulnerabilities were found in the QUIC image decoding process of the SPICE remote display system. More specifically, these flaws reside in the spice-common shared code between the client and server of SPICE. In other words, both the client (spice-gtk) and server are affected by these flaws. A malicious client or server could send specially crafted messages which could result in a process crash or potential code execution scenario. * One issue leading to controlled writing overflow is due to the 'width * height' integer overflow. 
Using this overflow an attacker could cause small allocation and control the data using compressed data. Note that using the check for input data the attacker can avoid the crash filling the whole needed buffer. ("quic: Check image size in quic_decode_begin" patch). * Another controlled write could be achieved using the RLE decode which is done line by line, in theory with former lines writing more bytes in order to build the desired buffer content after the allocated buffer. ("quic: Check RLE lengths" patch). * The "quic: Avoid possible buffer overflow in find_bucket" is a read buffer overflow which will dereference an invalid pointer mainly causing a crash. * Embargo date+time: Tue, 06 Oct 2020, 12:00 hrs. UTC. Frediano Ziglio (6): quic: Check we have some data to start decoding quic image quic: Check image size in quic_decode_begin quic: Check RLE lengths quic: Avoid possible buffer overflow in find_bucket test-quic: Add fuzzer capabilities to the test test-quic: Add test cases for quic fuzzer common/quic.c | 15 +++++++- common/quic_family_tmpl.c | 7 +++- common/quic_tmpl.c | 6 ++- tests/fuzzer-quic-testcases/test1.quic | Bin 0 -> 4292 bytes tests/fuzzer-quic-testcases/test2.quic | Bin 0 -> 2808 bytes tests/fuzzer-quic-testcases/test3.quic | Bin 0 -> 2556 bytes tests/fuzzer-quic-testcases/test4.quic | Bin 0 -> 30892 bytes tests/test-quic.c | 51 ++++++++++++++++++++++++- 8 files changed, 75 insertions(+), 4 deletions(-) create mode 100644 tests/fuzzer-quic-testcases/test1.quic create mode 100644 tests/fuzzer-quic-testcases/test2.quic create mode 100644 tests/fuzzer-quic-testcases/test3.quic create mode 100644 tests/fuzzer-quic-testcases/test4.quic -- 2.26.2 From fziglio at redhat.com Tue Oct 6 12:08:56 2020 From: fziglio at redhat.com (Frediano Ziglio) Date: Tue, 6 Oct 2020 13:08:56 +0100 Subject: [Spice-devel] [PATCH spice-common 1/6] quic: Check we have some data to start decoding quic image In-Reply-To: <20201006120901.17027-1-fziglio@redhat.com> References: <20201006120901.17027-1-fziglio@redhat.com> Message-ID: <20201006120901.17027-2-fziglio@redhat.com> From: Frediano Ziglio All paths already pass some data to quic_decode_begin but for the test check it, it's not that expensive test. Checking for not 0 is enough, all other words will potentially be read calling more_io_words but we need one to avoid a potential initial buffer overflow or deferencing an invalid pointer. Signed-off-by: Frediano Ziglio Acked-by: Uri Lublin --- common/quic.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/common/quic.c b/common/quic.c index e2dee0f..bc753ca 100644 --- a/common/quic.c +++ b/common/quic.c @@ -1136,7 +1136,7 @@ int quic_decode_begin(QuicContext *quic, uint32_t *io_ptr, unsigned int num_io_w int channels; int bpc; - if (!encoder_reset(encoder, io_ptr, io_ptr_end)) { + if (!num_io_words || !encoder_reset(encoder, io_ptr, io_ptr_end)) { return QUIC_ERROR; } -- 2.26.2 From fziglio at redhat.com Tue Oct 6 12:08:57 2020 From: fziglio at redhat.com (Frediano Ziglio) Date: Tue, 6 Oct 2020 13:08:57 +0100 Subject: [Spice-devel] [PATCH spice-common 2/6] quic: Check image size in quic_decode_begin In-Reply-To: <20201006120901.17027-1-fziglio@redhat.com> References: <20201006120901.17027-1-fziglio@redhat.com> Message-ID: <20201006120901.17027-3-fziglio@redhat.com> From: Frediano Ziglio Avoid some overflow in code due to images too big or negative numbers. 
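To illustrate the arithmetic this check guards against, here is a tiny standalone sketch (not part of the patch; uint32_t is used so the wrap-around stays well defined): two attacker-chosen dimensions of 65536 each wrap a 32-bit product to 0 and would slip past a naive 'width * height' size check, while widening to 64 bits before multiplying, as the check added below does, preserves the real value.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t width = 65536, height = 65536;		/* attacker-chosen header values */

	uint32_t wrapped = width * height;		/* wraps to 0 in 32-bit arithmetic */
	uint64_t widened = (uint64_t)width * height;	/* 4294967296, rejected by the SPICE_MAX_IMAGE_SIZE limit */

	printf("wrapped=%u widened=%llu\n", (unsigned)wrapped, (unsigned long long)widened);
	return 0;
}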
Signed-off-by: Frediano Ziglio Acked-by: Uri Lublin --- common/quic.c | 13 +++++++++++++ 1 file changed, 13 insertions(+) diff --git a/common/quic.c b/common/quic.c index bc753ca..6815316 100644 --- a/common/quic.c +++ b/common/quic.c @@ -56,6 +56,9 @@ typedef uint8_t BYTE; #define MINwminext 1 #define MAXwminext 100000000 +/* Maximum image size in pixels, mainly to avoid possible integer overflows */ +#define SPICE_MAX_IMAGE_SIZE (512 * 1024 * 1024 - 1) + typedef struct QuicFamily { unsigned int nGRcodewords[MAXNUMCODES]; /* indexed by code number, contains number of unmodified GR codewords in the code */ @@ -1165,6 +1168,16 @@ int quic_decode_begin(QuicContext *quic, uint32_t *io_ptr, unsigned int num_io_w height = encoder->io_word; decode_eat32bits(encoder); + if (width <= 0 || height <= 0) { + encoder->usr->warn(encoder->usr, "invalid size\n"); + return QUIC_ERROR; + } + + /* avoid too big images */ + if ((uint64_t) width * height > SPICE_MAX_IMAGE_SIZE) { + encoder->usr->error(encoder->usr, "image too large\n"); + } + quic_image_params(encoder, type, &channels, &bpc); if (!encoder_reset_channels(encoder, channels, width, bpc)) { -- 2.26.2 From fziglio at redhat.com Tue Oct 6 12:08:58 2020 From: fziglio at redhat.com (Frediano Ziglio) Date: Tue, 6 Oct 2020 13:08:58 +0100 Subject: [Spice-devel] [PATCH spice-common 3/6] quic: Check RLE lengths In-Reply-To: <20201006120901.17027-1-fziglio@redhat.com> References: <20201006120901.17027-1-fziglio@redhat.com> Message-ID: <20201006120901.17027-4-fziglio@redhat.com> From: Frediano Ziglio Avoid buffer overflows decoding images. On compression we compute lengths till end of line so it won't cause regressions. Proved by fuzzing the code. Signed-off-by: Frediano Ziglio Acked-by: Uri Lublin --- common/quic_tmpl.c | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/common/quic_tmpl.c b/common/quic_tmpl.c index ecd6f3f..ebae992 100644 --- a/common/quic_tmpl.c +++ b/common/quic_tmpl.c @@ -563,7 +563,11 @@ static void FNAME_DECL(uncompress_row_seg)(const PIXEL * const prev_row, do_run: state->waitcnt = stopidx - i; run_index = i; - run_end = i + decode_state_run(encoder, state); + run_end = decode_state_run(encoder, state); + if (run_end < 0 || run_end > (end - i)) { + encoder->usr->error(encoder->usr, "wrong RLE\n"); + } + run_end += i; for (; i < run_end; i++) { UNCOMPRESS_PIX_START(&cur_row[i]); -- 2.26.2 From fziglio at redhat.com Tue Oct 6 12:08:59 2020 From: fziglio at redhat.com (Frediano Ziglio) Date: Tue, 6 Oct 2020 13:08:59 +0100 Subject: [Spice-devel] [PATCH spice-common 4/6] quic: Avoid possible buffer overflow in find_bucket In-Reply-To: <20201006120901.17027-1-fziglio@redhat.com> References: <20201006120901.17027-1-fziglio@redhat.com> Message-ID: <20201006120901.17027-5-fziglio@redhat.com> From: Frediano Ziglio Proved by fuzzing the code. Signed-off-by: Frediano Ziglio Acked-by: Uri Lublin --- common/quic_family_tmpl.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/common/quic_family_tmpl.c b/common/quic_family_tmpl.c index 8a5f7d2..6cc051b 100644 --- a/common/quic_family_tmpl.c +++ b/common/quic_family_tmpl.c @@ -103,7 +103,12 @@ static s_bucket *FNAME(find_bucket)(Channel *channel, const unsigned int val) { spice_extra_assert(val < (0x1U << BPC)); - return channel->_buckets_ptrs[val]; + /* The and (&) here is to avoid buffer overflows in case of garbage or malicious + * attempts. Is much faster then using comparisons and save us from such situations. 
+ * Note that on normal build the check above won't be compiled as this code path + * is pretty hot and would cause speed regressions. + */ + return channel->_buckets_ptrs[val & ((1U << BPC) - 1)]; } #undef FNAME -- 2.26.2 From fziglio at redhat.com Tue Oct 6 12:09:00 2020 From: fziglio at redhat.com (Frediano Ziglio) Date: Tue, 6 Oct 2020 13:09:00 +0100 Subject: [Spice-devel] [PATCH spice-common 5/6] test-quic: Add fuzzer capabilities to the test In-Reply-To: <20201006120901.17027-1-fziglio@redhat.com> References: <20201006120901.17027-1-fziglio@redhat.com> Message-ID: <20201006120901.17027-6-fziglio@redhat.com> From: Frediano Ziglio Allows it to be used for fuzzying compressed images. Signed-off-by: Frediano Ziglio Acked-by: Uri Lublin --- tests/test-quic.c | 51 ++++++++++++++++++++++++++++++++++++++++++++++- 1 file changed, 50 insertions(+), 1 deletion(-) diff --git a/tests/test-quic.c b/tests/test-quic.c index 7af6a68..01f159b 100644 --- a/tests/test-quic.c +++ b/tests/test-quic.c @@ -14,6 +14,20 @@ You should have received a copy of the GNU Lesser General Public License along with this library; if not, see . */ + +/* Test QUIC encoding and decoding. This test can also be used to fuzz the decoding. + * + * To use for the fuzzer you should: + * 1- build enabling AFL. + * $ make clean + * $ make CC=afl-gcc CFLAGS='-O2 -fno-omit-frame-pointer' + * 2- run AFL, the export is to use ElectricFence to detect some additional + * possible buffer overflow, AFL required the program to crash in case of errors + * $ cd tests + * $ mkdir afl_findings + * $ export AFL_PRELOAD=/usr/lib64/libefence.so.0.0 + * $ afl-fuzz -i fuzzer-quic-testcases -o afl_findings -m 100 -- ./test_quic --fuzzer-decode @@ + */ #include #include @@ -32,6 +46,7 @@ typedef enum { } color_mode_t; static color_mode_t color_mode = COLOR_MODE_RGB; +static bool fuzzying = false; typedef struct { QuicUsrContext usr; @@ -41,6 +56,10 @@ typedef struct { static SPICE_GNUC_NORETURN SPICE_GNUC_PRINTF(2, 3) void quic_usr_error(QuicUsrContext *usr, const char *fmt, ...) 
{ + if (fuzzying) { + exit(1); + } + va_list ap; va_start(ap, fmt); @@ -300,10 +319,14 @@ static GdkPixbuf *quic_decode_to_pixbuf(GByteArray *compressed_data) status = quic_decode_begin(quic, (uint32_t *)compressed_data->data, compressed_data->len/4, &type, &width, &height); + /* limit size for fuzzer, he restrict virtual memory */ + if (fuzzying && (status != QUIC_OK || (width * height) > 16 * 1024 * 1024 / 4)) { + exit(1); + } g_assert(status == QUIC_OK); pixbuf = gdk_pixbuf_new(GDK_COLORSPACE_RGB, - (type == QUIC_IMAGE_TYPE_RGBA), 8, + (type == QUIC_IMAGE_TYPE_RGBA || type == QUIC_IMAGE_TYPE_RGB32), 8, width, height); status = quic_decode(quic, type, gdk_pixbuf_get_pixels(pixbuf), @@ -391,8 +414,34 @@ static void test_pixbuf(GdkPixbuf *pixbuf) } +static int +fuzzer_decode(const char *fn) +{ + GdkPixbuf *uncompressed_pixbuf; + GByteArray compressed_data[1]; + gchar *contents = NULL; + gsize length; + + fuzzying = true; + if (!g_file_get_contents(fn, &contents, &length, NULL)) { + exit(1); + } + compressed_data->data = (void*) contents; + compressed_data->len = length; + uncompressed_pixbuf = quic_decode_to_pixbuf(compressed_data); + + g_object_unref(uncompressed_pixbuf); + g_free(contents); + + return 0; +} + int main(int argc, char **argv) { + if (argc >= 3 && strcmp(argv[1], "--fuzzer-decode") == 0) { + return fuzzer_decode(argv[2]); + } + if (argc >= 2) { for (int i = 1; i < argc; ++i) { GdkPixbuf *source_pixbuf; -- 2.26.2 From fziglio at redhat.com Tue Oct 6 12:09:01 2020 From: fziglio at redhat.com (Frediano Ziglio) Date: Tue, 6 Oct 2020 13:09:01 +0100 Subject: [Spice-devel] [PATCH spice-common 6/6] test-quic: Add test cases for quic fuzzer In-Reply-To: <20201006120901.17027-1-fziglio@redhat.com> References: <20201006120901.17027-1-fziglio@redhat.com> Message-ID: <20201006120901.17027-7-fziglio@redhat.com> From: Frediano Ziglio To use for start for the fuzzer. Tests have been generated with a patch like: diff --git a/tests/test-quic.c b/tests/test-quic.c --- a/tests/test-quic.c +++ b/tests/test-quic.c @@ -372,8 +372,8 @@ static void pixbuf_compare(GdkPixbuf *pixbuf_a, GdkPixbuf *pixbuf_b) static GdkPixbuf *pixbuf_new_random(int alpha) { gboolean has_alpha = alpha >= 0 ? 
alpha : g_random_boolean(); - gint width = g_random_int_range(100, 2000); - gint height = g_random_int_range(100, 500); + gint width = g_random_int_range(10, 100); + gint height = g_random_int_range(10, 100); GdkPixbuf *random_pixbuf; guint i, size; guint8 *pixels; @@ -401,6 +401,12 @@ static void test_pixbuf(GdkPixbuf *pixbuf) compressed_data = quic_encode_from_pixbuf(pixbuf, imgbuf); uncompressed_pixbuf = quic_decode_to_pixbuf(compressed_data); + { + static int num = 0; + char fn[256]; + sprintf(fn, "test%d.quic", ++num); + g_assert(g_file_set_contents(fn, (void *) compressed_data->data, compressed_data->len, NULL)); + } image_buf_free(imgbuf, uncompressed_pixbuf); //g_assert(memcmp(gdk_pixbuf_get_pixels(pixbuf), gdk_pixbuf_get_pixels(uncompressed_pixbuf), gdk_pixbuf_get_byte_length(uncompressed_pixbuf))); Signed-off-by: Frediano Ziglio Acked-by: Uri Lublin --- tests/fuzzer-quic-testcases/test1.quic | Bin 0 -> 4292 bytes tests/fuzzer-quic-testcases/test2.quic | Bin 0 -> 2808 bytes tests/fuzzer-quic-testcases/test3.quic | Bin 0 -> 2556 bytes tests/fuzzer-quic-testcases/test4.quic | Bin 0 -> 30892 bytes 4 files changed, 0 insertions(+), 0 deletions(-) create mode 100644 tests/fuzzer-quic-testcases/test1.quic create mode 100644 tests/fuzzer-quic-testcases/test2.quic create mode 100644 tests/fuzzer-quic-testcases/test3.quic create mode 100644 tests/fuzzer-quic-testcases/test4.quic diff --git a/tests/fuzzer-quic-testcases/test1.quic b/tests/fuzzer-quic-testcases/test1.quic new file mode 100644 index 0000000000000000000000000000000000000000..e5490defda36b3b63b87ac593d55d1b61f1e82a4 GIT binary patch literal 4292 zcmV;#5IgTtRY^kt000030000x0000O0000Cmxt*QZ~R;He5Wh*_AQ3y7COSZgx(H( z)k5IaZ!0N~}*@<2Hhn+jWu&z3LIHhi(}1q0hHHQkxPVkn(T~ zI|<@?-v&+aJT$H+c^n8o$F(uAMNn9PA>yWjB8{ySv=+rh29RHuKtk>~lhfeq3T7Hk z%jq-h at vdIgLl*#pz+`z}nZtT;9X)%bHChfps31RqR*^Te{FGUTt_StM{8kz68L!S= z24dD_JH>@8iqJU<*Krvyd)LPeL}s-NLr_l$YZcBO8f at y8C+6czxje3T{Iir4E-2?R zpBKPI^)!^hu;?GRNe{PmIP0kRCS&Zo29$61=l-tVy^8j&m$Ls78pFA+U$*dq5;`Gh zkbn1{vW=}h#4(O>kSQgIF}gLY({%>N6ES^P^VgKnJ7HOTu1LDBHM?u|r+=??mZ`K% zTXbK%r~d}ZRDj$4=ZZ+CW=}_7l{IL5G}+%Fnw}`F0!$3&Iqa{SY~*)3ZQLHi*sais zoI-R~N|>u~p;;!pZ!sZzM at 334Vx?|CQxaM+g%6m=&`dTk?AO?I0LR*iN>;ox8j%-1 z1~}>(&J`fHzsb_MiXL{NWxSLX?Pqk2vxp7`;OeuKcI621f4mUE{<8)}f}0o=EKD?H zHXFd7!#=iPiQO at KUrl(=zRX|#xgv^>nuzPL at 1q&#iuDB3g>YB3P}a&#DB*kWGMuz( zyIfS)G)~yrmYaAndseG at HXS1LnJ=apa83p#(mUTGTyEa{2{*hN9_ex^gd)W`@E!47 zH31=e6fY7!S%}mQ(-B2|_iI{01|Z+c@|RWi06ldzQFsyNegQ_KV`rM6$t2t?ljIG9 z15=b1c{pEUSmEIh^@Djkm(?%c$dT7$Nk{l`ShaGbizAWV{37 at 94T?L`-a*vx|3xbn z1c3@!-#IfC8=Cldg%vVBal{9agfQH_Tbf#1O+j-qAlAPEKSlbg^712(#i^a$z(M<3 zbYT0*-KmuLktAhL0%3K}wr?Q4$f^o$daWm~io*SN>gJGVyEo+zIsG>#t?BzpKHybV z#U^C3RErwn-G@|ihP+D~8j7<+rxC7gyRfU0XBWIW at T1lziE>OPc-FHOce0$(7Tdd* zQ{{ZNjYMV|jI1>e-;J#X$e~KGCuRdI)pR|WKDC65y(CR>1IaWJgn))yv*AWh;_N>L z2<6Jc10vmtR3b?}v*S at r`)r~(py)xa&y9f9Ac6L1c8T6*0#166Q-cI-CulJ*thO}l zqhU(GT%WXuw0hdh_HJc14iZw?Ubv%yNpyQQNvnzc^*YWk zn(*_)C1`=UN>u#=Q1w{yk*)O=7XfCq;}7cewUkSJeeH zF{!+Na~mAM`lrrt%&hywNN$sVTmK7+=xg9U$IM0a{;iT#t|8S0N1g%wN}|~>iY at Qo zyY9?kRlAVC8Y^sCInb}43B`~RC&4GT_%VR z<>wQ`vYF9TZ>xu*8bWgu6Bcm{%`9$oVn38JWOLmyF5q)(5?3n~YFGY;$m9bnzQ-wRXN$FCPH#d!8UYb+_pXKap0FI z#n%q5=^Sx}_WqWOSigf~XC4}4F);eU%jd!7p4^gOt|%7cF=SwINb(P;qf56t*9 zRjNwCfF`uymTx80-r#onS{PP~|CukQ#2YoDPFzxx;?902gJ6`M>ky>@sMI)cXjc>* z*8?9*@U8_In!bPtKoKJsjnVU-q{I8g24#?%{O0B+&;?Npc9)buS&!!(miZJ;?)~s+ zW%^;rZQNA?GiMZ>nhkdY&*~}1*F&1i at g7H56tCvg0=1e z;-<40UbhG^Cxnr;>%IRkwLI_J(1D2Y>HH!~+fzE{x+s2>Y)^ 
znyoCO=Ft5814ND?170hcFX1tZ69xfQeS^t|jemgr8Vs=tnSR=UFs(5mh?C zU{m@^0z1;N#l`{p=9MK;$LrKdEsf6;SQ|ObLYep}DXM+0#&rO-8xF|53rbbW_24}$ z$NF_=JRVsY%=|vU-?F%kS6Z)g1VUC0(rPe~tv!>CSW_vDW_#@fqUj>tfie`O>$z#Y zQjwEC&sGuH4i^BQLJhPp^s4M4oQWoId>eOL(@8JHqoqy06D~RbWm at y7%1w5|qB;&1 zaihp7czG&TXE2t^*u}mYz1|mV)~@@O0C6B;yQ!PPsPNCKPeNvEmqcaEnNHXwW!-$l#>bDxxJGbi&853zHbG2$ z0rHVYevS$u%kR0rsu8msgAHhLE ze=W<+LY{6ZJ_lEDQ(Plj3)K at 5LwfF%CV3(4*l3ud45V+*HhkNiqM(j5qUZfGf4NE+ z*<-u{Z$B#E`QtaHbWG3P7xX=iAV5^{i6jn=Jlz4?@8`E-Z1WYjT%D{4xTsX?kiUhq zY2K|hlC}h_-{10DYa~h~8PJd9M5X?GMgR9+rd7KhwiuM*u_)l)Wl>HJSRy14o)(|+ zMR`IQqbM1Ke~|Zm5`mK`!?k>X6FizlqK|)AH~Hf!nBDGrnAbIcD<^$fF&RKwMa}A; ziqS?j at Y@V2Gt*_*_Tw`j-`AgJfBdv2-4A|%RS z7AGSQ$-8SVKHy+rPTDPRN-#2}rTU=XTfYKG&dutRzdCaP1dVhK>x(+C z*bLOSp-c>$M6a7C^!-!mkhnWm?I}fot)HXq;zF~S>ZN_CEKmJwMVac576R1a5RpP9qrmr4UE4DeiI++6 zj0HsAvRlsSOozsMp*(|c+#bn7Lc*}8lq~Rhj4^w`DL9qW^M8iBogdbfcWaz>Nln4|u(yzrSQQ%@s0!nN6mPwpkVz1xkP z|9oV(T-L|@9HBH;4VYSf3+Kg6sE51JZXgj!Z#~Vq at w%6YS`3(f_!AjR;mbUSwb9p6I)2oJWspKiz7H z6;D~Q=PCguSMhV*WfFEFwQkskarV!{irV8*q|z&e$N`Osj7$J>%zNpEftB1RUzj6 at 8+4<+Vpn-qki1|x!Usq{Pl zFvfGt6;L^)*EST*+>DnHyI`M&+1b2a(~WtGKuzR3EJ1YA&d_)RX(9VnIYqyFK9p9H z35>U61TQy<;X_sTtJ1!BB`HO*eBKv|b;yGfuHh=OWJr}zpqEi2tqV?%;_!zqw-HGe z6%!Uo2ireUiR#nHDpD$)r(Nt0pLT2cBK}Ddi$!l>qfjUdY=6D3BlhP{S27Gavrxyaz&oztX+E^7~0U%4U>6m+9yZZjRyqaLg-SQD{?T at K@@%b+h at u z9FZ2AdPu`t#t6Tru>rXQZ6KJE*jV)c0uq~*Z)$6Zj~Xc1jBo;G1pIuPcLV*on7f!Um;p-Ux2!lut?v4eM zw|ikk;NgI{;YA!po at 529fvD}KJ%lKl=Vcid-xK20_vd3ZyDbukQu3uQYOo+Z6DE|nw=o~&F_K9Togrs2_g zm2pYoR at D5FO-qg1=-}1A8)}gR*p4QH)E<5upXSR(jnSlD(dVE at P?Pww z!A=_3_O(#5tvcW}DxgYentsiQUOYBhEkSZMH-QS87C|1Q_PBLc+II}OdS1UI3mISb z0rfkXSv^E9`1Mgh5t=21RH1Kp4YUeKdl>`vEs0Q8%?~8}1mKOpSe0w;tMSGSlqOHC ztS<1(5 at UnWs&>kfMxj4x2bzxSgbzFKp%>SON~Q?A1OL3=5mm-(?iO$&g(?Z(rrvAK zoQg$Q)%Iz0>nC_} zJvrgFiGYWrFy at g+gckb!*N5C0&)@~VxU+8cKybuCQr4H-%Ujuk#lr-}pu~h at q z(G1G{srA4CAbn`Z|28`Fn>$egCZm#iWUn-;IL+Gc0uQ0=)VF=;4Tf|;c=^wj7>KN< zS5X%3$8p|We>h?5ah1$C4*%vW8-i`f%pnGq2_3TK-&%UF#2UgxJOMPzo-OQI#tvjH z;j!GC=D>hY#BY2ob7Xsg%dtV9f-nji|Dq-&Fv?+rria!d=N^OA_^1Y zsZhz*!~iPdr&kd?^R9ocVk?k`m+wj7Ia~YLLP8k0XAt@$~Qd{K`VtuOby94FnGS! 
zs+NL}QZUtugU{CYuIbnJ;)M7_Ur-`0Jxx`m>FWQa^lzN>Y!oL>ZIZRhmlZjE3dQc6)6J0TwMEe4`GQltlSI(iwL?6p?N*P;`g#xr_oDX}VQ@<2nHiFTFfHA1x z-3bjmdcYK@#OhH0 at +*{58Rd0UuW(}(Seis|wlGe$F{rCbK)R|PoTD8m)EF+3C}};S z=h1*^V)Xh=F;eQjDC9*Z$k7$-_N`%;b^SR%>l+brtRuZ_t1iGUsYMjz%K7`)hHnxw z2QW0ATW1YiS4f#8y3eH;bnv z8gB0w0U3!`5Jn%yaFD~6LZ$iPb&P>9uj(4sIZwMkk0SR7?nfF59jqnM*dt&>Iw>%$ zswI)5Kv1Jk2z`2 at h_CiUpiUG1NZQDXywiosX!NKzn?Fw-mc5!@W$E9Z=`%1~7q ziR*+MCvbv07_<0k!#s at F?-ON$Nx*OwmR*Ej_sHr zls{q`e%3vUhF|naUY+U`0ki~^OMG?I(+5&D6NCB;Ggu=M>+!^%X&r4YJ`q z3ps^>Qi%4iR|u9YA>zs2aSwLG8gq|JsP(ufxKPabFdU3{_7$D$6S_|G(5BC6kWKB$ zzWf~`Sf3*z^P;B at 6o(Kv{L)ZLk-h!0nxGAIsCk*`8|$C*KJ!Yob1Ba^MX9Tes!C+P%(jwH{$1*^-jDjxXEgr_ at 8E|w+RtFesg|6z+Fv>U6_?1-I&zZ zh1aaA`63DXQ^<)DibXEpXaU!aVZiUD`1_4}l#vg5T)-<5Y4>xu<%Qwr;=Rp at Xt~V; zAKYZ!+&>L9SKIJ6EZ-z7G9DsKv*kWY3p9?ZP6STab2&hTE342m{($`?Gy&}p>`8OT zb09eQASpsncN)gvU(bRt^M$;7G at 6^QzC@#}RaVkeAjrh)hu)r<07FzT`CW>Cfk at IA zQPKIhMsa0q{pNSgrROM_8I^tWJNRBnCp z;djimWca)wyn>yoEm`)8W%LT$#RhU0k{9~5I-9--30X#;*r@~K*bZN>GI5~hHty<6 zQ>spiU0xZl2W+45CC4a8n`}3e8?|0;d68g|^rayAIYMnT9hqH at bzB-ARA2 zJx8v+ at i}~UBcGN~T{2PjqYW+kC8DEyI{X#Rbv^3cBX6 z(fMmE0J}lnDgAmwsmE&=UHvJiU_AG7)Y+SjZi3at98G2e3ugqRkhM5rHxc7~dUw)# z;BO~G8q6Rl{22VDy9p(FkC`+w2tjqeacv3iyWTHZ(Tr=qH`nv0B)Gw4-Id&_tW!Vr zkmPPssH=E=(+F4?GE*L9)xMdO-kFz;TD>J`6PhXu0ulbUmq`o$QhH8s zM3vOu-vEpq}D-bboMq)YqeyLq^%Ip#WN7>^(-AWv-%;nAs(G?Ja5?OaX zGvd2gq4gDmbyNg7yj2$8(6NZLbg}pzyS~Io-5ZXow}eA=(@>ipN={r$(j&Hnxs- at l zr at pf=GU2+7!}M5LD!|bW`g_ZA+`7J5(mDr at x9g~E^m2eG(S)$fBOw$X(yoIZS!0NB zCvO#N# zZ(|{B9k4B at q-lGv=fLDG`a3lHF*_5o2IMLS*Jx6Vm|TW$k0i+?#wUvSe^v+RfLNg$@MZY&$#Aw}M zT*g!W_~6UY%@8^Z3x>)@&Ub!K?Kv_LwZ9HB>!GagKHD#H2YT83{`qd{f$7&_QRIOJ zW6r=v&>CT^N~dp_rA0(v_1#_dr=9BIDvVz~zdsXel+>N~7iv2sDX&DhG+hVRMhT(- K002M$0002CXiEG5 literal 0 HcmV?d00001 diff --git a/tests/fuzzer-quic-testcases/test3.quic b/tests/fuzzer-quic-testcases/test3.quic new file mode 100644 index 0000000000000000000000000000000000000000..524171449cfb96b4daa9013d8159955c7dc5f35f GIT binary patch literal 2556 zcmV~RY^kt00001000120000T0002Sm~h5K0%wWM_mdEVILiR8>qTOe1mE9- z0Dvwh9MDNWZ4~qA)?R at g2vAu7xDsKFp?F14=3;$v$I)i=V=Cw(BbUsC-?dA>?DU`VSHFlfo>aI at ps3j*sDyktsGqJ4M%V?&7LoRfDC6xUt zhA|-5Q`GZQyMFps50LN!aC16!fWQkZ5?L!YgLWXjUD zINz(q781^wJ(rSKxfl+;XQ}q7u;oa#*QPg!O)dtIT=S?0=PUyPONl(;^UjN4+qjAD^VH+Hjl}3JinN>wE2plssXCB~ ze}QK~HJsn|6FAWMIm at HHO?uujiBhqXpA_T71+oa}xN^ZSO0p}b`Ti9M^2x+C^1he7 zh#*5&cIsq at 56cmC`T(|SfLp$1-^a7HSy at YhZWNtMVQX(!@fx at m+hln^DDqdA4s`a+2rypNZgVg0bRHP(Z|=a5ozee7S703 zf5Q#Q6h86cCdfr0<=Vrliv6T4b!gX;EFhHYZ*d1?24Rw~5?PD9E7gPOAq)X<6|Kw3 z!1&YBHx>QjROM0R7m5u=F}Z;nBlTL*S4o2LlLq>!2gs#}sk3_6Jr&cJJks`r4dBer zR9Sv)a5Rr(A=FI`(~*GX9t at cXQh5|s*g at wiYnO6uQx?V_MKKOn zW4I&4L1DUc2aR_%>J-HSpSTjzN%x3`UQz-L8BwHm1i`#}xJX(_#}egv*&vE{xST=4 zd?_dFUQn{)2_iagK%D&09Z*#96-_I+u|+Kzxx#DN0>9sPVzSH$W!!H_WMU$cJZmjU)rGUNz at mje`fABVR~%R!xMqJNk`oGU3(mQu6NxF6a1WGry6q5xcl}WVq@`8v(dcW)ACsJ_+vVkBKJ!I zLxR?v at _6%^Co at dzvh(f62|Y9?vDL}~;_#=$s=9?0e6X9oC4&oP&X%1sw4@}dF0?D- zB*~A-dn{ZBj$9%&)5RFXF8CKKKIelEN?Gd!u^_m3)g=QXxOLJ|hLYeqBSw@!qvn?yjkw3wv4kB+YLe6UBPNzZjvW ze0j}aNVkHmjJvp8+F^$K4AkQVKa;tFJ`2mVc}^_vT+1KVzEYhDK2~~B)rcb>?{?si zA!61*+Nj{Wj0#$itchhX!sncVTtUr*1p#-R;Xl`t?`Z1>jOW#+_Uk6~qSRNNfoc)J zv0T&a+VEU at VNECYL7J7z>a5h|mSF*0CdGExIG%<1=6uS^3d*tq-OrkPu0U;h3Wxc==`=$|i;LnwR_ at Xmty7*&$?Ori{^|ZSxqMNs`2U8()*?z=lkxs@$`ac)=}VgV zdf{{0&Nat$5D7jtr=-YArf`+>j&wT^gT;AFahAgsbAayb at 1X^fnt>}V at 5E~5%VnZ~ zx+`PN5=`WerU37cuu?I96E}UO9K317L$41?6IJ~Z**Cy?11o)nQdY{SpH;1D&Gdo+ zB~j{UHHca~t%MnI#a5%|uzXtsN(J{~Qh%*Qk6<_(m5PkRpKRv9P(fW{WK1Pr8!Zn^ 
z;(l2cAEz|4ZD)RIk}x0pCN`BJh?~lr-fO29{2wdc68k)v&)zKpe1;Jr*ad1CtrGw? zkUWxvQgg$czb5LgzpsD at 2eGu!+ zmMGLlK*|RynMga^e@>Ij=j5Tc%8|1;g-59jWL~&D_H+EH8peP40p$U$Mzc0w17kDX z!_=q;y_c&{8_L%aZZdB|aNIP4hBgPXPHeC&gS_^wv4HgDnYvVhBhnXH# zKU`Qm_K(tL)X8%JJ#OKMgnpq-;TX$oojHX7lp&B6%9uDX*8VfqzdgAM^DNtA^&P5I zDZj{>f5jHfY9b3l#A47&pOMm0>??LE+!gg#fq6%BzgRL&8WXjKGz$Z?!>@QL3qB!B z(l}8MkO`~?ED5I662~|k>S+bdXHrm$6kZTnwzGjCam^G-ogR1%T+kwr4y9Izad2wD zBt;%qJ|X8Vg~D?zxts;S5`)iF>v4h)lRd18a}OvLK84M0Um!Ut9k^C5|5GG7d*SVO z2j1rPf@F$ z-~>ue^gVfjkxf at ClF9W6G4eli_q?Zs?c2w;e-SBw0Gi$oAXLF5p9&C S=S-%E%18+S006)M00026HQ!eN literal 0 HcmV?d00001 diff --git a/tests/fuzzer-quic-testcases/test4.quic b/tests/fuzzer-quic-testcases/test4.quic new file mode 100644 index 0000000000000000000000000000000000000000..fff72518b45e679b50546ec6505304db2779a77e GIT binary patch literal 30892 zcmV(tKZ7zc<;^9j~H&(!@G!4s3pUlhRg#xj=EQPBH`TA5P_3MU7h* zn*IwfN(+m}4E7ooQ)dmZ? zDDnPeG|p at m@bPqln8w9;$Byrc+{8O(y2o=!tRR^vTb?sw?E?*KvBI7v+78a_`s(|9 zEz)y!QTy)BpHtnxFpYTRgpf}F27q#pU_fkto!G6e91h4MYNbSa$vV}iN%0I7)(&u> zLYn at ZOZa%Qmh-x^uJps%ENwP^#{ZY739ooiJDI1z5Ys{~zX at 6Ze5FW%DYf7Jv#vPL zgi4pNu+JSsCF{SGGqe0e+ at l4JTzkSZ72$RJ{Htd&>oK~>y28=uO(dg%skiz_!oV?E zfRhQ?PfPm&Y%6xxS at kJ3)x2*3WfvHcsvh}_<*oy%z3V= zojxBX0^&UHn+CUBcTU;uP|Zk-f0T(xx{xj*yGI`C1Hcj{S|(*{|2<%&i29wNRa6{Eig{F9v`|Hcw`{ zG1;$^o;N at yPP&vMe at 9v1In?K$tO#pr$~M_PgKZ?eT6VxbTF3lk>_fS;V?`F)}!% zDT;NdD8R at se$+KF*t9}b*ewSOFCj&Q0?V9-_F)`9Kgjw6s|KrfEYa(QZ)YIk+9N&- zj5#EVJQ2=;x2DUdfci4%_>Jket=j(o>e9qAVEFEHNUTC9<+$yU3tm=VxmDwR8FA|R zo&$IV7!iWNf1mh2oW4w0SCv4vx+J4Rsj=S+_APbbO7>!*TuH_Hp7p!aqYy2tp5bG; zhtPdGTJ35+9-~WyU$s~4Gw(1YMet1?w`R8t_t~*$d$9ZL34Ud5wywk;_B`x% zFouL-j(TZm=b=u6>Rr3{(NKL-2*b*SUBf)F`(MJK+15%Ptz9ip14k+r|Q*UpdllC1$hc zQEMQ92W#?51M>6Ye)>#7=S*Od6ZuS{yp3m|bRkOWfx!uvmdfY0yy|t3VII}w`9g|u zd?r4^putb(v}7qL{WO%lYq($4%aQ%kc%ltJC3c=L=Z11jXgdoOVn}|XR{i^_2)LFt zP;F&c93Puz9JbgwF2L*ei{2{s{JfN)I{1#u%2NlySM1!LJ`n<}kIu^n2D_h(J>4LeYd-<0;AM~#Uw zIjRzQPrMHE;u^>XU3d^A at 2t5*Eg`E}^48PZvL5sm_lk2sk4hN`X(hKP? zQs~jtNiO>Eb#>U(_srLBZ;^X2mDq<-yP`H^S2x(@d;4Pf)NVMiAC^fkJ}IEI8(!D$ z+t-GV^U`}+|A-7(S4y)6r?6%?9^whlM(5CQ>zh&_(6PeAqoi5p`Ff+XX|3wRW>a5FbwpowxA0e^r9b=6;CJd5FE znW;ww?rDDh8W at 5RMtE$g`&oT`AFzj)wF+YhU3PkGVS%>LYtb?g;*)OSLKbJiu?UIx zNr4tb%gZu`^tz7&y~MT|J+pg4s8mTlu+D1;pwY6#ppQB1fu+Zcb?y`#N!r=_cYgrW=Qz`70R-X8UDBfs(HZUJ zV^F`c0-qh_?o1-&?pv at g4NQKgL=F`s5Li((yt_IZR|^g4b}5ADR(Oa$qGAtr6c}>> zlcq;!W3O`h0%f+=%JHnKMecGB!BkExcLh_?D|M84J#B#zC`VS#3V_ZQ4o?DLg+Y2e z57({amL6<9qIAm|MsjU)V*wPT5l0A5ITZ%bBWSr~p)S5 at 8}@~C6R>>r{H4?E3Y%;^ zg=NpjgXH|201tcG0GCcGY8v7g#Juc7a4jmT9s-h1E3i9dhO<}Lii2bXW0VI>Y=fq! 
-- 2.26.2 From tzimmermann at suse.de Wed Oct 7 12:57:23 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Wed, 7 Oct 2020 14:57:23 +0200 Subject: [Spice-devel] [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion In-Reply-To: <20201002095830.GH438822@phenom.ffwll.local> References: <20200929151437.19717-1-tzimmermann@suse.de> <20200929151437.19717-3-tzimmermann@suse.de> <8fad0114-064a-4ed5-c21d-d1b4294de0a1@amd.com> <2614314a-81f7-4722-c400-68d90e48e09a@suse.de> <8a84f62b-33f3-f44c-52af-c859a0e0d1fb@gmail.com> <07972ada-9135-3743-a86b-487f610c509f@suse.de> <20200930094712.GW438822@phenom.ffwll.local> <8479d0aa-3826-4f37-0109-55daca515793@amd.com> <20201002095830.GH438822@phenom.ffwll.local> Message-ID: <5bf40546-8da9-1649-22da-a982f1e8d9c3@suse.de> Hi Am 02.10.20 um 11:58 schrieb Daniel Vetter: > On Wed, Sep 30, 2020 at 02:51:46PM +0200, Daniel Vetter wrote: >> On Wed, Sep
30, 2020 at 2:34 PM Christian K?nig >> wrote: >>> >>> Am 30.09.20 um 11:47 schrieb Daniel Vetter: >>>> On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian K?nig wrote: >>>>> Am 30.09.20 um 10:19 schrieb Thomas Zimmermann: >>>>>> Hi >>>>>> >>>>>> Am 30.09.20 um 10:05 schrieb Christian K?nig: >>>>>>> Am 29.09.20 um 19:49 schrieb Thomas Zimmermann: >>>>>>>> Hi Christian >>>>>>>> >>>>>>>> Am 29.09.20 um 17:35 schrieb Christian K?nig: >>>>>>>>> Am 29.09.20 um 17:14 schrieb Thomas Zimmermann: >>>>>>>>>> The new helper ttm_kmap_obj_to_dma_buf() extracts address and location >>>>>>>>>> from and instance of TTM's kmap_obj and initializes struct dma_buf_map >>>>>>>>>> with these values. Helpful for TTM-based drivers. >>>>>>>>> We could completely drop that if we use the same structure inside TTM as >>>>>>>>> well. >>>>>>>>> >>>>>>>>> Additional to that which driver is going to use this? >>>>>>>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will >>>>>>>> retrieve the pointer via this function. >>>>>>>> >>>>>>>> I do want to see all that being more tightly integrated into TTM, but >>>>>>>> not in this series. This one is about fixing the bochs-on-sparc64 >>>>>>>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list. >>>>>>> I should have asked which driver you try to fix here :) >>>>>>> >>>>>>> In this case just keep the function inside bochs and only fix it there. >>>>>>> >>>>>>> All other drivers can be fixed when we generally pump this through TTM. >>>>>> Did you take a look at patch 3? This function will be used by VRAM >>>>>> helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we >>>>>> have to duplicate the functionality in each if these drivers. Bochs >>>>>> itself uses VRAM helpers and doesn't touch the function directly. >>>>> Ah, ok can we have that then only in the VRAM helpers? >>>>> >>>>> Alternative you could go ahead and use dma_buf_map in ttm_bo_kmap_obj >>>>> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK. >>>>> >>>>> What I want to avoid is to have another conversion function in TTM because >>>>> what happens here is that we already convert from ttm_bus_placement to >>>>> ttm_bo_kmap_obj and then to dma_buf_map. >>>> Hm I'm not really seeing how that helps with a gradual conversion of >>>> everything over to dma_buf_map and assorted helpers for access? There's >>>> too many places in ttm drivers where is_iomem and related stuff is used to >>>> be able to convert it all in one go. An intermediate state with a bunch of >>>> conversions seems fairly unavoidable to me. >>> >>> Fair enough. I would just have started bottom up and not top down. >>> >>> Anyway feel free to go ahead with this approach as long as we can remove >>> the new function again when we clean that stuff up for good. >> >> Yeah I guess bottom up would make more sense as a refactoring. But the >> main motivation to land this here is to fix the __mmio vs normal >> memory confusion in the fbdev emulation helpers for sparc (and >> anything else that needs this). Hence the top down approach for >> rolling this out. > > Ok I started reviewing this a bit more in-depth, and I think this is a bit > too much of a de-tour. > > Looking through all the callers of ttm_bo_kmap almost everyone maps the > entire object. Only vmwgfx uses to map less than that. Also, everyone just > immediately follows up with converting that full object map into a > pointer. 
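For illustration, a minimal sketch of the caller pattern described above, not taken from any particular driver: the whole object is mapped with ttm_bo_kmap() and the resulting ttm_bo_kmap_obj is immediately turned into a plain pointer. The function name and parameters are placeholders, not an API from the series.

#include <linux/err.h>
#include <drm/ttm/ttm_bo_api.h>

/* Sketch only: the usual "map it all, then grab a pointer" pattern. */
static void *example_map_whole_bo(struct ttm_buffer_object *bo,
				  struct ttm_bo_kmap_obj *kmap,
				  bool *is_iomem)
{
	int ret;

	/* Map the entire object, as almost all callers reportedly do. */
	ret = ttm_bo_kmap(bo, 0, bo->num_pages, kmap);
	if (ret)
		return ERR_PTR(ret);

	/* Immediately convert the kmap object into a raw pointer. */
	return ttm_kmap_obj_virtual(kmap, is_iomem);
}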
> > So I think what we really want here is: > - new function > > int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); > > _vmap name since that's consistent with both dma_buf functions and > what's usually used to implement this. Outside of the ttm world kmap > usually just means single-page mappings using kmap() or it's iomem > sibling io_mapping_map* so rather confusing name for a function which > usually is just used to set up a vmap of the entire buffer. > > - a helper which can be used for the drm_gem_object_funcs vmap/vunmap > functions for all ttm drivers. We should be able to make this fully > generic because a) we now have dma_buf_map and b) drm_gem_object is > embedded in the ttm_bo, so we can upcast for everyone who's both a ttm > and gem driver. > > This is maybe a good follow-up, since it should allow us to ditch quite > a bit of the vram helper code for this more generic stuff. I also might > have missed some special-cases here, but from a quick look everything > just pins the buffer to the current location and that's it. > > Also this obviously requires Christian's generic ttm_bo_pin rework > first. > > - roll the above out to drivers. > > Christian/Thomas, thoughts on this? I agree on the goals, but what is the immediate objective here? Adding ttm_bo_vmap() does not work out easily, as struct ttm_bo_kmap_obj is a central part of the internals of TTM. struct ttm_bo_kmap_obj has more internal state that struct dma_buf_map, so they are not easily convertible either. What you propose seems to require a reimplementation of the existing ttm_bo_kmap() code. That is it's own patch series. I'd rather go with some variant of the existing patch and add ttm_bo_vmap() in a follow-up. Best regards Thomas > > I think for the immediate need of rolling this out for vram helpers and > fbdev code we should be able to do this, but just postpone the driver wide > roll-out for now. > > Cheers, Daniel > >> -Daniel >> >>> >>> Christian. >>> >>>> -Daniel >>>> >>>>> Thanks, >>>>> Christian. >>>>> >>>>>> Best regards >>>>>> Thomas >>>>>> >>>>>>> Regards, >>>>>>> Christian. >>>>>>> >>>>>>>> Best regards >>>>>>>> Thomas >>>>>>>> >>>>>>>>> Regards, >>>>>>>>> Christian. >>>>>>>>> >>>>>>>>>> Signed-off-by: Thomas Zimmermann >>>>>>>>>> --- >>>>>>>>>> include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++ >>>>>>>>>> include/linux/dma-buf-map.h | 20 ++++++++++++++++++++ >>>>>>>>>> 2 files changed, 44 insertions(+) >>>>>>>>>> >>>>>>>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h >>>>>>>>>> index c96a25d571c8..62d89f05a801 100644 >>>>>>>>>> --- a/include/drm/ttm/ttm_bo_api.h >>>>>>>>>> +++ b/include/drm/ttm/ttm_bo_api.h >>>>>>>>>> @@ -34,6 +34,7 @@ >>>>>>>>>> #include >>>>>>>>>> #include >>>>>>>>>> #include >>>>>>>>>> +#include >>>>>>>>>> #include >>>>>>>>>> #include >>>>>>>>>> #include >>>>>>>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct >>>>>>>>>> ttm_bo_kmap_obj *map, >>>>>>>>>> return map->virtual; >>>>>>>>>> } >>>>>>>>>> +/** >>>>>>>>>> + * ttm_kmap_obj_to_dma_buf_map >>>>>>>>>> + * >>>>>>>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap. >>>>>>>>>> + * @map: Returns the mapping as struct dma_buf_map >>>>>>>>>> + * >>>>>>>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory >>>>>>>>>> + * is not mapped, the returned mapping is initialized to NULL. 
>>>>>>>>>> + */ >>>>>>>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj >>>>>>>>>> *kmap, >>>>>>>>>> + struct dma_buf_map *map) >>>>>>>>>> +{ >>>>>>>>>> + bool is_iomem; >>>>>>>>>> + void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem); >>>>>>>>>> + >>>>>>>>>> + if (!vaddr) >>>>>>>>>> + dma_buf_map_clear(map); >>>>>>>>>> + else if (is_iomem) >>>>>>>>>> + dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr); >>>>>>>>>> + else >>>>>>>>>> + dma_buf_map_set_vaddr(map, vaddr); >>>>>>>>>> +} >>>>>>>>>> + >>>>>>>>>> /** >>>>>>>>>> * ttm_bo_kmap >>>>>>>>>> * >>>>>>>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h >>>>>>>>>> index fd1aba545fdf..2e8bbecb5091 100644 >>>>>>>>>> --- a/include/linux/dma-buf-map.h >>>>>>>>>> +++ b/include/linux/dma-buf-map.h >>>>>>>>>> @@ -45,6 +45,12 @@ >>>>>>>>>> * >>>>>>>>>> * dma_buf_map_set_vaddr(&map. 0xdeadbeaf); >>>>>>>>>> * >>>>>>>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem(). >>>>>>>>>> + * >>>>>>>>>> + * .. code-block:: c >>>>>>>>>> + * >>>>>>>>>> + * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); >>>>>>>>>> + * >>>>>>>>>> * Test if a mapping is valid with either dma_buf_map_is_set() or >>>>>>>>>> * dma_buf_map_is_null(). >>>>>>>>>> * >>>>>>>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct >>>>>>>>>> dma_buf_map *map, void *vaddr) >>>>>>>>>> map->is_iomem = false; >>>>>>>>>> } >>>>>>>>>> +/** >>>>>>>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to >>>>>>>>>> an address in I/O memory >>>>>>>>>> + * @map: The dma-buf mapping structure >>>>>>>>>> + * @vaddr_iomem: An I/O-memory address >>>>>>>>>> + * >>>>>>>>>> + * Sets the address and the I/O-memory flag. >>>>>>>>>> + */ >>>>>>>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map, >>>>>>>>>> + void __iomem *vaddr_iomem) >>>>>>>>>> +{ >>>>>>>>>> + map->vaddr_iomem = vaddr_iomem; >>>>>>>>>> + map->is_iomem = true; >>>>>>>>>> +} >>>>>>>>>> + >>>>>>>>>> /** >>>>>>>>>> * dma_buf_map_is_equal - Compares two dma-buf mapping structures >>>>>>>>>> for equality >>>>>>>>>> * @lhs: The dma-buf mapping structure >>>>>>>>> _______________________________________________ >>>>>>>>> dri-devel mailing list >>>>>>>>> dri-devel at lists.freedesktop.org >>>>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Fdri-devel&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=HdHOA%2F1VcIX%2F7YtfYTiAqYEvw7Ag%2FS%2BxS5VwJKOv5y0%3D&reserved=0 >>>>>>>> _______________________________________________ >>>>>>>> amd-gfx mailing list >>>>>>>> amd-gfx at lists.freedesktop.org >>>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Famd-gfx&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=H%2B5HKCsTrksRV2EyEiFGSTyS79jsWCmJimSMoJYusx8%3D&reserved=0 >>>>>>> _______________________________________________ >>>>>>> dri-devel mailing list >>>>>>> dri-devel at lists.freedesktop.org >>>>>>> 
https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Fdri-devel&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=HdHOA%2F1VcIX%2F7YtfYTiAqYEvw7Ag%2FS%2BxS5VwJKOv5y0%3D&reserved=0 >>>>>>> >>>>>> _______________________________________________ >>>>>> amd-gfx mailing list >>>>>> amd-gfx at lists.freedesktop.org >>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Famd-gfx&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=H%2B5HKCsTrksRV2EyEiFGSTyS79jsWCmJimSMoJYusx8%3D&reserved=0 >>> >> >> >> -- >> Daniel Vetter >> Software Engineer, Intel Corporation >> http://blog.ffwll.ch > -- Thomas Zimmermann Graphics Driver Developer SUSE Software Solutions Germany GmbH Maxfeldstr. 5, 90409 N?rnberg, Germany (HRB 36809, AG N?rnberg) Gesch?ftsf?hrer: Felix Imend?rffer -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 516 bytes Desc: OpenPGP digital signature URL: From daniel at ffwll.ch Wed Oct 7 13:10:41 2020 From: daniel at ffwll.ch (Daniel Vetter) Date: Wed, 7 Oct 2020 15:10:41 +0200 Subject: [Spice-devel] [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion In-Reply-To: <5bf40546-8da9-1649-22da-a982f1e8d9c3@suse.de> References: <20200929151437.19717-1-tzimmermann@suse.de> <20200929151437.19717-3-tzimmermann@suse.de> <8fad0114-064a-4ed5-c21d-d1b4294de0a1@amd.com> <2614314a-81f7-4722-c400-68d90e48e09a@suse.de> <8a84f62b-33f3-f44c-52af-c859a0e0d1fb@gmail.com> <07972ada-9135-3743-a86b-487f610c509f@suse.de> <20200930094712.GW438822@phenom.ffwll.local> <8479d0aa-3826-4f37-0109-55daca515793@amd.com> <20201002095830.GH438822@phenom.ffwll.local> <5bf40546-8da9-1649-22da-a982f1e8d9c3@suse.de> Message-ID: On Wed, Oct 7, 2020 at 2:57 PM Thomas Zimmermann wrote: > > Hi > > Am 02.10.20 um 11:58 schrieb Daniel Vetter: > > On Wed, Sep 30, 2020 at 02:51:46PM +0200, Daniel Vetter wrote: > >> On Wed, Sep 30, 2020 at 2:34 PM Christian K?nig > >> wrote: > >>> > >>> Am 30.09.20 um 11:47 schrieb Daniel Vetter: > >>>> On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian K?nig wrote: > >>>>> Am 30.09.20 um 10:19 schrieb Thomas Zimmermann: > >>>>>> Hi > >>>>>> > >>>>>> Am 30.09.20 um 10:05 schrieb Christian K?nig: > >>>>>>> Am 29.09.20 um 19:49 schrieb Thomas Zimmermann: > >>>>>>>> Hi Christian > >>>>>>>> > >>>>>>>> Am 29.09.20 um 17:35 schrieb Christian K?nig: > >>>>>>>>> Am 29.09.20 um 17:14 schrieb Thomas Zimmermann: > >>>>>>>>>> The new helper ttm_kmap_obj_to_dma_buf() extracts address and location > >>>>>>>>>> from and instance of TTM's kmap_obj and initializes struct dma_buf_map > >>>>>>>>>> with these values. Helpful for TTM-based drivers. > >>>>>>>>> We could completely drop that if we use the same structure inside TTM as > >>>>>>>>> well. > >>>>>>>>> > >>>>>>>>> Additional to that which driver is going to use this? > >>>>>>>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will > >>>>>>>> retrieve the pointer via this function. > >>>>>>>> > >>>>>>>> I do want to see all that being more tightly integrated into TTM, but > >>>>>>>> not in this series. This one is about fixing the bochs-on-sparc64 > >>>>>>>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list. 
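For context, the sketch below shows the general idea that struct dma_buf_map enables for such helpers: consumers branch on the is_iomem flag and use memcpy_toio() for I/O memory instead of a plain memcpy(). The helper name is invented here and this is not code from the series.

#include <linux/io.h>
#include <linux/string.h>
#include <linux/dma-buf-map.h>

/* Sketch: write into a mapping without mixing up system and I/O memory. */
static void example_write_to_map(struct dma_buf_map *dst,
				 const void *src, size_t len)
{
	if (dst->is_iomem)
		memcpy_toio(dst->vaddr_iomem, src, len);
	else
		memcpy(dst->vaddr, src, len);
}

A single void pointer cannot carry that distinction, which is roughly the confusion the series aims to fix for the fbdev emulation helpers.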
> >>>>>>> I should have asked which driver you try to fix here :) > >>>>>>> > >>>>>>> In this case just keep the function inside bochs and only fix it there. > >>>>>>> > >>>>>>> All other drivers can be fixed when we generally pump this through TTM. > >>>>>> Did you take a look at patch 3? This function will be used by VRAM > >>>>>> helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we > >>>>>> have to duplicate the functionality in each if these drivers. Bochs > >>>>>> itself uses VRAM helpers and doesn't touch the function directly. > >>>>> Ah, ok can we have that then only in the VRAM helpers? > >>>>> > >>>>> Alternative you could go ahead and use dma_buf_map in ttm_bo_kmap_obj > >>>>> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK. > >>>>> > >>>>> What I want to avoid is to have another conversion function in TTM because > >>>>> what happens here is that we already convert from ttm_bus_placement to > >>>>> ttm_bo_kmap_obj and then to dma_buf_map. > >>>> Hm I'm not really seeing how that helps with a gradual conversion of > >>>> everything over to dma_buf_map and assorted helpers for access? There's > >>>> too many places in ttm drivers where is_iomem and related stuff is used to > >>>> be able to convert it all in one go. An intermediate state with a bunch of > >>>> conversions seems fairly unavoidable to me. > >>> > >>> Fair enough. I would just have started bottom up and not top down. > >>> > >>> Anyway feel free to go ahead with this approach as long as we can remove > >>> the new function again when we clean that stuff up for good. > >> > >> Yeah I guess bottom up would make more sense as a refactoring. But the > >> main motivation to land this here is to fix the __mmio vs normal > >> memory confusion in the fbdev emulation helpers for sparc (and > >> anything else that needs this). Hence the top down approach for > >> rolling this out. > > > > Ok I started reviewing this a bit more in-depth, and I think this is a bit > > too much of a de-tour. > > > > Looking through all the callers of ttm_bo_kmap almost everyone maps the > > entire object. Only vmwgfx uses to map less than that. Also, everyone just > > immediately follows up with converting that full object map into a > > pointer. > > > > So I think what we really want here is: > > - new function > > > > int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); > > > > _vmap name since that's consistent with both dma_buf functions and > > what's usually used to implement this. Outside of the ttm world kmap > > usually just means single-page mappings using kmap() or it's iomem > > sibling io_mapping_map* so rather confusing name for a function which > > usually is just used to set up a vmap of the entire buffer. > > > > - a helper which can be used for the drm_gem_object_funcs vmap/vunmap > > functions for all ttm drivers. We should be able to make this fully > > generic because a) we now have dma_buf_map and b) drm_gem_object is > > embedded in the ttm_bo, so we can upcast for everyone who's both a ttm > > and gem driver. > > > > This is maybe a good follow-up, since it should allow us to ditch quite > > a bit of the vram helper code for this more generic stuff. I also might > > have missed some special-cases here, but from a quick look everything > > just pins the buffer to the current location and that's it. > > > > Also this obviously requires Christian's generic ttm_bo_pin rework > > first. > > > > - roll the above out to drivers. > > > > Christian/Thomas, thoughts on this? 
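To make the proposal quoted above more concrete, here is a rough, non-authoritative sketch of what such a wrapper and a generic GEM helper could look like. It assumes ttm_bo_vmap() is little more than ttm_bo_kmap() plus the conversion helper from this series, and that drm_gem_object is embedded in ttm_buffer_object as the base field. Unlike the proposed two-argument signature, the ttm_bo_kmap_obj is passed in explicitly here, since where that state should live is exactly what gets debated below.

#include <linux/kernel.h>
#include <drm/drm_gem.h>
#include <drm/ttm/ttm_bo_api.h>

/*
 * Sketch of the proposed ttm_bo_vmap(): map the whole object and hand
 * back a struct dma_buf_map. A real version still needs to keep the
 * ttm_bo_kmap_obj somewhere for the matching unmap.
 */
static int example_ttm_bo_vmap(struct ttm_buffer_object *bo,
			       struct ttm_bo_kmap_obj *kmap,
			       struct dma_buf_map *map)
{
	int ret;

	ret = ttm_bo_kmap(bo, 0, bo->num_pages, kmap);
	if (ret)
		return ret;

	ttm_kmap_obj_to_dma_buf_map(kmap, map);
	return 0;
}

/*
 * Sketch of a generic vmap helper for drivers that are both TTM and GEM
 * based: upcast from the embedded GEM object to the TTM buffer object.
 */
static int example_gem_ttm_vmap(struct drm_gem_object *gem,
				struct ttm_bo_kmap_obj *kmap,
				struct dma_buf_map *map)
{
	struct ttm_buffer_object *bo =
		container_of(gem, struct ttm_buffer_object, base);

	return example_ttm_bo_vmap(bo, kmap, map);
}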
> > I agree on the goals, but what is the immediate objective here? > > Adding ttm_bo_vmap() does not work out easily, as struct ttm_bo_kmap_obj > is a central part of the internals of TTM. struct ttm_bo_kmap_obj has > more internal state that struct dma_buf_map, so they are not easily > convertible either. What you propose seems to require a reimplementation > of the existing ttm_bo_kmap() code. That is it's own patch series. > > I'd rather go with some variant of the existing patch and add > ttm_bo_vmap() in a follow-up. ttm_bo_vmap would simply wrap what you currently open-code as ttm_bo_kmap + ttm_kmap_obj_to_dma_buf_map. Removing ttm_kmap_obj would be a much later step. Why do you think adding ttm_bo_vmap is not possible? -Daniel > Best regards > Thomas > > > > > I think for the immediate need of rolling this out for vram helpers and > > fbdev code we should be able to do this, but just postpone the driver wide > > roll-out for now. > > > > Cheers, Daniel > > > >> -Daniel > >> > >>> > >>> Christian. > >>> > >>>> -Daniel > >>>> > >>>>> Thanks, > >>>>> Christian. > >>>>> > >>>>>> Best regards > >>>>>> Thomas > >>>>>> > >>>>>>> Regards, > >>>>>>> Christian. > >>>>>>> > >>>>>>>> Best regards > >>>>>>>> Thomas > >>>>>>>> > >>>>>>>>> Regards, > >>>>>>>>> Christian. > >>>>>>>>> > >>>>>>>>>> Signed-off-by: Thomas Zimmermann > >>>>>>>>>> --- > >>>>>>>>>> include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++ > >>>>>>>>>> include/linux/dma-buf-map.h | 20 ++++++++++++++++++++ > >>>>>>>>>> 2 files changed, 44 insertions(+) > >>>>>>>>>> > >>>>>>>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h > >>>>>>>>>> index c96a25d571c8..62d89f05a801 100644 > >>>>>>>>>> --- a/include/drm/ttm/ttm_bo_api.h > >>>>>>>>>> +++ b/include/drm/ttm/ttm_bo_api.h > >>>>>>>>>> @@ -34,6 +34,7 @@ > >>>>>>>>>> #include > >>>>>>>>>> #include > >>>>>>>>>> #include > >>>>>>>>>> +#include > >>>>>>>>>> #include > >>>>>>>>>> #include > >>>>>>>>>> #include > >>>>>>>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct > >>>>>>>>>> ttm_bo_kmap_obj *map, > >>>>>>>>>> return map->virtual; > >>>>>>>>>> } > >>>>>>>>>> +/** > >>>>>>>>>> + * ttm_kmap_obj_to_dma_buf_map > >>>>>>>>>> + * > >>>>>>>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap. > >>>>>>>>>> + * @map: Returns the mapping as struct dma_buf_map > >>>>>>>>>> + * > >>>>>>>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory > >>>>>>>>>> + * is not mapped, the returned mapping is initialized to NULL. > >>>>>>>>>> + */ > >>>>>>>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj > >>>>>>>>>> *kmap, > >>>>>>>>>> + struct dma_buf_map *map) > >>>>>>>>>> +{ > >>>>>>>>>> + bool is_iomem; > >>>>>>>>>> + void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem); > >>>>>>>>>> + > >>>>>>>>>> + if (!vaddr) > >>>>>>>>>> + dma_buf_map_clear(map); > >>>>>>>>>> + else if (is_iomem) > >>>>>>>>>> + dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr); > >>>>>>>>>> + else > >>>>>>>>>> + dma_buf_map_set_vaddr(map, vaddr); > >>>>>>>>>> +} > >>>>>>>>>> + > >>>>>>>>>> /** > >>>>>>>>>> * ttm_bo_kmap > >>>>>>>>>> * > >>>>>>>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h > >>>>>>>>>> index fd1aba545fdf..2e8bbecb5091 100644 > >>>>>>>>>> --- a/include/linux/dma-buf-map.h > >>>>>>>>>> +++ b/include/linux/dma-buf-map.h > >>>>>>>>>> @@ -45,6 +45,12 @@ > >>>>>>>>>> * > >>>>>>>>>> * dma_buf_map_set_vaddr(&map. 
0xdeadbeaf); > >>>>>>>>>> * > >>>>>>>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem(). > >>>>>>>>>> + * > >>>>>>>>>> + * .. code-block:: c > >>>>>>>>>> + * > >>>>>>>>>> + * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); > >>>>>>>>>> + * > >>>>>>>>>> * Test if a mapping is valid with either dma_buf_map_is_set() or > >>>>>>>>>> * dma_buf_map_is_null(). > >>>>>>>>>> * > >>>>>>>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct > >>>>>>>>>> dma_buf_map *map, void *vaddr) > >>>>>>>>>> map->is_iomem = false; > >>>>>>>>>> } > >>>>>>>>>> +/** > >>>>>>>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to > >>>>>>>>>> an address in I/O memory > >>>>>>>>>> + * @map: The dma-buf mapping structure > >>>>>>>>>> + * @vaddr_iomem: An I/O-memory address > >>>>>>>>>> + * > >>>>>>>>>> + * Sets the address and the I/O-memory flag. > >>>>>>>>>> + */ > >>>>>>>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map, > >>>>>>>>>> + void __iomem *vaddr_iomem) > >>>>>>>>>> +{ > >>>>>>>>>> + map->vaddr_iomem = vaddr_iomem; > >>>>>>>>>> + map->is_iomem = true; > >>>>>>>>>> +} > >>>>>>>>>> + > >>>>>>>>>> /** > >>>>>>>>>> * dma_buf_map_is_equal - Compares two dma-buf mapping structures > >>>>>>>>>> for equality > >>>>>>>>>> * @lhs: The dma-buf mapping structure > >>>>>>>>> _______________________________________________ > >>>>>>>>> dri-devel mailing list > >>>>>>>>> dri-devel at lists.freedesktop.org > >>>>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Fdri-devel&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=HdHOA%2F1VcIX%2F7YtfYTiAqYEvw7Ag%2FS%2BxS5VwJKOv5y0%3D&reserved=0 > >>>>>>>> _______________________________________________ > >>>>>>>> amd-gfx mailing list > >>>>>>>> amd-gfx at lists.freedesktop.org > >>>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Famd-gfx&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=H%2B5HKCsTrksRV2EyEiFGSTyS79jsWCmJimSMoJYusx8%3D&reserved=0 > >>>>>>> _______________________________________________ > >>>>>>> dri-devel mailing list > >>>>>>> dri-devel at lists.freedesktop.org > >>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Fdri-devel&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=HdHOA%2F1VcIX%2F7YtfYTiAqYEvw7Ag%2FS%2BxS5VwJKOv5y0%3D&reserved=0 > >>>>>>> > >>>>>> _______________________________________________ > >>>>>> amd-gfx mailing list > >>>>>> amd-gfx at lists.freedesktop.org > >>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Famd-gfx&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=H%2B5HKCsTrksRV2EyEiFGSTyS79jsWCmJimSMoJYusx8%3D&reserved=0 > >>> > >> > >> > >> -- > >> Daniel Vetter > >> Software Engineer, Intel Corporation > >> http://blog.ffwll.ch > > > > -- > Thomas Zimmermann > Graphics Driver Developer > SUSE Software Solutions Germany GmbH > Maxfeldstr. 
5, 90409 N?rnberg, Germany > (HRB 36809, AG N?rnberg) > Gesch?ftsf?hrer: Felix Imend?rffer > -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch From tzimmermann at suse.de Wed Oct 7 13:20:27 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Wed, 7 Oct 2020 15:20:27 +0200 Subject: [Spice-devel] [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion In-Reply-To: References: <20200929151437.19717-1-tzimmermann@suse.de> <20200929151437.19717-3-tzimmermann@suse.de> <8fad0114-064a-4ed5-c21d-d1b4294de0a1@amd.com> <2614314a-81f7-4722-c400-68d90e48e09a@suse.de> <8a84f62b-33f3-f44c-52af-c859a0e0d1fb@gmail.com> <07972ada-9135-3743-a86b-487f610c509f@suse.de> <20200930094712.GW438822@phenom.ffwll.local> <8479d0aa-3826-4f37-0109-55daca515793@amd.com> <20201002095830.GH438822@phenom.ffwll.local> <5bf40546-8da9-1649-22da-a982f1e8d9c3@suse.de> Message-ID: <26ac0446-9e16-1ca1-7407-3d0cd7125e0e@suse.de> Hi Am 07.10.20 um 15:10 schrieb Daniel Vetter: > On Wed, Oct 7, 2020 at 2:57 PM Thomas Zimmermann wrote: >> >> Hi >> >> Am 02.10.20 um 11:58 schrieb Daniel Vetter: >>> On Wed, Sep 30, 2020 at 02:51:46PM +0200, Daniel Vetter wrote: >>>> On Wed, Sep 30, 2020 at 2:34 PM Christian K?nig >>>> wrote: >>>>> >>>>> Am 30.09.20 um 11:47 schrieb Daniel Vetter: >>>>>> On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian K?nig wrote: >>>>>>> Am 30.09.20 um 10:19 schrieb Thomas Zimmermann: >>>>>>>> Hi >>>>>>>> >>>>>>>> Am 30.09.20 um 10:05 schrieb Christian K?nig: >>>>>>>>> Am 29.09.20 um 19:49 schrieb Thomas Zimmermann: >>>>>>>>>> Hi Christian >>>>>>>>>> >>>>>>>>>> Am 29.09.20 um 17:35 schrieb Christian K?nig: >>>>>>>>>>> Am 29.09.20 um 17:14 schrieb Thomas Zimmermann: >>>>>>>>>>>> The new helper ttm_kmap_obj_to_dma_buf() extracts address and location >>>>>>>>>>>> from and instance of TTM's kmap_obj and initializes struct dma_buf_map >>>>>>>>>>>> with these values. Helpful for TTM-based drivers. >>>>>>>>>>> We could completely drop that if we use the same structure inside TTM as >>>>>>>>>>> well. >>>>>>>>>>> >>>>>>>>>>> Additional to that which driver is going to use this? >>>>>>>>>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will >>>>>>>>>> retrieve the pointer via this function. >>>>>>>>>> >>>>>>>>>> I do want to see all that being more tightly integrated into TTM, but >>>>>>>>>> not in this series. This one is about fixing the bochs-on-sparc64 >>>>>>>>>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list. >>>>>>>>> I should have asked which driver you try to fix here :) >>>>>>>>> >>>>>>>>> In this case just keep the function inside bochs and only fix it there. >>>>>>>>> >>>>>>>>> All other drivers can be fixed when we generally pump this through TTM. >>>>>>>> Did you take a look at patch 3? This function will be used by VRAM >>>>>>>> helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we >>>>>>>> have to duplicate the functionality in each if these drivers. Bochs >>>>>>>> itself uses VRAM helpers and doesn't touch the function directly. >>>>>>> Ah, ok can we have that then only in the VRAM helpers? >>>>>>> >>>>>>> Alternative you could go ahead and use dma_buf_map in ttm_bo_kmap_obj >>>>>>> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK. >>>>>>> >>>>>>> What I want to avoid is to have another conversion function in TTM because >>>>>>> what happens here is that we already convert from ttm_bus_placement to >>>>>>> ttm_bo_kmap_obj and then to dma_buf_map. 
>>>>>> Hm I'm not really seeing how that helps with a gradual conversion of >>>>>> everything over to dma_buf_map and assorted helpers for access? There's >>>>>> too many places in ttm drivers where is_iomem and related stuff is used to >>>>>> be able to convert it all in one go. An intermediate state with a bunch of >>>>>> conversions seems fairly unavoidable to me. >>>>> >>>>> Fair enough. I would just have started bottom up and not top down. >>>>> >>>>> Anyway feel free to go ahead with this approach as long as we can remove >>>>> the new function again when we clean that stuff up for good. >>>> >>>> Yeah I guess bottom up would make more sense as a refactoring. But the >>>> main motivation to land this here is to fix the __mmio vs normal >>>> memory confusion in the fbdev emulation helpers for sparc (and >>>> anything else that needs this). Hence the top down approach for >>>> rolling this out. >>> >>> Ok I started reviewing this a bit more in-depth, and I think this is a bit >>> too much of a de-tour. >>> >>> Looking through all the callers of ttm_bo_kmap almost everyone maps the >>> entire object. Only vmwgfx uses to map less than that. Also, everyone just >>> immediately follows up with converting that full object map into a >>> pointer. >>> >>> So I think what we really want here is: >>> - new function >>> >>> int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); >>> >>> _vmap name since that's consistent with both dma_buf functions and >>> what's usually used to implement this. Outside of the ttm world kmap >>> usually just means single-page mappings using kmap() or it's iomem >>> sibling io_mapping_map* so rather confusing name for a function which >>> usually is just used to set up a vmap of the entire buffer. >>> >>> - a helper which can be used for the drm_gem_object_funcs vmap/vunmap >>> functions for all ttm drivers. We should be able to make this fully >>> generic because a) we now have dma_buf_map and b) drm_gem_object is >>> embedded in the ttm_bo, so we can upcast for everyone who's both a ttm >>> and gem driver. >>> >>> This is maybe a good follow-up, since it should allow us to ditch quite >>> a bit of the vram helper code for this more generic stuff. I also might >>> have missed some special-cases here, but from a quick look everything >>> just pins the buffer to the current location and that's it. >>> >>> Also this obviously requires Christian's generic ttm_bo_pin rework >>> first. >>> >>> - roll the above out to drivers. >>> >>> Christian/Thomas, thoughts on this? >> >> I agree on the goals, but what is the immediate objective here? >> >> Adding ttm_bo_vmap() does not work out easily, as struct ttm_bo_kmap_obj >> is a central part of the internals of TTM. struct ttm_bo_kmap_obj has >> more internal state that struct dma_buf_map, so they are not easily >> convertible either. What you propose seems to require a reimplementation >> of the existing ttm_bo_kmap() code. That is it's own patch series. >> >> I'd rather go with some variant of the existing patch and add >> ttm_bo_vmap() in a follow-up. > > ttm_bo_vmap would simply wrap what you currently open-code as > ttm_bo_kmap + ttm_kmap_obj_to_dma_buf_map. Removing ttm_kmap_obj would > be a much later step. Why do you think adding ttm_bo_vmap is not > possible? The calls to ttm_bo_kmap/_kunmap() require an instance of struct ttm_bo_kmap_obj that is stored in each driver's private bo structure (e.g., struct drm_gem_vram_object, struct radeon_bo, etc). 
When I made patch 3, I flirted with the idea of unifying the driver's _vmap code in a shared helper, but I couldn't find a simple way of doing it. That's why it's open-coded in the first place. Best regards Thomas > -Daniel > > >> Best regards >> Thomas >> >>> >>> I think for the immediate need of rolling this out for vram helpers and >>> fbdev code we should be able to do this, but just postpone the driver wide >>> roll-out for now. >>> >>> Cheers, Daniel >>> >>>> -Daniel >>>> >>>>> >>>>> Christian. >>>>> >>>>>> -Daniel >>>>>> >>>>>>> Thanks, >>>>>>> Christian. >>>>>>> >>>>>>>> Best regards >>>>>>>> Thomas >>>>>>>> >>>>>>>>> Regards, >>>>>>>>> Christian. >>>>>>>>> >>>>>>>>>> Best regards >>>>>>>>>> Thomas >>>>>>>>>> >>>>>>>>>>> Regards, >>>>>>>>>>> Christian. >>>>>>>>>>> >>>>>>>>>>>> Signed-off-by: Thomas Zimmermann >>>>>>>>>>>> --- >>>>>>>>>>>> include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++ >>>>>>>>>>>> include/linux/dma-buf-map.h | 20 ++++++++++++++++++++ >>>>>>>>>>>> 2 files changed, 44 insertions(+) >>>>>>>>>>>> >>>>>>>>>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h >>>>>>>>>>>> index c96a25d571c8..62d89f05a801 100644 >>>>>>>>>>>> --- a/include/drm/ttm/ttm_bo_api.h >>>>>>>>>>>> +++ b/include/drm/ttm/ttm_bo_api.h >>>>>>>>>>>> @@ -34,6 +34,7 @@ >>>>>>>>>>>> #include >>>>>>>>>>>> #include >>>>>>>>>>>> #include >>>>>>>>>>>> +#include >>>>>>>>>>>> #include >>>>>>>>>>>> #include >>>>>>>>>>>> #include >>>>>>>>>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct >>>>>>>>>>>> ttm_bo_kmap_obj *map, >>>>>>>>>>>> return map->virtual; >>>>>>>>>>>> } >>>>>>>>>>>> +/** >>>>>>>>>>>> + * ttm_kmap_obj_to_dma_buf_map >>>>>>>>>>>> + * >>>>>>>>>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap. >>>>>>>>>>>> + * @map: Returns the mapping as struct dma_buf_map >>>>>>>>>>>> + * >>>>>>>>>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory >>>>>>>>>>>> + * is not mapped, the returned mapping is initialized to NULL. >>>>>>>>>>>> + */ >>>>>>>>>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj >>>>>>>>>>>> *kmap, >>>>>>>>>>>> + struct dma_buf_map *map) >>>>>>>>>>>> +{ >>>>>>>>>>>> + bool is_iomem; >>>>>>>>>>>> + void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem); >>>>>>>>>>>> + >>>>>>>>>>>> + if (!vaddr) >>>>>>>>>>>> + dma_buf_map_clear(map); >>>>>>>>>>>> + else if (is_iomem) >>>>>>>>>>>> + dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr); >>>>>>>>>>>> + else >>>>>>>>>>>> + dma_buf_map_set_vaddr(map, vaddr); >>>>>>>>>>>> +} >>>>>>>>>>>> + >>>>>>>>>>>> /** >>>>>>>>>>>> * ttm_bo_kmap >>>>>>>>>>>> * >>>>>>>>>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h >>>>>>>>>>>> index fd1aba545fdf..2e8bbecb5091 100644 >>>>>>>>>>>> --- a/include/linux/dma-buf-map.h >>>>>>>>>>>> +++ b/include/linux/dma-buf-map.h >>>>>>>>>>>> @@ -45,6 +45,12 @@ >>>>>>>>>>>> * >>>>>>>>>>>> * dma_buf_map_set_vaddr(&map. 0xdeadbeaf); >>>>>>>>>>>> * >>>>>>>>>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem(). >>>>>>>>>>>> + * >>>>>>>>>>>> + * .. code-block:: c >>>>>>>>>>>> + * >>>>>>>>>>>> + * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); >>>>>>>>>>>> + * >>>>>>>>>>>> * Test if a mapping is valid with either dma_buf_map_is_set() or >>>>>>>>>>>> * dma_buf_map_is_null(). 
>>>>>>>>>>>> * >>>>>>>>>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct >>>>>>>>>>>> dma_buf_map *map, void *vaddr) >>>>>>>>>>>> map->is_iomem = false; >>>>>>>>>>>> } >>>>>>>>>>>> +/** >>>>>>>>>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to >>>>>>>>>>>> an address in I/O memory >>>>>>>>>>>> + * @map: The dma-buf mapping structure >>>>>>>>>>>> + * @vaddr_iomem: An I/O-memory address >>>>>>>>>>>> + * >>>>>>>>>>>> + * Sets the address and the I/O-memory flag. >>>>>>>>>>>> + */ >>>>>>>>>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map, >>>>>>>>>>>> + void __iomem *vaddr_iomem) >>>>>>>>>>>> +{ >>>>>>>>>>>> + map->vaddr_iomem = vaddr_iomem; >>>>>>>>>>>> + map->is_iomem = true; >>>>>>>>>>>> +} >>>>>>>>>>>> + >>>>>>>>>>>> /** >>>>>>>>>>>> * dma_buf_map_is_equal - Compares two dma-buf mapping structures >>>>>>>>>>>> for equality >>>>>>>>>>>> * @lhs: The dma-buf mapping structure >>>>>>>>>>> _______________________________________________ >>>>>>>>>>> dri-devel mailing list >>>>>>>>>>> dri-devel at lists.freedesktop.org >>>>>>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Fdri-devel&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=HdHOA%2F1VcIX%2F7YtfYTiAqYEvw7Ag%2FS%2BxS5VwJKOv5y0%3D&reserved=0 >>>>>>>>>> _______________________________________________ >>>>>>>>>> amd-gfx mailing list >>>>>>>>>> amd-gfx at lists.freedesktop.org >>>>>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Famd-gfx&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=H%2B5HKCsTrksRV2EyEiFGSTyS79jsWCmJimSMoJYusx8%3D&reserved=0 >>>>>>>>> _______________________________________________ >>>>>>>>> dri-devel mailing list >>>>>>>>> dri-devel at lists.freedesktop.org >>>>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Fdri-devel&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=HdHOA%2F1VcIX%2F7YtfYTiAqYEvw7Ag%2FS%2BxS5VwJKOv5y0%3D&reserved=0 >>>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> amd-gfx mailing list >>>>>>>> amd-gfx at lists.freedesktop.org >>>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Famd-gfx&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=H%2B5HKCsTrksRV2EyEiFGSTyS79jsWCmJimSMoJYusx8%3D&reserved=0 >>>>> >>>> >>>> >>>> -- >>>> Daniel Vetter >>>> Software Engineer, Intel Corporation >>>> http://blog.ffwll.ch >>> >> >> -- >> Thomas Zimmermann >> Graphics Driver Developer >> SUSE Software Solutions Germany GmbH >> Maxfeldstr. 5, 90409 N?rnberg, Germany >> (HRB 36809, AG N?rnberg) >> Gesch?ftsf?hrer: Felix Imend?rffer >> > > -- Thomas Zimmermann Graphics Driver Developer SUSE Software Solutions Germany GmbH Maxfeldstr. 5, 90409 N?rnberg, Germany (HRB 36809, AG N?rnberg) Gesch?ftsf?hrer: Felix Imend?rffer -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 516 bytes Desc: OpenPGP digital signature URL: From christian.koenig at amd.com Wed Oct 7 13:24:44 2020 From: christian.koenig at amd.com (=?UTF-8?Q?Christian_K=c3=b6nig?=) Date: Wed, 7 Oct 2020 15:24:44 +0200 Subject: [Spice-devel] [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion In-Reply-To: <26ac0446-9e16-1ca1-7407-3d0cd7125e0e@suse.de> References: <20200929151437.19717-1-tzimmermann@suse.de> <20200929151437.19717-3-tzimmermann@suse.de> <8fad0114-064a-4ed5-c21d-d1b4294de0a1@amd.com> <2614314a-81f7-4722-c400-68d90e48e09a@suse.de> <8a84f62b-33f3-f44c-52af-c859a0e0d1fb@gmail.com> <07972ada-9135-3743-a86b-487f610c509f@suse.de> <20200930094712.GW438822@phenom.ffwll.local> <8479d0aa-3826-4f37-0109-55daca515793@amd.com> <20201002095830.GH438822@phenom.ffwll.local> <5bf40546-8da9-1649-22da-a982f1e8d9c3@suse.de> <26ac0446-9e16-1ca1-7407-3d0cd7125e0e@suse.de> Message-ID: <09d634d0-f20a-e9a9-d8d2-b50e8aaf156f@amd.com> Am 07.10.20 um 15:20 schrieb Thomas Zimmermann: > Hi > > Am 07.10.20 um 15:10 schrieb Daniel Vetter: >> On Wed, Oct 7, 2020 at 2:57 PM Thomas Zimmermann wrote: >>> Hi >>> >>> Am 02.10.20 um 11:58 schrieb Daniel Vetter: >>>> On Wed, Sep 30, 2020 at 02:51:46PM +0200, Daniel Vetter wrote: >>>>> On Wed, Sep 30, 2020 at 2:34 PM Christian K?nig >>>>> wrote: >>>>>> Am 30.09.20 um 11:47 schrieb Daniel Vetter: >>>>>>> On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian K?nig wrote: >>>>>>>> Am 30.09.20 um 10:19 schrieb Thomas Zimmermann: >>>>>>>>> Hi >>>>>>>>> >>>>>>>>> Am 30.09.20 um 10:05 schrieb Christian K?nig: >>>>>>>>>> Am 29.09.20 um 19:49 schrieb Thomas Zimmermann: >>>>>>>>>>> Hi Christian >>>>>>>>>>> >>>>>>>>>>> Am 29.09.20 um 17:35 schrieb Christian K?nig: >>>>>>>>>>>> Am 29.09.20 um 17:14 schrieb Thomas Zimmermann: >>>>>>>>>>>>> The new helper ttm_kmap_obj_to_dma_buf() extracts address and location >>>>>>>>>>>>> from and instance of TTM's kmap_obj and initializes struct dma_buf_map >>>>>>>>>>>>> with these values. Helpful for TTM-based drivers. >>>>>>>>>>>> We could completely drop that if we use the same structure inside TTM as >>>>>>>>>>>> well. >>>>>>>>>>>> >>>>>>>>>>>> Additional to that which driver is going to use this? >>>>>>>>>>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will >>>>>>>>>>> retrieve the pointer via this function. >>>>>>>>>>> >>>>>>>>>>> I do want to see all that being more tightly integrated into TTM, but >>>>>>>>>>> not in this series. This one is about fixing the bochs-on-sparc64 >>>>>>>>>>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list. >>>>>>>>>> I should have asked which driver you try to fix here :) >>>>>>>>>> >>>>>>>>>> In this case just keep the function inside bochs and only fix it there. >>>>>>>>>> >>>>>>>>>> All other drivers can be fixed when we generally pump this through TTM. >>>>>>>>> Did you take a look at patch 3? This function will be used by VRAM >>>>>>>>> helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we >>>>>>>>> have to duplicate the functionality in each if these drivers. Bochs >>>>>>>>> itself uses VRAM helpers and doesn't touch the function directly. >>>>>>>> Ah, ok can we have that then only in the VRAM helpers? >>>>>>>> >>>>>>>> Alternative you could go ahead and use dma_buf_map in ttm_bo_kmap_obj >>>>>>>> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK. 
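Purely to illustrate that alternative, the hypothetical layout below embeds a struct dma_buf_map directly in TTM's kmap bookkeeping, so the I/O-memory information no longer needs to be encoded via TTM_BO_MAP_IOMEM_MASK. The struct name and fields are invented for this sketch and are not part of any posted patch.

#include <linux/dma-buf-map.h>
#include <drm/ttm/ttm_bo_api.h>

/*
 * Hypothetical sketch, not from the series: let the kmap state carry a
 * dma_buf_map instead of hiding the iomem bit in the map type.
 */
struct example_ttm_kmap_state {
	struct dma_buf_map map;		/* vaddr or vaddr_iomem, plus is_iomem */
	struct ttm_buffer_object *bo;	/* buffer object this mapping belongs to */
	/* plus whatever other bookkeeping TTM needs (map type, page, ...) */
};

With something along these lines, the is_iomem out-parameter of ttm_kmap_obj_virtual() could eventually go away, which seems to be the longer-term direction discussed in this thread.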
>>>>>>>> >>>>>>>> What I want to avoid is to have another conversion function in TTM because >>>>>>>> what happens here is that we already convert from ttm_bus_placement to >>>>>>>> ttm_bo_kmap_obj and then to dma_buf_map. >>>>>>> Hm I'm not really seeing how that helps with a gradual conversion of >>>>>>> everything over to dma_buf_map and assorted helpers for access? There's >>>>>>> too many places in ttm drivers where is_iomem and related stuff is used to >>>>>>> be able to convert it all in one go. An intermediate state with a bunch of >>>>>>> conversions seems fairly unavoidable to me. >>>>>> Fair enough. I would just have started bottom up and not top down. >>>>>> >>>>>> Anyway feel free to go ahead with this approach as long as we can remove >>>>>> the new function again when we clean that stuff up for good. >>>>> Yeah I guess bottom up would make more sense as a refactoring. But the >>>>> main motivation to land this here is to fix the __mmio vs normal >>>>> memory confusion in the fbdev emulation helpers for sparc (and >>>>> anything else that needs this). Hence the top down approach for >>>>> rolling this out. >>>> Ok I started reviewing this a bit more in-depth, and I think this is a bit >>>> too much of a de-tour. >>>> >>>> Looking through all the callers of ttm_bo_kmap almost everyone maps the >>>> entire object. Only vmwgfx uses to map less than that. Also, everyone just >>>> immediately follows up with converting that full object map into a >>>> pointer. >>>> >>>> So I think what we really want here is: >>>> - new function >>>> >>>> int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); >>>> >>>> _vmap name since that's consistent with both dma_buf functions and >>>> what's usually used to implement this. Outside of the ttm world kmap >>>> usually just means single-page mappings using kmap() or it's iomem >>>> sibling io_mapping_map* so rather confusing name for a function which >>>> usually is just used to set up a vmap of the entire buffer. >>>> >>>> - a helper which can be used for the drm_gem_object_funcs vmap/vunmap >>>> functions for all ttm drivers. We should be able to make this fully >>>> generic because a) we now have dma_buf_map and b) drm_gem_object is >>>> embedded in the ttm_bo, so we can upcast for everyone who's both a ttm >>>> and gem driver. >>>> >>>> This is maybe a good follow-up, since it should allow us to ditch quite >>>> a bit of the vram helper code for this more generic stuff. I also might >>>> have missed some special-cases here, but from a quick look everything >>>> just pins the buffer to the current location and that's it. >>>> >>>> Also this obviously requires Christian's generic ttm_bo_pin rework >>>> first. >>>> >>>> - roll the above out to drivers. >>>> >>>> Christian/Thomas, thoughts on this? >>> I agree on the goals, but what is the immediate objective here? >>> >>> Adding ttm_bo_vmap() does not work out easily, as struct ttm_bo_kmap_obj >>> is a central part of the internals of TTM. struct ttm_bo_kmap_obj has >>> more internal state that struct dma_buf_map, so they are not easily >>> convertible either. What you propose seems to require a reimplementation >>> of the existing ttm_bo_kmap() code. That is it's own patch series. >>> >>> I'd rather go with some variant of the existing patch and add >>> ttm_bo_vmap() in a follow-up. >> ttm_bo_vmap would simply wrap what you currently open-code as >> ttm_bo_kmap + ttm_kmap_obj_to_dma_buf_map. Removing ttm_kmap_obj would >> be a much later step. 
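As a rough sketch (not part of the posted series), the wrapper idea under discussion could look something like the following, reusing ttm_bo_kmap() plus the ttm_kmap_obj_to_dma_buf_map() helper from this patch set. Note that this naive form still leaves the struct ttm_bo_kmap_obj in the caller's hands for the later kunmap, which is the storage problem that comes up below; the two-argument ttm_bo_vmap(bo, map) signature proposed in this thread would have to keep that state inside TTM instead.

static int ttm_bo_vmap(struct ttm_buffer_object *bo,
                       struct ttm_bo_kmap_obj *kmap,
                       struct dma_buf_map *map)
{
        int ret;

        /* map the whole object, as nearly all current callers do */
        ret = ttm_bo_kmap(bo, 0, bo->num_pages, kmap);
        if (ret)
                return ret;

        /* convert TTM's kmap_obj into a location-aware dma_buf_map */
        ttm_kmap_obj_to_dma_buf_map(kmap, map);
        if (dma_buf_map_is_null(map)) {
                ttm_bo_kunmap(kmap);
                return -ENOMEM;
        }

        return 0;
}

static void ttm_bo_vunmap(struct ttm_buffer_object *bo,
                          struct ttm_bo_kmap_obj *kmap,
                          struct dma_buf_map *map)
{
        /* drop the mapping and mark the dma_buf_map as unset */
        ttm_bo_kunmap(kmap);
        dma_buf_map_clear(map);
}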
Why do you think adding ttm_bo_vmap is not >> possible? > The calls to ttm_bo_kmap/_kunmap() require an instance of struct > ttm_bo_kmap_obj that is stored in each driver's private bo structure > (e.g., struct drm_gem_vram_object, struct radeon_bo, etc). When I made > patch 3, I flirted with the idea of unifying the driver's _vmap code in > a shared helper, but I couldn't find a simple way of doing it. That's > why it's open-coded in the first place. Well that makes kind of sense. Keep in mind that ttm_bo_kmap is currently way to complicated. Christian. > > Best regards > Thomas > >> -Daniel >> >> >>> Best regards >>> Thomas >>> >>>> I think for the immediate need of rolling this out for vram helpers and >>>> fbdev code we should be able to do this, but just postpone the driver wide >>>> roll-out for now. >>>> >>>> Cheers, Daniel >>>> >>>>> -Daniel >>>>> >>>>>> Christian. >>>>>> >>>>>>> -Daniel >>>>>>> >>>>>>>> Thanks, >>>>>>>> Christian. >>>>>>>> >>>>>>>>> Best regards >>>>>>>>> Thomas >>>>>>>>> >>>>>>>>>> Regards, >>>>>>>>>> Christian. >>>>>>>>>> >>>>>>>>>>> Best regards >>>>>>>>>>> Thomas >>>>>>>>>>> >>>>>>>>>>>> Regards, >>>>>>>>>>>> Christian. >>>>>>>>>>>> >>>>>>>>>>>>> Signed-off-by: Thomas Zimmermann >>>>>>>>>>>>> --- >>>>>>>>>>>>> include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++ >>>>>>>>>>>>> include/linux/dma-buf-map.h | 20 ++++++++++++++++++++ >>>>>>>>>>>>> 2 files changed, 44 insertions(+) >>>>>>>>>>>>> >>>>>>>>>>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h >>>>>>>>>>>>> index c96a25d571c8..62d89f05a801 100644 >>>>>>>>>>>>> --- a/include/drm/ttm/ttm_bo_api.h >>>>>>>>>>>>> +++ b/include/drm/ttm/ttm_bo_api.h >>>>>>>>>>>>> @@ -34,6 +34,7 @@ >>>>>>>>>>>>> #include >>>>>>>>>>>>> #include >>>>>>>>>>>>> #include >>>>>>>>>>>>> +#include >>>>>>>>>>>>> #include >>>>>>>>>>>>> #include >>>>>>>>>>>>> #include >>>>>>>>>>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct >>>>>>>>>>>>> ttm_bo_kmap_obj *map, >>>>>>>>>>>>> return map->virtual; >>>>>>>>>>>>> } >>>>>>>>>>>>> +/** >>>>>>>>>>>>> + * ttm_kmap_obj_to_dma_buf_map >>>>>>>>>>>>> + * >>>>>>>>>>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap. >>>>>>>>>>>>> + * @map: Returns the mapping as struct dma_buf_map >>>>>>>>>>>>> + * >>>>>>>>>>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory >>>>>>>>>>>>> + * is not mapped, the returned mapping is initialized to NULL. >>>>>>>>>>>>> + */ >>>>>>>>>>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj >>>>>>>>>>>>> *kmap, >>>>>>>>>>>>> + struct dma_buf_map *map) >>>>>>>>>>>>> +{ >>>>>>>>>>>>> + bool is_iomem; >>>>>>>>>>>>> + void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem); >>>>>>>>>>>>> + >>>>>>>>>>>>> + if (!vaddr) >>>>>>>>>>>>> + dma_buf_map_clear(map); >>>>>>>>>>>>> + else if (is_iomem) >>>>>>>>>>>>> + dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr); >>>>>>>>>>>>> + else >>>>>>>>>>>>> + dma_buf_map_set_vaddr(map, vaddr); >>>>>>>>>>>>> +} >>>>>>>>>>>>> + >>>>>>>>>>>>> /** >>>>>>>>>>>>> * ttm_bo_kmap >>>>>>>>>>>>> * >>>>>>>>>>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h >>>>>>>>>>>>> index fd1aba545fdf..2e8bbecb5091 100644 >>>>>>>>>>>>> --- a/include/linux/dma-buf-map.h >>>>>>>>>>>>> +++ b/include/linux/dma-buf-map.h >>>>>>>>>>>>> @@ -45,6 +45,12 @@ >>>>>>>>>>>>> * >>>>>>>>>>>>> * dma_buf_map_set_vaddr(&map. 0xdeadbeaf); >>>>>>>>>>>>> * >>>>>>>>>>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem(). 
>>>>>>>>>>>>> + * >>>>>>>>>>>>> + * .. code-block:: c >>>>>>>>>>>>> + * >>>>>>>>>>>>> + * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); >>>>>>>>>>>>> + * >>>>>>>>>>>>> * Test if a mapping is valid with either dma_buf_map_is_set() or >>>>>>>>>>>>> * dma_buf_map_is_null(). >>>>>>>>>>>>> * >>>>>>>>>>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct >>>>>>>>>>>>> dma_buf_map *map, void *vaddr) >>>>>>>>>>>>> map->is_iomem = false; >>>>>>>>>>>>> } >>>>>>>>>>>>> +/** >>>>>>>>>>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to >>>>>>>>>>>>> an address in I/O memory >>>>>>>>>>>>> + * @map: The dma-buf mapping structure >>>>>>>>>>>>> + * @vaddr_iomem: An I/O-memory address >>>>>>>>>>>>> + * >>>>>>>>>>>>> + * Sets the address and the I/O-memory flag. >>>>>>>>>>>>> + */ >>>>>>>>>>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map, >>>>>>>>>>>>> + void __iomem *vaddr_iomem) >>>>>>>>>>>>> +{ >>>>>>>>>>>>> + map->vaddr_iomem = vaddr_iomem; >>>>>>>>>>>>> + map->is_iomem = true; >>>>>>>>>>>>> +} >>>>>>>>>>>>> + >>>>>>>>>>>>> /** >>>>>>>>>>>>> * dma_buf_map_is_equal - Compares two dma-buf mapping structures >>>>>>>>>>>>> for equality >>>>>>>>>>>>> * @lhs: The dma-buf mapping structure >>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>> dri-devel mailing list >>>>>>>>>>>> dri-devel at lists.freedesktop.org >>>>>>>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Fdri-devel&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=HdHOA%2F1VcIX%2F7YtfYTiAqYEvw7Ag%2FS%2BxS5VwJKOv5y0%3D&reserved=0 >>>>>>>>>>> _______________________________________________ >>>>>>>>>>> amd-gfx mailing list >>>>>>>>>>> amd-gfx at lists.freedesktop.org >>>>>>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Famd-gfx&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=H%2B5HKCsTrksRV2EyEiFGSTyS79jsWCmJimSMoJYusx8%3D&reserved=0 >>>>>>>>>> _______________________________________________ >>>>>>>>>> dri-devel mailing list >>>>>>>>>> dri-devel at lists.freedesktop.org >>>>>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Fdri-devel&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=HdHOA%2F1VcIX%2F7YtfYTiAqYEvw7Ag%2FS%2BxS5VwJKOv5y0%3D&reserved=0 >>>>>>>>>> >>>>>>>>> _______________________________________________ >>>>>>>>> amd-gfx mailing list >>>>>>>>> amd-gfx at lists.freedesktop.org >>>>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Famd-gfx&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=H%2B5HKCsTrksRV2EyEiFGSTyS79jsWCmJimSMoJYusx8%3D&reserved=0 >>>>> >>>>> -- >>>>> Daniel Vetter >>>>> Software Engineer, Intel Corporation >>>>> http://blog.ffwll.ch >>> -- >>> Thomas Zimmermann >>> Graphics Driver Developer >>> SUSE Software Solutions Germany GmbH >>> Maxfeldstr. 
5, 90409 N?rnberg, Germany >>> (HRB 36809, AG N?rnberg) >>> Gesch?ftsf?hrer: Felix Imend?rffer >>> >> From daniel at ffwll.ch Wed Oct 7 14:30:56 2020 From: daniel at ffwll.ch (Daniel Vetter) Date: Wed, 7 Oct 2020 16:30:56 +0200 Subject: [Spice-devel] [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion In-Reply-To: <09d634d0-f20a-e9a9-d8d2-b50e8aaf156f@amd.com> References: <20200929151437.19717-1-tzimmermann@suse.de> <20200929151437.19717-3-tzimmermann@suse.de> <8fad0114-064a-4ed5-c21d-d1b4294de0a1@amd.com> <2614314a-81f7-4722-c400-68d90e48e09a@suse.de> <8a84f62b-33f3-f44c-52af-c859a0e0d1fb@gmail.com> <07972ada-9135-3743-a86b-487f610c509f@suse.de> <20200930094712.GW438822@phenom.ffwll.local> <8479d0aa-3826-4f37-0109-55daca515793@amd.com> <20201002095830.GH438822@phenom.ffwll.local> <5bf40546-8da9-1649-22da-a982f1e8d9c3@suse.de> <26ac0446-9e16-1ca1-7407-3d0cd7125e0e@suse.de> <09d634d0-f20a-e9a9-d8d2-b50e8aaf156f@amd.com> Message-ID: On Wed, Oct 7, 2020 at 3:25 PM Christian K?nig wrote: > > Am 07.10.20 um 15:20 schrieb Thomas Zimmermann: > > Hi > > > > Am 07.10.20 um 15:10 schrieb Daniel Vetter: > >> On Wed, Oct 7, 2020 at 2:57 PM Thomas Zimmermann wrote: > >>> Hi > >>> > >>> Am 02.10.20 um 11:58 schrieb Daniel Vetter: > >>>> On Wed, Sep 30, 2020 at 02:51:46PM +0200, Daniel Vetter wrote: > >>>>> On Wed, Sep 30, 2020 at 2:34 PM Christian K?nig > >>>>> wrote: > >>>>>> Am 30.09.20 um 11:47 schrieb Daniel Vetter: > >>>>>>> On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian K?nig wrote: > >>>>>>>> Am 30.09.20 um 10:19 schrieb Thomas Zimmermann: > >>>>>>>>> Hi > >>>>>>>>> > >>>>>>>>> Am 30.09.20 um 10:05 schrieb Christian K?nig: > >>>>>>>>>> Am 29.09.20 um 19:49 schrieb Thomas Zimmermann: > >>>>>>>>>>> Hi Christian > >>>>>>>>>>> > >>>>>>>>>>> Am 29.09.20 um 17:35 schrieb Christian K?nig: > >>>>>>>>>>>> Am 29.09.20 um 17:14 schrieb Thomas Zimmermann: > >>>>>>>>>>>>> The new helper ttm_kmap_obj_to_dma_buf() extracts address and location > >>>>>>>>>>>>> from and instance of TTM's kmap_obj and initializes struct dma_buf_map > >>>>>>>>>>>>> with these values. Helpful for TTM-based drivers. > >>>>>>>>>>>> We could completely drop that if we use the same structure inside TTM as > >>>>>>>>>>>> well. > >>>>>>>>>>>> > >>>>>>>>>>>> Additional to that which driver is going to use this? > >>>>>>>>>>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will > >>>>>>>>>>> retrieve the pointer via this function. > >>>>>>>>>>> > >>>>>>>>>>> I do want to see all that being more tightly integrated into TTM, but > >>>>>>>>>>> not in this series. This one is about fixing the bochs-on-sparc64 > >>>>>>>>>>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list. > >>>>>>>>>> I should have asked which driver you try to fix here :) > >>>>>>>>>> > >>>>>>>>>> In this case just keep the function inside bochs and only fix it there. > >>>>>>>>>> > >>>>>>>>>> All other drivers can be fixed when we generally pump this through TTM. > >>>>>>>>> Did you take a look at patch 3? This function will be used by VRAM > >>>>>>>>> helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we > >>>>>>>>> have to duplicate the functionality in each if these drivers. Bochs > >>>>>>>>> itself uses VRAM helpers and doesn't touch the function directly. > >>>>>>>> Ah, ok can we have that then only in the VRAM helpers? > >>>>>>>> > >>>>>>>> Alternative you could go ahead and use dma_buf_map in ttm_bo_kmap_obj > >>>>>>>> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK. 
> >>>>>>>> > >>>>>>>> What I want to avoid is to have another conversion function in TTM because > >>>>>>>> what happens here is that we already convert from ttm_bus_placement to > >>>>>>>> ttm_bo_kmap_obj and then to dma_buf_map. > >>>>>>> Hm I'm not really seeing how that helps with a gradual conversion of > >>>>>>> everything over to dma_buf_map and assorted helpers for access? There's > >>>>>>> too many places in ttm drivers where is_iomem and related stuff is used to > >>>>>>> be able to convert it all in one go. An intermediate state with a bunch of > >>>>>>> conversions seems fairly unavoidable to me. > >>>>>> Fair enough. I would just have started bottom up and not top down. > >>>>>> > >>>>>> Anyway feel free to go ahead with this approach as long as we can remove > >>>>>> the new function again when we clean that stuff up for good. > >>>>> Yeah I guess bottom up would make more sense as a refactoring. But the > >>>>> main motivation to land this here is to fix the __mmio vs normal > >>>>> memory confusion in the fbdev emulation helpers for sparc (and > >>>>> anything else that needs this). Hence the top down approach for > >>>>> rolling this out. > >>>> Ok I started reviewing this a bit more in-depth, and I think this is a bit > >>>> too much of a de-tour. > >>>> > >>>> Looking through all the callers of ttm_bo_kmap almost everyone maps the > >>>> entire object. Only vmwgfx uses to map less than that. Also, everyone just > >>>> immediately follows up with converting that full object map into a > >>>> pointer. > >>>> > >>>> So I think what we really want here is: > >>>> - new function > >>>> > >>>> int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); > >>>> > >>>> _vmap name since that's consistent with both dma_buf functions and > >>>> what's usually used to implement this. Outside of the ttm world kmap > >>>> usually just means single-page mappings using kmap() or it's iomem > >>>> sibling io_mapping_map* so rather confusing name for a function which > >>>> usually is just used to set up a vmap of the entire buffer. > >>>> > >>>> - a helper which can be used for the drm_gem_object_funcs vmap/vunmap > >>>> functions for all ttm drivers. We should be able to make this fully > >>>> generic because a) we now have dma_buf_map and b) drm_gem_object is > >>>> embedded in the ttm_bo, so we can upcast for everyone who's both a ttm > >>>> and gem driver. > >>>> > >>>> This is maybe a good follow-up, since it should allow us to ditch quite > >>>> a bit of the vram helper code for this more generic stuff. I also might > >>>> have missed some special-cases here, but from a quick look everything > >>>> just pins the buffer to the current location and that's it. > >>>> > >>>> Also this obviously requires Christian's generic ttm_bo_pin rework > >>>> first. > >>>> > >>>> - roll the above out to drivers. > >>>> > >>>> Christian/Thomas, thoughts on this? > >>> I agree on the goals, but what is the immediate objective here? > >>> > >>> Adding ttm_bo_vmap() does not work out easily, as struct ttm_bo_kmap_obj > >>> is a central part of the internals of TTM. struct ttm_bo_kmap_obj has > >>> more internal state that struct dma_buf_map, so they are not easily > >>> convertible either. What you propose seems to require a reimplementation > >>> of the existing ttm_bo_kmap() code. That is it's own patch series. > >>> > >>> I'd rather go with some variant of the existing patch and add > >>> ttm_bo_vmap() in a follow-up. 
> >> ttm_bo_vmap would simply wrap what you currently open-code as > >> ttm_bo_kmap + ttm_kmap_obj_to_dma_buf_map. Removing ttm_kmap_obj would > >> be a much later step. Why do you think adding ttm_bo_vmap is not > >> possible? > > The calls to ttm_bo_kmap/_kunmap() require an instance of struct > > ttm_bo_kmap_obj that is stored in each driver's private bo structure > > (e.g., struct drm_gem_vram_object, struct radeon_bo, etc). When I made > > patch 3, I flirted with the idea of unifying the driver's _vmap code in > > a shared helper, but I couldn't find a simple way of doing it. That's > > why it's open-coded in the first place. Yeah we'd need a ttm_bo_vunmap I guess to make this work. Which shouldn't be more than a few lines, but maybe too much to do in this series. > Well that makes kind of sense. Keep in mind that ttm_bo_kmap is > currently way to complicated. Yeah, simplifying this into a ttm_bo_vmap on one side, and a simple 1-page kmap helper on the other should help a lot. -Daniel > > Christian. > > > > > Best regards > > Thomas > > > >> -Daniel > >> > >> > >>> Best regards > >>> Thomas > >>> > >>>> I think for the immediate need of rolling this out for vram helpers and > >>>> fbdev code we should be able to do this, but just postpone the driver wide > >>>> roll-out for now. > >>>> > >>>> Cheers, Daniel > >>>> > >>>>> -Daniel > >>>>> > >>>>>> Christian. > >>>>>> > >>>>>>> -Daniel > >>>>>>> > >>>>>>>> Thanks, > >>>>>>>> Christian. > >>>>>>>> > >>>>>>>>> Best regards > >>>>>>>>> Thomas > >>>>>>>>> > >>>>>>>>>> Regards, > >>>>>>>>>> Christian. > >>>>>>>>>> > >>>>>>>>>>> Best regards > >>>>>>>>>>> Thomas > >>>>>>>>>>> > >>>>>>>>>>>> Regards, > >>>>>>>>>>>> Christian. > >>>>>>>>>>>> > >>>>>>>>>>>>> Signed-off-by: Thomas Zimmermann > >>>>>>>>>>>>> --- > >>>>>>>>>>>>> include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++ > >>>>>>>>>>>>> include/linux/dma-buf-map.h | 20 ++++++++++++++++++++ > >>>>>>>>>>>>> 2 files changed, 44 insertions(+) > >>>>>>>>>>>>> > >>>>>>>>>>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h > >>>>>>>>>>>>> index c96a25d571c8..62d89f05a801 100644 > >>>>>>>>>>>>> --- a/include/drm/ttm/ttm_bo_api.h > >>>>>>>>>>>>> +++ b/include/drm/ttm/ttm_bo_api.h > >>>>>>>>>>>>> @@ -34,6 +34,7 @@ > >>>>>>>>>>>>> #include > >>>>>>>>>>>>> #include > >>>>>>>>>>>>> #include > >>>>>>>>>>>>> +#include > >>>>>>>>>>>>> #include > >>>>>>>>>>>>> #include > >>>>>>>>>>>>> #include > >>>>>>>>>>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct > >>>>>>>>>>>>> ttm_bo_kmap_obj *map, > >>>>>>>>>>>>> return map->virtual; > >>>>>>>>>>>>> } > >>>>>>>>>>>>> +/** > >>>>>>>>>>>>> + * ttm_kmap_obj_to_dma_buf_map > >>>>>>>>>>>>> + * > >>>>>>>>>>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap. > >>>>>>>>>>>>> + * @map: Returns the mapping as struct dma_buf_map > >>>>>>>>>>>>> + * > >>>>>>>>>>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory > >>>>>>>>>>>>> + * is not mapped, the returned mapping is initialized to NULL. 
> >>>>>>>>>>>>> + */ > >>>>>>>>>>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj > >>>>>>>>>>>>> *kmap, > >>>>>>>>>>>>> + struct dma_buf_map *map) > >>>>>>>>>>>>> +{ > >>>>>>>>>>>>> + bool is_iomem; > >>>>>>>>>>>>> + void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem); > >>>>>>>>>>>>> + > >>>>>>>>>>>>> + if (!vaddr) > >>>>>>>>>>>>> + dma_buf_map_clear(map); > >>>>>>>>>>>>> + else if (is_iomem) > >>>>>>>>>>>>> + dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr); > >>>>>>>>>>>>> + else > >>>>>>>>>>>>> + dma_buf_map_set_vaddr(map, vaddr); > >>>>>>>>>>>>> +} > >>>>>>>>>>>>> + > >>>>>>>>>>>>> /** > >>>>>>>>>>>>> * ttm_bo_kmap > >>>>>>>>>>>>> * > >>>>>>>>>>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h > >>>>>>>>>>>>> index fd1aba545fdf..2e8bbecb5091 100644 > >>>>>>>>>>>>> --- a/include/linux/dma-buf-map.h > >>>>>>>>>>>>> +++ b/include/linux/dma-buf-map.h > >>>>>>>>>>>>> @@ -45,6 +45,12 @@ > >>>>>>>>>>>>> * > >>>>>>>>>>>>> * dma_buf_map_set_vaddr(&map. 0xdeadbeaf); > >>>>>>>>>>>>> * > >>>>>>>>>>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem(). > >>>>>>>>>>>>> + * > >>>>>>>>>>>>> + * .. code-block:: c > >>>>>>>>>>>>> + * > >>>>>>>>>>>>> + * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); > >>>>>>>>>>>>> + * > >>>>>>>>>>>>> * Test if a mapping is valid with either dma_buf_map_is_set() or > >>>>>>>>>>>>> * dma_buf_map_is_null(). > >>>>>>>>>>>>> * > >>>>>>>>>>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct > >>>>>>>>>>>>> dma_buf_map *map, void *vaddr) > >>>>>>>>>>>>> map->is_iomem = false; > >>>>>>>>>>>>> } > >>>>>>>>>>>>> +/** > >>>>>>>>>>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to > >>>>>>>>>>>>> an address in I/O memory > >>>>>>>>>>>>> + * @map: The dma-buf mapping structure > >>>>>>>>>>>>> + * @vaddr_iomem: An I/O-memory address > >>>>>>>>>>>>> + * > >>>>>>>>>>>>> + * Sets the address and the I/O-memory flag. 
> >>>>>>>>>>>>> + */ > >>>>>>>>>>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map, > >>>>>>>>>>>>> + void __iomem *vaddr_iomem) > >>>>>>>>>>>>> +{ > >>>>>>>>>>>>> + map->vaddr_iomem = vaddr_iomem; > >>>>>>>>>>>>> + map->is_iomem = true; > >>>>>>>>>>>>> +} > >>>>>>>>>>>>> + > >>>>>>>>>>>>> /** > >>>>>>>>>>>>> * dma_buf_map_is_equal - Compares two dma-buf mapping structures > >>>>>>>>>>>>> for equality > >>>>>>>>>>>>> * @lhs: The dma-buf mapping structure > >>>>>>>>>>>> _______________________________________________ > >>>>>>>>>>>> dri-devel mailing list > >>>>>>>>>>>> dri-devel at lists.freedesktop.org > >>>>>>>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Fdri-devel&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=HdHOA%2F1VcIX%2F7YtfYTiAqYEvw7Ag%2FS%2BxS5VwJKOv5y0%3D&reserved=0 > >>>>>>>>>>> _______________________________________________ > >>>>>>>>>>> amd-gfx mailing list > >>>>>>>>>>> amd-gfx at lists.freedesktop.org > >>>>>>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Famd-gfx&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=H%2B5HKCsTrksRV2EyEiFGSTyS79jsWCmJimSMoJYusx8%3D&reserved=0 > >>>>>>>>>> _______________________________________________ > >>>>>>>>>> dri-devel mailing list > >>>>>>>>>> dri-devel at lists.freedesktop.org > >>>>>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Fdri-devel&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=HdHOA%2F1VcIX%2F7YtfYTiAqYEvw7Ag%2FS%2BxS5VwJKOv5y0%3D&reserved=0 > >>>>>>>>>> > >>>>>>>>> _______________________________________________ > >>>>>>>>> amd-gfx mailing list > >>>>>>>>> amd-gfx at lists.freedesktop.org > >>>>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Famd-gfx&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=H%2B5HKCsTrksRV2EyEiFGSTyS79jsWCmJimSMoJYusx8%3D&reserved=0 > >>>>> > >>>>> -- > >>>>> Daniel Vetter > >>>>> Software Engineer, Intel Corporation > >>>>> http://blog.ffwll.ch > >>> -- > >>> Thomas Zimmermann > >>> Graphics Driver Developer > >>> SUSE Software Solutions Germany GmbH > >>> Maxfeldstr. 5, 90409 N?rnberg, Germany > >>> (HRB 36809, AG N?rnberg) > >>> Gesch?ftsf?hrer: Felix Imend?rffer > >>> > >> > -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch From jjelen at redhat.com Thu Oct 8 07:25:25 2020 From: jjelen at redhat.com (Jakub Jelen) Date: Thu, 8 Oct 2020 09:25:25 +0200 Subject: [Spice-devel] libcacard 2.8.0 release Message-ID: <6ec31253-256e-8894-31bf-47cd42c00516@redhat.com> Hello, We released libcacard 2.8.0 earlier this week. 
It is mostly bugfix release, but it contains these notable changes: * Improve project documentation * Bump minimal glib version to 2.32 and remove old compatibility functions * Switch to meson build system, replacing existing autotools * Create and run fuzzer drivers to improve stability * Introduce a new API vcard_emul_finalize() to clean up allocated resources * Remove key caching to avoid issues with some PKCS #11 modules This release can be found at the following locations: https://www.spice-space.org/download/libcacard/ https://gitlab.freedesktop.org/spice/libcacard/-/releases It is signed with Viktor Toso's GPG key: 206D 3B35 2F56 6F3B 0E65 72E9 97D9 123D E37A 484F Regards, Jakub Jelen From tzimmermann at suse.de Thu Oct 8 09:00:21 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Thu, 8 Oct 2020 11:00:21 +0200 Subject: [Spice-devel] [PATCH v3 2/7] drm/ttm: Add ttm_kmap_obj_to_dma_buf_map() for type conversion In-Reply-To: References: <20200929151437.19717-1-tzimmermann@suse.de> <20200929151437.19717-3-tzimmermann@suse.de> <8fad0114-064a-4ed5-c21d-d1b4294de0a1@amd.com> <2614314a-81f7-4722-c400-68d90e48e09a@suse.de> <8a84f62b-33f3-f44c-52af-c859a0e0d1fb@gmail.com> <07972ada-9135-3743-a86b-487f610c509f@suse.de> <20200930094712.GW438822@phenom.ffwll.local> <8479d0aa-3826-4f37-0109-55daca515793@amd.com> <20201002095830.GH438822@phenom.ffwll.local> <5bf40546-8da9-1649-22da-a982f1e8d9c3@suse.de> <26ac0446-9e16-1ca1-7407-3d0cd7125e0e@suse.de> <09d634d0-f20a-e9a9-d8d2-b50e8aaf156f@amd.com> Message-ID: <5c0dc0bf-b4ca-db84-708e-74a5b033018f@suse.de> Hi Am 07.10.20 um 16:30 schrieb Daniel Vetter: > On Wed, Oct 7, 2020 at 3:25 PM Christian K?nig wrote: >> >> Am 07.10.20 um 15:20 schrieb Thomas Zimmermann: >>> Hi >>> >>> Am 07.10.20 um 15:10 schrieb Daniel Vetter: >>>> On Wed, Oct 7, 2020 at 2:57 PM Thomas Zimmermann wrote: >>>>> Hi >>>>> >>>>> Am 02.10.20 um 11:58 schrieb Daniel Vetter: >>>>>> On Wed, Sep 30, 2020 at 02:51:46PM +0200, Daniel Vetter wrote: >>>>>>> On Wed, Sep 30, 2020 at 2:34 PM Christian K?nig >>>>>>> wrote: >>>>>>>> Am 30.09.20 um 11:47 schrieb Daniel Vetter: >>>>>>>>> On Wed, Sep 30, 2020 at 10:34:31AM +0200, Christian K?nig wrote: >>>>>>>>>> Am 30.09.20 um 10:19 schrieb Thomas Zimmermann: >>>>>>>>>>> Hi >>>>>>>>>>> >>>>>>>>>>> Am 30.09.20 um 10:05 schrieb Christian K?nig: >>>>>>>>>>>> Am 29.09.20 um 19:49 schrieb Thomas Zimmermann: >>>>>>>>>>>>> Hi Christian >>>>>>>>>>>>> >>>>>>>>>>>>> Am 29.09.20 um 17:35 schrieb Christian K?nig: >>>>>>>>>>>>>> Am 29.09.20 um 17:14 schrieb Thomas Zimmermann: >>>>>>>>>>>>>>> The new helper ttm_kmap_obj_to_dma_buf() extracts address and location >>>>>>>>>>>>>>> from and instance of TTM's kmap_obj and initializes struct dma_buf_map >>>>>>>>>>>>>>> with these values. Helpful for TTM-based drivers. >>>>>>>>>>>>>> We could completely drop that if we use the same structure inside TTM as >>>>>>>>>>>>>> well. >>>>>>>>>>>>>> >>>>>>>>>>>>>> Additional to that which driver is going to use this? >>>>>>>>>>>>> As Daniel mentioned, it's in patch 3. The TTM-based drivers will >>>>>>>>>>>>> retrieve the pointer via this function. >>>>>>>>>>>>> >>>>>>>>>>>>> I do want to see all that being more tightly integrated into TTM, but >>>>>>>>>>>>> not in this series. This one is about fixing the bochs-on-sparc64 >>>>>>>>>>>>> problem for good. Patch 7 adds an update to TTM to the DRM TODO list. 
>>>>>>>>>>>> I should have asked which driver you try to fix here :) >>>>>>>>>>>> >>>>>>>>>>>> In this case just keep the function inside bochs and only fix it there. >>>>>>>>>>>> >>>>>>>>>>>> All other drivers can be fixed when we generally pump this through TTM. >>>>>>>>>>> Did you take a look at patch 3? This function will be used by VRAM >>>>>>>>>>> helpers, nouveau, radeon, amdgpu and qxl. If we don't put it here, we >>>>>>>>>>> have to duplicate the functionality in each if these drivers. Bochs >>>>>>>>>>> itself uses VRAM helpers and doesn't touch the function directly. >>>>>>>>>> Ah, ok can we have that then only in the VRAM helpers? >>>>>>>>>> >>>>>>>>>> Alternative you could go ahead and use dma_buf_map in ttm_bo_kmap_obj >>>>>>>>>> directly and drop the hack with the TTM_BO_MAP_IOMEM_MASK. >>>>>>>>>> >>>>>>>>>> What I want to avoid is to have another conversion function in TTM because >>>>>>>>>> what happens here is that we already convert from ttm_bus_placement to >>>>>>>>>> ttm_bo_kmap_obj and then to dma_buf_map. >>>>>>>>> Hm I'm not really seeing how that helps with a gradual conversion of >>>>>>>>> everything over to dma_buf_map and assorted helpers for access? There's >>>>>>>>> too many places in ttm drivers where is_iomem and related stuff is used to >>>>>>>>> be able to convert it all in one go. An intermediate state with a bunch of >>>>>>>>> conversions seems fairly unavoidable to me. >>>>>>>> Fair enough. I would just have started bottom up and not top down. >>>>>>>> >>>>>>>> Anyway feel free to go ahead with this approach as long as we can remove >>>>>>>> the new function again when we clean that stuff up for good. >>>>>>> Yeah I guess bottom up would make more sense as a refactoring. But the >>>>>>> main motivation to land this here is to fix the __mmio vs normal >>>>>>> memory confusion in the fbdev emulation helpers for sparc (and >>>>>>> anything else that needs this). Hence the top down approach for >>>>>>> rolling this out. >>>>>> Ok I started reviewing this a bit more in-depth, and I think this is a bit >>>>>> too much of a de-tour. >>>>>> >>>>>> Looking through all the callers of ttm_bo_kmap almost everyone maps the >>>>>> entire object. Only vmwgfx uses to map less than that. Also, everyone just >>>>>> immediately follows up with converting that full object map into a >>>>>> pointer. >>>>>> >>>>>> So I think what we really want here is: >>>>>> - new function >>>>>> >>>>>> int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); >>>>>> >>>>>> _vmap name since that's consistent with both dma_buf functions and >>>>>> what's usually used to implement this. Outside of the ttm world kmap >>>>>> usually just means single-page mappings using kmap() or it's iomem >>>>>> sibling io_mapping_map* so rather confusing name for a function which >>>>>> usually is just used to set up a vmap of the entire buffer. >>>>>> >>>>>> - a helper which can be used for the drm_gem_object_funcs vmap/vunmap >>>>>> functions for all ttm drivers. We should be able to make this fully >>>>>> generic because a) we now have dma_buf_map and b) drm_gem_object is >>>>>> embedded in the ttm_bo, so we can upcast for everyone who's both a ttm >>>>>> and gem driver. >>>>>> >>>>>> This is maybe a good follow-up, since it should allow us to ditch quite >>>>>> a bit of the vram helper code for this more generic stuff. I also might >>>>>> have missed some special-cases here, but from a quick look everything >>>>>> just pins the buffer to the current location and that's it. 
>>>>>> >>>>>> Also this obviously requires Christian's generic ttm_bo_pin rework >>>>>> first. >>>>>> >>>>>> - roll the above out to drivers. >>>>>> >>>>>> Christian/Thomas, thoughts on this? >>>>> I agree on the goals, but what is the immediate objective here? >>>>> >>>>> Adding ttm_bo_vmap() does not work out easily, as struct ttm_bo_kmap_obj >>>>> is a central part of the internals of TTM. struct ttm_bo_kmap_obj has >>>>> more internal state that struct dma_buf_map, so they are not easily >>>>> convertible either. What you propose seems to require a reimplementation >>>>> of the existing ttm_bo_kmap() code. That is it's own patch series. >>>>> >>>>> I'd rather go with some variant of the existing patch and add >>>>> ttm_bo_vmap() in a follow-up. >>>> ttm_bo_vmap would simply wrap what you currently open-code as >>>> ttm_bo_kmap + ttm_kmap_obj_to_dma_buf_map. Removing ttm_kmap_obj would >>>> be a much later step. Why do you think adding ttm_bo_vmap is not >>>> possible? >>> The calls to ttm_bo_kmap/_kunmap() require an instance of struct >>> ttm_bo_kmap_obj that is stored in each driver's private bo structure >>> (e.g., struct drm_gem_vram_object, struct radeon_bo, etc). When I made >>> patch 3, I flirted with the idea of unifying the driver's _vmap code in >>> a shared helper, but I couldn't find a simple way of doing it. That's >>> why it's open-coded in the first place. > > Yeah we'd need a ttm_bo_vunmap I guess to make this work. Which > shouldn't be more than a few lines, but maybe too much to do in this > series. > >> Well that makes kind of sense. Keep in mind that ttm_bo_kmap is >> currently way to complicated. > > Yeah, simplifying this into a ttm_bo_vmap on one side, and a simple > 1-page kmap helper on the other should help a lot. I'm not too happy about the plan, but I'll send out something like this in the next iteration. Best regards Thomas > -Daniel > >> >> Christian. >> >>> >>> Best regards >>> Thomas >>> >>>> -Daniel >>>> >>>> >>>>> Best regards >>>>> Thomas >>>>> >>>>>> I think for the immediate need of rolling this out for vram helpers and >>>>>> fbdev code we should be able to do this, but just postpone the driver wide >>>>>> roll-out for now. >>>>>> >>>>>> Cheers, Daniel >>>>>> >>>>>>> -Daniel >>>>>>> >>>>>>>> Christian. >>>>>>>> >>>>>>>>> -Daniel >>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> Christian. >>>>>>>>>> >>>>>>>>>>> Best regards >>>>>>>>>>> Thomas >>>>>>>>>>> >>>>>>>>>>>> Regards, >>>>>>>>>>>> Christian. >>>>>>>>>>>> >>>>>>>>>>>>> Best regards >>>>>>>>>>>>> Thomas >>>>>>>>>>>>> >>>>>>>>>>>>>> Regards, >>>>>>>>>>>>>> Christian. 
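To illustrate the shared helper mentioned above (purely a sketch, with a made-up function name): once a two-argument ttm_bo_vmap(bo, map) exists as proposed in this thread, a generic drm_gem_object_funcs.vmap for TTM-based drivers reduces to an upcast, because struct drm_gem_object is embedded in struct ttm_buffer_object. A real helper would also have to pin or reserve the BO at its current location, as discussed earlier in the thread.

static int drm_gem_ttm_vmap_sketch(struct drm_gem_object *gem,
                                   struct dma_buf_map *map)
{
        /* drm_gem_object is embedded in ttm_buffer_object, so upcast */
        struct ttm_buffer_object *bo =
                container_of(gem, struct ttm_buffer_object, base);

        /* assumes the proposed ttm_bo_vmap(); not in the tree yet */
        return ttm_bo_vmap(bo, map);
}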
>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Signed-off-by: Thomas Zimmermann >>>>>>>>>>>>>>> --- >>>>>>>>>>>>>>> include/drm/ttm/ttm_bo_api.h | 24 ++++++++++++++++++++++++ >>>>>>>>>>>>>>> include/linux/dma-buf-map.h | 20 ++++++++++++++++++++ >>>>>>>>>>>>>>> 2 files changed, 44 insertions(+) >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h >>>>>>>>>>>>>>> index c96a25d571c8..62d89f05a801 100644 >>>>>>>>>>>>>>> --- a/include/drm/ttm/ttm_bo_api.h >>>>>>>>>>>>>>> +++ b/include/drm/ttm/ttm_bo_api.h >>>>>>>>>>>>>>> @@ -34,6 +34,7 @@ >>>>>>>>>>>>>>> #include >>>>>>>>>>>>>>> #include >>>>>>>>>>>>>>> #include >>>>>>>>>>>>>>> +#include >>>>>>>>>>>>>>> #include >>>>>>>>>>>>>>> #include >>>>>>>>>>>>>>> #include >>>>>>>>>>>>>>> @@ -486,6 +487,29 @@ static inline void *ttm_kmap_obj_virtual(struct >>>>>>>>>>>>>>> ttm_bo_kmap_obj *map, >>>>>>>>>>>>>>> return map->virtual; >>>>>>>>>>>>>>> } >>>>>>>>>>>>>>> +/** >>>>>>>>>>>>>>> + * ttm_kmap_obj_to_dma_buf_map >>>>>>>>>>>>>>> + * >>>>>>>>>>>>>>> + * @kmap: A struct ttm_bo_kmap_obj returned from ttm_bo_kmap. >>>>>>>>>>>>>>> + * @map: Returns the mapping as struct dma_buf_map >>>>>>>>>>>>>>> + * >>>>>>>>>>>>>>> + * Converts struct ttm_bo_kmap_obj to struct dma_buf_map. If the memory >>>>>>>>>>>>>>> + * is not mapped, the returned mapping is initialized to NULL. >>>>>>>>>>>>>>> + */ >>>>>>>>>>>>>>> +static inline void ttm_kmap_obj_to_dma_buf_map(struct ttm_bo_kmap_obj >>>>>>>>>>>>>>> *kmap, >>>>>>>>>>>>>>> + struct dma_buf_map *map) >>>>>>>>>>>>>>> +{ >>>>>>>>>>>>>>> + bool is_iomem; >>>>>>>>>>>>>>> + void *vaddr = ttm_kmap_obj_virtual(kmap, &is_iomem); >>>>>>>>>>>>>>> + >>>>>>>>>>>>>>> + if (!vaddr) >>>>>>>>>>>>>>> + dma_buf_map_clear(map); >>>>>>>>>>>>>>> + else if (is_iomem) >>>>>>>>>>>>>>> + dma_buf_map_set_vaddr_iomem(map, (void __force __iomem *)vaddr); >>>>>>>>>>>>>>> + else >>>>>>>>>>>>>>> + dma_buf_map_set_vaddr(map, vaddr); >>>>>>>>>>>>>>> +} >>>>>>>>>>>>>>> + >>>>>>>>>>>>>>> /** >>>>>>>>>>>>>>> * ttm_bo_kmap >>>>>>>>>>>>>>> * >>>>>>>>>>>>>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h >>>>>>>>>>>>>>> index fd1aba545fdf..2e8bbecb5091 100644 >>>>>>>>>>>>>>> --- a/include/linux/dma-buf-map.h >>>>>>>>>>>>>>> +++ b/include/linux/dma-buf-map.h >>>>>>>>>>>>>>> @@ -45,6 +45,12 @@ >>>>>>>>>>>>>>> * >>>>>>>>>>>>>>> * dma_buf_map_set_vaddr(&map. 0xdeadbeaf); >>>>>>>>>>>>>>> * >>>>>>>>>>>>>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem(). >>>>>>>>>>>>>>> + * >>>>>>>>>>>>>>> + * .. code-block:: c >>>>>>>>>>>>>>> + * >>>>>>>>>>>>>>> + * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); >>>>>>>>>>>>>>> + * >>>>>>>>>>>>>>> * Test if a mapping is valid with either dma_buf_map_is_set() or >>>>>>>>>>>>>>> * dma_buf_map_is_null(). >>>>>>>>>>>>>>> * >>>>>>>>>>>>>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct >>>>>>>>>>>>>>> dma_buf_map *map, void *vaddr) >>>>>>>>>>>>>>> map->is_iomem = false; >>>>>>>>>>>>>>> } >>>>>>>>>>>>>>> +/** >>>>>>>>>>>>>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to >>>>>>>>>>>>>>> an address in I/O memory >>>>>>>>>>>>>>> + * @map: The dma-buf mapping structure >>>>>>>>>>>>>>> + * @vaddr_iomem: An I/O-memory address >>>>>>>>>>>>>>> + * >>>>>>>>>>>>>>> + * Sets the address and the I/O-memory flag. 
>>>>>>>>>>>>>>> + */ >>>>>>>>>>>>>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map, >>>>>>>>>>>>>>> + void __iomem *vaddr_iomem) >>>>>>>>>>>>>>> +{ >>>>>>>>>>>>>>> + map->vaddr_iomem = vaddr_iomem; >>>>>>>>>>>>>>> + map->is_iomem = true; >>>>>>>>>>>>>>> +} >>>>>>>>>>>>>>> + >>>>>>>>>>>>>>> /** >>>>>>>>>>>>>>> * dma_buf_map_is_equal - Compares two dma-buf mapping structures >>>>>>>>>>>>>>> for equality >>>>>>>>>>>>>>> * @lhs: The dma-buf mapping structure >>>>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>>>> dri-devel mailing list >>>>>>>>>>>>>> dri-devel at lists.freedesktop.org >>>>>>>>>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Fdri-devel&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=HdHOA%2F1VcIX%2F7YtfYTiAqYEvw7Ag%2FS%2BxS5VwJKOv5y0%3D&reserved=0 >>>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>>> amd-gfx mailing list >>>>>>>>>>>>> amd-gfx at lists.freedesktop.org >>>>>>>>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Famd-gfx&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=H%2B5HKCsTrksRV2EyEiFGSTyS79jsWCmJimSMoJYusx8%3D&reserved=0 >>>>>>>>>>>> _______________________________________________ >>>>>>>>>>>> dri-devel mailing list >>>>>>>>>>>> dri-devel at lists.freedesktop.org >>>>>>>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Fdri-devel&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=HdHOA%2F1VcIX%2F7YtfYTiAqYEvw7Ag%2FS%2BxS5VwJKOv5y0%3D&reserved=0 >>>>>>>>>>>> >>>>>>>>>>> _______________________________________________ >>>>>>>>>>> amd-gfx mailing list >>>>>>>>>>> amd-gfx at lists.freedesktop.org >>>>>>>>>>> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Famd-gfx&data=02%7C01%7Cchristian.koenig%40amd.com%7C472c3d655a61411deb6708d86525d1b8%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637370560438965013&sdata=H%2B5HKCsTrksRV2EyEiFGSTyS79jsWCmJimSMoJYusx8%3D&reserved=0 >>>>>>> >>>>>>> -- >>>>>>> Daniel Vetter >>>>>>> Software Engineer, Intel Corporation >>>>>>> http://blog.ffwll.ch >>>>> -- >>>>> Thomas Zimmermann >>>>> Graphics Driver Developer >>>>> SUSE Software Solutions Germany GmbH >>>>> Maxfeldstr. 5, 90409 N?rnberg, Germany >>>>> (HRB 36809, AG N?rnberg) >>>>> Gesch?ftsf?hrer: Felix Imend?rffer >>>>> >>>> >> > > -- Thomas Zimmermann Graphics Driver Developer SUSE Software Solutions Germany GmbH Maxfeldstr. 5, 90409 N?rnberg, Germany (HRB 36809, AG N?rnberg) Gesch?ftsf?hrer: Felix Imend?rffer -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 516 bytes Desc: OpenPGP digital signature URL: From tzimmermann at suse.de Thu Oct 8 09:25:13 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Thu, 8 Oct 2020 11:25:13 +0200 Subject: [Spice-devel] [PATCH v3 6/7] drm/fb_helper: Support framebuffers in I/O memory In-Reply-To: References: <20200929151437.19717-1-tzimmermann@suse.de> <20200929151437.19717-7-tzimmermann@suse.de> <20201002180500.GM438822@phenom.ffwll.local> Message-ID: Hi Am 02.10.20 um 20:44 schrieb Daniel Vetter: > On Fri, Oct 2, 2020 at 8:05 PM Daniel Vetter wrote: >> >> On Tue, Sep 29, 2020 at 05:14:36PM +0200, Thomas Zimmermann wrote: >>> At least sparc64 requires I/O-specific access to framebuffers. This >>> patch updates the fbdev console accordingly. >>> >>> For drivers with direct access to the framebuffer memory, the callback >>> functions in struct fb_ops test for the type of memory and call the rsp >>> fb_sys_ of fb_cfb_ functions. >>> >>> For drivers that employ a shadow buffer, fbdev's blit function retrieves >>> the framebuffer address as struct dma_buf_map, and uses dma_buf_map >>> interfaces to access the buffer. >>> >>> The bochs driver on sparc64 uses a workaround to flag the framebuffer as >>> I/O memory and avoid a HW exception. With the introduction of struct >>> dma_buf_map, this is not required any longer. The patch removes the rsp >>> code from both, bochs and fbdev. >>> >>> Signed-off-by: Thomas Zimmermann > > Argh, I accidentally hit send before finishing this ... > >>> --- >>> drivers/gpu/drm/bochs/bochs_kms.c | 1 - >>> drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++-- >>> include/drm/drm_mode_config.h | 12 -- >>> include/linux/dma-buf-map.h | 72 ++++++++-- >>> 4 files changed, 265 insertions(+), 37 deletions(-) >>> >>> diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c >>> index 13d0d04c4457..853081d186d5 100644 >>> --- a/drivers/gpu/drm/bochs/bochs_kms.c >>> +++ b/drivers/gpu/drm/bochs/bochs_kms.c >>> @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs) >>> bochs->dev->mode_config.preferred_depth = 24; >>> bochs->dev->mode_config.prefer_shadow = 0; >>> bochs->dev->mode_config.prefer_shadow_fbdev = 1; >>> - bochs->dev->mode_config.fbdev_use_iomem = true; >>> bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true; >>> >>> bochs->dev->mode_config.funcs = &bochs_mode_funcs; >>> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c >>> index 343a292f2c7c..f345a314a437 100644 >>> --- a/drivers/gpu/drm/drm_fb_helper.c >>> +++ b/drivers/gpu/drm/drm_fb_helper.c >>> @@ -388,24 +388,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work) >>> } >>> >>> static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper, >>> - struct drm_clip_rect *clip) >>> + struct drm_clip_rect *clip, >>> + struct dma_buf_map *dst) >>> { >>> struct drm_framebuffer *fb = fb_helper->fb; >>> unsigned int cpp = fb->format->cpp[0]; >>> size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp; >>> void *src = fb_helper->fbdev->screen_buffer + offset; >>> - void *dst = fb_helper->buffer->map.vaddr + offset; >>> size_t len = (clip->x2 - clip->x1) * cpp; >>> unsigned int y; >>> >>> - for (y = clip->y1; y < clip->y2; y++) { >>> - if (!fb_helper->dev->mode_config.fbdev_use_iomem) >>> - memcpy(dst, src, len); >>> - else >>> - memcpy_toio((void __iomem *)dst, src, len); >>> + dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */ 
>>> >>> + for (y = clip->y1; y < clip->y2; y++) { >>> + dma_buf_map_memcpy_to(dst, src, len); >>> + dma_buf_map_incr(dst, fb->pitches[0]); >>> src += fb->pitches[0]; >>> - dst += fb->pitches[0]; >>> } >>> } >>> >>> @@ -433,8 +431,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work) >>> ret = drm_client_buffer_vmap(helper->buffer, &map); >>> if (ret) >>> return; >>> - drm_fb_helper_dirty_blit_real(helper, &clip_copy); >>> + drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map); >>> } >>> + >>> if (helper->fb->funcs->dirty) >>> helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, >>> &clip_copy, 1); >>> @@ -771,6 +770,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info, >>> } >>> EXPORT_SYMBOL(drm_fb_helper_sys_imageblit); >>> >>> +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf, >>> + size_t count, loff_t *ppos) >>> +{ >>> + unsigned long p = *ppos; >>> + u8 *dst; >>> + u8 __iomem *src; >>> + int c, err = 0; >>> + unsigned long total_size; >>> + unsigned long alloc_size; >>> + ssize_t ret = 0; >>> + >>> + if (info->state != FBINFO_STATE_RUNNING) >>> + return -EPERM; >>> + >>> + total_size = info->screen_size; >>> + >>> + if (total_size == 0) >>> + total_size = info->fix.smem_len; >>> + >>> + if (p >= total_size) >>> + return 0; >>> + >>> + if (count >= total_size) >>> + count = total_size; >>> + >>> + if (count + p > total_size) >>> + count = total_size - p; >>> + >>> + src = (u8 __iomem *)(info->screen_base + p); >>> + >>> + alloc_size = min(count, PAGE_SIZE); >>> + >>> + dst = kmalloc(alloc_size, GFP_KERNEL); >>> + if (!dst) >>> + return -ENOMEM; >>> + >>> + while (count) { >>> + c = min(count, alloc_size); >>> + >>> + memcpy_fromio(dst, src, c); >>> + if (copy_to_user(buf, dst, c)) { >>> + err = -EFAULT; >>> + break; >>> + } >>> + >>> + src += c; >>> + *ppos += c; >>> + buf += c; >>> + ret += c; >>> + count -= c; >>> + } >>> + >>> + kfree(dst); >>> + >>> + if (err) >>> + return err; >>> + >>> + return ret; >>> +} >>> + >>> +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf, >>> + size_t count, loff_t *ppos) >>> +{ >>> + unsigned long p = *ppos; >>> + u8 *src; >>> + u8 __iomem *dst; >>> + int c, err = 0; >>> + unsigned long total_size; >>> + unsigned long alloc_size; >>> + ssize_t ret = 0; >>> + >>> + if (info->state != FBINFO_STATE_RUNNING) >>> + return -EPERM; >>> + >>> + total_size = info->screen_size; >>> + >>> + if (total_size == 0) >>> + total_size = info->fix.smem_len; >>> + >>> + if (p > total_size) >>> + return -EFBIG; >>> + >>> + if (count > total_size) { >>> + err = -EFBIG; >>> + count = total_size; >>> + } >>> + >>> + if (count + p > total_size) { >>> + /* >>> + * The framebuffer is too small. We do the >>> + * copy operation, but return an error code >>> + * afterwards. Taken from fbdev. 
>>> + */ >>> + if (!err) >>> + err = -ENOSPC; >>> + count = total_size - p; >>> + } >>> + >>> + alloc_size = min(count, PAGE_SIZE); >>> + >>> + src = kmalloc(alloc_size, GFP_KERNEL); >>> + if (!src) >>> + return -ENOMEM; >>> + >>> + dst = (u8 __iomem *)(info->screen_base + p); >>> + >>> + while (count) { >>> + c = min(count, alloc_size); >>> + >>> + if (copy_from_user(src, buf, c)) { >>> + err = -EFAULT; >>> + break; >>> + } >>> + memcpy_toio(dst, src, c); >>> + >>> + dst += c; >>> + *ppos += c; >>> + buf += c; >>> + ret += c; >>> + count -= c; >>> + } >>> + >>> + kfree(src); >>> + >>> + if (err) >>> + return err; >>> + >>> + return ret; >>> +} > > The duplication is a bit annoying here, but can't really be avoided. I > do think though we should maybe go a bit further, and have drm > implementations of this stuff instead of following fbdev concepts as > closely as possible. So here roughly: > > - if we have a shadow fb, construct a dma_buf_map for that, otherwise > take the one from the driver > - have a full generic implementation using that one directly (and > checking size limits against the underlying gem buffer) > - ideally also with some testcases in the fbdev testcase we have (very > bare-bones right now) in igt > > But I'm not really sure whether that's worth all the trouble. It's > just that the fbdev-ness here in this copied code sticks out a lot :-) > >>> + >>> /** >>> * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect >>> * @info: fbdev registered by the helper >>> @@ -2043,6 +2172,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma) >>> return -ENODEV; >>> } >>> >>> +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf, >>> + size_t count, loff_t *ppos) >>> +{ >>> + struct drm_fb_helper *fb_helper = info->par; >>> + struct drm_client_buffer *buffer = fb_helper->buffer; >>> + >>> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) >>> + return drm_fb_helper_sys_read(info, buf, count, ppos); >>> + else >>> + return drm_fb_helper_cfb_read(info, buf, count, ppos); >>> +} >>> + >>> +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf, >>> + size_t count, loff_t *ppos) >>> +{ >>> + struct drm_fb_helper *fb_helper = info->par; >>> + struct drm_client_buffer *buffer = fb_helper->buffer; >>> + >>> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) >>> + return drm_fb_helper_sys_write(info, buf, count, ppos); >>> + else >>> + return drm_fb_helper_cfb_write(info, buf, count, ppos); >>> +} >>> + >>> +static void drm_fbdev_fb_fillrect(struct fb_info *info, >>> + const struct fb_fillrect *rect) >>> +{ >>> + struct drm_fb_helper *fb_helper = info->par; >>> + struct drm_client_buffer *buffer = fb_helper->buffer; >>> + >>> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) >>> + drm_fb_helper_sys_fillrect(info, rect); >>> + else >>> + drm_fb_helper_cfb_fillrect(info, rect); >>> +} >>> + >>> +static void drm_fbdev_fb_copyarea(struct fb_info *info, >>> + const struct fb_copyarea *area) >>> +{ >>> + struct drm_fb_helper *fb_helper = info->par; >>> + struct drm_client_buffer *buffer = fb_helper->buffer; >>> + >>> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) >>> + drm_fb_helper_sys_copyarea(info, area); >>> + else >>> + drm_fb_helper_cfb_copyarea(info, area); >>> +} >>> + >>> +static void drm_fbdev_fb_imageblit(struct fb_info *info, >>> + const struct fb_image *image) >>> +{ >>> + struct drm_fb_helper *fb_helper = info->par; >>> + struct 
drm_client_buffer *buffer = fb_helper->buffer; >>> + >>> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) >>> + drm_fb_helper_sys_imageblit(info, image); >>> + else >>> + drm_fb_helper_cfb_imageblit(info, image); >>> +} > > I think a todo.rst entry to make the new generic functions the real ones, and > drivers not using the sys/cfb ones anymore would be a good addition. > It's kinda covered by the move to the generic helpers, but maybe we > can convert a few more drivers over to these here. Would also allow us > to maybe flatten the code a bit and use more of the dma_buf_map stuff > directly (instead of reusing crusty fbdev code written 20 years ago or > so). I wouldn't mind doing our own thing, but dma_buf_map is not a good fit here. Mostly because the _cfb_ code first does a reads from I/O to system memory, and then copies to userspace. The _sys_ functions copy directly to userspace. (Same for write, but in the other direction.) There's some code at the top and bottom of these functions that could be shared. If we want to share the copy loops, we'd probably end up with additional memcpys in the _sys_ case. Best regards Thomas > >>> + >>> static const struct fb_ops drm_fbdev_fb_ops = { >>> .owner = THIS_MODULE, >>> DRM_FB_HELPER_DEFAULT_OPS, >>> @@ -2050,11 +2239,11 @@ static const struct fb_ops drm_fbdev_fb_ops = { >>> .fb_release = drm_fbdev_fb_release, >>> .fb_destroy = drm_fbdev_fb_destroy, >>> .fb_mmap = drm_fbdev_fb_mmap, >>> - .fb_read = drm_fb_helper_sys_read, >>> - .fb_write = drm_fb_helper_sys_write, >>> - .fb_fillrect = drm_fb_helper_sys_fillrect, >>> - .fb_copyarea = drm_fb_helper_sys_copyarea, >>> - .fb_imageblit = drm_fb_helper_sys_imageblit, >>> + .fb_read = drm_fbdev_fb_read, >>> + .fb_write = drm_fbdev_fb_write, >>> + .fb_fillrect = drm_fbdev_fb_fillrect, >>> + .fb_copyarea = drm_fbdev_fb_copyarea, >>> + .fb_imageblit = drm_fbdev_fb_imageblit, >>> }; >>> >>> static struct fb_deferred_io drm_fbdev_defio = { >>> diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h >>> index 5ffbb4ed5b35..ab424ddd7665 100644 >>> --- a/include/drm/drm_mode_config.h >>> +++ b/include/drm/drm_mode_config.h >>> @@ -877,18 +877,6 @@ struct drm_mode_config { >>> */ >>> bool prefer_shadow_fbdev; >>> >>> - /** >>> - * @fbdev_use_iomem: >>> - * >>> - * Set to true if framebuffer reside in iomem. >>> - * When set to true memcpy_toio() is used when copying the framebuffer in >>> - * drm_fb_helper.drm_fb_helper_dirty_blit_real(). >>> - * >>> - * FIXME: This should be replaced with a per-mapping is_iomem >>> - * flag (like ttm does), and then used everywhere in fbdev code. >>> - */ >>> - bool fbdev_use_iomem; >>> - >>> /** >>> * @quirk_addfb_prefer_xbgr_30bpp: >>> * >>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h > > I think the below should be split out as a prep patch. > >>> index 2e8bbecb5091..6ca0f304dda2 100644 >>> --- a/include/linux/dma-buf-map.h >>> +++ b/include/linux/dma-buf-map.h >>> @@ -32,6 +32,14 @@ >>> * accessing the buffer. Use the returned instance and the helper functions >>> * to access the buffer's memory in the correct way. >>> * >>> + * The type :c:type:`struct dma_buf_map ` and its helpers are >>> + * actually independent from the dma-buf infrastructure. When sharing buffers >>> + * among devices, drivers have to know the location of the memory to access >>> + * the buffers in a safe way. :c:type:`struct dma_buf_map ` >>> + * solves this problem for dma-buf and its users. 
If other drivers or >>> + * sub-systems require similar functionality, the type could be generalized >>> + * and moved to a more prominent header file. >>> + * >>> * Open-coding access to :c:type:`struct dma_buf_map ` is >>> * considered bad style. Rather then accessing its fields directly, use one >>> * of the provided helper functions, or implement your own. For example, >>> @@ -51,6 +59,14 @@ >>> * >>> * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); >>> * >>> + * Instances of struct dma_buf_map do not have to be cleaned up, but >>> + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings >>> + * always refer to system memory. >>> + * >>> + * .. code-block:: c >>> + * >>> + * dma_buf_map_clear(&map); >>> + * >>> * Test if a mapping is valid with either dma_buf_map_is_set() or >>> * dma_buf_map_is_null(). >>> * >>> @@ -73,17 +89,19 @@ >>> * if (dma_buf_map_is_equal(&sys_map, &io_map)) >>> * // always false >>> * >>> - * Instances of struct dma_buf_map do not have to be cleaned up, but >>> - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings >>> - * always refer to system memory. >>> + * A set up instance of struct dma_buf_map can be used to access or manipulate >>> + * the buffer memory. Depending on the location of the memory, the provided >>> + * helpers will pick the correct operations. Data can be copied into the memory >>> + * with dma_buf_map_memcpy_to(). The address can be manipulated with >>> + * dma_buf_map_incr(). >>> * >>> - * The type :c:type:`struct dma_buf_map ` and its helpers are >>> - * actually independent from the dma-buf infrastructure. When sharing buffers >>> - * among devices, drivers have to know the location of the memory to access >>> - * the buffers in a safe way. :c:type:`struct dma_buf_map ` >>> - * solves this problem for dma-buf and its users. If other drivers or >>> - * sub-systems require similar functionality, the type could be generalized >>> - * and moved to a more prominent header file. >>> + * .. code-block:: c >>> + * >>> + * const void *src = ...; // source buffer >>> + * size_t len = ...; // length of src >>> + * >>> + * dma_buf_map_memcpy_to(&map, src, len); >>> + * dma_buf_map_incr(&map, len); // go to first byte after the memcpy >>> */ >>> >>> /** >>> @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map) >>> } >>> } >>> >>> +/** >>> + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping >>> + * @dst: The dma-buf mapping structure >>> + * @src: The source buffer >>> + * @len: The number of byte in src >>> + * >>> + * Copies data into a dma-buf mapping. The source buffer is in system >>> + * memory. Depending on the buffer's location, the helper picks the correct >>> + * method of accessing the memory. >>> + */ >>> +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len) >>> +{ >>> + if (dst->is_iomem) >>> + memcpy_toio(dst->vaddr_iomem, src, len); >>> + else >>> + memcpy(dst->vaddr, src, len); >>> +} >>> + >>> +/** >>> + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping >>> + * @map: The dma-buf mapping structure >>> + * @incr: The number of bytes to increment >>> + * >>> + * Increments the address stored in a dma-buf mapping. Depending on the >>> + * buffer's location, the correct value will be updated. 
>>> + */ >>> +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr) >>> +{ >>> + if (map->is_iomem) >>> + map->vaddr_iomem += incr; >>> + else >>> + map->vaddr += incr; >>> +} >>> + >>> #endif /* __DMA_BUF_MAP_H__ */ >>> -- >>> 2.28.0 > > Aside from the details I think looks all reasonable. > -Daniel > -- Thomas Zimmermann Graphics Driver Developer SUSE Software Solutions Germany GmbH Maxfeldstr. 5, 90409 N?rnberg, Germany (HRB 36809, AG N?rnberg) Gesch?ftsf?hrer: Felix Imend?rffer -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 516 bytes Desc: OpenPGP digital signature URL: From daniel at ffwll.ch Thu Oct 8 09:35:35 2020 From: daniel at ffwll.ch (Daniel Vetter) Date: Thu, 8 Oct 2020 11:35:35 +0200 Subject: [Spice-devel] [PATCH v3 6/7] drm/fb_helper: Support framebuffers in I/O memory In-Reply-To: References: <20200929151437.19717-1-tzimmermann@suse.de> <20200929151437.19717-7-tzimmermann@suse.de> <20201002180500.GM438822@phenom.ffwll.local> Message-ID: On Thu, Oct 8, 2020 at 11:25 AM Thomas Zimmermann wrote: > > Hi > > Am 02.10.20 um 20:44 schrieb Daniel Vetter: > > On Fri, Oct 2, 2020 at 8:05 PM Daniel Vetter wrote: > >> > >> On Tue, Sep 29, 2020 at 05:14:36PM +0200, Thomas Zimmermann wrote: > >>> At least sparc64 requires I/O-specific access to framebuffers. This > >>> patch updates the fbdev console accordingly. > >>> > >>> For drivers with direct access to the framebuffer memory, the callback > >>> functions in struct fb_ops test for the type of memory and call the rsp > >>> fb_sys_ of fb_cfb_ functions. > >>> > >>> For drivers that employ a shadow buffer, fbdev's blit function retrieves > >>> the framebuffer address as struct dma_buf_map, and uses dma_buf_map > >>> interfaces to access the buffer. > >>> > >>> The bochs driver on sparc64 uses a workaround to flag the framebuffer as > >>> I/O memory and avoid a HW exception. With the introduction of struct > >>> dma_buf_map, this is not required any longer. The patch removes the rsp > >>> code from both, bochs and fbdev. > >>> > >>> Signed-off-by: Thomas Zimmermann > > > > Argh, I accidentally hit send before finishing this ... 
> > > >>> --- > >>> drivers/gpu/drm/bochs/bochs_kms.c | 1 - > >>> drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++-- > >>> include/drm/drm_mode_config.h | 12 -- > >>> include/linux/dma-buf-map.h | 72 ++++++++-- > >>> 4 files changed, 265 insertions(+), 37 deletions(-) > >>> > >>> diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c > >>> index 13d0d04c4457..853081d186d5 100644 > >>> --- a/drivers/gpu/drm/bochs/bochs_kms.c > >>> +++ b/drivers/gpu/drm/bochs/bochs_kms.c > >>> @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs) > >>> bochs->dev->mode_config.preferred_depth = 24; > >>> bochs->dev->mode_config.prefer_shadow = 0; > >>> bochs->dev->mode_config.prefer_shadow_fbdev = 1; > >>> - bochs->dev->mode_config.fbdev_use_iomem = true; > >>> bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true; > >>> > >>> bochs->dev->mode_config.funcs = &bochs_mode_funcs; > >>> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c > >>> index 343a292f2c7c..f345a314a437 100644 > >>> --- a/drivers/gpu/drm/drm_fb_helper.c > >>> +++ b/drivers/gpu/drm/drm_fb_helper.c > >>> @@ -388,24 +388,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work) > >>> } > >>> > >>> static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper, > >>> - struct drm_clip_rect *clip) > >>> + struct drm_clip_rect *clip, > >>> + struct dma_buf_map *dst) > >>> { > >>> struct drm_framebuffer *fb = fb_helper->fb; > >>> unsigned int cpp = fb->format->cpp[0]; > >>> size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp; > >>> void *src = fb_helper->fbdev->screen_buffer + offset; > >>> - void *dst = fb_helper->buffer->map.vaddr + offset; > >>> size_t len = (clip->x2 - clip->x1) * cpp; > >>> unsigned int y; > >>> > >>> - for (y = clip->y1; y < clip->y2; y++) { > >>> - if (!fb_helper->dev->mode_config.fbdev_use_iomem) > >>> - memcpy(dst, src, len); > >>> - else > >>> - memcpy_toio((void __iomem *)dst, src, len); > >>> + dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */ > >>> > >>> + for (y = clip->y1; y < clip->y2; y++) { > >>> + dma_buf_map_memcpy_to(dst, src, len); > >>> + dma_buf_map_incr(dst, fb->pitches[0]); > >>> src += fb->pitches[0]; > >>> - dst += fb->pitches[0]; > >>> } > >>> } > >>> > >>> @@ -433,8 +431,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work) > >>> ret = drm_client_buffer_vmap(helper->buffer, &map); > >>> if (ret) > >>> return; > >>> - drm_fb_helper_dirty_blit_real(helper, &clip_copy); > >>> + drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map); > >>> } > >>> + > >>> if (helper->fb->funcs->dirty) > >>> helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, > >>> &clip_copy, 1); > >>> @@ -771,6 +770,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info, > >>> } > >>> EXPORT_SYMBOL(drm_fb_helper_sys_imageblit); > >>> > >>> +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf, > >>> + size_t count, loff_t *ppos) > >>> +{ > >>> + unsigned long p = *ppos; > >>> + u8 *dst; > >>> + u8 __iomem *src; > >>> + int c, err = 0; > >>> + unsigned long total_size; > >>> + unsigned long alloc_size; > >>> + ssize_t ret = 0; > >>> + > >>> + if (info->state != FBINFO_STATE_RUNNING) > >>> + return -EPERM; > >>> + > >>> + total_size = info->screen_size; > >>> + > >>> + if (total_size == 0) > >>> + total_size = info->fix.smem_len; > >>> + > >>> + if (p >= total_size) > >>> + return 0; > >>> + > >>> + if (count >= total_size) > >>> + count = 
total_size; > >>> + > >>> + if (count + p > total_size) > >>> + count = total_size - p; > >>> + > >>> + src = (u8 __iomem *)(info->screen_base + p); > >>> + > >>> + alloc_size = min(count, PAGE_SIZE); > >>> + > >>> + dst = kmalloc(alloc_size, GFP_KERNEL); > >>> + if (!dst) > >>> + return -ENOMEM; > >>> + > >>> + while (count) { > >>> + c = min(count, alloc_size); > >>> + > >>> + memcpy_fromio(dst, src, c); > >>> + if (copy_to_user(buf, dst, c)) { > >>> + err = -EFAULT; > >>> + break; > >>> + } > >>> + > >>> + src += c; > >>> + *ppos += c; > >>> + buf += c; > >>> + ret += c; > >>> + count -= c; > >>> + } > >>> + > >>> + kfree(dst); > >>> + > >>> + if (err) > >>> + return err; > >>> + > >>> + return ret; > >>> +} > >>> + > >>> +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf, > >>> + size_t count, loff_t *ppos) > >>> +{ > >>> + unsigned long p = *ppos; > >>> + u8 *src; > >>> + u8 __iomem *dst; > >>> + int c, err = 0; > >>> + unsigned long total_size; > >>> + unsigned long alloc_size; > >>> + ssize_t ret = 0; > >>> + > >>> + if (info->state != FBINFO_STATE_RUNNING) > >>> + return -EPERM; > >>> + > >>> + total_size = info->screen_size; > >>> + > >>> + if (total_size == 0) > >>> + total_size = info->fix.smem_len; > >>> + > >>> + if (p > total_size) > >>> + return -EFBIG; > >>> + > >>> + if (count > total_size) { > >>> + err = -EFBIG; > >>> + count = total_size; > >>> + } > >>> + > >>> + if (count + p > total_size) { > >>> + /* > >>> + * The framebuffer is too small. We do the > >>> + * copy operation, but return an error code > >>> + * afterwards. Taken from fbdev. > >>> + */ > >>> + if (!err) > >>> + err = -ENOSPC; > >>> + count = total_size - p; > >>> + } > >>> + > >>> + alloc_size = min(count, PAGE_SIZE); > >>> + > >>> + src = kmalloc(alloc_size, GFP_KERNEL); > >>> + if (!src) > >>> + return -ENOMEM; > >>> + > >>> + dst = (u8 __iomem *)(info->screen_base + p); > >>> + > >>> + while (count) { > >>> + c = min(count, alloc_size); > >>> + > >>> + if (copy_from_user(src, buf, c)) { > >>> + err = -EFAULT; > >>> + break; > >>> + } > >>> + memcpy_toio(dst, src, c); > >>> + > >>> + dst += c; > >>> + *ppos += c; > >>> + buf += c; > >>> + ret += c; > >>> + count -= c; > >>> + } > >>> + > >>> + kfree(src); > >>> + > >>> + if (err) > >>> + return err; > >>> + > >>> + return ret; > >>> +} > > > > The duplication is a bit annoying here, but can't really be avoided. I > > do think though we should maybe go a bit further, and have drm > > implementations of this stuff instead of following fbdev concepts as > > closely as possible. So here roughly: > > > > - if we have a shadow fb, construct a dma_buf_map for that, otherwise > > take the one from the driver > > - have a full generic implementation using that one directly (and > > checking size limits against the underlying gem buffer) > > - ideally also with some testcases in the fbdev testcase we have (very > > bare-bones right now) in igt > > > > But I'm not really sure whether that's worth all the trouble. 
It's > > just that the fbdev-ness here in this copied code sticks out a lot :-) > > > >>> + > >>> /** > >>> * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect > >>> * @info: fbdev registered by the helper > >>> @@ -2043,6 +2172,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma) > >>> return -ENODEV; > >>> } > >>> > >>> +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf, > >>> + size_t count, loff_t *ppos) > >>> +{ > >>> + struct drm_fb_helper *fb_helper = info->par; > >>> + struct drm_client_buffer *buffer = fb_helper->buffer; > >>> + > >>> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > >>> + return drm_fb_helper_sys_read(info, buf, count, ppos); > >>> + else > >>> + return drm_fb_helper_cfb_read(info, buf, count, ppos); > >>> +} > >>> + > >>> +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf, > >>> + size_t count, loff_t *ppos) > >>> +{ > >>> + struct drm_fb_helper *fb_helper = info->par; > >>> + struct drm_client_buffer *buffer = fb_helper->buffer; > >>> + > >>> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > >>> + return drm_fb_helper_sys_write(info, buf, count, ppos); > >>> + else > >>> + return drm_fb_helper_cfb_write(info, buf, count, ppos); > >>> +} > >>> + > >>> +static void drm_fbdev_fb_fillrect(struct fb_info *info, > >>> + const struct fb_fillrect *rect) > >>> +{ > >>> + struct drm_fb_helper *fb_helper = info->par; > >>> + struct drm_client_buffer *buffer = fb_helper->buffer; > >>> + > >>> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > >>> + drm_fb_helper_sys_fillrect(info, rect); > >>> + else > >>> + drm_fb_helper_cfb_fillrect(info, rect); > >>> +} > >>> + > >>> +static void drm_fbdev_fb_copyarea(struct fb_info *info, > >>> + const struct fb_copyarea *area) > >>> +{ > >>> + struct drm_fb_helper *fb_helper = info->par; > >>> + struct drm_client_buffer *buffer = fb_helper->buffer; > >>> + > >>> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > >>> + drm_fb_helper_sys_copyarea(info, area); > >>> + else > >>> + drm_fb_helper_cfb_copyarea(info, area); > >>> +} > >>> + > >>> +static void drm_fbdev_fb_imageblit(struct fb_info *info, > >>> + const struct fb_image *image) > >>> +{ > >>> + struct drm_fb_helper *fb_helper = info->par; > >>> + struct drm_client_buffer *buffer = fb_helper->buffer; > >>> + > >>> + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > >>> + drm_fb_helper_sys_imageblit(info, image); > >>> + else > >>> + drm_fb_helper_cfb_imageblit(info, image); > >>> +} > > > > I think a todo.rst entry to make the new generic functions the real ones, and > > drivers not using the sys/cfb ones anymore would be a good addition. > > It's kinda covered by the move to the generic helpers, but maybe we > > can convert a few more drivers over to these here. Would also allow us > > to maybe flatten the code a bit and use more of the dma_buf_map stuff > > directly (instead of reusing crusty fbdev code written 20 years ago or > > so). > > I wouldn't mind doing our own thing, but dma_buf_map is not a good fit > here. Mostly because the _cfb_ code first does a reads from I/O to > system memory, and then copies to userspace. The _sys_ functions copy > directly to userspace. (Same for write, but in the other direction.) > > There's some code at the top and bottom of these functions that could be > shared. 
If we want to share the copy loops, we'd probably end up with > additional memcpys in the _sys_ case. Yeah I noticed that. I'd just ignore it. If someone is using a) fbdev and b) read/write on it, they don't care much about performance. We can do another copy or two, no problem. But the duplication is also ok I guess, just a bit less pretty. -Daniel > Best regards > Thomas > > > > >>> + > >>> static const struct fb_ops drm_fbdev_fb_ops = { > >>> .owner = THIS_MODULE, > >>> DRM_FB_HELPER_DEFAULT_OPS, > >>> @@ -2050,11 +2239,11 @@ static const struct fb_ops drm_fbdev_fb_ops = { > >>> .fb_release = drm_fbdev_fb_release, > >>> .fb_destroy = drm_fbdev_fb_destroy, > >>> .fb_mmap = drm_fbdev_fb_mmap, > >>> - .fb_read = drm_fb_helper_sys_read, > >>> - .fb_write = drm_fb_helper_sys_write, > >>> - .fb_fillrect = drm_fb_helper_sys_fillrect, > >>> - .fb_copyarea = drm_fb_helper_sys_copyarea, > >>> - .fb_imageblit = drm_fb_helper_sys_imageblit, > >>> + .fb_read = drm_fbdev_fb_read, > >>> + .fb_write = drm_fbdev_fb_write, > >>> + .fb_fillrect = drm_fbdev_fb_fillrect, > >>> + .fb_copyarea = drm_fbdev_fb_copyarea, > >>> + .fb_imageblit = drm_fbdev_fb_imageblit, > >>> }; > >>> > >>> static struct fb_deferred_io drm_fbdev_defio = { > >>> diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h > >>> index 5ffbb4ed5b35..ab424ddd7665 100644 > >>> --- a/include/drm/drm_mode_config.h > >>> +++ b/include/drm/drm_mode_config.h > >>> @@ -877,18 +877,6 @@ struct drm_mode_config { > >>> */ > >>> bool prefer_shadow_fbdev; > >>> > >>> - /** > >>> - * @fbdev_use_iomem: > >>> - * > >>> - * Set to true if framebuffer reside in iomem. > >>> - * When set to true memcpy_toio() is used when copying the framebuffer in > >>> - * drm_fb_helper.drm_fb_helper_dirty_blit_real(). > >>> - * > >>> - * FIXME: This should be replaced with a per-mapping is_iomem > >>> - * flag (like ttm does), and then used everywhere in fbdev code. > >>> - */ > >>> - bool fbdev_use_iomem; > >>> - > >>> /** > >>> * @quirk_addfb_prefer_xbgr_30bpp: > >>> * > >>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h > > > > I think the below should be split out as a prep patch. > > > >>> index 2e8bbecb5091..6ca0f304dda2 100644 > >>> --- a/include/linux/dma-buf-map.h > >>> +++ b/include/linux/dma-buf-map.h > >>> @@ -32,6 +32,14 @@ > >>> * accessing the buffer. Use the returned instance and the helper functions > >>> * to access the buffer's memory in the correct way. > >>> * > >>> + * The type :c:type:`struct dma_buf_map ` and its helpers are > >>> + * actually independent from the dma-buf infrastructure. When sharing buffers > >>> + * among devices, drivers have to know the location of the memory to access > >>> + * the buffers in a safe way. :c:type:`struct dma_buf_map ` > >>> + * solves this problem for dma-buf and its users. If other drivers or > >>> + * sub-systems require similar functionality, the type could be generalized > >>> + * and moved to a more prominent header file. > >>> + * > >>> * Open-coding access to :c:type:`struct dma_buf_map ` is > >>> * considered bad style. Rather then accessing its fields directly, use one > >>> * of the provided helper functions, or implement your own. For example, > >>> @@ -51,6 +59,14 @@ > >>> * > >>> * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); > >>> * > >>> + * Instances of struct dma_buf_map do not have to be cleaned up, but > >>> + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings > >>> + * always refer to system memory. > >>> + * > >>> + * .. 
code-block:: c > >>> + * > >>> + * dma_buf_map_clear(&map); > >>> + * > >>> * Test if a mapping is valid with either dma_buf_map_is_set() or > >>> * dma_buf_map_is_null(). > >>> * > >>> @@ -73,17 +89,19 @@ > >>> * if (dma_buf_map_is_equal(&sys_map, &io_map)) > >>> * // always false > >>> * > >>> - * Instances of struct dma_buf_map do not have to be cleaned up, but > >>> - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings > >>> - * always refer to system memory. > >>> + * A set up instance of struct dma_buf_map can be used to access or manipulate > >>> + * the buffer memory. Depending on the location of the memory, the provided > >>> + * helpers will pick the correct operations. Data can be copied into the memory > >>> + * with dma_buf_map_memcpy_to(). The address can be manipulated with > >>> + * dma_buf_map_incr(). > >>> * > >>> - * The type :c:type:`struct dma_buf_map ` and its helpers are > >>> - * actually independent from the dma-buf infrastructure. When sharing buffers > >>> - * among devices, drivers have to know the location of the memory to access > >>> - * the buffers in a safe way. :c:type:`struct dma_buf_map ` > >>> - * solves this problem for dma-buf and its users. If other drivers or > >>> - * sub-systems require similar functionality, the type could be generalized > >>> - * and moved to a more prominent header file. > >>> + * .. code-block:: c > >>> + * > >>> + * const void *src = ...; // source buffer > >>> + * size_t len = ...; // length of src > >>> + * > >>> + * dma_buf_map_memcpy_to(&map, src, len); > >>> + * dma_buf_map_incr(&map, len); // go to first byte after the memcpy > >>> */ > >>> > >>> /** > >>> @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map) > >>> } > >>> } > >>> > >>> +/** > >>> + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping > >>> + * @dst: The dma-buf mapping structure > >>> + * @src: The source buffer > >>> + * @len: The number of byte in src > >>> + * > >>> + * Copies data into a dma-buf mapping. The source buffer is in system > >>> + * memory. Depending on the buffer's location, the helper picks the correct > >>> + * method of accessing the memory. > >>> + */ > >>> +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len) > >>> +{ > >>> + if (dst->is_iomem) > >>> + memcpy_toio(dst->vaddr_iomem, src, len); > >>> + else > >>> + memcpy(dst->vaddr, src, len); > >>> +} > >>> + > >>> +/** > >>> + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping > >>> + * @map: The dma-buf mapping structure > >>> + * @incr: The number of bytes to increment > >>> + * > >>> + * Increments the address stored in a dma-buf mapping. Depending on the > >>> + * buffer's location, the correct value will be updated. > >>> + */ > >>> +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr) > >>> +{ > >>> + if (map->is_iomem) > >>> + map->vaddr_iomem += incr; > >>> + else > >>> + map->vaddr += incr; > >>> +} > >>> + > >>> #endif /* __DMA_BUF_MAP_H__ */ > >>> -- > >>> 2.28.0 > > > > Aside from the details I think looks all reasonable. > > -Daniel > > > > -- > Thomas Zimmermann > Graphics Driver Developer > SUSE Software Solutions Germany GmbH > Maxfeldstr. 
5, 90409 N?rnberg, Germany > (HRB 36809, AG N?rnberg) > Gesch?ftsf?hrer: Felix Imend?rffer > -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch From tzimmermann at suse.de Thu Oct 15 12:37:57 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Thu, 15 Oct 2020 14:37:57 +0200 Subject: [Spice-devel] [PATCH v4 01/10] drm/vram-helper: Remove invariant parameters from internal kmap function In-Reply-To: <20201015123806.32416-1-tzimmermann@suse.de> References: <20201015123806.32416-1-tzimmermann@suse.de> Message-ID: <20201015123806.32416-2-tzimmermann@suse.de> The parameters map and is_iomem are always of the same value. Removed them to prepares the function for conversion to struct dma_buf_map. v4: * don't check for !kmap->virtual; will always be false Signed-off-by: Thomas Zimmermann Reviewed-by: Daniel Vetter --- drivers/gpu/drm/drm_gem_vram_helper.c | 18 ++++-------------- 1 file changed, 4 insertions(+), 14 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c index 3213429f8444..2d5ed30518f1 100644 --- a/drivers/gpu/drm/drm_gem_vram_helper.c +++ b/drivers/gpu/drm/drm_gem_vram_helper.c @@ -382,32 +382,22 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo) } EXPORT_SYMBOL(drm_gem_vram_unpin); -static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo, - bool map, bool *is_iomem) +static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo) { int ret; struct ttm_bo_kmap_obj *kmap = &gbo->kmap; + bool is_iomem; if (gbo->kmap_use_count > 0) goto out; - if (kmap->virtual || !map) - goto out; - ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap); if (ret) return ERR_PTR(ret); out: - if (!kmap->virtual) { - if (is_iomem) - *is_iomem = false; - return NULL; /* not mapped; don't increment ref */ - } ++gbo->kmap_use_count; - if (is_iomem) - return ttm_kmap_obj_virtual(kmap, is_iomem); - return kmap->virtual; + return ttm_kmap_obj_virtual(kmap, &is_iomem); } static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo) @@ -452,7 +442,7 @@ void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo) ret = drm_gem_vram_pin_locked(gbo, 0); if (ret) goto err_ttm_bo_unreserve; - base = drm_gem_vram_kmap_locked(gbo, true, NULL); + base = drm_gem_vram_kmap_locked(gbo); if (IS_ERR(base)) { ret = PTR_ERR(base); goto err_drm_gem_vram_unpin_locked; -- 2.28.0 From tzimmermann at suse.de Thu Oct 15 12:37:59 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Thu, 15 Oct 2020 14:37:59 +0200 Subject: [Spice-devel] [PATCH v4 03/10] drm/etnaviv: Remove empty etnaviv_gem_prime_vunmap() In-Reply-To: <20201015123806.32416-1-tzimmermann@suse.de> References: <20201015123806.32416-1-tzimmermann@suse.de> Message-ID: <20201015123806.32416-4-tzimmermann@suse.de> The function etnaviv_gem_prime_vunmap() is empty. Remove it before changing the interface to use struct drm_buf_map. 
Signed-off-by: Thomas Zimmermann --- drivers/gpu/drm/etnaviv/etnaviv_drv.h | 1 - drivers/gpu/drm/etnaviv/etnaviv_gem.c | 1 - drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 5 ----- 3 files changed, 7 deletions(-) diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h index 914f0867ff71..9682c26d89bb 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h @@ -52,7 +52,6 @@ int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma); int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset); struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj); void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj); -void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); int etnaviv_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev, diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c index 67d9a2b9ea6a..bbd235473645 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c @@ -571,7 +571,6 @@ static const struct drm_gem_object_funcs etnaviv_gem_object_funcs = { .unpin = etnaviv_gem_prime_unpin, .get_sg_table = etnaviv_gem_prime_get_sg_table, .vmap = etnaviv_gem_prime_vmap, - .vunmap = etnaviv_gem_prime_vunmap, .vm_ops = &vm_ops, }; diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c index 135fbff6fecf..a6d9932a32ae 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c @@ -27,11 +27,6 @@ void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj) return etnaviv_gem_vmap(obj); } -void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) -{ - /* TODO msm_gem_vunmap() */ -} - int etnaviv_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma) { -- 2.28.0 From tzimmermann at suse.de Thu Oct 15 12:37:58 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Thu, 15 Oct 2020 14:37:58 +0200 Subject: [Spice-devel] [PATCH v4 02/10] drm/cma-helper: Remove empty drm_gem_cma_prime_vunmap() In-Reply-To: <20201015123806.32416-1-tzimmermann@suse.de> References: <20201015123806.32416-1-tzimmermann@suse.de> Message-ID: <20201015123806.32416-3-tzimmermann@suse.de> The function drm_gem_cma_prime_vunmap() is empty. Remove it before changing the interface to use struct drm_buf_map. Signed-off-by: Thomas Zimmermann --- drivers/gpu/drm/drm_gem_cma_helper.c | 17 ----------------- drivers/gpu/drm/vc4/vc4_bo.c | 1 - include/drm/drm_gem_cma_helper.h | 1 - 3 files changed, 19 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c index 2165633c9b9e..d527485ea0b7 100644 --- a/drivers/gpu/drm/drm_gem_cma_helper.c +++ b/drivers/gpu/drm/drm_gem_cma_helper.c @@ -537,23 +537,6 @@ void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj) } EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap); -/** - * drm_gem_cma_prime_vunmap - unmap a CMA GEM object from the kernel's virtual - * address space - * @obj: GEM object - * @vaddr: kernel virtual address where the CMA GEM object was mapped - * - * This function removes a buffer exported via DRM PRIME from the kernel's - * virtual address space. This is a no-op because CMA buffers cannot be - * unmapped from kernel space. Drivers using the CMA helpers should set this - * as their &drm_gem_object_funcs.vunmap callback. 
- */ -void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr) -{ - /* Nothing to do */ -} -EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vunmap); - static const struct drm_gem_object_funcs drm_gem_cma_default_funcs = { .free = drm_gem_cma_free_object, .print_info = drm_gem_cma_print_info, diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c index f432278173cd..557f0d1e6437 100644 --- a/drivers/gpu/drm/vc4/vc4_bo.c +++ b/drivers/gpu/drm/vc4/vc4_bo.c @@ -387,7 +387,6 @@ static const struct drm_gem_object_funcs vc4_gem_object_funcs = { .export = vc4_prime_export, .get_sg_table = drm_gem_cma_prime_get_sg_table, .vmap = vc4_prime_vmap, - .vunmap = drm_gem_cma_prime_vunmap, .vm_ops = &vc4_vm_ops, }; diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h index 2bfa2502607a..a064b0d1c480 100644 --- a/include/drm/drm_gem_cma_helper.h +++ b/include/drm/drm_gem_cma_helper.h @@ -104,7 +104,6 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev, int drm_gem_cma_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj); -void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr); struct drm_gem_object * drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size); -- 2.28.0 From tzimmermann at suse.de Thu Oct 15 12:38:00 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Thu, 15 Oct 2020 14:38:00 +0200 Subject: [Spice-devel] [PATCH v4 04/10] drm/exynos: Remove empty exynos_drm_gem_prime_{vmap, vunmap}() In-Reply-To: <20201015123806.32416-1-tzimmermann@suse.de> References: <20201015123806.32416-1-tzimmermann@suse.de> Message-ID: <20201015123806.32416-5-tzimmermann@suse.de> The functions exynos_drm_gem_prime_{vmap,vunmap}() are empty. Remove them before changing the interface to use struct drm_buf_map. As a side effect of removing drm_gem_prime_vmap(), the error code changes from ENOMEM to EOPNOTSUPP. 
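For context: the core returns -EOPNOTSUPP when a GEM object has no ->vmap callback at all, while a callback that yields no mapping is turned into -ENOMEM, so dropping the empty exynos callback moves it from the second case to the first. The reworked drm_gem_vmap() from patch 06/10 of this series (quoted further below) reads roughly:

/* drm_gem_vmap() as reworked later in this series; the pre-existing core
 * behaves the same way when the ->vmap callback is missing. */
void *drm_gem_vmap(struct drm_gem_object *obj)
{
	struct dma_buf_map map;
	int ret;

	if (!obj->funcs->vmap)
		return ERR_PTR(-EOPNOTSUPP);

	ret = obj->funcs->vmap(obj, &map);
	if (ret)
		return ERR_PTR(ret);
	else if (dma_buf_map_is_null(&map))
		return ERR_PTR(-ENOMEM);

	return map.vaddr;
}
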
Signed-off-by: Thomas Zimmermann --- drivers/gpu/drm/exynos/exynos_drm_gem.c | 12 ------------ drivers/gpu/drm/exynos/exynos_drm_gem.h | 2 -- 2 files changed, 14 deletions(-) diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c index e7a6eb96f692..13a35623ac04 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_gem.c +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c @@ -137,8 +137,6 @@ static const struct vm_operations_struct exynos_drm_gem_vm_ops = { static const struct drm_gem_object_funcs exynos_drm_gem_object_funcs = { .free = exynos_drm_gem_free_object, .get_sg_table = exynos_drm_gem_prime_get_sg_table, - .vmap = exynos_drm_gem_prime_vmap, - .vunmap = exynos_drm_gem_prime_vunmap, .vm_ops = &exynos_drm_gem_vm_ops, }; @@ -471,16 +469,6 @@ exynos_drm_gem_prime_import_sg_table(struct drm_device *dev, return &exynos_gem->base; } -void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj) -{ - return NULL; -} - -void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) -{ - /* Nothing to do */ -} - int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma) { diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.h b/drivers/gpu/drm/exynos/exynos_drm_gem.h index 74e926abeff0..a23272fb96fb 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_gem.h +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.h @@ -107,8 +107,6 @@ struct drm_gem_object * exynos_drm_gem_prime_import_sg_table(struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sgt); -void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj); -void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); -- 2.28.0 From tzimmermann at suse.de Thu Oct 15 12:38:01 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Thu, 15 Oct 2020 14:38:01 +0200 Subject: [Spice-devel] [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers In-Reply-To: <20201015123806.32416-1-tzimmermann@suse.de> References: <20201015123806.32416-1-tzimmermann@suse.de> Message-ID: <20201015123806.32416-6-tzimmermann@suse.de> The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel address space. The mapping's address is returned as struct dma_buf_map. Each function is a simplified version of TTM's existing kmap code. Both functions respect the memory's location ani/or writecombine flags. On top TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(), two helpers that convert a GEM object into the TTM BO and forward the call to TTM's vmap/vunmap. These helpers can be dropped into the rsp GEM object callbacks. v4: * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel, Christian) Signed-off-by: Thomas Zimmermann --- drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++ drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++ include/drm/drm_gem_ttm_helper.h | 6 +++ include/drm/ttm/ttm_bo_api.h | 28 +++++++++++ include/linux/dma-buf-map.h | 20 ++++++++ 5 files changed, 164 insertions(+) diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c index 0e4fb9ba43ad..db4c14d78a30 100644 --- a/drivers/gpu/drm/drm_gem_ttm_helper.c +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent, } EXPORT_SYMBOL(drm_gem_ttm_print_info); +/** + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object + * @gem: GEM object. 
+ * @map: [out] returns the dma-buf mapping. + * + * Maps a GEM object with ttm_bo_vmap(). This function can be used as + * &drm_gem_object_funcs.vmap callback. + * + * Returns: + * 0 on success, or a negative errno code otherwise. + */ +int drm_gem_ttm_vmap(struct drm_gem_object *gem, + struct dma_buf_map *map) +{ + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); + + return ttm_bo_vmap(bo, map); + +} +EXPORT_SYMBOL(drm_gem_ttm_vmap); + +/** + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object + * @gem: GEM object. + * @map: dma-buf mapping. + * + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as + * &drm_gem_object_funcs.vmap callback. + */ +void drm_gem_ttm_vunmap(struct drm_gem_object *gem, + struct dma_buf_map *map) +{ + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); + + ttm_bo_vunmap(bo, map); +} +EXPORT_SYMBOL(drm_gem_ttm_vunmap); + /** * drm_gem_ttm_mmap() - mmap &ttm_buffer_object * @gem: GEM object. diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c index bdee4df1f3f2..80c42c774c7d 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c @@ -32,6 +32,7 @@ #include #include #include +#include #include #include #include @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map) } EXPORT_SYMBOL(ttm_bo_kunmap); +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map) +{ + struct ttm_resource *mem = &bo->mem; + int ret; + + ret = ttm_mem_io_reserve(bo->bdev, mem); + if (ret) + return ret; + + if (mem->bus.is_iomem) { + void __iomem *vaddr_iomem; + unsigned long size = bo->num_pages << PAGE_SHIFT; + + if (mem->bus.addr) + vaddr_iomem = (void *)(((u8 *)mem->bus.addr)); + else if (mem->placement & TTM_PL_FLAG_WC) + vaddr_iomem = ioremap_wc(mem->bus.offset, size); + else + vaddr_iomem = ioremap(mem->bus.offset, size); + + if (!vaddr_iomem) + return -ENOMEM; + + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem); + + } else { + struct ttm_operation_ctx ctx = { + .interruptible = false, + .no_wait_gpu = false + }; + struct ttm_tt *ttm = bo->ttm; + pgprot_t prot; + void *vaddr; + + BUG_ON(!ttm); + + ret = ttm_tt_populate(bo->bdev, ttm, &ctx); + if (ret) + return ret; + + /* + * We need to use vmap to get the desired page protection + * or to make the buffer object look contiguous. 
+ */ + prot = ttm_io_prot(mem->placement, PAGE_KERNEL); + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot); + if (!vaddr) + return -ENOMEM; + + dma_buf_map_set_vaddr(map, vaddr); + } + + return 0; +} +EXPORT_SYMBOL(ttm_bo_vmap); + +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map) +{ + if (dma_buf_map_is_null(map)) + return; + + if (map->is_iomem) + iounmap(map->vaddr_iomem); + else + vunmap(map->vaddr); + dma_buf_map_clear(map); + + ttm_mem_io_free(bo->bdev, &bo->mem); +} +EXPORT_SYMBOL(ttm_bo_vunmap); + static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo, bool dst_use_tt) { diff --git a/include/drm/drm_gem_ttm_helper.h b/include/drm/drm_gem_ttm_helper.h index 118cef76f84f..7c6d874910b8 100644 --- a/include/drm/drm_gem_ttm_helper.h +++ b/include/drm/drm_gem_ttm_helper.h @@ -10,11 +10,17 @@ #include #include +struct dma_buf_map; + #define drm_gem_ttm_of_gem(gem_obj) \ container_of(gem_obj, struct ttm_buffer_object, base) void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent, const struct drm_gem_object *gem); +int drm_gem_ttm_vmap(struct drm_gem_object *gem, + struct dma_buf_map *map); +void drm_gem_ttm_vunmap(struct drm_gem_object *gem, + struct dma_buf_map *map); int drm_gem_ttm_mmap(struct drm_gem_object *gem, struct vm_area_struct *vma); diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h index 37102e45e496..2c59a785374c 100644 --- a/include/drm/ttm/ttm_bo_api.h +++ b/include/drm/ttm/ttm_bo_api.h @@ -48,6 +48,8 @@ struct ttm_bo_global; struct ttm_bo_device; +struct dma_buf_map; + struct drm_mm_node; struct ttm_placement; @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, unsigned long start_page, */ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map); +/** + * ttm_bo_vmap + * + * @bo: The buffer object. + * @map: pointer to a struct dma_buf_map representing the map. + * + * Sets up a kernel virtual mapping, using ioremap or vmap to the + * data in the buffer object. The parameter @map returns the virtual + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap(). + * + * Returns + * -ENOMEM: Out of memory. + * -EINVAL: Invalid range. + */ +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); + +/** + * ttm_bo_vunmap + * + * @bo: The buffer object. + * @map: Object describing the map to unmap. + * + * Unmaps a kernel map set up by ttm_bo_vmap(). + */ +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); + /** * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object. * diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h index fd1aba545fdf..2e8bbecb5091 100644 --- a/include/linux/dma-buf-map.h +++ b/include/linux/dma-buf-map.h @@ -45,6 +45,12 @@ * * dma_buf_map_set_vaddr(&map. 0xdeadbeaf); * + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem(). + * + * .. code-block:: c + * + * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); + * * Test if a mapping is valid with either dma_buf_map_is_set() or * dma_buf_map_is_null(). * @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr) map->is_iomem = false; } +/** + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an address in I/O memory + * @map: The dma-buf mapping structure + * @vaddr_iomem: An I/O-memory address + * + * Sets the address and the I/O-memory flag. 
+ */ +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map, + void __iomem *vaddr_iomem) +{ + map->vaddr_iomem = vaddr_iomem; + map->is_iomem = true; +} + /** * dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality * @lhs: The dma-buf mapping structure -- 2.28.0 From tzimmermann at suse.de Thu Oct 15 12:38:02 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Thu, 15 Oct 2020 14:38:02 +0200 Subject: [Spice-devel] [PATCH v4 06/10] drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends In-Reply-To: <20201015123806.32416-1-tzimmermann@suse.de> References: <20201015123806.32416-1-tzimmermann@suse.de> Message-ID: <20201015123806.32416-7-tzimmermann@suse.de> This patch replaces the vmap/vunmap's use of raw pointers in GEM object functions with instances of struct dma_buf_map. GEM backends are converted as well. For most of them, this simply changes the returned type. TTM-based drivers now return information about the location of the memory, either system or I/O memory. GEM VRAM helpers and qxl now use ttm_bo_vmap() et al. Amdgpu, nouveau and radeon use drm_gem_ttm_vmap() et al instead of implementing their own vmap callbacks. v4: * use ttm_bo_vmap(), drm_gem_ttm_vmap(), et al. (Daniel, Christian) * fix a trailing { in drm_gem_vmap() * remove several empty functions instead of converting them (Daniel) * comment uses of raw pointers with a TODO (Daniel) * TODO list: convert more helpers to use struct dma_buf_map Signed-off-by: Thomas Zimmermann --- Documentation/gpu/todo.rst | 18 ++++ drivers/gpu/drm/Kconfig | 2 + drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 36 ------- drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h | 2 - drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 5 +- drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 1 - drivers/gpu/drm/ast/ast_cursor.c | 27 +++-- drivers/gpu/drm/ast/ast_drv.h | 7 +- drivers/gpu/drm/drm_gem.c | 23 +++-- drivers/gpu/drm/drm_gem_cma_helper.c | 10 +- drivers/gpu/drm/drm_gem_shmem_helper.c | 48 +++++---- drivers/gpu/drm/drm_gem_vram_helper.c | 107 ++++++++++---------- drivers/gpu/drm/etnaviv/etnaviv_drv.h | 2 +- drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 9 +- drivers/gpu/drm/lima/lima_gem.c | 6 +- drivers/gpu/drm/lima/lima_sched.c | 11 +- drivers/gpu/drm/mgag200/mgag200_mode.c | 10 +- drivers/gpu/drm/nouveau/Kconfig | 1 + drivers/gpu/drm/nouveau/nouveau_bo.h | 2 - drivers/gpu/drm/nouveau/nouveau_gem.c | 6 +- drivers/gpu/drm/nouveau/nouveau_gem.h | 2 - drivers/gpu/drm/nouveau/nouveau_prime.c | 20 ---- drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 14 +-- drivers/gpu/drm/qxl/qxl_display.c | 11 +- drivers/gpu/drm/qxl/qxl_draw.c | 14 ++- drivers/gpu/drm/qxl/qxl_drv.h | 11 +- drivers/gpu/drm/qxl/qxl_object.c | 31 +++--- drivers/gpu/drm/qxl/qxl_object.h | 2 +- drivers/gpu/drm/qxl/qxl_prime.c | 12 +-- drivers/gpu/drm/radeon/radeon.h | 1 - drivers/gpu/drm/radeon/radeon_gem.c | 7 +- drivers/gpu/drm/radeon/radeon_prime.c | 20 ---- drivers/gpu/drm/rockchip/rockchip_drm_gem.c | 22 ++-- drivers/gpu/drm/rockchip/rockchip_drm_gem.h | 4 +- drivers/gpu/drm/tiny/cirrus.c | 10 +- drivers/gpu/drm/tiny/gm12u320.c | 10 +- drivers/gpu/drm/udl/udl_modeset.c | 8 +- drivers/gpu/drm/vboxvideo/vbox_mode.c | 11 +- drivers/gpu/drm/vc4/vc4_bo.c | 6 +- drivers/gpu/drm/vc4/vc4_drv.h | 2 +- drivers/gpu/drm/vgem/vgem_drv.c | 16 ++- drivers/gpu/drm/xen/xen_drm_front_gem.c | 18 ++-- drivers/gpu/drm/xen/xen_drm_front_gem.h | 6 +- include/drm/drm_gem.h | 5 +- include/drm/drm_gem_cma_helper.h | 2 +- include/drm/drm_gem_shmem_helper.h | 4 +- 
include/drm/drm_gem_vram_helper.h | 14 +-- 47 files changed, 321 insertions(+), 295 deletions(-) diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst index 700637e25ecd..7e6fc3c04add 100644 --- a/Documentation/gpu/todo.rst +++ b/Documentation/gpu/todo.rst @@ -446,6 +446,24 @@ Contact: Ville Syrj?l?, Daniel Vetter Level: Intermediate +Use struct dma_buf_map throughout codebase +------------------------------------------ + +Pointers to shared device memory are stored in struct dma_buf_map. Each +instance knows whether it refers to system or I/O memory. Most of the DRM-wide +interface have been converted to use struct dma_buf_map, but implementations +often still use raw pointers. + +The task is to use struct dma_buf_map where it makes sense. + +* Memory managers should use struct dma_buf_map for dma-buf-imported buffers. +* TTM might benefit from using struct dma_buf_map internally. +* Framebuffer copying and blitting helpers should operate on struct dma_buf_map. + +Contact: Thomas Zimmermann , Christian K?nig, Daniel Vetter + +Level: Intermediate + Core refactorings ================= diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig index 147d61b9674e..319839b87d37 100644 --- a/drivers/gpu/drm/Kconfig +++ b/drivers/gpu/drm/Kconfig @@ -239,6 +239,7 @@ config DRM_RADEON select FW_LOADER select DRM_KMS_HELPER select DRM_TTM + select DRM_TTM_HELPER select POWER_SUPPLY select HWMON select BACKLIGHT_CLASS_DEVICE @@ -259,6 +260,7 @@ config DRM_AMDGPU select DRM_KMS_HELPER select DRM_SCHED select DRM_TTM + select DRM_TTM_HELPER select POWER_SUPPLY select HWMON select BACKLIGHT_CLASS_DEVICE diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c index 5b465ab774d1..e5919efca870 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c @@ -41,42 +41,6 @@ #include #include -/** - * amdgpu_gem_prime_vmap - &dma_buf_ops.vmap implementation - * @obj: GEM BO - * - * Sets up an in-kernel virtual mapping of the BO's memory. - * - * Returns: - * The virtual address of the mapping or an error pointer. - */ -void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj) -{ - struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj); - int ret; - - ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, - &bo->dma_buf_vmap); - if (ret) - return ERR_PTR(ret); - - return bo->dma_buf_vmap.virtual; -} - -/** - * amdgpu_gem_prime_vunmap - &dma_buf_ops.vunmap implementation - * @obj: GEM BO - * @vaddr: Virtual address (unused) - * - * Tears down the in-kernel virtual mapping of the BO's memory. 
- */ -void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) -{ - struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj); - - ttm_bo_kunmap(&bo->dma_buf_vmap); -} - /** * amdgpu_gem_prime_mmap - &drm_driver.gem_prime_mmap implementation * @obj: GEM BO diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h index 2c5c84a06bb9..39b5b9616fd8 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h @@ -31,8 +31,6 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev, struct dma_buf *dma_buf); bool amdgpu_dmabuf_is_xgmi_accessible(struct amdgpu_device *adev, struct amdgpu_bo *bo); -void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj); -void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); int amdgpu_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c index be08a63ef58c..576659827e74 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c @@ -33,6 +33,7 @@ #include #include +#include #include "amdgpu.h" #include "amdgpu_display.h" @@ -220,8 +221,8 @@ static const struct drm_gem_object_funcs amdgpu_gem_object_funcs = { .open = amdgpu_gem_object_open, .close = amdgpu_gem_object_close, .export = amdgpu_gem_prime_export, - .vmap = amdgpu_gem_prime_vmap, - .vunmap = amdgpu_gem_prime_vunmap, + .vmap = drm_gem_ttm_vmap, + .vunmap = drm_gem_ttm_vunmap, }; /* diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h index 132e5f955180..01296ef0d673 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h @@ -100,7 +100,6 @@ struct amdgpu_bo { struct amdgpu_bo *parent; struct amdgpu_bo *shadow; - struct ttm_bo_kmap_obj dma_buf_vmap; struct amdgpu_mn *mn; diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c index e0f4613918ad..742d43a7edf4 100644 --- a/drivers/gpu/drm/ast/ast_cursor.c +++ b/drivers/gpu/drm/ast/ast_cursor.c @@ -39,7 +39,7 @@ static void ast_cursor_fini(struct ast_private *ast) for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) { gbo = ast->cursor.gbo[i]; - drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]); + drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]); drm_gem_vram_unpin(gbo); drm_gem_vram_put(gbo); } @@ -60,7 +60,7 @@ int ast_cursor_init(struct ast_private *ast) struct drm_device *dev = &ast->base; size_t size, i; struct drm_gem_vram_object *gbo; - void __iomem *vaddr; + struct dma_buf_map map; int ret; size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE); @@ -77,16 +77,15 @@ int ast_cursor_init(struct ast_private *ast) drm_gem_vram_put(gbo); goto err_drm_gem_vram_put; } - vaddr = drm_gem_vram_vmap(gbo); - if (IS_ERR(vaddr)) { - ret = PTR_ERR(vaddr); + ret = drm_gem_vram_vmap(gbo, &map); + if (ret) { drm_gem_vram_unpin(gbo); drm_gem_vram_put(gbo); goto err_drm_gem_vram_put; } ast->cursor.gbo[i] = gbo; - ast->cursor.vaddr[i] = vaddr; + ast->cursor.map[i] = map; } return drmm_add_action_or_reset(dev, ast_cursor_release, NULL); @@ -95,7 +94,7 @@ int ast_cursor_init(struct ast_private *ast) while (i) { --i; gbo = ast->cursor.gbo[i]; - drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]); + drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]); drm_gem_vram_unpin(gbo); drm_gem_vram_put(gbo); } @@ -170,6 +169,7 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb) { 
struct drm_device *dev = &ast->base; struct drm_gem_vram_object *gbo; + struct dma_buf_map map; int ret; void *src; void __iomem *dst; @@ -183,18 +183,17 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb) ret = drm_gem_vram_pin(gbo, 0); if (ret) return ret; - src = drm_gem_vram_vmap(gbo); - if (IS_ERR(src)) { - ret = PTR_ERR(src); + ret = drm_gem_vram_vmap(gbo, &map); + if (ret) goto err_drm_gem_vram_unpin; - } + src = map.vaddr; /* TODO: Use mapping abstraction properly */ - dst = ast->cursor.vaddr[ast->cursor.next_index]; + dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem; /* do data transfer to cursor BO */ update_cursor_image(dst, src, fb->width, fb->height); - drm_gem_vram_vunmap(gbo, src); + drm_gem_vram_vunmap(gbo, &map); drm_gem_vram_unpin(gbo); return 0; @@ -257,7 +256,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y, u8 __iomem *sig; u8 jreg; - dst = ast->cursor.vaddr[ast->cursor.next_index]; + dst = ast->cursor.map[ast->cursor.next_index].vaddr; sig = dst + AST_HWC_SIZE; writel(x, sig + AST_HWC_SIGNATURE_X); diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h index 467049ca8430..f963141dd851 100644 --- a/drivers/gpu/drm/ast/ast_drv.h +++ b/drivers/gpu/drm/ast/ast_drv.h @@ -28,10 +28,11 @@ #ifndef __AST_DRV_H__ #define __AST_DRV_H__ -#include -#include +#include #include #include +#include +#include #include #include @@ -131,7 +132,7 @@ struct ast_private { struct { struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM]; - void __iomem *vaddr[AST_DEFAULT_HWC_NUM]; + struct dma_buf_map map[AST_DEFAULT_HWC_NUM]; unsigned int next_index; } cursor; diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c index 1da67d34e55d..a89ad4570e3c 100644 --- a/drivers/gpu/drm/drm_gem.c +++ b/drivers/gpu/drm/drm_gem.c @@ -36,6 +36,7 @@ #include #include #include +#include #include #include @@ -1207,26 +1208,30 @@ void drm_gem_unpin(struct drm_gem_object *obj) void *drm_gem_vmap(struct drm_gem_object *obj) { - void *vaddr; + struct dma_buf_map map; + int ret; - if (obj->funcs->vmap) - vaddr = obj->funcs->vmap(obj); - else - vaddr = ERR_PTR(-EOPNOTSUPP); + if (!obj->funcs->vmap) + return ERR_PTR(-EOPNOTSUPP); - if (!vaddr) - vaddr = ERR_PTR(-ENOMEM); + ret = obj->funcs->vmap(obj, &map); + if (ret) + return ERR_PTR(ret); + else if (dma_buf_map_is_null(&map)) + return ERR_PTR(-ENOMEM); - return vaddr; + return map.vaddr; } void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr) { + struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr); + if (!vaddr) return; if (obj->funcs->vunmap) - obj->funcs->vunmap(obj, vaddr); + obj->funcs->vunmap(obj, &map); } /** diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c index d527485ea0b7..b57e3e9222f0 100644 --- a/drivers/gpu/drm/drm_gem_cma_helper.c +++ b/drivers/gpu/drm/drm_gem_cma_helper.c @@ -519,6 +519,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap); * drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual * address space * @obj: GEM object + * @map: Returns the kernel virtual address of the CMA GEM object's backing + * store. * * This function maps a buffer exported via DRM PRIME into the kernel's * virtual address space. Since the CMA buffers are already mapped into the @@ -527,13 +529,15 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap); * driver's &drm_gem_object_funcs.vmap callback. * * Returns: - * The kernel virtual address of the CMA GEM object's backing store. + * 0 on success, or a negative error code otherwise. 
*/ -void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj) +int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj); - return cma_obj->vaddr; + dma_buf_map_set_vaddr(map, cma_obj->vaddr); + + return 0; } EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap); diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index fb11df7aced5..5553f58f68f3 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -258,19 +258,25 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj) } EXPORT_SYMBOL(drm_gem_shmem_unpin); -static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem) +static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map) { struct drm_gem_object *obj = &shmem->base; - struct dma_buf_map map; int ret = 0; - if (shmem->vmap_use_count++ > 0) - return shmem->vaddr; + if (shmem->vmap_use_count++ > 0) { + dma_buf_map_set_vaddr(map, shmem->vaddr); + return 0; + } if (obj->import_attach) { - ret = dma_buf_vmap(obj->import_attach->dmabuf, &map); - if (!ret) - shmem->vaddr = map.vaddr; + ret = dma_buf_vmap(obj->import_attach->dmabuf, map); + if (!ret) { + if (WARN_ON(map->is_iomem)) { + ret = -EIO; + goto err_put_pages; + } + shmem->vaddr = map->vaddr; + } } else { pgprot_t prot = PAGE_KERNEL; @@ -284,6 +290,8 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem) VM_MAP, prot); if (!shmem->vaddr) ret = -ENOMEM; + else + dma_buf_map_set_vaddr(map, shmem->vaddr); } if (ret) { @@ -291,7 +299,7 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem) goto err_put_pages; } - return shmem->vaddr; + return 0; err_put_pages: if (!obj->import_attach) @@ -299,12 +307,14 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem) err_zero_use: shmem->vmap_use_count = 0; - return ERR_PTR(ret); + return ret; } /* * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object * @shmem: shmem GEM object + * @map: Returns the kernel virtual address of the SHMEM GEM object's backing + * store. * * This function makes sure that a contiguous kernel virtual address mapping * exists for the buffer backing the shmem GEM object. @@ -318,26 +328,25 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem) * Returns: * 0 on success or a negative error code on failure. 
*/ -void *drm_gem_shmem_vmap(struct drm_gem_object *obj) +int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); - void *vaddr; int ret; ret = mutex_lock_interruptible(&shmem->vmap_lock); if (ret) - return ERR_PTR(ret); - vaddr = drm_gem_shmem_vmap_locked(shmem); + return ret; + ret = drm_gem_shmem_vmap_locked(shmem, map); mutex_unlock(&shmem->vmap_lock); - return vaddr; + return ret; } EXPORT_SYMBOL(drm_gem_shmem_vmap); -static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem) +static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem, + struct dma_buf_map *map) { struct drm_gem_object *obj = &shmem->base; - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(shmem->vaddr); if (WARN_ON_ONCE(!shmem->vmap_use_count)) return; @@ -346,7 +355,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem) return; if (obj->import_attach) - dma_buf_vunmap(obj->import_attach->dmabuf, &map); + dma_buf_vunmap(obj->import_attach->dmabuf, map); else vunmap(shmem->vaddr); @@ -357,6 +366,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem) /* * drm_gem_shmem_vunmap - Unmap a virtual mapping fo a shmem GEM object * @shmem: shmem GEM object + * @map: Kernel virtual address where the SHMEM GEM object was mapped * * This function cleans up a kernel virtual address mapping acquired by * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to @@ -366,12 +376,12 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem) * also be called by drivers directly, in which case it will hide the * differences between dma-buf imported and natively allocated objects. */ -void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr) +void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); mutex_lock(&shmem->vmap_lock); - drm_gem_shmem_vunmap_locked(shmem); + drm_gem_shmem_vunmap_locked(shmem, map); mutex_unlock(&shmem->vmap_lock); } EXPORT_SYMBOL(drm_gem_shmem_vunmap); diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c index 2d5ed30518f1..4d8553b28558 100644 --- a/drivers/gpu/drm/drm_gem_vram_helper.c +++ b/drivers/gpu/drm/drm_gem_vram_helper.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0-or-later +#include #include #include @@ -113,8 +114,8 @@ static void drm_gem_vram_cleanup(struct drm_gem_vram_object *gbo) * up; only release the GEM object. 
*/ - WARN_ON(gbo->kmap_use_count); - WARN_ON(gbo->kmap.virtual); + WARN_ON(gbo->vmap_use_count); + WARN_ON(dma_buf_map_is_set(&gbo->map)); drm_gem_object_release(&gbo->bo.base); } @@ -382,29 +383,37 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo) } EXPORT_SYMBOL(drm_gem_vram_unpin); -static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo) +static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo, + struct dma_buf_map *map) { int ret; - struct ttm_bo_kmap_obj *kmap = &gbo->kmap; - bool is_iomem; - if (gbo->kmap_use_count > 0) + if (gbo->vmap_use_count > 0) goto out; - ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap); + ret = ttm_bo_vmap(&gbo->bo, &gbo->map); if (ret) - return ERR_PTR(ret); + return ret; out: - ++gbo->kmap_use_count; - return ttm_kmap_obj_virtual(kmap, &is_iomem); + ++gbo->vmap_use_count; + *map = gbo->map; + + return 0; } -static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo) +static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo, + struct dma_buf_map *map) { - if (WARN_ON_ONCE(!gbo->kmap_use_count)) + struct drm_device *dev = gbo->bo.base.dev; + + if (drm_WARN_ON_ONCE(dev, !gbo->vmap_use_count)) return; - if (--gbo->kmap_use_count > 0) + + if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&gbo->map, map))) + return; /* BUG: map not mapped from this BO */ + + if (--gbo->vmap_use_count > 0) return; /* @@ -418,7 +427,9 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo) /** * drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address * space - * @gbo: The GEM VRAM object to map + * @gbo: The GEM VRAM object to map + * @map: Returns the kernel virtual address of the VRAM GEM object's backing + * store. * * The vmap function pins a GEM VRAM object to its current location, either * system or video memory, and maps its buffer into kernel address space. @@ -427,48 +438,44 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo) * unmap and unpin the GEM VRAM object. * * Returns: - * The buffer's virtual address on success, or - * an ERR_PTR()-encoded error code otherwise. + * 0 on success, or a negative error code otherwise. */ -void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo) +int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map) { int ret; - void *base; ret = ttm_bo_reserve(&gbo->bo, true, false, NULL); if (ret) - return ERR_PTR(ret); + return ret; ret = drm_gem_vram_pin_locked(gbo, 0); if (ret) goto err_ttm_bo_unreserve; - base = drm_gem_vram_kmap_locked(gbo); - if (IS_ERR(base)) { - ret = PTR_ERR(base); + ret = drm_gem_vram_kmap_locked(gbo, map); + if (ret) goto err_drm_gem_vram_unpin_locked; - } ttm_bo_unreserve(&gbo->bo); - return base; + return 0; err_drm_gem_vram_unpin_locked: drm_gem_vram_unpin_locked(gbo); err_ttm_bo_unreserve: ttm_bo_unreserve(&gbo->bo); - return ERR_PTR(ret); + return ret; } EXPORT_SYMBOL(drm_gem_vram_vmap); /** * drm_gem_vram_vunmap() - Unmaps and unpins a GEM VRAM object - * @gbo: The GEM VRAM object to unmap - * @vaddr: The mapping's base address as returned by drm_gem_vram_vmap() + * @gbo: The GEM VRAM object to unmap + * @map: Kernel virtual address where the VRAM GEM object was mapped * * A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See * the documentation for drm_gem_vram_vmap() for more information. 
*/ -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr) +void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map) { int ret; @@ -476,7 +483,7 @@ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr) if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret)) return; - drm_gem_vram_kunmap_locked(gbo); + drm_gem_vram_kunmap_locked(gbo, map); drm_gem_vram_unpin_locked(gbo); ttm_bo_unreserve(&gbo->bo); @@ -567,15 +574,13 @@ static void drm_gem_vram_bo_driver_move_notify(struct drm_gem_vram_object *gbo, bool evict, struct ttm_resource *new_mem) { - struct ttm_bo_kmap_obj *kmap = &gbo->kmap; + struct ttm_buffer_object *bo = &gbo->bo; + struct drm_device *dev = bo->base.dev; - if (WARN_ON_ONCE(gbo->kmap_use_count)) + if (drm_WARN_ON_ONCE(dev, gbo->vmap_use_count)) return; - if (!kmap->virtual) - return; - ttm_bo_kunmap(kmap); - kmap->virtual = NULL; + ttm_bo_vunmap(bo, &gbo->map); } static int drm_gem_vram_bo_driver_move(struct drm_gem_vram_object *gbo, @@ -832,37 +837,33 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem) } /** - * drm_gem_vram_object_vmap() - \ - Implements &struct drm_gem_object_funcs.vmap - * @gem: The GEM object to map + * drm_gem_vram_object_vmap() - + * Implements &struct drm_gem_object_funcs.vmap + * @gem: The GEM object to map + * @map: Returns the kernel virtual address of the VRAM GEM object's backing + * store. * * Returns: - * The buffers virtual address on success, or - * NULL otherwise. + * 0 on success, or a negative error code otherwise. */ -static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem) +static int drm_gem_vram_object_vmap(struct drm_gem_object *gem, struct dma_buf_map *map) { struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem); - void *base; - base = drm_gem_vram_vmap(gbo); - if (IS_ERR(base)) - return NULL; - return base; + return drm_gem_vram_vmap(gbo, map); } /** - * drm_gem_vram_object_vunmap() - \ - Implements &struct drm_gem_object_funcs.vunmap - * @gem: The GEM object to unmap - * @vaddr: The mapping's base address + * drm_gem_vram_object_vunmap() - + * Implements &struct drm_gem_object_funcs.vunmap + * @gem: The GEM object to unmap + * @map: Kernel virtual address where the VRAM GEM object was mapped */ -static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, - void *vaddr) +static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, struct dma_buf_map *map) { struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem); - drm_gem_vram_vunmap(gbo, vaddr); + drm_gem_vram_vunmap(gbo, map); } /* diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h index 9682c26d89bb..f5be627e1de0 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h @@ -51,7 +51,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data, int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma); int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset); struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj); -void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj); +int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); int etnaviv_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev, diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c index 
a6d9932a32ae..bc2543dd987d 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c @@ -22,9 +22,14 @@ struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj) return drm_prime_pages_to_sg(obj->dev, etnaviv_obj->pages, npages); } -void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj) +int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) { - return etnaviv_gem_vmap(obj); + void *vaddr = etnaviv_gem_vmap(obj); + if (!vaddr) + return -ENOMEM; + dma_buf_map_set_vaddr(map, vaddr); + + return 0; } int etnaviv_gem_prime_mmap(struct drm_gem_object *obj, diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c index 11223fe348df..832e5280a6ed 100644 --- a/drivers/gpu/drm/lima/lima_gem.c +++ b/drivers/gpu/drm/lima/lima_gem.c @@ -182,14 +182,14 @@ static int lima_gem_pin(struct drm_gem_object *obj) return drm_gem_shmem_pin(obj); } -static void *lima_gem_vmap(struct drm_gem_object *obj) +static int lima_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct lima_bo *bo = to_lima_bo(obj); if (bo->heap_size) - return ERR_PTR(-EINVAL); + return -EINVAL; - return drm_gem_shmem_vmap(obj); + return drm_gem_shmem_vmap(obj, map); } static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma) diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c index dc6df9e9a40d..a070a85f8f36 100644 --- a/drivers/gpu/drm/lima/lima_sched.c +++ b/drivers/gpu/drm/lima/lima_sched.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 OR MIT /* Copyright 2017-2019 Qiang Yu */ +#include #include #include #include @@ -303,6 +304,8 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task) struct lima_dump_chunk_buffer *buffer_chunk; u32 size, task_size, mem_size; int i; + struct dma_buf_map map; + int ret; mutex_lock(&dev->error_task_list_lock); @@ -388,15 +391,15 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task) } else { buffer_chunk->size = lima_bo_size(bo); - data = drm_gem_shmem_vmap(&bo->base.base); - if (IS_ERR_OR_NULL(data)) { + ret = drm_gem_shmem_vmap(&bo->base.base, &map); + if (ret) { kvfree(et); goto out; } - memcpy(buffer_chunk + 1, data, buffer_chunk->size); + memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size); - drm_gem_shmem_vunmap(&bo->base.base, data); + drm_gem_shmem_vunmap(&bo->base.base, &map); } buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size; diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c index 38672f9e5c4f..8ef76769b97f 100644 --- a/drivers/gpu/drm/mgag200/mgag200_mode.c +++ b/drivers/gpu/drm/mgag200/mgag200_mode.c @@ -9,6 +9,7 @@ */ #include +#include #include #include @@ -1556,15 +1557,18 @@ mgag200_handle_damage(struct mga_device *mdev, struct drm_framebuffer *fb, struct drm_rect *clip) { struct drm_device *dev = &mdev->base; + struct dma_buf_map map; void *vmap; + int ret; - vmap = drm_gem_shmem_vmap(fb->obj[0]); - if (drm_WARN_ON(dev, !vmap)) + ret = drm_gem_shmem_vmap(fb->obj[0], &map); + if (drm_WARN_ON(dev, ret)) return; /* BUG: SHMEM BO should always be vmapped */ + vmap = map.vaddr; /* TODO: Use mapping abstraction properly */ drm_fb_memcpy_dstclip(mdev->vram, vmap, fb, clip); - drm_gem_shmem_vunmap(fb->obj[0], vmap); + drm_gem_shmem_vunmap(fb->obj[0], &map); /* Always scanout image at VRAM offset 0 */ mgag200_set_startadd(mdev, (u32)0); diff --git a/drivers/gpu/drm/nouveau/Kconfig 
b/drivers/gpu/drm/nouveau/Kconfig index 5dec1e5694b7..9436310d0854 100644 --- a/drivers/gpu/drm/nouveau/Kconfig +++ b/drivers/gpu/drm/nouveau/Kconfig @@ -6,6 +6,7 @@ config DRM_NOUVEAU select FW_LOADER select DRM_KMS_HELPER select DRM_TTM + select DRM_TTM_HELPER select BACKLIGHT_CLASS_DEVICE if DRM_NOUVEAU_BACKLIGHT select ACPI_VIDEO if ACPI && X86 && BACKLIGHT_CLASS_DEVICE && INPUT select X86_PLATFORM_DEVICES if ACPI && X86 diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.h b/drivers/gpu/drm/nouveau/nouveau_bo.h index 641ef6298a0e..6045b85a762a 100644 --- a/drivers/gpu/drm/nouveau/nouveau_bo.h +++ b/drivers/gpu/drm/nouveau/nouveau_bo.h @@ -39,8 +39,6 @@ struct nouveau_bo { unsigned mode; struct nouveau_drm_tile *tile; - - struct ttm_bo_kmap_obj dma_buf_vmap; }; static inline struct nouveau_bo * diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c index 9a421c3949de..f942b526b0a5 100644 --- a/drivers/gpu/drm/nouveau/nouveau_gem.c +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c @@ -24,6 +24,8 @@ * */ +#include + #include "nouveau_drv.h" #include "nouveau_dma.h" #include "nouveau_fence.h" @@ -176,8 +178,8 @@ const struct drm_gem_object_funcs nouveau_gem_object_funcs = { .pin = nouveau_gem_prime_pin, .unpin = nouveau_gem_prime_unpin, .get_sg_table = nouveau_gem_prime_get_sg_table, - .vmap = nouveau_gem_prime_vmap, - .vunmap = nouveau_gem_prime_vunmap, + .vmap = drm_gem_ttm_vmap, + .vunmap = drm_gem_ttm_vunmap, }; int diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h index b35c180322e2..3b919c7c931c 100644 --- a/drivers/gpu/drm/nouveau/nouveau_gem.h +++ b/drivers/gpu/drm/nouveau/nouveau_gem.h @@ -37,7 +37,5 @@ extern void nouveau_gem_prime_unpin(struct drm_gem_object *); extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *); extern struct drm_gem_object *nouveau_gem_prime_import_sg_table( struct drm_device *, struct dma_buf_attachment *, struct sg_table *); -extern void *nouveau_gem_prime_vmap(struct drm_gem_object *); -extern void nouveau_gem_prime_vunmap(struct drm_gem_object *, void *); #endif diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c index a8264aebf3d4..2f16b5249283 100644 --- a/drivers/gpu/drm/nouveau/nouveau_prime.c +++ b/drivers/gpu/drm/nouveau/nouveau_prime.c @@ -35,26 +35,6 @@ struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *obj) return drm_prime_pages_to_sg(obj->dev, nvbo->bo.ttm->pages, npages); } -void *nouveau_gem_prime_vmap(struct drm_gem_object *obj) -{ - struct nouveau_bo *nvbo = nouveau_gem_object(obj); - int ret; - - ret = ttm_bo_kmap(&nvbo->bo, 0, nvbo->bo.num_pages, - &nvbo->dma_buf_vmap); - if (ret) - return ERR_PTR(ret); - - return nvbo->dma_buf_vmap.virtual; -} - -void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) -{ - struct nouveau_bo *nvbo = nouveau_gem_object(obj); - - ttm_bo_kunmap(&nvbo->dma_buf_vmap); -} - struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sg) diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c index fdbc8d949135..5ab03d605f57 100644 --- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c +++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c @@ -5,6 +5,7 @@ #include #include #include +#include #include #include #include @@ -72,6 +73,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev, { struct 
panfrost_file_priv *user = file_priv->driver_priv; struct panfrost_perfcnt *perfcnt = pfdev->perfcnt; + struct dma_buf_map map; struct drm_gem_shmem_object *bo; u32 cfg, as; int ret; @@ -103,11 +105,10 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev, goto err_close_bo; } - perfcnt->buf = drm_gem_shmem_vmap(&bo->base); - if (IS_ERR(perfcnt->buf)) { - ret = PTR_ERR(perfcnt->buf); + ret = drm_gem_shmem_vmap(&bo->base, &map); + if (ret) goto err_put_mapping; - } + perfcnt->buf = map.vaddr; /* * Invalidate the cache and clear the counters to start from a fresh @@ -163,7 +164,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev, return 0; err_vunmap: - drm_gem_shmem_vunmap(&bo->base, perfcnt->buf); + drm_gem_shmem_vunmap(&bo->base, &map); err_put_mapping: panfrost_gem_mapping_put(perfcnt->mapping); err_close_bo: @@ -180,6 +181,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev, { struct panfrost_file_priv *user = file_priv->driver_priv; struct panfrost_perfcnt *perfcnt = pfdev->perfcnt; + struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(perfcnt->buf); if (user != perfcnt->user) return -EINVAL; @@ -192,7 +194,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev, GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF)); perfcnt->user = NULL; - drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, perfcnt->buf); + drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, &map); perfcnt->buf = NULL; panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv); panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu); diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c index 45fd76e04bdc..e165fa9b2089 100644 --- a/drivers/gpu/drm/qxl/qxl_display.c +++ b/drivers/gpu/drm/qxl/qxl_display.c @@ -25,6 +25,7 @@ #include #include +#include #include #include @@ -581,6 +582,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane, struct drm_gem_object *obj; struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL; int ret; + struct dma_buf_map user_map; + struct dma_buf_map cursor_map; void *user_ptr; int size = 64*64*4; @@ -595,9 +598,10 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane, user_bo = gem_to_qxl_bo(obj); /* pinning is done in the prepare/cleanup framevbuffer */ - ret = qxl_bo_kmap(user_bo, &user_ptr); + ret = qxl_bo_kmap(user_bo, &user_map); if (ret) goto out_free_release; + user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction properly */ ret = qxl_alloc_bo_reserved(qdev, release, sizeof(struct qxl_cursor) + size, @@ -613,7 +617,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane, if (ret) goto out_unpin; - ret = qxl_bo_kmap(cursor_bo, (void **)&cursor); + ret = qxl_bo_kmap(cursor_bo, &cursor_map); if (ret) goto out_backoff; @@ -1133,6 +1137,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev) { int ret; struct drm_gem_object *gobj; + struct dma_buf_map map; int monitors_config_size = sizeof(struct qxl_monitors_config) + qxl_num_crtc * sizeof(struct qxl_head); @@ -1149,7 +1154,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev) if (ret) return ret; - qxl_bo_kmap(qdev->monitors_config_bo, NULL); + qxl_bo_kmap(qdev->monitors_config_bo, &map); qdev->monitors_config = qdev->monitors_config_bo->kptr; qdev->ram_header->monitors_config = diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c index 3599db096973..7b7acb910780 100644 --- a/drivers/gpu/drm/qxl/qxl_draw.c +++ 
b/drivers/gpu/drm/qxl/qxl_draw.c @@ -20,6 +20,8 @@ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */ +#include + #include #include "qxl_drv.h" @@ -42,13 +44,15 @@ static struct qxl_rect *drawable_set_clipping(struct qxl_device *qdev, unsigned int num_clips, struct qxl_bo *clips_bo) { + struct dma_buf_map map; struct qxl_clip_rects *dev_clips; int ret; - ret = qxl_bo_kmap(clips_bo, (void **)&dev_clips); - if (ret) { + ret = qxl_bo_kmap(clips_bo, &map); + if (ret) return NULL; - } + dev_clips = map.vaddr; /* TODO: Use mapping abstraction properly */ + dev_clips->num_rects = num_clips; dev_clips->chunk.next_chunk = 0; dev_clips->chunk.prev_chunk = 0; @@ -142,6 +146,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev, int stride = fb->pitches[0]; /* depth is not actually interesting, we don't mask with it */ int depth = fb->format->cpp[0] * 8; + struct dma_buf_map surface_map; uint8_t *surface_base; struct qxl_release *release; struct qxl_bo *clips_bo; @@ -197,9 +202,10 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev, if (ret) goto out_release_backoff; - ret = qxl_bo_kmap(bo, (void **)&surface_base); + ret = qxl_bo_kmap(bo, &surface_map); if (ret) goto out_release_backoff; + surface_base = surface_map.vaddr; /* TODO: Use mapping abstraction properly */ ret = qxl_image_init(qdev, release, dimage, surface_base, left - dumb_shadow_offset, diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h index 3602e8b34189..eb437fea5d9e 100644 --- a/drivers/gpu/drm/qxl/qxl_drv.h +++ b/drivers/gpu/drm/qxl/qxl_drv.h @@ -30,6 +30,7 @@ * Definitions taken from spice-protocol, plus kernel driver specific bits. */ +#include #include #include #include @@ -50,6 +51,8 @@ #include "qxl_dev.h" +struct dma_buf_map; + #define DRIVER_AUTHOR "Dave Airlie" #define DRIVER_NAME "qxl" @@ -79,7 +82,7 @@ struct qxl_bo { /* Protected by tbo.reserved */ struct ttm_place placements[3]; struct ttm_placement placement; - struct ttm_bo_kmap_obj kmap; + struct dma_buf_map map; void *kptr; unsigned int map_count; int type; @@ -335,7 +338,6 @@ int qxl_gem_object_open(struct drm_gem_object *obj, struct drm_file *file_priv); void qxl_gem_object_close(struct drm_gem_object *obj, struct drm_file *file_priv); void qxl_bo_force_delete(struct qxl_device *qdev); -int qxl_bo_kmap(struct qxl_bo *bo, void **ptr); /* qxl_dumb.c */ int qxl_mode_dumb_create(struct drm_file *file_priv, @@ -445,8 +447,9 @@ struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj); struct drm_gem_object *qxl_gem_prime_import_sg_table( struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sgt); -void *qxl_gem_prime_vmap(struct drm_gem_object *obj); -void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); +int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); +void qxl_gem_prime_vunmap(struct drm_gem_object *obj, + struct dma_buf_map *map); int qxl_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c index 940e99354f49..755df4d8f95f 100644 --- a/drivers/gpu/drm/qxl/qxl_object.c +++ b/drivers/gpu/drm/qxl/qxl_object.c @@ -23,10 +23,12 @@ * Alon Levy */ +#include +#include + #include "qxl_drv.h" #include "qxl_object.h" -#include static void qxl_ttm_bo_destroy(struct ttm_buffer_object *tbo) { struct qxl_bo *bo; @@ -152,24 +154,27 @@ int qxl_bo_create(struct qxl_device *qdev, return 0; } -int qxl_bo_kmap(struct qxl_bo *bo, void **ptr) +int qxl_bo_kmap(struct 
qxl_bo *bo, struct dma_buf_map *map) { - bool is_iomem; int r; if (bo->kptr) { - if (ptr) - *ptr = bo->kptr; bo->map_count++; - return 0; + goto out; } - r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap); + r = ttm_bo_vmap(&bo->tbo, &bo->map); if (r) return r; - bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem); - if (ptr) - *ptr = bo->kptr; bo->map_count = 1; + + /* TODO: Remove kptr in favor of map everywhere. */ + if (bo->map.is_iomem) + bo->kptr = (void *)bo->map.vaddr_iomem; + else + bo->kptr = bo->map.vaddr; + +out: + *map = bo->map; return 0; } @@ -180,6 +185,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, void *rptr; int ret; struct io_mapping *map; + struct dma_buf_map bo_map; if (bo->tbo.mem.mem_type == TTM_PL_VRAM) map = qdev->vram_mapping; @@ -196,9 +202,10 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, return rptr; } - ret = qxl_bo_kmap(bo, &rptr); + ret = qxl_bo_kmap(bo, &bo_map); if (ret) return NULL; + rptr = bo_map.vaddr; /* TODO: Use mapping abstraction properly */ rptr += page_offset * PAGE_SIZE; return rptr; @@ -212,7 +219,7 @@ void qxl_bo_kunmap(struct qxl_bo *bo) if (bo->map_count > 0) return; bo->kptr = NULL; - ttm_bo_kunmap(&bo->kmap); + ttm_bo_vunmap(&bo->tbo, &bo->map); } void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h index 09a5c818324d..ebf24c9d2bf2 100644 --- a/drivers/gpu/drm/qxl/qxl_object.h +++ b/drivers/gpu/drm/qxl/qxl_object.h @@ -63,7 +63,7 @@ extern int qxl_bo_create(struct qxl_device *qdev, bool kernel, bool pinned, u32 domain, struct qxl_surface *surf, struct qxl_bo **bo_ptr); -extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr); +extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map); extern void qxl_bo_kunmap(struct qxl_bo *bo); void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset); void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map); diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c index 7d3816fca5a8..4aa949799446 100644 --- a/drivers/gpu/drm/qxl/qxl_prime.c +++ b/drivers/gpu/drm/qxl/qxl_prime.c @@ -54,20 +54,20 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table( return ERR_PTR(-ENOSYS); } -void *qxl_gem_prime_vmap(struct drm_gem_object *obj) +int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct qxl_bo *bo = gem_to_qxl_bo(obj); - void *ptr; int ret; - ret = qxl_bo_kmap(bo, &ptr); + ret = qxl_bo_kmap(bo, map); if (ret < 0) - return ERR_PTR(ret); + return ret; - return ptr; + return 0; } -void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) +void qxl_gem_prime_vunmap(struct drm_gem_object *obj, + struct dma_buf_map *map) { struct qxl_bo *bo = gem_to_qxl_bo(obj); diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h index 5d54bccebd4d..44cb5ee6fc20 100644 --- a/drivers/gpu/drm/radeon/radeon.h +++ b/drivers/gpu/drm/radeon/radeon.h @@ -509,7 +509,6 @@ struct radeon_bo { /* Constant after initialization */ struct radeon_device *rdev; - struct ttm_bo_kmap_obj dma_buf_vmap; pid_t pid; #ifdef CONFIG_MMU_NOTIFIER diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c index 0ccd7213e41f..d2876ce3bc9e 100644 --- a/drivers/gpu/drm/radeon/radeon_gem.c +++ b/drivers/gpu/drm/radeon/radeon_gem.c @@ -31,6 +31,7 @@ #include #include #include +#include #include #include "radeon.h" @@ -40,8 +41,6 @@ struct dma_buf 
*radeon_gem_prime_export(struct drm_gem_object *gobj, struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj); int radeon_gem_prime_pin(struct drm_gem_object *obj); void radeon_gem_prime_unpin(struct drm_gem_object *obj); -void *radeon_gem_prime_vmap(struct drm_gem_object *obj); -void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); static const struct drm_gem_object_funcs radeon_gem_object_funcs; @@ -235,8 +234,8 @@ static const struct drm_gem_object_funcs radeon_gem_object_funcs = { .pin = radeon_gem_prime_pin, .unpin = radeon_gem_prime_unpin, .get_sg_table = radeon_gem_prime_get_sg_table, - .vmap = radeon_gem_prime_vmap, - .vunmap = radeon_gem_prime_vunmap, + .vmap = drm_gem_ttm_vmap, + .vunmap = drm_gem_ttm_vunmap, }; /* diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c index b9de0e51c0be..088d39a51c0d 100644 --- a/drivers/gpu/drm/radeon/radeon_prime.c +++ b/drivers/gpu/drm/radeon/radeon_prime.c @@ -39,26 +39,6 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj) return drm_prime_pages_to_sg(obj->dev, bo->tbo.ttm->pages, npages); } -void *radeon_gem_prime_vmap(struct drm_gem_object *obj) -{ - struct radeon_bo *bo = gem_to_radeon_bo(obj); - int ret; - - ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, - &bo->dma_buf_vmap); - if (ret) - return ERR_PTR(ret); - - return bo->dma_buf_vmap.virtual; -} - -void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) -{ - struct radeon_bo *bo = gem_to_radeon_bo(obj); - - ttm_bo_kunmap(&bo->dma_buf_vmap); -} - struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sg) diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c index 7d5ebb10323b..7971f57436dd 100644 --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c @@ -532,26 +532,32 @@ rockchip_gem_prime_import_sg_table(struct drm_device *drm, return ERR_PTR(ret); } -void *rockchip_gem_prime_vmap(struct drm_gem_object *obj) +int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj); - if (rk_obj->pages) - return vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP, - pgprot_writecombine(PAGE_KERNEL)); + if (rk_obj->pages) { + void *vaddr = vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP, + pgprot_writecombine(PAGE_KERNEL)); + if (!vaddr) + return -ENOMEM; + dma_buf_map_set_vaddr(map, vaddr); + return 0; + } if (rk_obj->dma_attrs & DMA_ATTR_NO_KERNEL_MAPPING) - return NULL; + return -ENOMEM; + dma_buf_map_set_vaddr(map, rk_obj->kvaddr); - return rk_obj->kvaddr; + return 0; } -void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) +void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj); if (rk_obj->pages) { - vunmap(vaddr); + vunmap(map->vaddr); return; } diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h index 7ffc541bea07..5a70a56cd406 100644 --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h @@ -31,8 +31,8 @@ struct drm_gem_object * rockchip_gem_prime_import_sg_table(struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sg); -void *rockchip_gem_prime_vmap(struct drm_gem_object *obj); -void 
rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); +int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); +void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map); /* drm driver mmap file operations */ int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma); diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c index 744a8e337e41..c02e35ed6e76 100644 --- a/drivers/gpu/drm/tiny/cirrus.c +++ b/drivers/gpu/drm/tiny/cirrus.c @@ -17,6 +17,7 @@ */ #include +#include #include #include @@ -314,6 +315,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb, struct drm_rect *rect) { struct cirrus_device *cirrus = to_cirrus(fb->dev); + struct dma_buf_map map; void *vmap; int idx, ret; @@ -321,10 +323,10 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb, if (!drm_dev_enter(&cirrus->dev, &idx)) goto out; - ret = -ENOMEM; - vmap = drm_gem_shmem_vmap(fb->obj[0]); - if (!vmap) + ret = drm_gem_shmem_vmap(fb->obj[0], &map); + if (ret) goto out_dev_exit; + vmap = map.vaddr; /* TODO: Use mapping abstraction properly */ if (cirrus->cpp == fb->format->cpp[0]) drm_fb_memcpy_dstclip(cirrus->vram, @@ -343,7 +345,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb, else WARN_ON_ONCE("cpp mismatch"); - drm_gem_shmem_vunmap(fb->obj[0], vmap); + drm_gem_shmem_vunmap(fb->obj[0], &map); ret = 0; out_dev_exit: diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c index cc397671f689..12a890cea6e9 100644 --- a/drivers/gpu/drm/tiny/gm12u320.c +++ b/drivers/gpu/drm/tiny/gm12u320.c @@ -248,6 +248,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320) { int block, dst_offset, len, remain, ret, x1, x2, y1, y2; struct drm_framebuffer *fb; + struct dma_buf_map map; void *vaddr; u8 *src; @@ -262,11 +263,12 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320) y1 = gm12u320->fb_update.rect.y1; y2 = gm12u320->fb_update.rect.y2; - vaddr = drm_gem_shmem_vmap(fb->obj[0]); - if (IS_ERR(vaddr)) { - GM12U320_ERR("failed to vmap fb: %ld\n", PTR_ERR(vaddr)); + ret = drm_gem_shmem_vmap(fb->obj[0], &map); + if (ret) { + GM12U320_ERR("failed to vmap fb: %d\n", ret); goto put_fb; } + vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */ if (fb->obj[0]->import_attach) { ret = dma_buf_begin_cpu_access( @@ -318,7 +320,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320) GM12U320_ERR("dma_buf_end_cpu_access err: %d\n", ret); } vunmap: - drm_gem_shmem_vunmap(fb->obj[0], vaddr); + drm_gem_shmem_vunmap(fb->obj[0], &map); put_fb: drm_framebuffer_put(fb); gm12u320->fb_update.fb = NULL; diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c index fef43f4e3bac..42eeba1dfdbf 100644 --- a/drivers/gpu/drm/udl/udl_modeset.c +++ b/drivers/gpu/drm/udl/udl_modeset.c @@ -276,6 +276,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y, struct urb *urb; struct drm_rect clip; int log_bpp; + struct dma_buf_map map; void *vaddr; ret = udl_log_cpp(fb->format->cpp[0]); @@ -296,11 +297,12 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y, return ret; } - vaddr = drm_gem_shmem_vmap(fb->obj[0]); - if (IS_ERR(vaddr)) { + ret = drm_gem_shmem_vmap(fb->obj[0], &map); + if (ret) { DRM_ERROR("failed to vmap fb\n"); goto out_dma_buf_end_cpu_access; } + vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */ urb = udl_get_urb(dev); if (!urb) @@ -333,7 +335,7 @@ 
static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y, ret = 0; out_drm_gem_shmem_vunmap: - drm_gem_shmem_vunmap(fb->obj[0], vaddr); + drm_gem_shmem_vunmap(fb->obj[0], &map); out_dma_buf_end_cpu_access: if (import_attach) { tmp_ret = dma_buf_end_cpu_access(import_attach->dmabuf, diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c index 931c55126148..f268fb258c83 100644 --- a/drivers/gpu/drm/vboxvideo/vbox_mode.c +++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c @@ -9,6 +9,8 @@ * Michael Thayer */ + +#include #include #include @@ -384,6 +386,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane, u32 height = plane->state->crtc_h; size_t data_size, mask_size; u32 flags; + struct dma_buf_map map; + int ret; u8 *src; /* @@ -397,8 +401,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane, vbox_crtc->cursor_enabled = true; - src = drm_gem_vram_vmap(gbo); - if (IS_ERR(src)) { + ret = drm_gem_vram_vmap(gbo, &map); + if (ret) { /* * BUG: we should have pinned the BO in prepare_fb(). */ @@ -406,6 +410,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane, DRM_WARN("Could not map cursor bo, skipping update\n"); return; } + src = map.vaddr; /* TODO: Use mapping abstraction properly */ /* * The mask must be calculated based on the alpha @@ -416,7 +421,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane, data_size = width * height * 4 + mask_size; copy_cursor_image(src, vbox->cursor_data, width, height, mask_size); - drm_gem_vram_vunmap(gbo, src); + drm_gem_vram_vunmap(gbo, &map); flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE | VBOX_MOUSE_POINTER_ALPHA; diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c index 557f0d1e6437..f290a9a942dc 100644 --- a/drivers/gpu/drm/vc4/vc4_bo.c +++ b/drivers/gpu/drm/vc4/vc4_bo.c @@ -785,16 +785,16 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma) return drm_gem_cma_prime_mmap(obj, vma); } -void *vc4_prime_vmap(struct drm_gem_object *obj) +int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct vc4_bo *bo = to_vc4_bo(obj); if (bo->validated_shader) { DRM_DEBUG("mmaping of shader BOs not allowed.\n"); - return ERR_PTR(-EINVAL); + return -EINVAL; } - return drm_gem_cma_prime_vmap(obj); + return drm_gem_cma_prime_vmap(obj, map); } struct drm_gem_object * diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h index cc79b1aaa878..904f2c36c963 100644 --- a/drivers/gpu/drm/vc4/vc4_drv.h +++ b/drivers/gpu/drm/vc4/vc4_drv.h @@ -806,7 +806,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sgt); -void *vc4_prime_vmap(struct drm_gem_object *obj); +int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); int vc4_bo_cache_init(struct drm_device *dev); void vc4_bo_cache_destroy(struct drm_device *dev); int vc4_bo_inc_usecnt(struct vc4_bo *bo); diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c index fa54a6d1403d..b2aa26e1e4a2 100644 --- a/drivers/gpu/drm/vgem/vgem_drv.c +++ b/drivers/gpu/drm/vgem/vgem_drv.c @@ -361,24 +361,30 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev, return &obj->base; } -static void *vgem_prime_vmap(struct drm_gem_object *obj) +static int vgem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct 
drm_vgem_gem_object *bo = to_vgem_bo(obj); long n_pages = obj->size >> PAGE_SHIFT; struct page **pages; + void *vaddr; pages = vgem_pin_pages(bo); if (IS_ERR(pages)) - return NULL; + return PTR_ERR(pages); + + vaddr = vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL)); + if (!vaddr) + return -ENOMEM; + dma_buf_map_set_vaddr(map, vaddr); - return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL)); + return 0; } -static void vgem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) +static void vgem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct drm_vgem_gem_object *bo = to_vgem_bo(obj); - vunmap(vaddr); + vunmap(map->vaddr); vgem_unpin_pages(bo); } diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c index 4f34ef34ba60..74db5a840bed 100644 --- a/drivers/gpu/drm/xen/xen_drm_front_gem.c +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c @@ -290,22 +290,28 @@ int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma) return gem_mmap_obj(xen_obj, vma); } -void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj) +int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, struct dma_buf_map *map) { struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj); + void *vaddr; if (!xen_obj->pages) - return NULL; + return -ENOMEM; /* Please see comment in gem_mmap_obj on mapping and attributes. */ - return vmap(xen_obj->pages, xen_obj->num_pages, - VM_MAP, PAGE_KERNEL); + vaddr = vmap(xen_obj->pages, xen_obj->num_pages, + VM_MAP, PAGE_KERNEL); + if (!vaddr) + return -ENOMEM; + dma_buf_map_set_vaddr(map, vaddr); + + return 0; } void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj, - void *vaddr) + struct dma_buf_map *map) { - vunmap(vaddr); + vunmap(map->vaddr); } int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj, diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h index a39675fa31b2..a4e67d0a149c 100644 --- a/drivers/gpu/drm/xen/xen_drm_front_gem.h +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h @@ -12,6 +12,7 @@ #define __XEN_DRM_FRONT_GEM_H struct dma_buf_attachment; +struct dma_buf_map; struct drm_device; struct drm_gem_object; struct file; @@ -34,10 +35,11 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj); int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma); -void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj); +int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, + struct dma_buf_map *map); void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj, - void *vaddr); + struct dma_buf_map *map); int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj, struct vm_area_struct *vma); diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h index c38dd35da00b..5e6daa1c982f 100644 --- a/include/drm/drm_gem.h +++ b/include/drm/drm_gem.h @@ -39,6 +39,7 @@ #include +struct dma_buf_map; struct drm_gem_object; /** @@ -138,7 +139,7 @@ struct drm_gem_object_funcs { * * This callback is optional. */ - void *(*vmap)(struct drm_gem_object *obj); + int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map); /** * @vunmap: @@ -148,7 +149,7 @@ struct drm_gem_object_funcs { * * This callback is optional. 
*/ - void (*vunmap)(struct drm_gem_object *obj, void *vaddr); + void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map); /** * @mmap: diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h index a064b0d1c480..caf98b9cf4b4 100644 --- a/include/drm/drm_gem_cma_helper.h +++ b/include/drm/drm_gem_cma_helper.h @@ -103,7 +103,7 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev, struct sg_table *sgt); int drm_gem_cma_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); -void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj); +int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); struct drm_gem_object * drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size); diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h index 5381f0c8cf6f..3449a0353fe0 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h @@ -113,8 +113,8 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem); void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem); int drm_gem_shmem_pin(struct drm_gem_object *obj); void drm_gem_shmem_unpin(struct drm_gem_object *obj); -void *drm_gem_shmem_vmap(struct drm_gem_object *obj); -void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr); +int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); +void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map); int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv); diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h index 128f88174d32..c0d28ba0f5c9 100644 --- a/include/drm/drm_gem_vram_helper.h +++ b/include/drm/drm_gem_vram_helper.h @@ -10,6 +10,7 @@ #include #include +#include #include /* for container_of() */ struct drm_mode_create_dumb; @@ -29,9 +30,8 @@ struct vm_area_struct; /** * struct drm_gem_vram_object - GEM object backed by VRAM - * @gem: GEM object * @bo: TTM buffer object - * @kmap: Mapping information for @bo + * @map: Mapping information for @bo * @placement: TTM placement information. Supported placements are \ %TTM_PL_VRAM and %TTM_PL_SYSTEM * @placements: TTM placement information. @@ -50,15 +50,15 @@ struct vm_area_struct; */ struct drm_gem_vram_object { struct ttm_buffer_object bo; - struct ttm_bo_kmap_obj kmap; + struct dma_buf_map map; /** - * @kmap_use_count: + * @vmap_use_count: * * Reference count on the virtual address. * The address are un-mapped when the count reaches zero. 
*/ - unsigned int kmap_use_count; + unsigned int vmap_use_count; /* Supported placements are %TTM_PL_VRAM and %TTM_PL_SYSTEM */ struct ttm_placement placement; @@ -97,8 +97,8 @@ u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo); s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo); int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag); int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo); -void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo); -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr); +int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map); +void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map); int drm_gem_vram_fill_create_dumb(struct drm_file *file, struct drm_device *dev, -- 2.28.0 From tzimmermann at suse.de Thu Oct 15 12:38:03 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Thu, 15 Oct 2020 14:38:03 +0200 Subject: [Spice-devel] [PATCH v4 07/10] drm/gem: Update internal GEM vmap/vunmap interfaces to use struct dma_buf_map In-Reply-To: <20201015123806.32416-1-tzimmermann@suse.de> References: <20201015123806.32416-1-tzimmermann@suse.de> Message-ID: <20201015123806.32416-8-tzimmermann@suse.de> GEM's vmap and vunmap interfaces now wrap memory pointers in struct dma_buf_map. Signed-off-by: Thomas Zimmermann Reviewed-by: Daniel Vetter --- drivers/gpu/drm/drm_client.c | 18 +++++++++++------- drivers/gpu/drm/drm_gem.c | 26 +++++++++++++------------- drivers/gpu/drm/drm_internal.h | 5 +++-- drivers/gpu/drm/drm_prime.c | 14 ++++---------- 4 files changed, 31 insertions(+), 32 deletions(-) diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c index 495f47d23d87..ac0082bed966 100644 --- a/drivers/gpu/drm/drm_client.c +++ b/drivers/gpu/drm/drm_client.c @@ -3,6 +3,7 @@ * Copyright 2018 Noralf Tr?nnes */ +#include #include #include #include @@ -304,7 +305,8 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u */ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer) { - void *vaddr; + struct dma_buf_map map; + int ret; if (buffer->vaddr) return buffer->vaddr; @@ -317,13 +319,13 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer) * fd_install step out of the driver backend hooks, to make that * final step optional for internal users. 
*/ - vaddr = drm_gem_vmap(buffer->gem); - if (IS_ERR(vaddr)) - return vaddr; + ret = drm_gem_vmap(buffer->gem, &map); + if (ret) + return ERR_PTR(ret); - buffer->vaddr = vaddr; + buffer->vaddr = map.vaddr; - return vaddr; + return map.vaddr; } EXPORT_SYMBOL(drm_client_buffer_vmap); @@ -337,7 +339,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap); */ void drm_client_buffer_vunmap(struct drm_client_buffer *buffer) { - drm_gem_vunmap(buffer->gem, buffer->vaddr); + struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr); + + drm_gem_vunmap(buffer->gem, &map); buffer->vaddr = NULL; } EXPORT_SYMBOL(drm_client_buffer_vunmap); diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c index a89ad4570e3c..4d5fff4bd821 100644 --- a/drivers/gpu/drm/drm_gem.c +++ b/drivers/gpu/drm/drm_gem.c @@ -1206,32 +1206,32 @@ void drm_gem_unpin(struct drm_gem_object *obj) obj->funcs->unpin(obj); } -void *drm_gem_vmap(struct drm_gem_object *obj) +int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) { - struct dma_buf_map map; int ret; if (!obj->funcs->vmap) - return ERR_PTR(-EOPNOTSUPP); + return -EOPNOTSUPP; - ret = obj->funcs->vmap(obj, &map); + ret = obj->funcs->vmap(obj, map); if (ret) - return ERR_PTR(ret); - else if (dma_buf_map_is_null(&map)) - return ERR_PTR(-ENOMEM); + return ret; + else if (dma_buf_map_is_null(map)) + return -ENOMEM; - return map.vaddr; + return 0; } -void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr) +void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) { - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr); - - if (!vaddr) + if (dma_buf_map_is_null(map)) return; if (obj->funcs->vunmap) - obj->funcs->vunmap(obj, &map); + obj->funcs->vunmap(obj, map); + + /* Always set the mapping to NULL. Callers may rely on this. */ + dma_buf_map_clear(map); } /** diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h index b65865c630b0..58832d75a9bd 100644 --- a/drivers/gpu/drm/drm_internal.h +++ b/drivers/gpu/drm/drm_internal.h @@ -33,6 +33,7 @@ struct dentry; struct dma_buf; +struct dma_buf_map; struct drm_connector; struct drm_crtc; struct drm_framebuffer; @@ -187,8 +188,8 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent, int drm_gem_pin(struct drm_gem_object *obj); void drm_gem_unpin(struct drm_gem_object *obj); -void *drm_gem_vmap(struct drm_gem_object *obj); -void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr); +int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); +void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map); /* drm_debugfs.c drm_debugfs_crc.c */ #if defined(CONFIG_DEBUG_FS) diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c index 89e2a2496734..cb8fbeeb731b 100644 --- a/drivers/gpu/drm/drm_prime.c +++ b/drivers/gpu/drm/drm_prime.c @@ -667,21 +667,15 @@ EXPORT_SYMBOL(drm_gem_unmap_dma_buf); * * Sets up a kernel virtual mapping. This can be used as the &dma_buf_ops.vmap * callback. Calls into &drm_gem_object_funcs.vmap for device specific handling. + * The kernel virtual address is returned in map. * - * Returns the kernel virtual address or NULL on failure. + * Returns 0 on success or a negative errno code otherwise. 
*/ int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map) { struct drm_gem_object *obj = dma_buf->priv; - void *vaddr; - vaddr = drm_gem_vmap(obj); - if (IS_ERR(vaddr)) - return PTR_ERR(vaddr); - - dma_buf_map_set_vaddr(map, vaddr); - - return 0; + return drm_gem_vmap(obj, map); } EXPORT_SYMBOL(drm_gem_dmabuf_vmap); @@ -697,7 +691,7 @@ void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct dma_buf_map *map) { struct drm_gem_object *obj = dma_buf->priv; - drm_gem_vunmap(obj, map->vaddr); + drm_gem_vunmap(obj, map); } EXPORT_SYMBOL(drm_gem_dmabuf_vunmap); -- 2.28.0 From tzimmermann at suse.de Thu Oct 15 12:38:04 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Thu, 15 Oct 2020 14:38:04 +0200 Subject: [Spice-devel] [PATCH v4 08/10] drm/gem: Store client buffer mappings as struct dma_buf_map In-Reply-To: <20201015123806.32416-1-tzimmermann@suse.de> References: <20201015123806.32416-1-tzimmermann@suse.de> Message-ID: <20201015123806.32416-9-tzimmermann@suse.de> Kernel DRM clients now store their framebuffer address in an instance of struct dma_buf_map. Depending on the buffer's location, the address refers to system or I/O memory. Callers of drm_client_buffer_vmap() receive a copy of the value in the call's supplied arguments. It can be accessed and modified with dma_buf_map interfaces. Signed-off-by: Thomas Zimmermann Reviewed-by: Daniel Vetter --- drivers/gpu/drm/drm_client.c | 34 +++++++++++++++++++-------------- drivers/gpu/drm/drm_fb_helper.c | 23 +++++++++++++--------- include/drm/drm_client.h | 7 ++++--- 3 files changed, 38 insertions(+), 26 deletions(-) diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c index ac0082bed966..fe573acf1067 100644 --- a/drivers/gpu/drm/drm_client.c +++ b/drivers/gpu/drm/drm_client.c @@ -235,7 +235,7 @@ static void drm_client_buffer_delete(struct drm_client_buffer *buffer) { struct drm_device *dev = buffer->client->dev; - drm_gem_vunmap(buffer->gem, buffer->vaddr); + drm_gem_vunmap(buffer->gem, &buffer->map); if (buffer->gem) drm_gem_object_put(buffer->gem); @@ -291,25 +291,31 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u /** * drm_client_buffer_vmap - Map DRM client buffer into address space * @buffer: DRM client buffer + * @map_copy: Returns the mapped memory's address * * This function maps a client buffer into kernel address space. If the - * buffer is already mapped, it returns the mapping's address. + * buffer is already mapped, it returns the existing mapping's address. * * Client buffer mappings are not ref'counted. Each call to * drm_client_buffer_vmap() should be followed by a call to * drm_client_buffer_vunmap(); or the client buffer should be mapped * throughout its lifetime. * + * The returned address is a copy of the internal value. In contrast to + * other vmap interfaces, you don't need it for the client's vunmap + * function. So you can modify it at will during blit and draw operations. + * * Returns: - * The mapped memory's address + * 0 on success, or a negative errno code otherwise. 
*/ -void *drm_client_buffer_vmap(struct drm_client_buffer *buffer) +int +drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map_copy) { - struct dma_buf_map map; + struct dma_buf_map *map = &buffer->map; int ret; - if (buffer->vaddr) - return buffer->vaddr; + if (dma_buf_map_is_set(map)) + goto out; /* * FIXME: The dependency on GEM here isn't required, we could @@ -319,13 +325,14 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer) * fd_install step out of the driver backend hooks, to make that * final step optional for internal users. */ - ret = drm_gem_vmap(buffer->gem, &map); + ret = drm_gem_vmap(buffer->gem, map); if (ret) - return ERR_PTR(ret); + return ret; - buffer->vaddr = map.vaddr; +out: + *map_copy = *map; - return map.vaddr; + return 0; } EXPORT_SYMBOL(drm_client_buffer_vmap); @@ -339,10 +346,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap); */ void drm_client_buffer_vunmap(struct drm_client_buffer *buffer) { - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr); + struct dma_buf_map *map = &buffer->map; - drm_gem_vunmap(buffer->gem, &map); - buffer->vaddr = NULL; + drm_gem_vunmap(buffer->gem, map); } EXPORT_SYMBOL(drm_client_buffer_vunmap); diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c index c2f72bb6afb1..6212cd7cde1d 100644 --- a/drivers/gpu/drm/drm_fb_helper.c +++ b/drivers/gpu/drm/drm_fb_helper.c @@ -378,7 +378,7 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper, unsigned int cpp = fb->format->cpp[0]; size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp; void *src = fb_helper->fbdev->screen_buffer + offset; - void *dst = fb_helper->buffer->vaddr + offset; + void *dst = fb_helper->buffer->map.vaddr + offset; size_t len = (clip->x2 - clip->x1) * cpp; unsigned int y; @@ -400,7 +400,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work) struct drm_clip_rect *clip = &helper->dirty_clip; struct drm_clip_rect clip_copy; unsigned long flags; - void *vaddr; + struct dma_buf_map map; + int ret; spin_lock_irqsave(&helper->dirty_lock, flags); clip_copy = *clip; @@ -413,8 +414,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work) /* Generic fbdev uses a shadow buffer */ if (helper->buffer) { - vaddr = drm_client_buffer_vmap(helper->buffer); - if (IS_ERR(vaddr)) + ret = drm_client_buffer_vmap(helper->buffer, &map); + if (ret) return; drm_fb_helper_dirty_blit_real(helper, &clip_copy); } @@ -2060,7 +2061,8 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper, struct drm_framebuffer *fb; struct fb_info *fbi; u32 format; - void *vaddr; + struct dma_buf_map map; + int ret; drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n", sizes->surface_width, sizes->surface_height, @@ -2096,11 +2098,14 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper, fb_deferred_io_init(fbi); } else { /* buffer is mapped for HW framebuffer */ - vaddr = drm_client_buffer_vmap(fb_helper->buffer); - if (IS_ERR(vaddr)) - return PTR_ERR(vaddr); + ret = drm_client_buffer_vmap(fb_helper->buffer, &map); + if (ret) + return ret; + if (map.is_iomem) + fbi->screen_base = map.vaddr_iomem; + else + fbi->screen_buffer = map.vaddr; - fbi->screen_buffer = vaddr; /* Shamelessly leak the physical address to user-space */ #if IS_ENABLED(CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM) if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0) diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h index 7aaea665bfc2..f07f2fb02e75 100644 --- 
a/include/drm/drm_client.h +++ b/include/drm/drm_client.h
@@ -3,6 +3,7 @@
 #ifndef _DRM_CLIENT_H_
 #define _DRM_CLIENT_H_

+#include
 #include
 #include
 #include
@@ -141,9 +142,9 @@ struct drm_client_buffer {
 	struct drm_gem_object *gem;

 	/**
-	 * @vaddr: Virtual address for the buffer
+	 * @map: Virtual address for the buffer
 	 */
-	void *vaddr;
+	struct dma_buf_map map;

 	/**
 	 * @fb: DRM framebuffer
@@ -155,7 +156,7 @@ struct drm_client_buffer *
 drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format);
 void drm_client_framebuffer_delete(struct drm_client_buffer *buffer);
 int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect);
-void *drm_client_buffer_vmap(struct drm_client_buffer *buffer);
+int drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map);
 void drm_client_buffer_vunmap(struct drm_client_buffer *buffer);

 int drm_client_modeset_create(struct drm_client_dev *client);
-- 
2.28.0

From tzimmermann at suse.de  Thu Oct 15 12:37:56 2020
From: tzimmermann at suse.de (Thomas Zimmermann)
Date: Thu, 15 Oct 2020 14:37:56 +0200
Subject: [Spice-devel] [PATCH v4 00/10] Support GEM object mappings from I/O memory
Message-ID: <20201015123806.32416-1-tzimmermann@suse.de>

DRM's fbdev console uses regular load and store operations to update
framebuffer memory. The bochs driver on sparc64 requires the use of
I/O-specific load and store operations. We have a workaround, but need
a long-term solution to the problem.

This patchset changes GEM's vmap/vunmap interfaces to forward pointers
of type struct dma_buf_map and updates the generic fbdev emulation to
use them correctly. This enables I/O-memory operations on all
framebuffers that require and support them.

Patches #1 to #4 prepare VRAM helpers and drivers.

Next is the update of the GEM vmap functions. Patch #5 adds vmap and
vunmap helpers that are usable with TTM-based GEM drivers, and patch #6
updates GEM's vmap/vunmap callback to forward instances of type struct
dma_buf_map. While the patch touches many files throughout the DRM
modules, the applied changes are mostly trivial interface fixes.

Several TTM-based GEM drivers now use the new vmap code. Patch #7
updates GEM's internal vmap/vunmap functions to forward struct
dma_buf_map.

With struct dma_buf_map propagated through the layers, patches #9 and
#10 convert DRM clients and generic fbdev emulation to use it. Updating
the fbdev framebuffer will select the correct functions, either for
system or I/O memory.
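As a rough illustration of how the pieces fit together, the sketch below
copies a damaged rectangle from a system-memory shadow buffer into a
VRAM-backed GEM object line by line, in the style of the fbdev blit helper
from patch #10. The function name and parameters are made up for the
example; it only assumes the drm_gem_vram_vmap()/drm_gem_vram_vunmap()
prototypes from patch #6 and the dma_buf_map_memcpy_to()/dma_buf_map_incr()
helpers from patch #9, and is not code taken from the series itself.

#include <linux/types.h>
#include <linux/dma-buf-map.h>

#include <drm/drm_gem_vram_helper.h>

/* Hypothetical helper: copy a damaged region from a system-memory shadow
 * buffer into a VRAM BO whose kernel mapping may live in I/O memory. */
static int copy_damage_to_vram(struct drm_gem_vram_object *gbo,
                               const u8 *src, size_t src_pitch,
                               size_t dst_offset, size_t dst_pitch,
                               size_t line_len, unsigned int lines)
{
        struct dma_buf_map map; /* filled by vmap; system or I/O memory */
        struct dma_buf_map dst;
        unsigned int y;
        int ret;

        /* vmap now returns an errno and fills @map instead of a pointer */
        ret = drm_gem_vram_vmap(gbo, &map);
        if (ret)
                return ret;

        dst = map;                          /* keep @map intact for vunmap */
        dma_buf_map_incr(&dst, dst_offset); /* first byte of damaged area */

        for (y = 0; y < lines; y++) {
                /* picks memcpy() or memcpy_toio() based on dst.is_iomem */
                dma_buf_map_memcpy_to(&dst, src, line_len);
                dma_buf_map_incr(&dst, dst_pitch);
                src += src_pitch;
        }

        drm_gem_vram_vunmap(gbo, &map);
        return 0;
}

With the old interfaces the same helper would have needed a raw void
pointer plus a separate memcpy_toio() path for I/O memory; here the
dma_buf_map helpers select the correct access method for the caller.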
v4: * provide TTM vmap/vunmap plus GEM helpers and convert drivers over (Christian, Daniel) * remove several empty functions * more TODOs and documentation (Daniel) v3: * recreate the whole patchset on top of struct dma_buf_map v2: * RFC patchset Thomas Zimmermann (10): drm/vram-helper: Remove invariant parameters from internal kmap function drm/cma-helper: Remove empty drm_gem_cma_prime_vunmap() drm/etnaviv: Remove empty etnaviv_gem_prime_vunmap() drm/exynos: Remove empty exynos_drm_gem_prime_{vmap,vunmap}() drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends drm/gem: Update internal GEM vmap/vunmap interfaces to use struct dma_buf_map drm/gem: Store client buffer mappings as struct dma_buf_map dma-buf-map: Add memcpy and pointer-increment interfaces drm/fb_helper: Support framebuffers in I/O memory Documentation/gpu/todo.rst | 37 ++- drivers/gpu/drm/Kconfig | 2 + drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 36 --- drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h | 2 - drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 5 +- drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 1 - drivers/gpu/drm/ast/ast_cursor.c | 27 ++- drivers/gpu/drm/ast/ast_drv.h | 7 +- drivers/gpu/drm/bochs/bochs_kms.c | 1 - drivers/gpu/drm/drm_client.c | 38 ++-- drivers/gpu/drm/drm_fb_helper.c | 238 ++++++++++++++++++-- drivers/gpu/drm/drm_gem.c | 29 ++- drivers/gpu/drm/drm_gem_cma_helper.c | 27 +-- drivers/gpu/drm/drm_gem_shmem_helper.c | 48 ++-- drivers/gpu/drm/drm_gem_ttm_helper.c | 38 ++++ drivers/gpu/drm/drm_gem_vram_helper.c | 117 +++++----- drivers/gpu/drm/drm_internal.h | 5 +- drivers/gpu/drm/drm_prime.c | 14 +- drivers/gpu/drm/etnaviv/etnaviv_drv.h | 3 +- drivers/gpu/drm/etnaviv/etnaviv_gem.c | 1 - drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 12 +- drivers/gpu/drm/exynos/exynos_drm_gem.c | 12 - drivers/gpu/drm/exynos/exynos_drm_gem.h | 2 - drivers/gpu/drm/lima/lima_gem.c | 6 +- drivers/gpu/drm/lima/lima_sched.c | 11 +- drivers/gpu/drm/mgag200/mgag200_mode.c | 10 +- drivers/gpu/drm/nouveau/Kconfig | 1 + drivers/gpu/drm/nouveau/nouveau_bo.h | 2 - drivers/gpu/drm/nouveau/nouveau_gem.c | 6 +- drivers/gpu/drm/nouveau/nouveau_gem.h | 2 - drivers/gpu/drm/nouveau/nouveau_prime.c | 20 -- drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 14 +- drivers/gpu/drm/qxl/qxl_display.c | 11 +- drivers/gpu/drm/qxl/qxl_draw.c | 14 +- drivers/gpu/drm/qxl/qxl_drv.h | 11 +- drivers/gpu/drm/qxl/qxl_object.c | 31 ++- drivers/gpu/drm/qxl/qxl_object.h | 2 +- drivers/gpu/drm/qxl/qxl_prime.c | 12 +- drivers/gpu/drm/radeon/radeon.h | 1 - drivers/gpu/drm/radeon/radeon_gem.c | 7 +- drivers/gpu/drm/radeon/radeon_prime.c | 20 -- drivers/gpu/drm/rockchip/rockchip_drm_gem.c | 22 +- drivers/gpu/drm/rockchip/rockchip_drm_gem.h | 4 +- drivers/gpu/drm/tiny/cirrus.c | 10 +- drivers/gpu/drm/tiny/gm12u320.c | 10 +- drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++ drivers/gpu/drm/udl/udl_modeset.c | 8 +- drivers/gpu/drm/vboxvideo/vbox_mode.c | 11 +- drivers/gpu/drm/vc4/vc4_bo.c | 7 +- drivers/gpu/drm/vc4/vc4_drv.h | 2 +- drivers/gpu/drm/vgem/vgem_drv.c | 16 +- drivers/gpu/drm/xen/xen_drm_front_gem.c | 18 +- drivers/gpu/drm/xen/xen_drm_front_gem.h | 6 +- include/drm/drm_client.h | 7 +- include/drm/drm_gem.h | 5 +- include/drm/drm_gem_cma_helper.h | 3 +- include/drm/drm_gem_shmem_helper.h | 4 +- include/drm/drm_gem_ttm_helper.h | 6 + include/drm/drm_gem_vram_helper.h | 14 +- include/drm/drm_mode_config.h | 12 - include/drm/ttm/ttm_bo_api.h | 28 +++ include/linux/dma-buf-map.h | 92 +++++++- 62 files changed, 
817 insertions(+), 423 deletions(-) -- 2.28.0 From tzimmermann at suse.de Thu Oct 15 12:38:05 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Thu, 15 Oct 2020 14:38:05 +0200 Subject: [Spice-devel] [PATCH v4 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces In-Reply-To: <20201015123806.32416-1-tzimmermann@suse.de> References: <20201015123806.32416-1-tzimmermann@suse.de> Message-ID: <20201015123806.32416-10-tzimmermann@suse.de> To do framebuffer updates, one needs memcpy from system memory and a pointer-increment function. Add both interfaces with documentation. Signed-off-by: Thomas Zimmermann --- include/linux/dma-buf-map.h | 72 +++++++++++++++++++++++++++++++------ 1 file changed, 62 insertions(+), 10 deletions(-) diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h index 2e8bbecb5091..6ca0f304dda2 100644 --- a/include/linux/dma-buf-map.h +++ b/include/linux/dma-buf-map.h @@ -32,6 +32,14 @@ * accessing the buffer. Use the returned instance and the helper functions * to access the buffer's memory in the correct way. * + * The type :c:type:`struct dma_buf_map ` and its helpers are + * actually independent from the dma-buf infrastructure. When sharing buffers + * among devices, drivers have to know the location of the memory to access + * the buffers in a safe way. :c:type:`struct dma_buf_map ` + * solves this problem for dma-buf and its users. If other drivers or + * sub-systems require similar functionality, the type could be generalized + * and moved to a more prominent header file. + * * Open-coding access to :c:type:`struct dma_buf_map ` is * considered bad style. Rather then accessing its fields directly, use one * of the provided helper functions, or implement your own. For example, @@ -51,6 +59,14 @@ * * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); * + * Instances of struct dma_buf_map do not have to be cleaned up, but + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings + * always refer to system memory. + * + * .. code-block:: c + * + * dma_buf_map_clear(&map); + * * Test if a mapping is valid with either dma_buf_map_is_set() or * dma_buf_map_is_null(). * @@ -73,17 +89,19 @@ * if (dma_buf_map_is_equal(&sys_map, &io_map)) * // always false * - * Instances of struct dma_buf_map do not have to be cleaned up, but - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings - * always refer to system memory. + * A set up instance of struct dma_buf_map can be used to access or manipulate + * the buffer memory. Depending on the location of the memory, the provided + * helpers will pick the correct operations. Data can be copied into the memory + * with dma_buf_map_memcpy_to(). The address can be manipulated with + * dma_buf_map_incr(). * - * The type :c:type:`struct dma_buf_map ` and its helpers are - * actually independent from the dma-buf infrastructure. When sharing buffers - * among devices, drivers have to know the location of the memory to access - * the buffers in a safe way. :c:type:`struct dma_buf_map ` - * solves this problem for dma-buf and its users. If other drivers or - * sub-systems require similar functionality, the type could be generalized - * and moved to a more prominent header file. + * .. 
code-block:: c + * + * const void *src = ...; // source buffer + * size_t len = ...; // length of src + * + * dma_buf_map_memcpy_to(&map, src, len); + * dma_buf_map_incr(&map, len); // go to first byte after the memcpy */ /** @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map) } } +/** + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping + * @dst: The dma-buf mapping structure + * @src: The source buffer + * @len: The number of byte in src + * + * Copies data into a dma-buf mapping. The source buffer is in system + * memory. Depending on the buffer's location, the helper picks the correct + * method of accessing the memory. + */ +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len) +{ + if (dst->is_iomem) + memcpy_toio(dst->vaddr_iomem, src, len); + else + memcpy(dst->vaddr, src, len); +} + +/** + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping + * @map: The dma-buf mapping structure + * @incr: The number of bytes to increment + * + * Increments the address stored in a dma-buf mapping. Depending on the + * buffer's location, the correct value will be updated. + */ +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr) +{ + if (map->is_iomem) + map->vaddr_iomem += incr; + else + map->vaddr += incr; +} + #endif /* __DMA_BUF_MAP_H__ */ -- 2.28.0 From tzimmermann at suse.de Thu Oct 15 12:38:06 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Thu, 15 Oct 2020 14:38:06 +0200 Subject: [Spice-devel] [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory In-Reply-To: <20201015123806.32416-1-tzimmermann@suse.de> References: <20201015123806.32416-1-tzimmermann@suse.de> Message-ID: <20201015123806.32416-11-tzimmermann@suse.de> At least sparc64 requires I/O-specific access to framebuffers. This patch updates the fbdev console accordingly. For drivers with direct access to the framebuffer memory, the callback functions in struct fb_ops test for the type of memory and call the rsp fb_sys_ of fb_cfb_ functions. For drivers that employ a shadow buffer, fbdev's blit function retrieves the framebuffer address as struct dma_buf_map, and uses dma_buf_map interfaces to access the buffer. The bochs driver on sparc64 uses a workaround to flag the framebuffer as I/O memory and avoid a HW exception. With the introduction of struct dma_buf_map, this is not required any longer. The patch removes the rsp code from both, bochs and fbdev. v4: * move dma_buf_map changes into separate patch (Daniel) * TODO list: comment on fbdev updates (Daniel) Signed-off-by: Thomas Zimmermann --- Documentation/gpu/todo.rst | 19 ++- drivers/gpu/drm/bochs/bochs_kms.c | 1 - drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++-- include/drm/drm_mode_config.h | 12 -- 4 files changed, 220 insertions(+), 29 deletions(-) diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst index 7e6fc3c04add..638b7f704339 100644 --- a/Documentation/gpu/todo.rst +++ b/Documentation/gpu/todo.rst @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup() ------------------------------------------------ Most drivers can use drm_fbdev_generic_setup(). Driver have to implement -atomic modesetting and GEM vmap support. Current generic fbdev emulation -expects the framebuffer in system memory (or system-like memory). +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation +expected the framebuffer in system memory or system-like memory. 
By employing +struct dma_buf_map, drivers with frambuffers in I/O memory can be supported +as well. Contact: Maintainer of the driver you plan to convert Level: Intermediate +Reimplement functions in drm_fbdev_fb_ops without fbdev +------------------------------------------------------- + +A number of callback functions in drm_fbdev_fb_ops could benefit from +being rewritten without dependencies on the fbdev module. Some of the +helpers could further benefit from using struct dma_buf_map instead of +raw pointers. + +Contact: Thomas Zimmermann , Daniel Vetter + +Level: Advanced + + drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup ----------------------------------------------------------------- diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c index 13d0d04c4457..853081d186d5 100644 --- a/drivers/gpu/drm/bochs/bochs_kms.c +++ b/drivers/gpu/drm/bochs/bochs_kms.c @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs) bochs->dev->mode_config.preferred_depth = 24; bochs->dev->mode_config.prefer_shadow = 0; bochs->dev->mode_config.prefer_shadow_fbdev = 1; - bochs->dev->mode_config.fbdev_use_iomem = true; bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true; bochs->dev->mode_config.funcs = &bochs_mode_funcs; diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c index 6212cd7cde1d..462b0c130ebb 100644 --- a/drivers/gpu/drm/drm_fb_helper.c +++ b/drivers/gpu/drm/drm_fb_helper.c @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work) } static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper, - struct drm_clip_rect *clip) + struct drm_clip_rect *clip, + struct dma_buf_map *dst) { struct drm_framebuffer *fb = fb_helper->fb; unsigned int cpp = fb->format->cpp[0]; size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp; void *src = fb_helper->fbdev->screen_buffer + offset; - void *dst = fb_helper->buffer->map.vaddr + offset; size_t len = (clip->x2 - clip->x1) * cpp; unsigned int y; - for (y = clip->y1; y < clip->y2; y++) { - if (!fb_helper->dev->mode_config.fbdev_use_iomem) - memcpy(dst, src, len); - else - memcpy_toio((void __iomem *)dst, src, len); + dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */ + for (y = clip->y1; y < clip->y2; y++) { + dma_buf_map_memcpy_to(dst, src, len); + dma_buf_map_incr(dst, fb->pitches[0]); src += fb->pitches[0]; - dst += fb->pitches[0]; } } @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work) ret = drm_client_buffer_vmap(helper->buffer, &map); if (ret) return; - drm_fb_helper_dirty_blit_real(helper, &clip_copy); + drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map); } + if (helper->fb->funcs->dirty) helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, &clip_copy, 1); @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info, } EXPORT_SYMBOL(drm_fb_helper_sys_imageblit); +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf, + size_t count, loff_t *ppos) +{ + unsigned long p = *ppos; + u8 *dst; + u8 __iomem *src; + int c, err = 0; + unsigned long total_size; + unsigned long alloc_size; + ssize_t ret = 0; + + if (info->state != FBINFO_STATE_RUNNING) + return -EPERM; + + total_size = info->screen_size; + + if (total_size == 0) + total_size = info->fix.smem_len; + + if (p >= total_size) + return 0; + + if (count >= total_size) + count = total_size; + + if (count + p > total_size) + count = total_size - p; + + src = (u8 __iomem 
*)(info->screen_base + p); + + alloc_size = min(count, PAGE_SIZE); + + dst = kmalloc(alloc_size, GFP_KERNEL); + if (!dst) + return -ENOMEM; + + while (count) { + c = min(count, alloc_size); + + memcpy_fromio(dst, src, c); + if (copy_to_user(buf, dst, c)) { + err = -EFAULT; + break; + } + + src += c; + *ppos += c; + buf += c; + ret += c; + count -= c; + } + + kfree(dst); + + if (err) + return err; + + return ret; +} + +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf, + size_t count, loff_t *ppos) +{ + unsigned long p = *ppos; + u8 *src; + u8 __iomem *dst; + int c, err = 0; + unsigned long total_size; + unsigned long alloc_size; + ssize_t ret = 0; + + if (info->state != FBINFO_STATE_RUNNING) + return -EPERM; + + total_size = info->screen_size; + + if (total_size == 0) + total_size = info->fix.smem_len; + + if (p > total_size) + return -EFBIG; + + if (count > total_size) { + err = -EFBIG; + count = total_size; + } + + if (count + p > total_size) { + /* + * The framebuffer is too small. We do the + * copy operation, but return an error code + * afterwards. Taken from fbdev. + */ + if (!err) + err = -ENOSPC; + count = total_size - p; + } + + alloc_size = min(count, PAGE_SIZE); + + src = kmalloc(alloc_size, GFP_KERNEL); + if (!src) + return -ENOMEM; + + dst = (u8 __iomem *)(info->screen_base + p); + + while (count) { + c = min(count, alloc_size); + + if (copy_from_user(src, buf, c)) { + err = -EFAULT; + break; + } + memcpy_toio(dst, src, c); + + dst += c; + *ppos += c; + buf += c; + ret += c; + count -= c; + } + + kfree(src); + + if (err) + return err; + + return ret; +} + /** * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect * @info: fbdev registered by the helper @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma) return -ENODEV; } +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf, + size_t count, loff_t *ppos) +{ + struct drm_fb_helper *fb_helper = info->par; + struct drm_client_buffer *buffer = fb_helper->buffer; + + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) + return drm_fb_helper_sys_read(info, buf, count, ppos); + else + return drm_fb_helper_cfb_read(info, buf, count, ppos); +} + +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf, + size_t count, loff_t *ppos) +{ + struct drm_fb_helper *fb_helper = info->par; + struct drm_client_buffer *buffer = fb_helper->buffer; + + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) + return drm_fb_helper_sys_write(info, buf, count, ppos); + else + return drm_fb_helper_cfb_write(info, buf, count, ppos); +} + +static void drm_fbdev_fb_fillrect(struct fb_info *info, + const struct fb_fillrect *rect) +{ + struct drm_fb_helper *fb_helper = info->par; + struct drm_client_buffer *buffer = fb_helper->buffer; + + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) + drm_fb_helper_sys_fillrect(info, rect); + else + drm_fb_helper_cfb_fillrect(info, rect); +} + +static void drm_fbdev_fb_copyarea(struct fb_info *info, + const struct fb_copyarea *area) +{ + struct drm_fb_helper *fb_helper = info->par; + struct drm_client_buffer *buffer = fb_helper->buffer; + + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) + drm_fb_helper_sys_copyarea(info, area); + else + drm_fb_helper_cfb_copyarea(info, area); +} + +static void drm_fbdev_fb_imageblit(struct fb_info *info, + const struct fb_image *image) +{ + struct drm_fb_helper *fb_helper = info->par; + struct 
drm_client_buffer *buffer = fb_helper->buffer; + + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) + drm_fb_helper_sys_imageblit(info, image); + else + drm_fb_helper_cfb_imageblit(info, image); +} + static const struct fb_ops drm_fbdev_fb_ops = { .owner = THIS_MODULE, DRM_FB_HELPER_DEFAULT_OPS, @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = { .fb_release = drm_fbdev_fb_release, .fb_destroy = drm_fbdev_fb_destroy, .fb_mmap = drm_fbdev_fb_mmap, - .fb_read = drm_fb_helper_sys_read, - .fb_write = drm_fb_helper_sys_write, - .fb_fillrect = drm_fb_helper_sys_fillrect, - .fb_copyarea = drm_fb_helper_sys_copyarea, - .fb_imageblit = drm_fb_helper_sys_imageblit, + .fb_read = drm_fbdev_fb_read, + .fb_write = drm_fbdev_fb_write, + .fb_fillrect = drm_fbdev_fb_fillrect, + .fb_copyarea = drm_fbdev_fb_copyarea, + .fb_imageblit = drm_fbdev_fb_imageblit, }; static struct fb_deferred_io drm_fbdev_defio = { diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h index 5ffbb4ed5b35..ab424ddd7665 100644 --- a/include/drm/drm_mode_config.h +++ b/include/drm/drm_mode_config.h @@ -877,18 +877,6 @@ struct drm_mode_config { */ bool prefer_shadow_fbdev; - /** - * @fbdev_use_iomem: - * - * Set to true if framebuffer reside in iomem. - * When set to true memcpy_toio() is used when copying the framebuffer in - * drm_fb_helper.drm_fb_helper_dirty_blit_real(). - * - * FIXME: This should be replaced with a per-mapping is_iomem - * flag (like ttm does), and then used everywhere in fbdev code. - */ - bool fbdev_use_iomem; - /** * @quirk_addfb_prefer_xbgr_30bpp: * -- 2.28.0 From christian.koenig at amd.com Thu Oct 15 13:58:49 2020 From: christian.koenig at amd.com (=?UTF-8?Q?Christian_K=c3=b6nig?=) Date: Thu, 15 Oct 2020 15:58:49 +0200 Subject: [Spice-devel] [PATCH v4 02/10] drm/cma-helper: Remove empty drm_gem_cma_prime_vunmap() In-Reply-To: <20201015123806.32416-3-tzimmermann@suse.de> References: <20201015123806.32416-1-tzimmermann@suse.de> <20201015123806.32416-3-tzimmermann@suse.de> Message-ID: <3ff4a9ce-ff3a-392e-e67e-a7687b0826e9@amd.com> Am 15.10.20 um 14:37 schrieb Thomas Zimmermann: > The function drm_gem_cma_prime_vunmap() is empty. Remove it before > changing the interface to use struct drm_buf_map. > > Signed-off-by: Thomas Zimmermann Reviewed-by: Christian K?nig > --- > drivers/gpu/drm/drm_gem_cma_helper.c | 17 ----------------- > drivers/gpu/drm/vc4/vc4_bo.c | 1 - > include/drm/drm_gem_cma_helper.h | 1 - > 3 files changed, 19 deletions(-) > > diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c > index 2165633c9b9e..d527485ea0b7 100644 > --- a/drivers/gpu/drm/drm_gem_cma_helper.c > +++ b/drivers/gpu/drm/drm_gem_cma_helper.c > @@ -537,23 +537,6 @@ void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj) > } > EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap); > > -/** > - * drm_gem_cma_prime_vunmap - unmap a CMA GEM object from the kernel's virtual > - * address space > - * @obj: GEM object > - * @vaddr: kernel virtual address where the CMA GEM object was mapped > - * > - * This function removes a buffer exported via DRM PRIME from the kernel's > - * virtual address space. This is a no-op because CMA buffers cannot be > - * unmapped from kernel space. Drivers using the CMA helpers should set this > - * as their &drm_gem_object_funcs.vunmap callback. 
> - */ > -void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr) > -{ > - /* Nothing to do */ > -} > -EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vunmap); > - > static const struct drm_gem_object_funcs drm_gem_cma_default_funcs = { > .free = drm_gem_cma_free_object, > .print_info = drm_gem_cma_print_info, > diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c > index f432278173cd..557f0d1e6437 100644 > --- a/drivers/gpu/drm/vc4/vc4_bo.c > +++ b/drivers/gpu/drm/vc4/vc4_bo.c > @@ -387,7 +387,6 @@ static const struct drm_gem_object_funcs vc4_gem_object_funcs = { > .export = vc4_prime_export, > .get_sg_table = drm_gem_cma_prime_get_sg_table, > .vmap = vc4_prime_vmap, > - .vunmap = drm_gem_cma_prime_vunmap, > .vm_ops = &vc4_vm_ops, > }; > > diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h > index 2bfa2502607a..a064b0d1c480 100644 > --- a/include/drm/drm_gem_cma_helper.h > +++ b/include/drm/drm_gem_cma_helper.h > @@ -104,7 +104,6 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev, > int drm_gem_cma_prime_mmap(struct drm_gem_object *obj, > struct vm_area_struct *vma); > void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj); > -void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr); > > struct drm_gem_object * > drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size); From christian.koenig at amd.com Thu Oct 15 13:59:51 2020 From: christian.koenig at amd.com (=?UTF-8?Q?Christian_K=c3=b6nig?=) Date: Thu, 15 Oct 2020 15:59:51 +0200 Subject: [Spice-devel] [PATCH v4 03/10] drm/etnaviv: Remove empty etnaviv_gem_prime_vunmap() In-Reply-To: <20201015123806.32416-4-tzimmermann@suse.de> References: <20201015123806.32416-1-tzimmermann@suse.de> <20201015123806.32416-4-tzimmermann@suse.de> Message-ID: <2a01560b-d7f7-e59e-cf71-50b36e0ee078@amd.com> Am 15.10.20 um 14:37 schrieb Thomas Zimmermann: > The function etnaviv_gem_prime_vunmap() is empty. Remove it before > changing the interface to use struct drm_buf_map. 
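
Just to illustrate the direction for anyone skimming the series: once the interface takes struct dma_buf_map, a driver's vmap callback fills in the mapping instead of returning a raw pointer, so an empty vunmap stub has nothing left to clean up. A minimal sketch of such a converted callback for a buffer in system memory is below; example_kernel_mapping() is a made-up helper for the example, the real etnaviv conversion follows in patch 6.

  #include <linux/dma-buf-map.h>
  #include <drm/drm_gem.h>

  static int example_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map)
  {
          /* hypothetical helper that returns the buffer's kernel mapping */
          void *vaddr = example_kernel_mapping(obj);

          if (!vaddr)
                  return -ENOMEM;

          /* record the address and mark it as system memory */
          dma_buf_map_set_vaddr(map, vaddr);

          return 0;
  }
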
> > Signed-off-by: Thomas Zimmermann Acked-by: Christian K?nig > --- > drivers/gpu/drm/etnaviv/etnaviv_drv.h | 1 - > drivers/gpu/drm/etnaviv/etnaviv_gem.c | 1 - > drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 5 ----- > 3 files changed, 7 deletions(-) > > diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h > index 914f0867ff71..9682c26d89bb 100644 > --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h > +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h > @@ -52,7 +52,6 @@ int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma); > int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset); > struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj); > void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj); > -void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); > int etnaviv_gem_prime_mmap(struct drm_gem_object *obj, > struct vm_area_struct *vma); > struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev, > diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c > index 67d9a2b9ea6a..bbd235473645 100644 > --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c > +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c > @@ -571,7 +571,6 @@ static const struct drm_gem_object_funcs etnaviv_gem_object_funcs = { > .unpin = etnaviv_gem_prime_unpin, > .get_sg_table = etnaviv_gem_prime_get_sg_table, > .vmap = etnaviv_gem_prime_vmap, > - .vunmap = etnaviv_gem_prime_vunmap, > .vm_ops = &vm_ops, > }; > > diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c > index 135fbff6fecf..a6d9932a32ae 100644 > --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c > +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c > @@ -27,11 +27,6 @@ void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj) > return etnaviv_gem_vmap(obj); > } > > -void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) > -{ > - /* TODO msm_gem_vunmap() */ > -} > - > int etnaviv_gem_prime_mmap(struct drm_gem_object *obj, > struct vm_area_struct *vma) > { From christian.koenig at amd.com Thu Oct 15 14:00:42 2020 From: christian.koenig at amd.com (=?UTF-8?Q?Christian_K=c3=b6nig?=) Date: Thu, 15 Oct 2020 16:00:42 +0200 Subject: [Spice-devel] [PATCH v4 04/10] drm/exynos: Remove empty exynos_drm_gem_prime_{vmap, vunmap}() In-Reply-To: <20201015123806.32416-5-tzimmermann@suse.de> References: <20201015123806.32416-1-tzimmermann@suse.de> <20201015123806.32416-5-tzimmermann@suse.de> Message-ID: <7a6f6526-1b67-61c8-2239-50f2bfbdc29d@amd.com> Am 15.10.20 um 14:38 schrieb Thomas Zimmermann: > The functions exynos_drm_gem_prime_{vmap,vunmap}() are empty. Remove > them before changing the interface to use struct drm_buf_map. As a side > effect of removing drm_gem_prime_vmap(), the error code changes from > ENOMEM to EOPNOTSUPP. 
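
The error-code change follows directly from how the core helper is reworked later in the series. Condensed from the drm_gem_vmap() conversion in patch 6 (not the verbatim code), the relevant logic is:

  void *drm_gem_vmap(struct drm_gem_object *obj)
  {
          struct dma_buf_map map;
          int ret;

          if (!obj->funcs->vmap)                  /* exynos now takes this branch */
                  return ERR_PTR(-EOPNOTSUPP);

          ret = obj->funcs->vmap(obj, &map);
          if (ret)
                  return ERR_PTR(ret);
          else if (dma_buf_map_is_null(&map))     /* callback set no address */
                  return ERR_PTR(-ENOMEM);

          return map.vaddr;
  }

With the empty exynos stub removed there is no vmap callback at all, so callers see -EOPNOTSUPP instead of the -ENOMEM that the NULL-returning stub used to produce.
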
> > Signed-off-by: Thomas Zimmermann Acked-by: Christian K?nig > --- > drivers/gpu/drm/exynos/exynos_drm_gem.c | 12 ------------ > drivers/gpu/drm/exynos/exynos_drm_gem.h | 2 -- > 2 files changed, 14 deletions(-) > > diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c > index e7a6eb96f692..13a35623ac04 100644 > --- a/drivers/gpu/drm/exynos/exynos_drm_gem.c > +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c > @@ -137,8 +137,6 @@ static const struct vm_operations_struct exynos_drm_gem_vm_ops = { > static const struct drm_gem_object_funcs exynos_drm_gem_object_funcs = { > .free = exynos_drm_gem_free_object, > .get_sg_table = exynos_drm_gem_prime_get_sg_table, > - .vmap = exynos_drm_gem_prime_vmap, > - .vunmap = exynos_drm_gem_prime_vunmap, > .vm_ops = &exynos_drm_gem_vm_ops, > }; > > @@ -471,16 +469,6 @@ exynos_drm_gem_prime_import_sg_table(struct drm_device *dev, > return &exynos_gem->base; > } > > -void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj) > -{ > - return NULL; > -} > - > -void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) > -{ > - /* Nothing to do */ > -} > - > int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj, > struct vm_area_struct *vma) > { > diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.h b/drivers/gpu/drm/exynos/exynos_drm_gem.h > index 74e926abeff0..a23272fb96fb 100644 > --- a/drivers/gpu/drm/exynos/exynos_drm_gem.h > +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.h > @@ -107,8 +107,6 @@ struct drm_gem_object * > exynos_drm_gem_prime_import_sg_table(struct drm_device *dev, > struct dma_buf_attachment *attach, > struct sg_table *sgt); > -void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj); > -void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); > int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj, > struct vm_area_struct *vma); > From christian.koenig at amd.com Thu Oct 15 14:21:31 2020 From: christian.koenig at amd.com (=?UTF-8?Q?Christian_K=c3=b6nig?=) Date: Thu, 15 Oct 2020 16:21:31 +0200 Subject: [Spice-devel] [PATCH v4 06/10] drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends In-Reply-To: <20201015123806.32416-7-tzimmermann@suse.de> References: <20201015123806.32416-1-tzimmermann@suse.de> <20201015123806.32416-7-tzimmermann@suse.de> Message-ID: <6bd7d5cf-06c8-3fd8-9bbe-a80ff6bb327e@amd.com> Am 15.10.20 um 14:38 schrieb Thomas Zimmermann: > This patch replaces the vmap/vunmap's use of raw pointers in GEM object > functions with instances of struct dma_buf_map. GEM backends are > converted as well. For most of them, this simply changes the returned type. > > TTM-based drivers now return information about the location of the memory, > either system or I/O memory. GEM VRAM helpers and qxl now use ttm_bo_vmap() > et al. Amdgpu, nouveau and radeon use drm_gem_ttm_vmap() et al instead of > implementing their own vmap callbacks. > > v4: > * use ttm_bo_vmap(), drm_gem_ttm_vmap(), et al. (Daniel, Christian) > * fix a trailing { in drm_gem_vmap() > * remove several empty functions instead of converting them (Daniel) > * comment uses of raw pointers with a TODO (Daniel) > * TODO list: convert more helpers to use struct dma_buf_map > > Signed-off-by: Thomas Zimmermann The amdgpu changes look good to me, but I can't fully judge the other stuff. 
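
For reference, the caller-side pattern after this patch comes down to something like the following rough fragment (names are generic, error handling trimmed; gbo is a struct drm_gem_vram_object * as elsewhere in the patch). The point is that the returned mapping carries both the address and whether it refers to I/O memory, so the caller can pick the right accessors:

  struct dma_buf_map map;
  int ret;

  ret = drm_gem_vram_vmap(gbo, &map);     /* pins and maps the BO */
  if (ret)
          return ret;

  if (map.is_iomem)
          writel(0, map.vaddr_iomem);     /* I/O memory: use I/O accessors */
  else
          memset(map.vaddr, 0, 4);        /* system memory: plain pointer access */

  drm_gem_vram_vunmap(gbo, &map);
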
Acked-by: Christian K?nig > --- > Documentation/gpu/todo.rst | 18 ++++ > drivers/gpu/drm/Kconfig | 2 + > drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 36 ------- > drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h | 2 - > drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 5 +- > drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 1 - > drivers/gpu/drm/ast/ast_cursor.c | 27 +++-- > drivers/gpu/drm/ast/ast_drv.h | 7 +- > drivers/gpu/drm/drm_gem.c | 23 +++-- > drivers/gpu/drm/drm_gem_cma_helper.c | 10 +- > drivers/gpu/drm/drm_gem_shmem_helper.c | 48 +++++---- > drivers/gpu/drm/drm_gem_vram_helper.c | 107 ++++++++++---------- > drivers/gpu/drm/etnaviv/etnaviv_drv.h | 2 +- > drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 9 +- > drivers/gpu/drm/lima/lima_gem.c | 6 +- > drivers/gpu/drm/lima/lima_sched.c | 11 +- > drivers/gpu/drm/mgag200/mgag200_mode.c | 10 +- > drivers/gpu/drm/nouveau/Kconfig | 1 + > drivers/gpu/drm/nouveau/nouveau_bo.h | 2 - > drivers/gpu/drm/nouveau/nouveau_gem.c | 6 +- > drivers/gpu/drm/nouveau/nouveau_gem.h | 2 - > drivers/gpu/drm/nouveau/nouveau_prime.c | 20 ---- > drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 14 +-- > drivers/gpu/drm/qxl/qxl_display.c | 11 +- > drivers/gpu/drm/qxl/qxl_draw.c | 14 ++- > drivers/gpu/drm/qxl/qxl_drv.h | 11 +- > drivers/gpu/drm/qxl/qxl_object.c | 31 +++--- > drivers/gpu/drm/qxl/qxl_object.h | 2 +- > drivers/gpu/drm/qxl/qxl_prime.c | 12 +-- > drivers/gpu/drm/radeon/radeon.h | 1 - > drivers/gpu/drm/radeon/radeon_gem.c | 7 +- > drivers/gpu/drm/radeon/radeon_prime.c | 20 ---- > drivers/gpu/drm/rockchip/rockchip_drm_gem.c | 22 ++-- > drivers/gpu/drm/rockchip/rockchip_drm_gem.h | 4 +- > drivers/gpu/drm/tiny/cirrus.c | 10 +- > drivers/gpu/drm/tiny/gm12u320.c | 10 +- > drivers/gpu/drm/udl/udl_modeset.c | 8 +- > drivers/gpu/drm/vboxvideo/vbox_mode.c | 11 +- > drivers/gpu/drm/vc4/vc4_bo.c | 6 +- > drivers/gpu/drm/vc4/vc4_drv.h | 2 +- > drivers/gpu/drm/vgem/vgem_drv.c | 16 ++- > drivers/gpu/drm/xen/xen_drm_front_gem.c | 18 ++-- > drivers/gpu/drm/xen/xen_drm_front_gem.h | 6 +- > include/drm/drm_gem.h | 5 +- > include/drm/drm_gem_cma_helper.h | 2 +- > include/drm/drm_gem_shmem_helper.h | 4 +- > include/drm/drm_gem_vram_helper.h | 14 +-- > 47 files changed, 321 insertions(+), 295 deletions(-) > > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst > index 700637e25ecd..7e6fc3c04add 100644 > --- a/Documentation/gpu/todo.rst > +++ b/Documentation/gpu/todo.rst > @@ -446,6 +446,24 @@ Contact: Ville Syrj?l?, Daniel Vetter > > Level: Intermediate > > +Use struct dma_buf_map throughout codebase > +------------------------------------------ > + > +Pointers to shared device memory are stored in struct dma_buf_map. Each > +instance knows whether it refers to system or I/O memory. Most of the DRM-wide > +interface have been converted to use struct dma_buf_map, but implementations > +often still use raw pointers. > + > +The task is to use struct dma_buf_map where it makes sense. > + > +* Memory managers should use struct dma_buf_map for dma-buf-imported buffers. > +* TTM might benefit from using struct dma_buf_map internally. > +* Framebuffer copying and blitting helpers should operate on struct dma_buf_map. 
> + > +Contact: Thomas Zimmermann , Christian K?nig, Daniel Vetter > + > +Level: Intermediate > + > > Core refactorings > ================= > diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig > index 147d61b9674e..319839b87d37 100644 > --- a/drivers/gpu/drm/Kconfig > +++ b/drivers/gpu/drm/Kconfig > @@ -239,6 +239,7 @@ config DRM_RADEON > select FW_LOADER > select DRM_KMS_HELPER > select DRM_TTM > + select DRM_TTM_HELPER > select POWER_SUPPLY > select HWMON > select BACKLIGHT_CLASS_DEVICE > @@ -259,6 +260,7 @@ config DRM_AMDGPU > select DRM_KMS_HELPER > select DRM_SCHED > select DRM_TTM > + select DRM_TTM_HELPER > select POWER_SUPPLY > select HWMON > select BACKLIGHT_CLASS_DEVICE > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c > index 5b465ab774d1..e5919efca870 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c > @@ -41,42 +41,6 @@ > #include > #include > > -/** > - * amdgpu_gem_prime_vmap - &dma_buf_ops.vmap implementation > - * @obj: GEM BO > - * > - * Sets up an in-kernel virtual mapping of the BO's memory. > - * > - * Returns: > - * The virtual address of the mapping or an error pointer. > - */ > -void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj) > -{ > - struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj); > - int ret; > - > - ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, > - &bo->dma_buf_vmap); > - if (ret) > - return ERR_PTR(ret); > - > - return bo->dma_buf_vmap.virtual; > -} > - > -/** > - * amdgpu_gem_prime_vunmap - &dma_buf_ops.vunmap implementation > - * @obj: GEM BO > - * @vaddr: Virtual address (unused) > - * > - * Tears down the in-kernel virtual mapping of the BO's memory. > - */ > -void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) > -{ > - struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj); > - > - ttm_bo_kunmap(&bo->dma_buf_vmap); > -} > - > /** > * amdgpu_gem_prime_mmap - &drm_driver.gem_prime_mmap implementation > * @obj: GEM BO > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h > index 2c5c84a06bb9..39b5b9616fd8 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h > @@ -31,8 +31,6 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev, > struct dma_buf *dma_buf); > bool amdgpu_dmabuf_is_xgmi_accessible(struct amdgpu_device *adev, > struct amdgpu_bo *bo); > -void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj); > -void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); > int amdgpu_gem_prime_mmap(struct drm_gem_object *obj, > struct vm_area_struct *vma); > > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c > index be08a63ef58c..576659827e74 100644 > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c > @@ -33,6 +33,7 @@ > > #include > #include > +#include > > #include "amdgpu.h" > #include "amdgpu_display.h" > @@ -220,8 +221,8 @@ static const struct drm_gem_object_funcs amdgpu_gem_object_funcs = { > .open = amdgpu_gem_object_open, > .close = amdgpu_gem_object_close, > .export = amdgpu_gem_prime_export, > - .vmap = amdgpu_gem_prime_vmap, > - .vunmap = amdgpu_gem_prime_vunmap, > + .vmap = drm_gem_ttm_vmap, > + .vunmap = drm_gem_ttm_vunmap, > }; > > /* > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h > index 132e5f955180..01296ef0d673 100644 > 
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h > @@ -100,7 +100,6 @@ struct amdgpu_bo { > struct amdgpu_bo *parent; > struct amdgpu_bo *shadow; > > - struct ttm_bo_kmap_obj dma_buf_vmap; > struct amdgpu_mn *mn; > > > diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c > index e0f4613918ad..742d43a7edf4 100644 > --- a/drivers/gpu/drm/ast/ast_cursor.c > +++ b/drivers/gpu/drm/ast/ast_cursor.c > @@ -39,7 +39,7 @@ static void ast_cursor_fini(struct ast_private *ast) > > for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) { > gbo = ast->cursor.gbo[i]; > - drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]); > + drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]); > drm_gem_vram_unpin(gbo); > drm_gem_vram_put(gbo); > } > @@ -60,7 +60,7 @@ int ast_cursor_init(struct ast_private *ast) > struct drm_device *dev = &ast->base; > size_t size, i; > struct drm_gem_vram_object *gbo; > - void __iomem *vaddr; > + struct dma_buf_map map; > int ret; > > size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE); > @@ -77,16 +77,15 @@ int ast_cursor_init(struct ast_private *ast) > drm_gem_vram_put(gbo); > goto err_drm_gem_vram_put; > } > - vaddr = drm_gem_vram_vmap(gbo); > - if (IS_ERR(vaddr)) { > - ret = PTR_ERR(vaddr); > + ret = drm_gem_vram_vmap(gbo, &map); > + if (ret) { > drm_gem_vram_unpin(gbo); > drm_gem_vram_put(gbo); > goto err_drm_gem_vram_put; > } > > ast->cursor.gbo[i] = gbo; > - ast->cursor.vaddr[i] = vaddr; > + ast->cursor.map[i] = map; > } > > return drmm_add_action_or_reset(dev, ast_cursor_release, NULL); > @@ -95,7 +94,7 @@ int ast_cursor_init(struct ast_private *ast) > while (i) { > --i; > gbo = ast->cursor.gbo[i]; > - drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]); > + drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]); > drm_gem_vram_unpin(gbo); > drm_gem_vram_put(gbo); > } > @@ -170,6 +169,7 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb) > { > struct drm_device *dev = &ast->base; > struct drm_gem_vram_object *gbo; > + struct dma_buf_map map; > int ret; > void *src; > void __iomem *dst; > @@ -183,18 +183,17 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb) > ret = drm_gem_vram_pin(gbo, 0); > if (ret) > return ret; > - src = drm_gem_vram_vmap(gbo); > - if (IS_ERR(src)) { > - ret = PTR_ERR(src); > + ret = drm_gem_vram_vmap(gbo, &map); > + if (ret) > goto err_drm_gem_vram_unpin; > - } > + src = map.vaddr; /* TODO: Use mapping abstraction properly */ > > - dst = ast->cursor.vaddr[ast->cursor.next_index]; > + dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem; > > /* do data transfer to cursor BO */ > update_cursor_image(dst, src, fb->width, fb->height); > > - drm_gem_vram_vunmap(gbo, src); > + drm_gem_vram_vunmap(gbo, &map); > drm_gem_vram_unpin(gbo); > > return 0; > @@ -257,7 +256,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y, > u8 __iomem *sig; > u8 jreg; > > - dst = ast->cursor.vaddr[ast->cursor.next_index]; > + dst = ast->cursor.map[ast->cursor.next_index].vaddr; > > sig = dst + AST_HWC_SIZE; > writel(x, sig + AST_HWC_SIGNATURE_X); > diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h > index 467049ca8430..f963141dd851 100644 > --- a/drivers/gpu/drm/ast/ast_drv.h > +++ b/drivers/gpu/drm/ast/ast_drv.h > @@ -28,10 +28,11 @@ > #ifndef __AST_DRV_H__ > #define __AST_DRV_H__ > > -#include > -#include > +#include > #include > #include > +#include > +#include > > #include > #include > @@ -131,7 +132,7 @@ struct ast_private { > > 
struct { > struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM]; > - void __iomem *vaddr[AST_DEFAULT_HWC_NUM]; > + struct dma_buf_map map[AST_DEFAULT_HWC_NUM]; > unsigned int next_index; > } cursor; > > diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c > index 1da67d34e55d..a89ad4570e3c 100644 > --- a/drivers/gpu/drm/drm_gem.c > +++ b/drivers/gpu/drm/drm_gem.c > @@ -36,6 +36,7 @@ > #include > #include > #include > +#include > #include > #include > > @@ -1207,26 +1208,30 @@ void drm_gem_unpin(struct drm_gem_object *obj) > > void *drm_gem_vmap(struct drm_gem_object *obj) > { > - void *vaddr; > + struct dma_buf_map map; > + int ret; > > - if (obj->funcs->vmap) > - vaddr = obj->funcs->vmap(obj); > - else > - vaddr = ERR_PTR(-EOPNOTSUPP); > + if (!obj->funcs->vmap) > + return ERR_PTR(-EOPNOTSUPP); > > - if (!vaddr) > - vaddr = ERR_PTR(-ENOMEM); > + ret = obj->funcs->vmap(obj, &map); > + if (ret) > + return ERR_PTR(ret); > + else if (dma_buf_map_is_null(&map)) > + return ERR_PTR(-ENOMEM); > > - return vaddr; > + return map.vaddr; > } > > void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr) > { > + struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr); > + > if (!vaddr) > return; > > if (obj->funcs->vunmap) > - obj->funcs->vunmap(obj, vaddr); > + obj->funcs->vunmap(obj, &map); > } > > /** > diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c > index d527485ea0b7..b57e3e9222f0 100644 > --- a/drivers/gpu/drm/drm_gem_cma_helper.c > +++ b/drivers/gpu/drm/drm_gem_cma_helper.c > @@ -519,6 +519,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap); > * drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual > * address space > * @obj: GEM object > + * @map: Returns the kernel virtual address of the CMA GEM object's backing > + * store. > * > * This function maps a buffer exported via DRM PRIME into the kernel's > * virtual address space. Since the CMA buffers are already mapped into the > @@ -527,13 +529,15 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap); > * driver's &drm_gem_object_funcs.vmap callback. > * > * Returns: > - * The kernel virtual address of the CMA GEM object's backing store. > + * 0 on success, or a negative error code otherwise. 
> */ > -void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj) > +int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj); > > - return cma_obj->vaddr; > + dma_buf_map_set_vaddr(map, cma_obj->vaddr); > + > + return 0; > } > EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap); > > diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c > index fb11df7aced5..5553f58f68f3 100644 > --- a/drivers/gpu/drm/drm_gem_shmem_helper.c > +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c > @@ -258,19 +258,25 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj) > } > EXPORT_SYMBOL(drm_gem_shmem_unpin); > > -static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem) > +static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map) > { > struct drm_gem_object *obj = &shmem->base; > - struct dma_buf_map map; > int ret = 0; > > - if (shmem->vmap_use_count++ > 0) > - return shmem->vaddr; > + if (shmem->vmap_use_count++ > 0) { > + dma_buf_map_set_vaddr(map, shmem->vaddr); > + return 0; > + } > > if (obj->import_attach) { > - ret = dma_buf_vmap(obj->import_attach->dmabuf, &map); > - if (!ret) > - shmem->vaddr = map.vaddr; > + ret = dma_buf_vmap(obj->import_attach->dmabuf, map); > + if (!ret) { > + if (WARN_ON(map->is_iomem)) { > + ret = -EIO; > + goto err_put_pages; > + } > + shmem->vaddr = map->vaddr; > + } > } else { > pgprot_t prot = PAGE_KERNEL; > > @@ -284,6 +290,8 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem) > VM_MAP, prot); > if (!shmem->vaddr) > ret = -ENOMEM; > + else > + dma_buf_map_set_vaddr(map, shmem->vaddr); > } > > if (ret) { > @@ -291,7 +299,7 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem) > goto err_put_pages; > } > > - return shmem->vaddr; > + return 0; > > err_put_pages: > if (!obj->import_attach) > @@ -299,12 +307,14 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem) > err_zero_use: > shmem->vmap_use_count = 0; > > - return ERR_PTR(ret); > + return ret; > } > > /* > * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object > * @shmem: shmem GEM object > + * @map: Returns the kernel virtual address of the SHMEM GEM object's backing > + * store. > * > * This function makes sure that a contiguous kernel virtual address mapping > * exists for the buffer backing the shmem GEM object. > @@ -318,26 +328,25 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem) > * Returns: > * 0 on success or a negative error code on failure. 
> */ > -void *drm_gem_shmem_vmap(struct drm_gem_object *obj) > +int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); > - void *vaddr; > int ret; > > ret = mutex_lock_interruptible(&shmem->vmap_lock); > if (ret) > - return ERR_PTR(ret); > - vaddr = drm_gem_shmem_vmap_locked(shmem); > + return ret; > + ret = drm_gem_shmem_vmap_locked(shmem, map); > mutex_unlock(&shmem->vmap_lock); > > - return vaddr; > + return ret; > } > EXPORT_SYMBOL(drm_gem_shmem_vmap); > > -static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem) > +static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem, > + struct dma_buf_map *map) > { > struct drm_gem_object *obj = &shmem->base; > - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(shmem->vaddr); > > if (WARN_ON_ONCE(!shmem->vmap_use_count)) > return; > @@ -346,7 +355,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem) > return; > > if (obj->import_attach) > - dma_buf_vunmap(obj->import_attach->dmabuf, &map); > + dma_buf_vunmap(obj->import_attach->dmabuf, map); > else > vunmap(shmem->vaddr); > > @@ -357,6 +366,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem) > /* > * drm_gem_shmem_vunmap - Unmap a virtual mapping fo a shmem GEM object > * @shmem: shmem GEM object > + * @map: Kernel virtual address where the SHMEM GEM object was mapped > * > * This function cleans up a kernel virtual address mapping acquired by > * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to > @@ -366,12 +376,12 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem) > * also be called by drivers directly, in which case it will hide the > * differences between dma-buf imported and natively allocated objects. > */ > -void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr) > +void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); > > mutex_lock(&shmem->vmap_lock); > - drm_gem_shmem_vunmap_locked(shmem); > + drm_gem_shmem_vunmap_locked(shmem, map); > mutex_unlock(&shmem->vmap_lock); > } > EXPORT_SYMBOL(drm_gem_shmem_vunmap); > diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c > index 2d5ed30518f1..4d8553b28558 100644 > --- a/drivers/gpu/drm/drm_gem_vram_helper.c > +++ b/drivers/gpu/drm/drm_gem_vram_helper.c > @@ -1,5 +1,6 @@ > // SPDX-License-Identifier: GPL-2.0-or-later > > +#include > #include > > #include > @@ -113,8 +114,8 @@ static void drm_gem_vram_cleanup(struct drm_gem_vram_object *gbo) > * up; only release the GEM object. 
> */ > > - WARN_ON(gbo->kmap_use_count); > - WARN_ON(gbo->kmap.virtual); > + WARN_ON(gbo->vmap_use_count); > + WARN_ON(dma_buf_map_is_set(&gbo->map)); > > drm_gem_object_release(&gbo->bo.base); > } > @@ -382,29 +383,37 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo) > } > EXPORT_SYMBOL(drm_gem_vram_unpin); > > -static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo) > +static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo, > + struct dma_buf_map *map) > { > int ret; > - struct ttm_bo_kmap_obj *kmap = &gbo->kmap; > - bool is_iomem; > > - if (gbo->kmap_use_count > 0) > + if (gbo->vmap_use_count > 0) > goto out; > > - ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap); > + ret = ttm_bo_vmap(&gbo->bo, &gbo->map); > if (ret) > - return ERR_PTR(ret); > + return ret; > > out: > - ++gbo->kmap_use_count; > - return ttm_kmap_obj_virtual(kmap, &is_iomem); > + ++gbo->vmap_use_count; > + *map = gbo->map; > + > + return 0; > } > > -static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo) > +static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo, > + struct dma_buf_map *map) > { > - if (WARN_ON_ONCE(!gbo->kmap_use_count)) > + struct drm_device *dev = gbo->bo.base.dev; > + > + if (drm_WARN_ON_ONCE(dev, !gbo->vmap_use_count)) > return; > - if (--gbo->kmap_use_count > 0) > + > + if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&gbo->map, map))) > + return; /* BUG: map not mapped from this BO */ > + > + if (--gbo->vmap_use_count > 0) > return; > > /* > @@ -418,7 +427,9 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo) > /** > * drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address > * space > - * @gbo: The GEM VRAM object to map > + * @gbo: The GEM VRAM object to map > + * @map: Returns the kernel virtual address of the VRAM GEM object's backing > + * store. > * > * The vmap function pins a GEM VRAM object to its current location, either > * system or video memory, and maps its buffer into kernel address space. > @@ -427,48 +438,44 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo) > * unmap and unpin the GEM VRAM object. > * > * Returns: > - * The buffer's virtual address on success, or > - * an ERR_PTR()-encoded error code otherwise. > + * 0 on success, or a negative error code otherwise. 
> */ > -void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo) > +int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map) > { > int ret; > - void *base; > > ret = ttm_bo_reserve(&gbo->bo, true, false, NULL); > if (ret) > - return ERR_PTR(ret); > + return ret; > > ret = drm_gem_vram_pin_locked(gbo, 0); > if (ret) > goto err_ttm_bo_unreserve; > - base = drm_gem_vram_kmap_locked(gbo); > - if (IS_ERR(base)) { > - ret = PTR_ERR(base); > + ret = drm_gem_vram_kmap_locked(gbo, map); > + if (ret) > goto err_drm_gem_vram_unpin_locked; > - } > > ttm_bo_unreserve(&gbo->bo); > > - return base; > + return 0; > > err_drm_gem_vram_unpin_locked: > drm_gem_vram_unpin_locked(gbo); > err_ttm_bo_unreserve: > ttm_bo_unreserve(&gbo->bo); > - return ERR_PTR(ret); > + return ret; > } > EXPORT_SYMBOL(drm_gem_vram_vmap); > > /** > * drm_gem_vram_vunmap() - Unmaps and unpins a GEM VRAM object > - * @gbo: The GEM VRAM object to unmap > - * @vaddr: The mapping's base address as returned by drm_gem_vram_vmap() > + * @gbo: The GEM VRAM object to unmap > + * @map: Kernel virtual address where the VRAM GEM object was mapped > * > * A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See > * the documentation for drm_gem_vram_vmap() for more information. > */ > -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr) > +void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map) > { > int ret; > > @@ -476,7 +483,7 @@ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr) > if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret)) > return; > > - drm_gem_vram_kunmap_locked(gbo); > + drm_gem_vram_kunmap_locked(gbo, map); > drm_gem_vram_unpin_locked(gbo); > > ttm_bo_unreserve(&gbo->bo); > @@ -567,15 +574,13 @@ static void drm_gem_vram_bo_driver_move_notify(struct drm_gem_vram_object *gbo, > bool evict, > struct ttm_resource *new_mem) > { > - struct ttm_bo_kmap_obj *kmap = &gbo->kmap; > + struct ttm_buffer_object *bo = &gbo->bo; > + struct drm_device *dev = bo->base.dev; > > - if (WARN_ON_ONCE(gbo->kmap_use_count)) > + if (drm_WARN_ON_ONCE(dev, gbo->vmap_use_count)) > return; > > - if (!kmap->virtual) > - return; > - ttm_bo_kunmap(kmap); > - kmap->virtual = NULL; > + ttm_bo_vunmap(bo, &gbo->map); > } > > static int drm_gem_vram_bo_driver_move(struct drm_gem_vram_object *gbo, > @@ -832,37 +837,33 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem) > } > > /** > - * drm_gem_vram_object_vmap() - \ > - Implements &struct drm_gem_object_funcs.vmap > - * @gem: The GEM object to map > + * drm_gem_vram_object_vmap() - > + * Implements &struct drm_gem_object_funcs.vmap > + * @gem: The GEM object to map > + * @map: Returns the kernel virtual address of the VRAM GEM object's backing > + * store. > * > * Returns: > - * The buffers virtual address on success, or > - * NULL otherwise. > + * 0 on success, or a negative error code otherwise. 
> */ > -static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem) > +static int drm_gem_vram_object_vmap(struct drm_gem_object *gem, struct dma_buf_map *map) > { > struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem); > - void *base; > > - base = drm_gem_vram_vmap(gbo); > - if (IS_ERR(base)) > - return NULL; > - return base; > + return drm_gem_vram_vmap(gbo, map); > } > > /** > - * drm_gem_vram_object_vunmap() - \ > - Implements &struct drm_gem_object_funcs.vunmap > - * @gem: The GEM object to unmap > - * @vaddr: The mapping's base address > + * drm_gem_vram_object_vunmap() - > + * Implements &struct drm_gem_object_funcs.vunmap > + * @gem: The GEM object to unmap > + * @map: Kernel virtual address where the VRAM GEM object was mapped > */ > -static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, > - void *vaddr) > +static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, struct dma_buf_map *map) > { > struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem); > > - drm_gem_vram_vunmap(gbo, vaddr); > + drm_gem_vram_vunmap(gbo, map); > } > > /* > diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h > index 9682c26d89bb..f5be627e1de0 100644 > --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h > +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h > @@ -51,7 +51,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data, > int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma); > int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset); > struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj); > -void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj); > +int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); > int etnaviv_gem_prime_mmap(struct drm_gem_object *obj, > struct vm_area_struct *vma); > struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev, > diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c > index a6d9932a32ae..bc2543dd987d 100644 > --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c > +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c > @@ -22,9 +22,14 @@ struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj) > return drm_prime_pages_to_sg(obj->dev, etnaviv_obj->pages, npages); > } > > -void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj) > +int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > - return etnaviv_gem_vmap(obj); > + void *vaddr = etnaviv_gem_vmap(obj); > + if (!vaddr) > + return -ENOMEM; > + dma_buf_map_set_vaddr(map, vaddr); > + > + return 0; > } > > int etnaviv_gem_prime_mmap(struct drm_gem_object *obj, > diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c > index 11223fe348df..832e5280a6ed 100644 > --- a/drivers/gpu/drm/lima/lima_gem.c > +++ b/drivers/gpu/drm/lima/lima_gem.c > @@ -182,14 +182,14 @@ static int lima_gem_pin(struct drm_gem_object *obj) > return drm_gem_shmem_pin(obj); > } > > -static void *lima_gem_vmap(struct drm_gem_object *obj) > +static int lima_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > struct lima_bo *bo = to_lima_bo(obj); > > if (bo->heap_size) > - return ERR_PTR(-EINVAL); > + return -EINVAL; > > - return drm_gem_shmem_vmap(obj); > + return drm_gem_shmem_vmap(obj, map); > } > > static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma) > diff --git a/drivers/gpu/drm/lima/lima_sched.c 
b/drivers/gpu/drm/lima/lima_sched.c > index dc6df9e9a40d..a070a85f8f36 100644 > --- a/drivers/gpu/drm/lima/lima_sched.c > +++ b/drivers/gpu/drm/lima/lima_sched.c > @@ -1,6 +1,7 @@ > // SPDX-License-Identifier: GPL-2.0 OR MIT > /* Copyright 2017-2019 Qiang Yu */ > > +#include > #include > #include > #include > @@ -303,6 +304,8 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task) > struct lima_dump_chunk_buffer *buffer_chunk; > u32 size, task_size, mem_size; > int i; > + struct dma_buf_map map; > + int ret; > > mutex_lock(&dev->error_task_list_lock); > > @@ -388,15 +391,15 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task) > } else { > buffer_chunk->size = lima_bo_size(bo); > > - data = drm_gem_shmem_vmap(&bo->base.base); > - if (IS_ERR_OR_NULL(data)) { > + ret = drm_gem_shmem_vmap(&bo->base.base, &map); > + if (ret) { > kvfree(et); > goto out; > } > > - memcpy(buffer_chunk + 1, data, buffer_chunk->size); > + memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size); > > - drm_gem_shmem_vunmap(&bo->base.base, data); > + drm_gem_shmem_vunmap(&bo->base.base, &map); > } > > buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size; > diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c > index 38672f9e5c4f..8ef76769b97f 100644 > --- a/drivers/gpu/drm/mgag200/mgag200_mode.c > +++ b/drivers/gpu/drm/mgag200/mgag200_mode.c > @@ -9,6 +9,7 @@ > */ > > #include > +#include > > #include > #include > @@ -1556,15 +1557,18 @@ mgag200_handle_damage(struct mga_device *mdev, struct drm_framebuffer *fb, > struct drm_rect *clip) > { > struct drm_device *dev = &mdev->base; > + struct dma_buf_map map; > void *vmap; > + int ret; > > - vmap = drm_gem_shmem_vmap(fb->obj[0]); > - if (drm_WARN_ON(dev, !vmap)) > + ret = drm_gem_shmem_vmap(fb->obj[0], &map); > + if (drm_WARN_ON(dev, ret)) > return; /* BUG: SHMEM BO should always be vmapped */ > + vmap = map.vaddr; /* TODO: Use mapping abstraction properly */ > > drm_fb_memcpy_dstclip(mdev->vram, vmap, fb, clip); > > - drm_gem_shmem_vunmap(fb->obj[0], vmap); > + drm_gem_shmem_vunmap(fb->obj[0], &map); > > /* Always scanout image at VRAM offset 0 */ > mgag200_set_startadd(mdev, (u32)0); > diff --git a/drivers/gpu/drm/nouveau/Kconfig b/drivers/gpu/drm/nouveau/Kconfig > index 5dec1e5694b7..9436310d0854 100644 > --- a/drivers/gpu/drm/nouveau/Kconfig > +++ b/drivers/gpu/drm/nouveau/Kconfig > @@ -6,6 +6,7 @@ config DRM_NOUVEAU > select FW_LOADER > select DRM_KMS_HELPER > select DRM_TTM > + select DRM_TTM_HELPER > select BACKLIGHT_CLASS_DEVICE if DRM_NOUVEAU_BACKLIGHT > select ACPI_VIDEO if ACPI && X86 && BACKLIGHT_CLASS_DEVICE && INPUT > select X86_PLATFORM_DEVICES if ACPI && X86 > diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.h b/drivers/gpu/drm/nouveau/nouveau_bo.h > index 641ef6298a0e..6045b85a762a 100644 > --- a/drivers/gpu/drm/nouveau/nouveau_bo.h > +++ b/drivers/gpu/drm/nouveau/nouveau_bo.h > @@ -39,8 +39,6 @@ struct nouveau_bo { > unsigned mode; > > struct nouveau_drm_tile *tile; > - > - struct ttm_bo_kmap_obj dma_buf_vmap; > }; > > static inline struct nouveau_bo * > diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c > index 9a421c3949de..f942b526b0a5 100644 > --- a/drivers/gpu/drm/nouveau/nouveau_gem.c > +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c > @@ -24,6 +24,8 @@ > * > */ > > +#include > + > #include "nouveau_drv.h" > #include "nouveau_dma.h" > #include "nouveau_fence.h" > @@ -176,8 +178,8 @@ const struct drm_gem_object_funcs 
nouveau_gem_object_funcs = { > .pin = nouveau_gem_prime_pin, > .unpin = nouveau_gem_prime_unpin, > .get_sg_table = nouveau_gem_prime_get_sg_table, > - .vmap = nouveau_gem_prime_vmap, > - .vunmap = nouveau_gem_prime_vunmap, > + .vmap = drm_gem_ttm_vmap, > + .vunmap = drm_gem_ttm_vunmap, > }; > > int > diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h > index b35c180322e2..3b919c7c931c 100644 > --- a/drivers/gpu/drm/nouveau/nouveau_gem.h > +++ b/drivers/gpu/drm/nouveau/nouveau_gem.h > @@ -37,7 +37,5 @@ extern void nouveau_gem_prime_unpin(struct drm_gem_object *); > extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *); > extern struct drm_gem_object *nouveau_gem_prime_import_sg_table( > struct drm_device *, struct dma_buf_attachment *, struct sg_table *); > -extern void *nouveau_gem_prime_vmap(struct drm_gem_object *); > -extern void nouveau_gem_prime_vunmap(struct drm_gem_object *, void *); > > #endif > diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c > index a8264aebf3d4..2f16b5249283 100644 > --- a/drivers/gpu/drm/nouveau/nouveau_prime.c > +++ b/drivers/gpu/drm/nouveau/nouveau_prime.c > @@ -35,26 +35,6 @@ struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *obj) > return drm_prime_pages_to_sg(obj->dev, nvbo->bo.ttm->pages, npages); > } > > -void *nouveau_gem_prime_vmap(struct drm_gem_object *obj) > -{ > - struct nouveau_bo *nvbo = nouveau_gem_object(obj); > - int ret; > - > - ret = ttm_bo_kmap(&nvbo->bo, 0, nvbo->bo.num_pages, > - &nvbo->dma_buf_vmap); > - if (ret) > - return ERR_PTR(ret); > - > - return nvbo->dma_buf_vmap.virtual; > -} > - > -void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) > -{ > - struct nouveau_bo *nvbo = nouveau_gem_object(obj); > - > - ttm_bo_kunmap(&nvbo->dma_buf_vmap); > -} > - > struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev, > struct dma_buf_attachment *attach, > struct sg_table *sg) > diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c > index fdbc8d949135..5ab03d605f57 100644 > --- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c > +++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c > @@ -5,6 +5,7 @@ > #include > #include > #include > +#include > #include > #include > #include > @@ -72,6 +73,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev, > { > struct panfrost_file_priv *user = file_priv->driver_priv; > struct panfrost_perfcnt *perfcnt = pfdev->perfcnt; > + struct dma_buf_map map; > struct drm_gem_shmem_object *bo; > u32 cfg, as; > int ret; > @@ -103,11 +105,10 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev, > goto err_close_bo; > } > > - perfcnt->buf = drm_gem_shmem_vmap(&bo->base); > - if (IS_ERR(perfcnt->buf)) { > - ret = PTR_ERR(perfcnt->buf); > + ret = drm_gem_shmem_vmap(&bo->base, &map); > + if (ret) > goto err_put_mapping; > - } > + perfcnt->buf = map.vaddr; > > /* > * Invalidate the cache and clear the counters to start from a fresh > @@ -163,7 +164,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev, > return 0; > > err_vunmap: > - drm_gem_shmem_vunmap(&bo->base, perfcnt->buf); > + drm_gem_shmem_vunmap(&bo->base, &map); > err_put_mapping: > panfrost_gem_mapping_put(perfcnt->mapping); > err_close_bo: > @@ -180,6 +181,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev, > { > struct panfrost_file_priv *user = 
file_priv->driver_priv; > struct panfrost_perfcnt *perfcnt = pfdev->perfcnt; > + struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(perfcnt->buf); > > if (user != perfcnt->user) > return -EINVAL; > @@ -192,7 +194,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev, > GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF)); > > perfcnt->user = NULL; > - drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, perfcnt->buf); > + drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, &map); > perfcnt->buf = NULL; > panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv); > panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu); > diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c > index 45fd76e04bdc..e165fa9b2089 100644 > --- a/drivers/gpu/drm/qxl/qxl_display.c > +++ b/drivers/gpu/drm/qxl/qxl_display.c > @@ -25,6 +25,7 @@ > > #include > #include > +#include > > #include > #include > @@ -581,6 +582,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane, > struct drm_gem_object *obj; > struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL; > int ret; > + struct dma_buf_map user_map; > + struct dma_buf_map cursor_map; > void *user_ptr; > int size = 64*64*4; > > @@ -595,9 +598,10 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane, > user_bo = gem_to_qxl_bo(obj); > > /* pinning is done in the prepare/cleanup framevbuffer */ > - ret = qxl_bo_kmap(user_bo, &user_ptr); > + ret = qxl_bo_kmap(user_bo, &user_map); > if (ret) > goto out_free_release; > + user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction properly */ > > ret = qxl_alloc_bo_reserved(qdev, release, > sizeof(struct qxl_cursor) + size, > @@ -613,7 +617,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane, > if (ret) > goto out_unpin; > > - ret = qxl_bo_kmap(cursor_bo, (void **)&cursor); > + ret = qxl_bo_kmap(cursor_bo, &cursor_map); > if (ret) > goto out_backoff; > > @@ -1133,6 +1137,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev) > { > int ret; > struct drm_gem_object *gobj; > + struct dma_buf_map map; > int monitors_config_size = sizeof(struct qxl_monitors_config) + > qxl_num_crtc * sizeof(struct qxl_head); > > @@ -1149,7 +1154,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev) > if (ret) > return ret; > > - qxl_bo_kmap(qdev->monitors_config_bo, NULL); > + qxl_bo_kmap(qdev->monitors_config_bo, &map); > > qdev->monitors_config = qdev->monitors_config_bo->kptr; > qdev->ram_header->monitors_config = > diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c > index 3599db096973..7b7acb910780 100644 > --- a/drivers/gpu/drm/qxl/qxl_draw.c > +++ b/drivers/gpu/drm/qxl/qxl_draw.c > @@ -20,6 +20,8 @@ > * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. 
> */ > > +#include > + > #include > > #include "qxl_drv.h" > @@ -42,13 +44,15 @@ static struct qxl_rect *drawable_set_clipping(struct qxl_device *qdev, > unsigned int num_clips, > struct qxl_bo *clips_bo) > { > + struct dma_buf_map map; > struct qxl_clip_rects *dev_clips; > int ret; > > - ret = qxl_bo_kmap(clips_bo, (void **)&dev_clips); > - if (ret) { > + ret = qxl_bo_kmap(clips_bo, &map); > + if (ret) > return NULL; > - } > + dev_clips = map.vaddr; /* TODO: Use mapping abstraction properly */ > + > dev_clips->num_rects = num_clips; > dev_clips->chunk.next_chunk = 0; > dev_clips->chunk.prev_chunk = 0; > @@ -142,6 +146,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev, > int stride = fb->pitches[0]; > /* depth is not actually interesting, we don't mask with it */ > int depth = fb->format->cpp[0] * 8; > + struct dma_buf_map surface_map; > uint8_t *surface_base; > struct qxl_release *release; > struct qxl_bo *clips_bo; > @@ -197,9 +202,10 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev, > if (ret) > goto out_release_backoff; > > - ret = qxl_bo_kmap(bo, (void **)&surface_base); > + ret = qxl_bo_kmap(bo, &surface_map); > if (ret) > goto out_release_backoff; > + surface_base = surface_map.vaddr; /* TODO: Use mapping abstraction properly */ > > ret = qxl_image_init(qdev, release, dimage, surface_base, > left - dumb_shadow_offset, > diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h > index 3602e8b34189..eb437fea5d9e 100644 > --- a/drivers/gpu/drm/qxl/qxl_drv.h > +++ b/drivers/gpu/drm/qxl/qxl_drv.h > @@ -30,6 +30,7 @@ > * Definitions taken from spice-protocol, plus kernel driver specific bits. > */ > > +#include > #include > #include > #include > @@ -50,6 +51,8 @@ > > #include "qxl_dev.h" > > +struct dma_buf_map; > + > #define DRIVER_AUTHOR "Dave Airlie" > > #define DRIVER_NAME "qxl" > @@ -79,7 +82,7 @@ struct qxl_bo { > /* Protected by tbo.reserved */ > struct ttm_place placements[3]; > struct ttm_placement placement; > - struct ttm_bo_kmap_obj kmap; > + struct dma_buf_map map; > void *kptr; > unsigned int map_count; > int type; > @@ -335,7 +338,6 @@ int qxl_gem_object_open(struct drm_gem_object *obj, struct drm_file *file_priv); > void qxl_gem_object_close(struct drm_gem_object *obj, > struct drm_file *file_priv); > void qxl_bo_force_delete(struct qxl_device *qdev); > -int qxl_bo_kmap(struct qxl_bo *bo, void **ptr); > > /* qxl_dumb.c */ > int qxl_mode_dumb_create(struct drm_file *file_priv, > @@ -445,8 +447,9 @@ struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj); > struct drm_gem_object *qxl_gem_prime_import_sg_table( > struct drm_device *dev, struct dma_buf_attachment *attach, > struct sg_table *sgt); > -void *qxl_gem_prime_vmap(struct drm_gem_object *obj); > -void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); > +int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); > +void qxl_gem_prime_vunmap(struct drm_gem_object *obj, > + struct dma_buf_map *map); > int qxl_gem_prime_mmap(struct drm_gem_object *obj, > struct vm_area_struct *vma); > > diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c > index 940e99354f49..755df4d8f95f 100644 > --- a/drivers/gpu/drm/qxl/qxl_object.c > +++ b/drivers/gpu/drm/qxl/qxl_object.c > @@ -23,10 +23,12 @@ > * Alon Levy > */ > > +#include > +#include > + > #include "qxl_drv.h" > #include "qxl_object.h" > > -#include > static void qxl_ttm_bo_destroy(struct ttm_buffer_object *tbo) > { > struct qxl_bo *bo; > @@ -152,24 +154,27 @@ int qxl_bo_create(struct 
qxl_device *qdev, > return 0; > } > > -int qxl_bo_kmap(struct qxl_bo *bo, void **ptr) > +int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map) > { > - bool is_iomem; > int r; > > if (bo->kptr) { > - if (ptr) > - *ptr = bo->kptr; > bo->map_count++; > - return 0; > + goto out; > } > - r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap); > + r = ttm_bo_vmap(&bo->tbo, &bo->map); > if (r) > return r; > - bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem); > - if (ptr) > - *ptr = bo->kptr; > bo->map_count = 1; > + > + /* TODO: Remove kptr in favor of map everywhere. */ > + if (bo->map.is_iomem) > + bo->kptr = (void *)bo->map.vaddr_iomem; > + else > + bo->kptr = bo->map.vaddr; > + > +out: > + *map = bo->map; > return 0; > } > > @@ -180,6 +185,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, > void *rptr; > int ret; > struct io_mapping *map; > + struct dma_buf_map bo_map; > > if (bo->tbo.mem.mem_type == TTM_PL_VRAM) > map = qdev->vram_mapping; > @@ -196,9 +202,10 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, > return rptr; > } > > - ret = qxl_bo_kmap(bo, &rptr); > + ret = qxl_bo_kmap(bo, &bo_map); > if (ret) > return NULL; > + rptr = bo_map.vaddr; /* TODO: Use mapping abstraction properly */ > > rptr += page_offset * PAGE_SIZE; > return rptr; > @@ -212,7 +219,7 @@ void qxl_bo_kunmap(struct qxl_bo *bo) > if (bo->map_count > 0) > return; > bo->kptr = NULL; > - ttm_bo_kunmap(&bo->kmap); > + ttm_bo_vunmap(&bo->tbo, &bo->map); > } > > void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, > diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h > index 09a5c818324d..ebf24c9d2bf2 100644 > --- a/drivers/gpu/drm/qxl/qxl_object.h > +++ b/drivers/gpu/drm/qxl/qxl_object.h > @@ -63,7 +63,7 @@ extern int qxl_bo_create(struct qxl_device *qdev, > bool kernel, bool pinned, u32 domain, > struct qxl_surface *surf, > struct qxl_bo **bo_ptr); > -extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr); > +extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map); > extern void qxl_bo_kunmap(struct qxl_bo *bo); > void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset); > void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map); > diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c > index 7d3816fca5a8..4aa949799446 100644 > --- a/drivers/gpu/drm/qxl/qxl_prime.c > +++ b/drivers/gpu/drm/qxl/qxl_prime.c > @@ -54,20 +54,20 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table( > return ERR_PTR(-ENOSYS); > } > > -void *qxl_gem_prime_vmap(struct drm_gem_object *obj) > +int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > struct qxl_bo *bo = gem_to_qxl_bo(obj); > - void *ptr; > int ret; > > - ret = qxl_bo_kmap(bo, &ptr); > + ret = qxl_bo_kmap(bo, map); > if (ret < 0) > - return ERR_PTR(ret); > + return ret; > > - return ptr; > + return 0; > } > > -void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) > +void qxl_gem_prime_vunmap(struct drm_gem_object *obj, > + struct dma_buf_map *map) > { > struct qxl_bo *bo = gem_to_qxl_bo(obj); > > diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h > index 5d54bccebd4d..44cb5ee6fc20 100644 > --- a/drivers/gpu/drm/radeon/radeon.h > +++ b/drivers/gpu/drm/radeon/radeon.h > @@ -509,7 +509,6 @@ struct radeon_bo { > /* Constant after initialization */ > struct radeon_device *rdev; > > - struct ttm_bo_kmap_obj dma_buf_vmap; > pid_t pid; > > #ifdef 
CONFIG_MMU_NOTIFIER > diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c > index 0ccd7213e41f..d2876ce3bc9e 100644 > --- a/drivers/gpu/drm/radeon/radeon_gem.c > +++ b/drivers/gpu/drm/radeon/radeon_gem.c > @@ -31,6 +31,7 @@ > #include > #include > #include > +#include > #include > > #include "radeon.h" > @@ -40,8 +41,6 @@ struct dma_buf *radeon_gem_prime_export(struct drm_gem_object *gobj, > struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj); > int radeon_gem_prime_pin(struct drm_gem_object *obj); > void radeon_gem_prime_unpin(struct drm_gem_object *obj); > -void *radeon_gem_prime_vmap(struct drm_gem_object *obj); > -void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); > > static const struct drm_gem_object_funcs radeon_gem_object_funcs; > > @@ -235,8 +234,8 @@ static const struct drm_gem_object_funcs radeon_gem_object_funcs = { > .pin = radeon_gem_prime_pin, > .unpin = radeon_gem_prime_unpin, > .get_sg_table = radeon_gem_prime_get_sg_table, > - .vmap = radeon_gem_prime_vmap, > - .vunmap = radeon_gem_prime_vunmap, > + .vmap = drm_gem_ttm_vmap, > + .vunmap = drm_gem_ttm_vunmap, > }; > > /* > diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c > index b9de0e51c0be..088d39a51c0d 100644 > --- a/drivers/gpu/drm/radeon/radeon_prime.c > +++ b/drivers/gpu/drm/radeon/radeon_prime.c > @@ -39,26 +39,6 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj) > return drm_prime_pages_to_sg(obj->dev, bo->tbo.ttm->pages, npages); > } > > -void *radeon_gem_prime_vmap(struct drm_gem_object *obj) > -{ > - struct radeon_bo *bo = gem_to_radeon_bo(obj); > - int ret; > - > - ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, > - &bo->dma_buf_vmap); > - if (ret) > - return ERR_PTR(ret); > - > - return bo->dma_buf_vmap.virtual; > -} > - > -void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) > -{ > - struct radeon_bo *bo = gem_to_radeon_bo(obj); > - > - ttm_bo_kunmap(&bo->dma_buf_vmap); > -} > - > struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev, > struct dma_buf_attachment *attach, > struct sg_table *sg) > diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c > index 7d5ebb10323b..7971f57436dd 100644 > --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c > +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c > @@ -532,26 +532,32 @@ rockchip_gem_prime_import_sg_table(struct drm_device *drm, > return ERR_PTR(ret); > } > > -void *rockchip_gem_prime_vmap(struct drm_gem_object *obj) > +int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj); > > - if (rk_obj->pages) > - return vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP, > - pgprot_writecombine(PAGE_KERNEL)); > + if (rk_obj->pages) { > + void *vaddr = vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP, > + pgprot_writecombine(PAGE_KERNEL)); > + if (!vaddr) > + return -ENOMEM; > + dma_buf_map_set_vaddr(map, vaddr); > + return 0; > + } > > if (rk_obj->dma_attrs & DMA_ATTR_NO_KERNEL_MAPPING) > - return NULL; > + return -ENOMEM; > + dma_buf_map_set_vaddr(map, rk_obj->kvaddr); > > - return rk_obj->kvaddr; > + return 0; > } > > -void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) > +void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj); > > if 
(rk_obj->pages) { > - vunmap(vaddr); > + vunmap(map->vaddr); > return; > } > > diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h > index 7ffc541bea07..5a70a56cd406 100644 > --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h > +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h > @@ -31,8 +31,8 @@ struct drm_gem_object * > rockchip_gem_prime_import_sg_table(struct drm_device *dev, > struct dma_buf_attachment *attach, > struct sg_table *sg); > -void *rockchip_gem_prime_vmap(struct drm_gem_object *obj); > -void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); > +int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); > +void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map); > > /* drm driver mmap file operations */ > int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma); > diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c > index 744a8e337e41..c02e35ed6e76 100644 > --- a/drivers/gpu/drm/tiny/cirrus.c > +++ b/drivers/gpu/drm/tiny/cirrus.c > @@ -17,6 +17,7 @@ > */ > > #include > +#include > #include > #include > > @@ -314,6 +315,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb, > struct drm_rect *rect) > { > struct cirrus_device *cirrus = to_cirrus(fb->dev); > + struct dma_buf_map map; > void *vmap; > int idx, ret; > > @@ -321,10 +323,10 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb, > if (!drm_dev_enter(&cirrus->dev, &idx)) > goto out; > > - ret = -ENOMEM; > - vmap = drm_gem_shmem_vmap(fb->obj[0]); > - if (!vmap) > + ret = drm_gem_shmem_vmap(fb->obj[0], &map); > + if (ret) > goto out_dev_exit; > + vmap = map.vaddr; /* TODO: Use mapping abstraction properly */ > > if (cirrus->cpp == fb->format->cpp[0]) > drm_fb_memcpy_dstclip(cirrus->vram, > @@ -343,7 +345,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb, > else > WARN_ON_ONCE("cpp mismatch"); > > - drm_gem_shmem_vunmap(fb->obj[0], vmap); > + drm_gem_shmem_vunmap(fb->obj[0], &map); > ret = 0; > > out_dev_exit: > diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c > index cc397671f689..12a890cea6e9 100644 > --- a/drivers/gpu/drm/tiny/gm12u320.c > +++ b/drivers/gpu/drm/tiny/gm12u320.c > @@ -248,6 +248,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320) > { > int block, dst_offset, len, remain, ret, x1, x2, y1, y2; > struct drm_framebuffer *fb; > + struct dma_buf_map map; > void *vaddr; > u8 *src; > > @@ -262,11 +263,12 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320) > y1 = gm12u320->fb_update.rect.y1; > y2 = gm12u320->fb_update.rect.y2; > > - vaddr = drm_gem_shmem_vmap(fb->obj[0]); > - if (IS_ERR(vaddr)) { > - GM12U320_ERR("failed to vmap fb: %ld\n", PTR_ERR(vaddr)); > + ret = drm_gem_shmem_vmap(fb->obj[0], &map); > + if (ret) { > + GM12U320_ERR("failed to vmap fb: %d\n", ret); > goto put_fb; > } > + vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */ > > if (fb->obj[0]->import_attach) { > ret = dma_buf_begin_cpu_access( > @@ -318,7 +320,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320) > GM12U320_ERR("dma_buf_end_cpu_access err: %d\n", ret); > } > vunmap: > - drm_gem_shmem_vunmap(fb->obj[0], vaddr); > + drm_gem_shmem_vunmap(fb->obj[0], &map); > put_fb: > drm_framebuffer_put(fb); > gm12u320->fb_update.fb = NULL; > diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c > index 
fef43f4e3bac..42eeba1dfdbf 100644 > --- a/drivers/gpu/drm/udl/udl_modeset.c > +++ b/drivers/gpu/drm/udl/udl_modeset.c > @@ -276,6 +276,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y, > struct urb *urb; > struct drm_rect clip; > int log_bpp; > + struct dma_buf_map map; > void *vaddr; > > ret = udl_log_cpp(fb->format->cpp[0]); > @@ -296,11 +297,12 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y, > return ret; > } > > - vaddr = drm_gem_shmem_vmap(fb->obj[0]); > - if (IS_ERR(vaddr)) { > + ret = drm_gem_shmem_vmap(fb->obj[0], &map); > + if (ret) { > DRM_ERROR("failed to vmap fb\n"); > goto out_dma_buf_end_cpu_access; > } > + vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */ > > urb = udl_get_urb(dev); > if (!urb) > @@ -333,7 +335,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y, > ret = 0; > > out_drm_gem_shmem_vunmap: > - drm_gem_shmem_vunmap(fb->obj[0], vaddr); > + drm_gem_shmem_vunmap(fb->obj[0], &map); > out_dma_buf_end_cpu_access: > if (import_attach) { > tmp_ret = dma_buf_end_cpu_access(import_attach->dmabuf, > diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c > index 931c55126148..f268fb258c83 100644 > --- a/drivers/gpu/drm/vboxvideo/vbox_mode.c > +++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c > @@ -9,6 +9,8 @@ > * Michael Thayer * Hans de Goede > */ > + > +#include > #include > > #include > @@ -384,6 +386,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane, > u32 height = plane->state->crtc_h; > size_t data_size, mask_size; > u32 flags; > + struct dma_buf_map map; > + int ret; > u8 *src; > > /* > @@ -397,8 +401,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane, > > vbox_crtc->cursor_enabled = true; > > - src = drm_gem_vram_vmap(gbo); > - if (IS_ERR(src)) { > + ret = drm_gem_vram_vmap(gbo, &map); > + if (ret) { > /* > * BUG: we should have pinned the BO in prepare_fb(). 
> */ > @@ -406,6 +410,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane, > DRM_WARN("Could not map cursor bo, skipping update\n"); > return; > } > + src = map.vaddr; /* TODO: Use mapping abstraction properly */ > > /* > * The mask must be calculated based on the alpha > @@ -416,7 +421,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane, > data_size = width * height * 4 + mask_size; > > copy_cursor_image(src, vbox->cursor_data, width, height, mask_size); > - drm_gem_vram_vunmap(gbo, src); > + drm_gem_vram_vunmap(gbo, &map); > > flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE | > VBOX_MOUSE_POINTER_ALPHA; > diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c > index 557f0d1e6437..f290a9a942dc 100644 > --- a/drivers/gpu/drm/vc4/vc4_bo.c > +++ b/drivers/gpu/drm/vc4/vc4_bo.c > @@ -785,16 +785,16 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma) > return drm_gem_cma_prime_mmap(obj, vma); > } > > -void *vc4_prime_vmap(struct drm_gem_object *obj) > +int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > struct vc4_bo *bo = to_vc4_bo(obj); > > if (bo->validated_shader) { > DRM_DEBUG("mmaping of shader BOs not allowed.\n"); > - return ERR_PTR(-EINVAL); > + return -EINVAL; > } > > - return drm_gem_cma_prime_vmap(obj); > + return drm_gem_cma_prime_vmap(obj, map); > } > > struct drm_gem_object * > diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h > index cc79b1aaa878..904f2c36c963 100644 > --- a/drivers/gpu/drm/vc4/vc4_drv.h > +++ b/drivers/gpu/drm/vc4/vc4_drv.h > @@ -806,7 +806,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); > struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev, > struct dma_buf_attachment *attach, > struct sg_table *sgt); > -void *vc4_prime_vmap(struct drm_gem_object *obj); > +int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); > int vc4_bo_cache_init(struct drm_device *dev); > void vc4_bo_cache_destroy(struct drm_device *dev); > int vc4_bo_inc_usecnt(struct vc4_bo *bo); > diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c > index fa54a6d1403d..b2aa26e1e4a2 100644 > --- a/drivers/gpu/drm/vgem/vgem_drv.c > +++ b/drivers/gpu/drm/vgem/vgem_drv.c > @@ -361,24 +361,30 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev, > return &obj->base; > } > > -static void *vgem_prime_vmap(struct drm_gem_object *obj) > +static int vgem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > struct drm_vgem_gem_object *bo = to_vgem_bo(obj); > long n_pages = obj->size >> PAGE_SHIFT; > struct page **pages; > + void *vaddr; > > pages = vgem_pin_pages(bo); > if (IS_ERR(pages)) > - return NULL; > + return PTR_ERR(pages); > + > + vaddr = vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL)); > + if (!vaddr) > + return -ENOMEM; > + dma_buf_map_set_vaddr(map, vaddr); > > - return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL)); > + return 0; > } > > -static void vgem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) > +static void vgem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) > { > struct drm_vgem_gem_object *bo = to_vgem_bo(obj); > > - vunmap(vaddr); > + vunmap(map->vaddr); > vgem_unpin_pages(bo); > } > > diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c > index 4f34ef34ba60..74db5a840bed 100644 > --- 
a/drivers/gpu/drm/xen/xen_drm_front_gem.c > +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c > @@ -290,22 +290,28 @@ int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma) > return gem_mmap_obj(xen_obj, vma); > } > > -void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj) > +int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, struct dma_buf_map *map) > { > struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj); > + void *vaddr; > > if (!xen_obj->pages) > - return NULL; > + return -ENOMEM; > > /* Please see comment in gem_mmap_obj on mapping and attributes. */ > - return vmap(xen_obj->pages, xen_obj->num_pages, > - VM_MAP, PAGE_KERNEL); > + vaddr = vmap(xen_obj->pages, xen_obj->num_pages, > + VM_MAP, PAGE_KERNEL); > + if (!vaddr) > + return -ENOMEM; > + dma_buf_map_set_vaddr(map, vaddr); > + > + return 0; > } > > void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj, > - void *vaddr) > + struct dma_buf_map *map) > { > - vunmap(vaddr); > + vunmap(map->vaddr); > } > > int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj, > diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h > index a39675fa31b2..a4e67d0a149c 100644 > --- a/drivers/gpu/drm/xen/xen_drm_front_gem.h > +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h > @@ -12,6 +12,7 @@ > #define __XEN_DRM_FRONT_GEM_H > > struct dma_buf_attachment; > +struct dma_buf_map; > struct drm_device; > struct drm_gem_object; > struct file; > @@ -34,10 +35,11 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj); > > int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma); > > -void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj); > +int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, > + struct dma_buf_map *map); > > void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj, > - void *vaddr); > + struct dma_buf_map *map); > > int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj, > struct vm_area_struct *vma); > diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h > index c38dd35da00b..5e6daa1c982f 100644 > --- a/include/drm/drm_gem.h > +++ b/include/drm/drm_gem.h > @@ -39,6 +39,7 @@ > > #include > > +struct dma_buf_map; > struct drm_gem_object; > > /** > @@ -138,7 +139,7 @@ struct drm_gem_object_funcs { > * > * This callback is optional. > */ > - void *(*vmap)(struct drm_gem_object *obj); > + int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map); > > /** > * @vunmap: > @@ -148,7 +149,7 @@ struct drm_gem_object_funcs { > * > * This callback is optional. 
> */ > - void (*vunmap)(struct drm_gem_object *obj, void *vaddr); > + void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map); > > /** > * @mmap: > diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h > index a064b0d1c480..caf98b9cf4b4 100644 > --- a/include/drm/drm_gem_cma_helper.h > +++ b/include/drm/drm_gem_cma_helper.h > @@ -103,7 +103,7 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev, > struct sg_table *sgt); > int drm_gem_cma_prime_mmap(struct drm_gem_object *obj, > struct vm_area_struct *vma); > -void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj); > +int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); > > struct drm_gem_object * > drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size); > diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h > index 5381f0c8cf6f..3449a0353fe0 100644 > --- a/include/drm/drm_gem_shmem_helper.h > +++ b/include/drm/drm_gem_shmem_helper.h > @@ -113,8 +113,8 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem); > void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem); > int drm_gem_shmem_pin(struct drm_gem_object *obj); > void drm_gem_shmem_unpin(struct drm_gem_object *obj); > -void *drm_gem_shmem_vmap(struct drm_gem_object *obj); > -void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr); > +int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); > +void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map); > > int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv); > > diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h > index 128f88174d32..c0d28ba0f5c9 100644 > --- a/include/drm/drm_gem_vram_helper.h > +++ b/include/drm/drm_gem_vram_helper.h > @@ -10,6 +10,7 @@ > #include > #include > > +#include > #include /* for container_of() */ > > struct drm_mode_create_dumb; > @@ -29,9 +30,8 @@ struct vm_area_struct; > > /** > * struct drm_gem_vram_object - GEM object backed by VRAM > - * @gem: GEM object > * @bo: TTM buffer object > - * @kmap: Mapping information for @bo > + * @map: Mapping information for @bo > * @placement: TTM placement information. Supported placements are \ > %TTM_PL_VRAM and %TTM_PL_SYSTEM > * @placements: TTM placement information. > @@ -50,15 +50,15 @@ struct vm_area_struct; > */ > struct drm_gem_vram_object { > struct ttm_buffer_object bo; > - struct ttm_bo_kmap_obj kmap; > + struct dma_buf_map map; > > /** > - * @kmap_use_count: > + * @vmap_use_count: > * > * Reference count on the virtual address. > * The address are un-mapped when the count reaches zero. 
> */ > - unsigned int kmap_use_count; > + unsigned int vmap_use_count; > > /* Supported placements are %TTM_PL_VRAM and %TTM_PL_SYSTEM */ > struct ttm_placement placement; > @@ -97,8 +97,8 @@ u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo); > s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo); > int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag); > int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo); > -void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo); > -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr); > +int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map); > +void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map); > > int drm_gem_vram_fill_create_dumb(struct drm_file *file, > struct drm_device *dev, From ckoenig.leichtzumerken at gmail.com Thu Oct 15 13:57:14 2020 From: ckoenig.leichtzumerken at gmail.com (=?UTF-8?Q?Christian_K=c3=b6nig?=) Date: Thu, 15 Oct 2020 15:57:14 +0200 Subject: [Spice-devel] [PATCH v4 01/10] drm/vram-helper: Remove invariant parameters from internal kmap function In-Reply-To: <20201015123806.32416-2-tzimmermann@suse.de> References: <20201015123806.32416-1-tzimmermann@suse.de> <20201015123806.32416-2-tzimmermann@suse.de> Message-ID: <06cab96a-5224-46dc-dbd2-8eb4950946cc@gmail.com> Am 15.10.20 um 14:37 schrieb Thomas Zimmermann: > The parameters map and is_iomem are always of the same value. Removed them > to prepares the function for conversion to struct dma_buf_map. > > v4: > * don't check for !kmap->virtual; will always be false > > Signed-off-by: Thomas Zimmermann > Reviewed-by: Daniel Vetter Reviewed-by: Christian K?nig > --- > drivers/gpu/drm/drm_gem_vram_helper.c | 18 ++++-------------- > 1 file changed, 4 insertions(+), 14 deletions(-) > > diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c > index 3213429f8444..2d5ed30518f1 100644 > --- a/drivers/gpu/drm/drm_gem_vram_helper.c > +++ b/drivers/gpu/drm/drm_gem_vram_helper.c > @@ -382,32 +382,22 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo) > } > EXPORT_SYMBOL(drm_gem_vram_unpin); > > -static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo, > - bool map, bool *is_iomem) > +static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo) > { > int ret; > struct ttm_bo_kmap_obj *kmap = &gbo->kmap; > + bool is_iomem; > > if (gbo->kmap_use_count > 0) > goto out; > > - if (kmap->virtual || !map) > - goto out; > - > ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap); > if (ret) > return ERR_PTR(ret); > > out: > - if (!kmap->virtual) { > - if (is_iomem) > - *is_iomem = false; > - return NULL; /* not mapped; don't increment ref */ > - } > ++gbo->kmap_use_count; > - if (is_iomem) > - return ttm_kmap_obj_virtual(kmap, is_iomem); > - return kmap->virtual; > + return ttm_kmap_obj_virtual(kmap, &is_iomem); > } > > static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo) > @@ -452,7 +442,7 @@ void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo) > ret = drm_gem_vram_pin_locked(gbo, 0); > if (ret) > goto err_ttm_bo_unreserve; > - base = drm_gem_vram_kmap_locked(gbo, true, NULL); > + base = drm_gem_vram_kmap_locked(gbo); > if (IS_ERR(base)) { > ret = PTR_ERR(base); > goto err_drm_gem_vram_unpin_locked; From christian.koenig at amd.com Thu Oct 15 14:08:13 2020 From: christian.koenig at amd.com (=?UTF-8?Q?Christian_K=c3=b6nig?=) Date: Thu, 15 Oct 2020 16:08:13 +0200 Subject: [Spice-devel] [PATCH v4 
05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers In-Reply-To: <20201015123806.32416-6-tzimmermann@suse.de> References: <20201015123806.32416-1-tzimmermann@suse.de> <20201015123806.32416-6-tzimmermann@suse.de> Message-ID: <935d5771-5645-62a6-849c-31e286db1e30@amd.com> Am 15.10.20 um 14:38 schrieb Thomas Zimmermann: > The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel > address space. The mapping's address is returned as struct dma_buf_map. > Each function is a simplified version of TTM's existing kmap code. Both > functions respect the memory's location ani/or writecombine flags. > > On top TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(), > two helpers that convert a GEM object into the TTM BO and forward the call > to TTM's vmap/vunmap. These helpers can be dropped into the rsp GEM object > callbacks. > > v4: > * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel, > Christian) Bunch of minor comments below, but over all look very solid to me. > > Signed-off-by: Thomas Zimmermann > --- > drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++ > drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++ > include/drm/drm_gem_ttm_helper.h | 6 +++ > include/drm/ttm/ttm_bo_api.h | 28 +++++++++++ > include/linux/dma-buf-map.h | 20 ++++++++ > 5 files changed, 164 insertions(+) > > diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c > index 0e4fb9ba43ad..db4c14d78a30 100644 > --- a/drivers/gpu/drm/drm_gem_ttm_helper.c > +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c > @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent, > } > EXPORT_SYMBOL(drm_gem_ttm_print_info); > > +/** > + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object > + * @gem: GEM object. > + * @map: [out] returns the dma-buf mapping. > + * > + * Maps a GEM object with ttm_bo_vmap(). This function can be used as > + * &drm_gem_object_funcs.vmap callback. > + * > + * Returns: > + * 0 on success, or a negative errno code otherwise. > + */ > +int drm_gem_ttm_vmap(struct drm_gem_object *gem, > + struct dma_buf_map *map) > +{ > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); > + > + return ttm_bo_vmap(bo, map); > + > +} > +EXPORT_SYMBOL(drm_gem_ttm_vmap); > + > +/** > + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object > + * @gem: GEM object. > + * @map: dma-buf mapping. > + * > + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as > + * &drm_gem_object_funcs.vmap callback. > + */ > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem, > + struct dma_buf_map *map) > +{ > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); > + > + ttm_bo_vunmap(bo, map); > +} > +EXPORT_SYMBOL(drm_gem_ttm_vunmap); > + > /** > * drm_gem_ttm_mmap() - mmap &ttm_buffer_object > * @gem: GEM object. 
> diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c > index bdee4df1f3f2..80c42c774c7d 100644 > --- a/drivers/gpu/drm/ttm/ttm_bo_util.c > +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c > @@ -32,6 +32,7 @@ > #include > #include > #include > +#include > #include > #include > #include > @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map) > } > EXPORT_SYMBOL(ttm_bo_kunmap); > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map) > +{ > + struct ttm_resource *mem = &bo->mem; > + int ret; > + > + ret = ttm_mem_io_reserve(bo->bdev, mem); > + if (ret) > + return ret; > + > + if (mem->bus.is_iomem) { > + void __iomem *vaddr_iomem; > + unsigned long size = bo->num_pages << PAGE_SHIFT; Please use uint64_t here and make sure to cast bo->num_pages before shifting. We have an unit tests of allocating a 8GB BO and that should work on a 32bit machine as well :) > + > + if (mem->bus.addr) > + vaddr_iomem = (void *)(((u8 *)mem->bus.addr)); > + else if (mem->placement & TTM_PL_FLAG_WC) I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new mem->bus.caching enum as replacement. > + vaddr_iomem = ioremap_wc(mem->bus.offset, size); > + else > + vaddr_iomem = ioremap(mem->bus.offset, size); > + > + if (!vaddr_iomem) > + return -ENOMEM; > + > + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem); > + > + } else { > + struct ttm_operation_ctx ctx = { > + .interruptible = false, > + .no_wait_gpu = false > + }; > + struct ttm_tt *ttm = bo->ttm; > + pgprot_t prot; > + void *vaddr; > + > + BUG_ON(!ttm); I think we can drop this, populate will just crash badly anyway. > + > + ret = ttm_tt_populate(bo->bdev, ttm, &ctx); > + if (ret) > + return ret; > + > + /* > + * We need to use vmap to get the desired page protection > + * or to make the buffer object look contiguous. > + */ > + prot = ttm_io_prot(mem->placement, PAGE_KERNEL); The calling convention has changed on drm-misc-next as well, but should be trivial to adapt. Regards, Christian. 
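Pulling the review comments above together, the iomem branch of ttm_bo_vmap() would roughly become the sketch below. The widened size computation follows the request to cast bo->num_pages before shifting; the ttm_write_combined check is an assumption about what the new mem->bus.caching enum on drm-misc-next looks like, so the exact name may differ:

	if (mem->bus.is_iomem) {
		void __iomem *vaddr_iomem;
		/* cast before shifting so an 8 GiB BO also works on 32-bit */
		u64 size = (u64)bo->num_pages << PAGE_SHIFT;

		if (mem->bus.addr)
			vaddr_iomem = (void __iomem *)mem->bus.addr;
		else if (mem->bus.caching == ttm_write_combined) /* assumed enum value */
			vaddr_iomem = ioremap_wc(mem->bus.offset, size);
		else
			vaddr_iomem = ioremap(mem->bus.offset, size);

		if (!vaddr_iomem)
			return -ENOMEM;

		dma_buf_map_set_vaddr_iomem(map, vaddr_iomem);
	}
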
> + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot); > + if (!vaddr) > + return -ENOMEM; > + > + dma_buf_map_set_vaddr(map, vaddr); > + } > + > + return 0; > +} > +EXPORT_SYMBOL(ttm_bo_vmap); > + > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map) > +{ > + if (dma_buf_map_is_null(map)) > + return; > + > + if (map->is_iomem) > + iounmap(map->vaddr_iomem); > + else > + vunmap(map->vaddr); > + dma_buf_map_clear(map); > + > + ttm_mem_io_free(bo->bdev, &bo->mem); > +} > +EXPORT_SYMBOL(ttm_bo_vunmap); > + > static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo, > bool dst_use_tt) > { > diff --git a/include/drm/drm_gem_ttm_helper.h b/include/drm/drm_gem_ttm_helper.h > index 118cef76f84f..7c6d874910b8 100644 > --- a/include/drm/drm_gem_ttm_helper.h > +++ b/include/drm/drm_gem_ttm_helper.h > @@ -10,11 +10,17 @@ > #include > #include > > +struct dma_buf_map; > + > #define drm_gem_ttm_of_gem(gem_obj) \ > container_of(gem_obj, struct ttm_buffer_object, base) > > void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent, > const struct drm_gem_object *gem); > +int drm_gem_ttm_vmap(struct drm_gem_object *gem, > + struct dma_buf_map *map); > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem, > + struct dma_buf_map *map); > int drm_gem_ttm_mmap(struct drm_gem_object *gem, > struct vm_area_struct *vma); > > diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h > index 37102e45e496..2c59a785374c 100644 > --- a/include/drm/ttm/ttm_bo_api.h > +++ b/include/drm/ttm/ttm_bo_api.h > @@ -48,6 +48,8 @@ struct ttm_bo_global; > > struct ttm_bo_device; > > +struct dma_buf_map; > + > struct drm_mm_node; > > struct ttm_placement; > @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, unsigned long start_page, > */ > void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map); > > +/** > + * ttm_bo_vmap > + * > + * @bo: The buffer object. > + * @map: pointer to a struct dma_buf_map representing the map. > + * > + * Sets up a kernel virtual mapping, using ioremap or vmap to the > + * data in the buffer object. The parameter @map returns the virtual > + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap(). > + * > + * Returns > + * -ENOMEM: Out of memory. > + * -EINVAL: Invalid range. > + */ > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); > + > +/** > + * ttm_bo_vunmap > + * > + * @bo: The buffer object. > + * @map: Object describing the map to unmap. > + * > + * Unmaps a kernel map set up by ttm_bo_vmap(). > + */ > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); > + > /** > * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object. > * > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h > index fd1aba545fdf..2e8bbecb5091 100644 > --- a/include/linux/dma-buf-map.h > +++ b/include/linux/dma-buf-map.h > @@ -45,6 +45,12 @@ > * > * dma_buf_map_set_vaddr(&map. 0xdeadbeaf); > * > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem(). > + * > + * .. code-block:: c > + * > + * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); > + * > * Test if a mapping is valid with either dma_buf_map_is_set() or > * dma_buf_map_is_null(). 
> * > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr) > map->is_iomem = false; > } > > +/** > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an address in I/O memory > + * @map: The dma-buf mapping structure > + * @vaddr_iomem: An I/O-memory address > + * > + * Sets the address and the I/O-memory flag. > + */ > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map, > + void __iomem *vaddr_iomem) > +{ > + map->vaddr_iomem = vaddr_iomem; > + map->is_iomem = true; > +} > + > /** > * dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality > * @lhs: The dma-buf mapping structure From daniel at ffwll.ch Thu Oct 15 16:49:09 2020 From: daniel at ffwll.ch (Daniel Vetter) Date: Thu, 15 Oct 2020 18:49:09 +0200 Subject: [Spice-devel] [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers In-Reply-To: <935d5771-5645-62a6-849c-31e286db1e30@amd.com> References: <20201015123806.32416-1-tzimmermann@suse.de> <20201015123806.32416-6-tzimmermann@suse.de> <935d5771-5645-62a6-849c-31e286db1e30@amd.com> Message-ID: <20201015164909.GC401619@phenom.ffwll.local> On Thu, Oct 15, 2020 at 04:08:13PM +0200, Christian K?nig wrote: > Am 15.10.20 um 14:38 schrieb Thomas Zimmermann: > > The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel > > address space. The mapping's address is returned as struct dma_buf_map. > > Each function is a simplified version of TTM's existing kmap code. Both > > functions respect the memory's location ani/or writecombine flags. > > > > On top TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(), > > two helpers that convert a GEM object into the TTM BO and forward the call > > to TTM's vmap/vunmap. These helpers can be dropped into the rsp GEM object > > callbacks. > > > > v4: > > * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel, > > Christian) > > Bunch of minor comments below, but over all look very solid to me. Yeah I think just duplicating the ttm bo map stuff for vmap is indeed the cleanest. And then we can maybe push the combinatorial monster into vmwgfx, which I think is the only user after this series. Or perhaps a dedicated set of helpers to map an invidual page (again using the dma_buf_map stuff). I'll let Christian with the details, but at a high level this is definitely Acked-by: Daniel Vetter Thanks a lot for doing all this. -Daniel > > > > > Signed-off-by: Thomas Zimmermann > > --- > > drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++ > > drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++ > > include/drm/drm_gem_ttm_helper.h | 6 +++ > > include/drm/ttm/ttm_bo_api.h | 28 +++++++++++ > > include/linux/dma-buf-map.h | 20 ++++++++ > > 5 files changed, 164 insertions(+) > > > > diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c > > index 0e4fb9ba43ad..db4c14d78a30 100644 > > --- a/drivers/gpu/drm/drm_gem_ttm_helper.c > > +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c > > @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent, > > } > > EXPORT_SYMBOL(drm_gem_ttm_print_info); > > +/** > > + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object > > + * @gem: GEM object. > > + * @map: [out] returns the dma-buf mapping. > > + * > > + * Maps a GEM object with ttm_bo_vmap(). This function can be used as > > + * &drm_gem_object_funcs.vmap callback. > > + * > > + * Returns: > > + * 0 on success, or a negative errno code otherwise. 
> > + */ > > +int drm_gem_ttm_vmap(struct drm_gem_object *gem, > > + struct dma_buf_map *map) > > +{ > > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); > > + > > + return ttm_bo_vmap(bo, map); > > + > > +} > > +EXPORT_SYMBOL(drm_gem_ttm_vmap); > > + > > +/** > > + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object > > + * @gem: GEM object. > > + * @map: dma-buf mapping. > > + * > > + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as > > + * &drm_gem_object_funcs.vmap callback. > > + */ > > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem, > > + struct dma_buf_map *map) > > +{ > > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); > > + > > + ttm_bo_vunmap(bo, map); > > +} > > +EXPORT_SYMBOL(drm_gem_ttm_vunmap); > > + > > /** > > * drm_gem_ttm_mmap() - mmap &ttm_buffer_object > > * @gem: GEM object. > > diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c > > index bdee4df1f3f2..80c42c774c7d 100644 > > --- a/drivers/gpu/drm/ttm/ttm_bo_util.c > > +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c > > @@ -32,6 +32,7 @@ > > #include > > #include > > #include > > +#include > > #include > > #include > > #include > > @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map) > > } > > EXPORT_SYMBOL(ttm_bo_kunmap); > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map) > > +{ > > + struct ttm_resource *mem = &bo->mem; > > + int ret; > > + > > + ret = ttm_mem_io_reserve(bo->bdev, mem); > > + if (ret) > > + return ret; > > + > > + if (mem->bus.is_iomem) { > > + void __iomem *vaddr_iomem; > > + unsigned long size = bo->num_pages << PAGE_SHIFT; > > Please use uint64_t here and make sure to cast bo->num_pages before > shifting. > > We have an unit tests of allocating a 8GB BO and that should work on a 32bit > machine as well :) > > > + > > + if (mem->bus.addr) > > + vaddr_iomem = (void *)(((u8 *)mem->bus.addr)); > > + else if (mem->placement & TTM_PL_FLAG_WC) > > I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new > mem->bus.caching enum as replacement. > > > + vaddr_iomem = ioremap_wc(mem->bus.offset, size); > > + else > > + vaddr_iomem = ioremap(mem->bus.offset, size); > > + > > + if (!vaddr_iomem) > > + return -ENOMEM; > > + > > + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem); > > + > > + } else { > > + struct ttm_operation_ctx ctx = { > > + .interruptible = false, > > + .no_wait_gpu = false > > + }; > > + struct ttm_tt *ttm = bo->ttm; > > + pgprot_t prot; > > + void *vaddr; > > + > > + BUG_ON(!ttm); > > I think we can drop this, populate will just crash badly anyway. > > > + > > + ret = ttm_tt_populate(bo->bdev, ttm, &ctx); > > + if (ret) > > + return ret; > > + > > + /* > > + * We need to use vmap to get the desired page protection > > + * or to make the buffer object look contiguous. > > + */ > > + prot = ttm_io_prot(mem->placement, PAGE_KERNEL); > > The calling convention has changed on drm-misc-next as well, but should be > trivial to adapt. > > Regards, > Christian. 
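For readers following the interface change rather than the TTM internals, a minimal caller-side sketch of the new pair could look like this; bo, src and len are placeholders, only ttm_bo_vmap()/ttm_bo_vunmap() and the struct dma_buf_map accessors come from the patch quoted above:

	struct dma_buf_map map;
	int ret;

	ret = ttm_bo_vmap(bo, &map);
	if (ret)
		return ret;

	/* the union forces callers to distinguish I/O from system memory */
	if (map.is_iomem)
		memcpy_toio(map.vaddr_iomem, src, len);
	else
		memcpy(map.vaddr, src, len);

	ttm_bo_vunmap(bo, &map);
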
> > > + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot); > > + if (!vaddr) > > + return -ENOMEM; > > + > > + dma_buf_map_set_vaddr(map, vaddr); > > + } > > + > > + return 0; > > +} > > +EXPORT_SYMBOL(ttm_bo_vmap); > > + > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map) > > +{ > > + if (dma_buf_map_is_null(map)) > > + return; > > + > > + if (map->is_iomem) > > + iounmap(map->vaddr_iomem); > > + else > > + vunmap(map->vaddr); > > + dma_buf_map_clear(map); > > + > > + ttm_mem_io_free(bo->bdev, &bo->mem); > > +} > > +EXPORT_SYMBOL(ttm_bo_vunmap); > > + > > static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo, > > bool dst_use_tt) > > { > > diff --git a/include/drm/drm_gem_ttm_helper.h b/include/drm/drm_gem_ttm_helper.h > > index 118cef76f84f..7c6d874910b8 100644 > > --- a/include/drm/drm_gem_ttm_helper.h > > +++ b/include/drm/drm_gem_ttm_helper.h > > @@ -10,11 +10,17 @@ > > #include > > #include > > +struct dma_buf_map; > > + > > #define drm_gem_ttm_of_gem(gem_obj) \ > > container_of(gem_obj, struct ttm_buffer_object, base) > > void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent, > > const struct drm_gem_object *gem); > > +int drm_gem_ttm_vmap(struct drm_gem_object *gem, > > + struct dma_buf_map *map); > > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem, > > + struct dma_buf_map *map); > > int drm_gem_ttm_mmap(struct drm_gem_object *gem, > > struct vm_area_struct *vma); > > diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h > > index 37102e45e496..2c59a785374c 100644 > > --- a/include/drm/ttm/ttm_bo_api.h > > +++ b/include/drm/ttm/ttm_bo_api.h > > @@ -48,6 +48,8 @@ struct ttm_bo_global; > > struct ttm_bo_device; > > +struct dma_buf_map; > > + > > struct drm_mm_node; > > struct ttm_placement; > > @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, unsigned long start_page, > > */ > > void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map); > > +/** > > + * ttm_bo_vmap > > + * > > + * @bo: The buffer object. > > + * @map: pointer to a struct dma_buf_map representing the map. > > + * > > + * Sets up a kernel virtual mapping, using ioremap or vmap to the > > + * data in the buffer object. The parameter @map returns the virtual > > + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap(). > > + * > > + * Returns > > + * -ENOMEM: Out of memory. > > + * -EINVAL: Invalid range. > > + */ > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); > > + > > +/** > > + * ttm_bo_vunmap > > + * > > + * @bo: The buffer object. > > + * @map: Object describing the map to unmap. > > + * > > + * Unmaps a kernel map set up by ttm_bo_vmap(). > > + */ > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); > > + > > /** > > * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object. > > * > > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h > > index fd1aba545fdf..2e8bbecb5091 100644 > > --- a/include/linux/dma-buf-map.h > > +++ b/include/linux/dma-buf-map.h > > @@ -45,6 +45,12 @@ > > * > > * dma_buf_map_set_vaddr(&map. 0xdeadbeaf); > > * > > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem(). > > + * > > + * .. code-block:: c > > + * > > + * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); > > + * > > * Test if a mapping is valid with either dma_buf_map_is_set() or > > * dma_buf_map_is_null(). 
> > * > > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr) > > map->is_iomem = false; > > } > > +/** > > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an address in I/O memory > > + * @map: The dma-buf mapping structure > > + * @vaddr_iomem: An I/O-memory address > > + * > > + * Sets the address and the I/O-memory flag. > > + */ > > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map, > > + void __iomem *vaddr_iomem) > > +{ > > + map->vaddr_iomem = vaddr_iomem; > > + map->is_iomem = true; > > +} > > + > > /** > > * dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality > > * @lhs: The dma-buf mapping structure > -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch From tzimmermann at suse.de Thu Oct 15 17:52:04 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Thu, 15 Oct 2020 19:52:04 +0200 Subject: [Spice-devel] [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers In-Reply-To: <20201015164909.GC401619@phenom.ffwll.local> References: <20201015123806.32416-1-tzimmermann@suse.de> <20201015123806.32416-6-tzimmermann@suse.de> <935d5771-5645-62a6-849c-31e286db1e30@amd.com> <20201015164909.GC401619@phenom.ffwll.local> Message-ID: <20201015195204.1745fe7f@linux-uq9g> Hi On Thu, 15 Oct 2020 18:49:09 +0200 Daniel Vetter wrote: > On Thu, Oct 15, 2020 at 04:08:13PM +0200, Christian K?nig wrote: > > Am 15.10.20 um 14:38 schrieb Thomas Zimmermann: > > > The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in > > > kernel address space. The mapping's address is returned as struct > > > dma_buf_map. Each function is a simplified version of TTM's existing > > > kmap code. Both functions respect the memory's location ani/or > > > writecombine flags. > > > > > > On top TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(), > > > two helpers that convert a GEM object into the TTM BO and forward the > > > call to TTM's vmap/vunmap. These helpers can be dropped into the rsp > > > GEM object callbacks. > > > > > > v4: > > > * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers > > > (Daniel, Christian) > > > > Bunch of minor comments below, but over all look very solid to me. > > Yeah I think just duplicating the ttm bo map stuff for vmap is indeed the > cleanest. And then we can maybe push the combinatorial monster into > vmwgfx, which I think is the only user after this series. Or perhaps a > dedicated set of helpers to map an invidual page (again using the > dma_buf_map stuff). From a quick look, I'd say it should be possible to have the same interface for kmap/kunmap as for vmap/vunmap (i.e., parameters are bo and dma-buf-map). All mapping state can be deduced from this. And struct ttm_bo_kmap_obj can be killed off entirely. Best regards Thomas > > I'll let Christian with the details, but at a high level this is > definitely > > Acked-by: Daniel Vetter > > Thanks a lot for doing all this. 
> -Daniel > > > > > > > > > Signed-off-by: Thomas Zimmermann > > > --- > > > drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++ > > > drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++ > > > include/drm/drm_gem_ttm_helper.h | 6 +++ > > > include/drm/ttm/ttm_bo_api.h | 28 +++++++++++ > > > include/linux/dma-buf-map.h | 20 ++++++++ > > > 5 files changed, 164 insertions(+) > > > > > > diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c > > > b/drivers/gpu/drm/drm_gem_ttm_helper.c index 0e4fb9ba43ad..db4c14d78a30 > > > 100644 --- a/drivers/gpu/drm/drm_gem_ttm_helper.c > > > +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c > > > @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, > > > unsigned int indent, } > > > EXPORT_SYMBOL(drm_gem_ttm_print_info); > > > +/** > > > + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object > > > + * @gem: GEM object. > > > + * @map: [out] returns the dma-buf mapping. > > > + * > > > + * Maps a GEM object with ttm_bo_vmap(). This function can be used as > > > + * &drm_gem_object_funcs.vmap callback. > > > + * > > > + * Returns: > > > + * 0 on success, or a negative errno code otherwise. > > > + */ > > > +int drm_gem_ttm_vmap(struct drm_gem_object *gem, > > > + struct dma_buf_map *map) > > > +{ > > > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); > > > + > > > + return ttm_bo_vmap(bo, map); > > > + > > > +} > > > +EXPORT_SYMBOL(drm_gem_ttm_vmap); > > > + > > > +/** > > > + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object > > > + * @gem: GEM object. > > > + * @map: dma-buf mapping. > > > + * > > > + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used > > > as > > > + * &drm_gem_object_funcs.vmap callback. > > > + */ > > > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem, > > > + struct dma_buf_map *map) > > > +{ > > > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); > > > + > > > + ttm_bo_vunmap(bo, map); > > > +} > > > +EXPORT_SYMBOL(drm_gem_ttm_vunmap); > > > + > > > /** > > > * drm_gem_ttm_mmap() - mmap &ttm_buffer_object > > > * @gem: GEM object. > > > diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c > > > b/drivers/gpu/drm/ttm/ttm_bo_util.c index bdee4df1f3f2..80c42c774c7d > > > 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c > > > +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c > > > @@ -32,6 +32,7 @@ > > > #include > > > #include > > > #include > > > +#include > > > #include > > > #include > > > #include > > > @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map) > > > } > > > EXPORT_SYMBOL(ttm_bo_kunmap); > > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map) > > > +{ > > > + struct ttm_resource *mem = &bo->mem; > > > + int ret; > > > + > > > + ret = ttm_mem_io_reserve(bo->bdev, mem); > > > + if (ret) > > > + return ret; > > > + > > > + if (mem->bus.is_iomem) { > > > + void __iomem *vaddr_iomem; > > > + unsigned long size = bo->num_pages << PAGE_SHIFT; > > > > Please use uint64_t here and make sure to cast bo->num_pages before > > shifting. > > > > We have an unit tests of allocating a 8GB BO and that should work on a > > 32bit machine as well :) > > > > > + > > > + if (mem->bus.addr) > > > + vaddr_iomem = (void *)(((u8 *)mem->bus.addr)); > > > + else if (mem->placement & TTM_PL_FLAG_WC) > > > > I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new > > mem->bus.caching enum as replacement. 
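Daniel's idea of dedicated per-page helpers, and Thomas's follow-up that kmap/kunmap could take the same (bo, dma-buf-map) parameters as vmap/vunmap, would amount to an interface shaped roughly like the declarations below. These are purely illustrative; no such helpers exist in this series and the names are invented here:

	/* hypothetical only -- not part of this series */
	int ttm_bo_kmap_page(struct ttm_buffer_object *bo, unsigned long page,
			     struct dma_buf_map *map);
	void ttm_bo_kunmap_page(struct ttm_buffer_object *bo,
				struct dma_buf_map *map);

With that shape, the BO plus the dma-buf-map would carry all mapping state, and struct ttm_bo_kmap_obj could indeed be dropped.
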
> > > > > + vaddr_iomem = ioremap_wc(mem->bus.offset, > > > size); > > > + else > > > + vaddr_iomem = ioremap(mem->bus.offset, size); > > > + > > > + if (!vaddr_iomem) > > > + return -ENOMEM; > > > + > > > + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem); > > > + > > > + } else { > > > + struct ttm_operation_ctx ctx = { > > > + .interruptible = false, > > > + .no_wait_gpu = false > > > + }; > > > + struct ttm_tt *ttm = bo->ttm; > > > + pgprot_t prot; > > > + void *vaddr; > > > + > > > + BUG_ON(!ttm); > > > > I think we can drop this, populate will just crash badly anyway. > > > > > + > > > + ret = ttm_tt_populate(bo->bdev, ttm, &ctx); > > > + if (ret) > > > + return ret; > > > + > > > + /* > > > + * We need to use vmap to get the desired page > > > protection > > > + * or to make the buffer object look contiguous. > > > + */ > > > + prot = ttm_io_prot(mem->placement, PAGE_KERNEL); > > > > The calling convention has changed on drm-misc-next as well, but should be > > trivial to adapt. > > > > Regards, > > Christian. > > > > > + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot); > > > + if (!vaddr) > > > + return -ENOMEM; > > > + > > > + dma_buf_map_set_vaddr(map, vaddr); > > > + } > > > + > > > + return 0; > > > +} > > > +EXPORT_SYMBOL(ttm_bo_vmap); > > > + > > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map > > > *map) +{ > > > + if (dma_buf_map_is_null(map)) > > > + return; > > > + > > > + if (map->is_iomem) > > > + iounmap(map->vaddr_iomem); > > > + else > > > + vunmap(map->vaddr); > > > + dma_buf_map_clear(map); > > > + > > > + ttm_mem_io_free(bo->bdev, &bo->mem); > > > +} > > > +EXPORT_SYMBOL(ttm_bo_vunmap); > > > + > > > static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo, > > > bool dst_use_tt) > > > { > > > diff --git a/include/drm/drm_gem_ttm_helper.h > > > b/include/drm/drm_gem_ttm_helper.h index 118cef76f84f..7c6d874910b8 > > > 100644 --- a/include/drm/drm_gem_ttm_helper.h > > > +++ b/include/drm/drm_gem_ttm_helper.h > > > @@ -10,11 +10,17 @@ > > > #include > > > #include > > > +struct dma_buf_map; > > > + > > > #define drm_gem_ttm_of_gem(gem_obj) \ > > > container_of(gem_obj, struct ttm_buffer_object, base) > > > void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int > > > indent, const struct drm_gem_object *gem); > > > +int drm_gem_ttm_vmap(struct drm_gem_object *gem, > > > + struct dma_buf_map *map); > > > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem, > > > + struct dma_buf_map *map); > > > int drm_gem_ttm_mmap(struct drm_gem_object *gem, > > > struct vm_area_struct *vma); > > > diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h > > > index 37102e45e496..2c59a785374c 100644 > > > --- a/include/drm/ttm/ttm_bo_api.h > > > +++ b/include/drm/ttm/ttm_bo_api.h > > > @@ -48,6 +48,8 @@ struct ttm_bo_global; > > > struct ttm_bo_device; > > > +struct dma_buf_map; > > > + > > > struct drm_mm_node; > > > struct ttm_placement; > > > @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, > > > unsigned long start_page, */ > > > void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map); > > > +/** > > > + * ttm_bo_vmap > > > + * > > > + * @bo: The buffer object. > > > + * @map: pointer to a struct dma_buf_map representing the map. > > > + * > > > + * Sets up a kernel virtual mapping, using ioremap or vmap to the > > > + * data in the buffer object. The parameter @map returns the virtual > > > + * address as struct dma_buf_map. Unmap the buffer with > > > ttm_bo_vunmap(). 
> > > + * > > > + * Returns > > > + * -ENOMEM: Out of memory. > > > + * -EINVAL: Invalid range. > > > + */ > > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); > > > + > > > +/** > > > + * ttm_bo_vunmap > > > + * > > > + * @bo: The buffer object. > > > + * @map: Object describing the map to unmap. > > > + * > > > + * Unmaps a kernel map set up by ttm_bo_vmap(). > > > + */ > > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map > > > *map); + > > > /** > > > * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object. > > > * > > > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h > > > index fd1aba545fdf..2e8bbecb5091 100644 > > > --- a/include/linux/dma-buf-map.h > > > +++ b/include/linux/dma-buf-map.h > > > @@ -45,6 +45,12 @@ > > > * > > > * dma_buf_map_set_vaddr(&map. 0xdeadbeaf); > > > * > > > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem(). > > > + * > > > + * .. code-block:: c > > > + * > > > + * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); > > > + * > > > * Test if a mapping is valid with either dma_buf_map_is_set() or > > > * dma_buf_map_is_null(). > > > * > > > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct > > > dma_buf_map *map, void *vaddr) map->is_iomem = false; > > > } > > > +/** > > > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to > > > an address in I/O memory > > > + * @map: The dma-buf mapping structure > > > + * @vaddr_iomem: An I/O-memory address > > > + * > > > + * Sets the address and the I/O-memory flag. > > > + */ > > > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map, > > > + void __iomem > > > *vaddr_iomem) +{ > > > + map->vaddr_iomem = vaddr_iomem; > > > + map->is_iomem = true; > > > +} > > > + > > > /** > > > * dma_buf_map_is_equal - Compares two dma-buf mapping structures for > > > equality > > > * @lhs: The dma-buf mapping structure > > > -- Thomas Zimmermann Graphics Driver Developer SUSE Software Solutions Germany GmbH Maxfeldstr. 5, 90409 N?rnberg, Germany (HRB 36809, AG N?rnberg) Gesch?ftsf?hrer: Felix Imend?rffer From tzimmermann at suse.de Thu Oct 15 17:56:34 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Thu, 15 Oct 2020 19:56:34 +0200 Subject: [Spice-devel] [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers In-Reply-To: <935d5771-5645-62a6-849c-31e286db1e30@amd.com> References: <20201015123806.32416-1-tzimmermann@suse.de> <20201015123806.32416-6-tzimmermann@suse.de> <935d5771-5645-62a6-849c-31e286db1e30@amd.com> Message-ID: <20201015195634.0221c84e@linux-uq9g> Hi On Thu, 15 Oct 2020 16:08:13 +0200 Christian K?nig wrote: > Am 15.10.20 um 14:38 schrieb Thomas Zimmermann: > > The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel > > address space. The mapping's address is returned as struct dma_buf_map. > > Each function is a simplified version of TTM's existing kmap code. Both > > functions respect the memory's location ani/or writecombine flags. > > > > On top TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(), > > two helpers that convert a GEM object into the TTM BO and forward the call > > to TTM's vmap/vunmap. These helpers can be dropped into the rsp GEM object > > callbacks. > > > > v4: > > * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel, > > Christian) > > Bunch of minor comments below, but over all look very solid to me. 
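As a concrete illustration of the "dropped into the rsp GEM object callbacks" sentence in the commit message above, a TTM-based driver wires the helpers up as shown here; the funcs struct name is a placeholder, and the pattern matches the radeon and nouveau hunks earlier in the thread:

	static const struct drm_gem_object_funcs example_gem_object_funcs = {
		/* ... pin, unpin, get_sg_table, etc. ... */
		.vmap = drm_gem_ttm_vmap,
		.vunmap = drm_gem_ttm_vunmap,
	};
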
> > > > > Signed-off-by: Thomas Zimmermann > > --- > > drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++ > > drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++ > > include/drm/drm_gem_ttm_helper.h | 6 +++ > > include/drm/ttm/ttm_bo_api.h | 28 +++++++++++ > > include/linux/dma-buf-map.h | 20 ++++++++ > > 5 files changed, 164 insertions(+) > > > > diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c > > b/drivers/gpu/drm/drm_gem_ttm_helper.c index 0e4fb9ba43ad..db4c14d78a30 > > 100644 --- a/drivers/gpu/drm/drm_gem_ttm_helper.c > > +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c > > @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, > > unsigned int indent, } > > EXPORT_SYMBOL(drm_gem_ttm_print_info); > > > > +/** > > + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object > > + * @gem: GEM object. > > + * @map: [out] returns the dma-buf mapping. > > + * > > + * Maps a GEM object with ttm_bo_vmap(). This function can be used as > > + * &drm_gem_object_funcs.vmap callback. > > + * > > + * Returns: > > + * 0 on success, or a negative errno code otherwise. > > + */ > > +int drm_gem_ttm_vmap(struct drm_gem_object *gem, > > + struct dma_buf_map *map) > > +{ > > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); > > + > > + return ttm_bo_vmap(bo, map); > > + > > +} > > +EXPORT_SYMBOL(drm_gem_ttm_vmap); > > + > > +/** > > + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object > > + * @gem: GEM object. > > + * @map: dma-buf mapping. > > + * > > + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as > > + * &drm_gem_object_funcs.vmap callback. > > + */ > > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem, > > + struct dma_buf_map *map) > > +{ > > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); > > + > > + ttm_bo_vunmap(bo, map); > > +} > > +EXPORT_SYMBOL(drm_gem_ttm_vunmap); > > + > > /** > > * drm_gem_ttm_mmap() - mmap &ttm_buffer_object > > * @gem: GEM object. > > diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c > > b/drivers/gpu/drm/ttm/ttm_bo_util.c index bdee4df1f3f2..80c42c774c7d > > 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c > > +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c > > @@ -32,6 +32,7 @@ > > #include > > #include > > #include > > +#include > > #include > > #include > > #include > > @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map) > > } > > EXPORT_SYMBOL(ttm_bo_kunmap); > > > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map) > > +{ > > + struct ttm_resource *mem = &bo->mem; > > + int ret; > > + > > + ret = ttm_mem_io_reserve(bo->bdev, mem); > > + if (ret) > > + return ret; > > + > > + if (mem->bus.is_iomem) { > > + void __iomem *vaddr_iomem; > > + unsigned long size = bo->num_pages << PAGE_SHIFT; > > Please use uint64_t here and make sure to cast bo->num_pages before > shifting. > > We have an unit tests of allocating a 8GB BO and that should work on a > 32bit machine as well :) > > > + > > + if (mem->bus.addr) > > + vaddr_iomem = (void *)(((u8 *)mem->bus.addr)); > > + else if (mem->placement & TTM_PL_FLAG_WC) > > I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new > mem->bus.caching enum as replacement. 
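A minimal sketch of the two adjustments suggested above, i.e. widening the
size calculation before the shift and testing the new caching enum instead of
TTM_PL_FLAG_WC. The ttm_write_combined value is assumed from drm-misc-next's
enum ttm_caching; the exact form is left to the next revision of the patch:

	/* inside ttm_bo_vmap(), for the mem->bus.is_iomem case */
	u64 size = (u64)bo->num_pages << PAGE_SHIFT;	/* cast before shifting so >4 GiB BOs work on 32-bit */
	void __iomem *vaddr_iomem;

	if (mem->bus.caching == ttm_write_combined)	/* assumed replacement for TTM_PL_FLAG_WC */
		vaddr_iomem = ioremap_wc(mem->bus.offset, size);
	else
		vaddr_iomem = ioremap(mem->bus.offset, size);
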
> > > + vaddr_iomem = ioremap_wc(mem->bus.offset, size); > > + else > > + vaddr_iomem = ioremap(mem->bus.offset, size); > > + > > + if (!vaddr_iomem) > > + return -ENOMEM; > > + > > + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem); > > + > > + } else { > > + struct ttm_operation_ctx ctx = { > > + .interruptible = false, > > + .no_wait_gpu = false > > + }; > > + struct ttm_tt *ttm = bo->ttm; > > + pgprot_t prot; > > + void *vaddr; > > + > > + BUG_ON(!ttm); > > I think we can drop this, populate will just crash badly anyway. > > > + > > + ret = ttm_tt_populate(bo->bdev, ttm, &ctx); > > + if (ret) > > + return ret; > > + > > + /* > > + * We need to use vmap to get the desired page protection > > + * or to make the buffer object look contiguous. > > + */ > > + prot = ttm_io_prot(mem->placement, PAGE_KERNEL); > > The calling convention has changed on drm-misc-next as well, but should > be trivial to adapt. Thanks for quickly reviewing these patches. My drm-tip seems out of date (last Sunday). TTM is moving fast these days and I still have to get used to that. :) Best regards Thomas > > Regards, > Christian. > > > + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot); > > + if (!vaddr) > > + return -ENOMEM; > > + > > + dma_buf_map_set_vaddr(map, vaddr); > > + } > > + > > + return 0; > > +} > > +EXPORT_SYMBOL(ttm_bo_vmap); > > + > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map) > > +{ > > + if (dma_buf_map_is_null(map)) > > + return; > > + > > + if (map->is_iomem) > > + iounmap(map->vaddr_iomem); > > + else > > + vunmap(map->vaddr); > > + dma_buf_map_clear(map); > > + > > + ttm_mem_io_free(bo->bdev, &bo->mem); > > +} > > +EXPORT_SYMBOL(ttm_bo_vunmap); > > + > > static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo, > > bool dst_use_tt) > > { > > diff --git a/include/drm/drm_gem_ttm_helper.h > > b/include/drm/drm_gem_ttm_helper.h index 118cef76f84f..7c6d874910b8 100644 > > --- a/include/drm/drm_gem_ttm_helper.h > > +++ b/include/drm/drm_gem_ttm_helper.h > > @@ -10,11 +10,17 @@ > > #include > > #include > > > > +struct dma_buf_map; > > + > > #define drm_gem_ttm_of_gem(gem_obj) \ > > container_of(gem_obj, struct ttm_buffer_object, base) > > > > void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent, > > const struct drm_gem_object *gem); > > +int drm_gem_ttm_vmap(struct drm_gem_object *gem, > > + struct dma_buf_map *map); > > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem, > > + struct dma_buf_map *map); > > int drm_gem_ttm_mmap(struct drm_gem_object *gem, > > struct vm_area_struct *vma); > > > > diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h > > index 37102e45e496..2c59a785374c 100644 > > --- a/include/drm/ttm/ttm_bo_api.h > > +++ b/include/drm/ttm/ttm_bo_api.h > > @@ -48,6 +48,8 @@ struct ttm_bo_global; > > > > struct ttm_bo_device; > > > > +struct dma_buf_map; > > + > > struct drm_mm_node; > > > > struct ttm_placement; > > @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, > > unsigned long start_page, */ > > void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map); > > > > +/** > > + * ttm_bo_vmap > > + * > > + * @bo: The buffer object. > > + * @map: pointer to a struct dma_buf_map representing the map. > > + * > > + * Sets up a kernel virtual mapping, using ioremap or vmap to the > > + * data in the buffer object. The parameter @map returns the virtual > > + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap(). > > + * > > + * Returns > > + * -ENOMEM: Out of memory. 
> > + * -EINVAL: Invalid range. > > + */ > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); > > + > > +/** > > + * ttm_bo_vunmap > > + * > > + * @bo: The buffer object. > > + * @map: Object describing the map to unmap. > > + * > > + * Unmaps a kernel map set up by ttm_bo_vmap(). > > + */ > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map > > *map); + > > /** > > * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object. > > * > > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h > > index fd1aba545fdf..2e8bbecb5091 100644 > > --- a/include/linux/dma-buf-map.h > > +++ b/include/linux/dma-buf-map.h > > @@ -45,6 +45,12 @@ > > * > > * dma_buf_map_set_vaddr(&map. 0xdeadbeaf); > > * > > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem(). > > + * > > + * .. code-block:: c > > + * > > + * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); > > + * > > * Test if a mapping is valid with either dma_buf_map_is_set() or > > * dma_buf_map_is_null(). > > * > > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct > > dma_buf_map *map, void *vaddr) map->is_iomem = false; > > } > > > > +/** > > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an > > address in I/O memory > > + * @map: The dma-buf mapping structure > > + * @vaddr_iomem: An I/O-memory address > > + * > > + * Sets the address and the I/O-memory flag. > > + */ > > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map, > > + void __iomem *vaddr_iomem) > > +{ > > + map->vaddr_iomem = vaddr_iomem; > > + map->is_iomem = true; > > +} > > + > > /** > > * dma_buf_map_is_equal - Compares two dma-buf mapping structures for > > equality > > * @lhs: The dma-buf mapping structure > > _______________________________________________ > dri-devel mailing list > dri-devel at lists.freedesktop.org > https://lists.freedesktop.org/mailman/listinfo/dri-devel -- Thomas Zimmermann Graphics Driver Developer SUSE Software Solutions Germany GmbH Maxfeldstr. 5, 90409 N?rnberg, Germany (HRB 36809, AG N?rnberg) Gesch?ftsf?hrer: Felix Imend?rffer From tzimmermann at suse.de Fri Oct 16 10:39:31 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Fri, 16 Oct 2020 12:39:31 +0200 Subject: [Spice-devel] [PATCH v4 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces In-Reply-To: <20201016100854.GA1042954@ravnborg.org> References: <20201015123806.32416-1-tzimmermann@suse.de> <20201015123806.32416-10-tzimmermann@suse.de> <20201016100854.GA1042954@ravnborg.org> Message-ID: <20201016123931.10dd3930@linux-uq9g> Hi Sam On Fri, 16 Oct 2020 12:08:54 +0200 Sam Ravnborg wrote: > Hi Thomas. > > On Thu, Oct 15, 2020 at 02:38:05PM +0200, Thomas Zimmermann wrote: > > To do framebuffer updates, one needs memcpy from system memory and a > > pointer-increment function. Add both interfaces with documentation. > > > > Signed-off-by: Thomas Zimmermann > > Looks good. > Reviewed-by: Sam Ravnborg Thanks. If you have the time, may I ask you to test this patchset on the bochs/sparc64 system that failed with the original code? Best regards Thomas > > > --- > > include/linux/dma-buf-map.h | 72 +++++++++++++++++++++++++++++++------ > > 1 file changed, 62 insertions(+), 10 deletions(-) > > > > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h > > index 2e8bbecb5091..6ca0f304dda2 100644 > > --- a/include/linux/dma-buf-map.h > > +++ b/include/linux/dma-buf-map.h > > @@ -32,6 +32,14 @@ > > * accessing the buffer. 
Use the returned instance and the helper > > functions > > * to access the buffer's memory in the correct way. > > * > > + * The type :c:type:`struct dma_buf_map ` and its helpers > > are > > + * actually independent from the dma-buf infrastructure. When sharing > > buffers > > + * among devices, drivers have to know the location of the memory to > > access > > + * the buffers in a safe way. :c:type:`struct dma_buf_map ` > > + * solves this problem for dma-buf and its users. If other drivers or > > + * sub-systems require similar functionality, the type could be > > generalized > > + * and moved to a more prominent header file. > > + * > > * Open-coding access to :c:type:`struct dma_buf_map ` is > > * considered bad style. Rather then accessing its fields directly, use > > one > > * of the provided helper functions, or implement your own. For example, > > @@ -51,6 +59,14 @@ > > * > > * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); > > * > > + * Instances of struct dma_buf_map do not have to be cleaned up, but > > + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings > > + * always refer to system memory. > > + * > > + * .. code-block:: c > > + * > > + * dma_buf_map_clear(&map); > > + * > > * Test if a mapping is valid with either dma_buf_map_is_set() or > > * dma_buf_map_is_null(). > > * > > @@ -73,17 +89,19 @@ > > * if (dma_buf_map_is_equal(&sys_map, &io_map)) > > * // always false > > * > > - * Instances of struct dma_buf_map do not have to be cleaned up, but > > - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings > > - * always refer to system memory. > > + * A set up instance of struct dma_buf_map can be used to access or > > manipulate > > + * the buffer memory. Depending on the location of the memory, the > > provided > > + * helpers will pick the correct operations. Data can be copied into the > > memory > > + * with dma_buf_map_memcpy_to(). The address can be manipulated with > > + * dma_buf_map_incr(). > > * > > - * The type :c:type:`struct dma_buf_map ` and its helpers > > are > > - * actually independent from the dma-buf infrastructure. When sharing > > buffers > > - * among devices, drivers have to know the location of the memory to > > access > > - * the buffers in a safe way. :c:type:`struct dma_buf_map ` > > - * solves this problem for dma-buf and its users. If other drivers or > > - * sub-systems require similar functionality, the type could be > > generalized > > - * and moved to a more prominent header file. > > + * .. code-block:: c > > + * > > + * const void *src = ...; // source buffer > > + * size_t len = ...; // length of src > > + * > > + * dma_buf_map_memcpy_to(&map, src, len); > > + * dma_buf_map_incr(&map, len); // go to first byte after the > > memcpy */ > > > > /** > > @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct > > dma_buf_map *map) } > > } > > > > +/** > > + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping > > + * @dst: The dma-buf mapping structure > > + * @src: The source buffer > > + * @len: The number of byte in src > > + * > > + * Copies data into a dma-buf mapping. The source buffer is in system > > + * memory. Depending on the buffer's location, the helper picks the > > correct > > + * method of accessing the memory. 
> > + */ > > +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const > > void *src, size_t len) +{ > > + if (dst->is_iomem) > > + memcpy_toio(dst->vaddr_iomem, src, len); > > + else > > + memcpy(dst->vaddr, src, len); > > +} > > + > > +/** > > + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping > > + * @map: The dma-buf mapping structure > > + * @incr: The number of bytes to increment > > + * > > + * Increments the address stored in a dma-buf mapping. Depending on the > > + * buffer's location, the correct value will be updated. > > + */ > > +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr) > > +{ > > + if (map->is_iomem) > > + map->vaddr_iomem += incr; > > + else > > + map->vaddr += incr; > > +} > > + > > #endif /* __DMA_BUF_MAP_H__ */ > > -- > > 2.28.0 > _______________________________________________ > dri-devel mailing list > dri-devel at lists.freedesktop.org > https://lists.freedesktop.org/mailman/listinfo/dri-devel -- Thomas Zimmermann Graphics Driver Developer SUSE Software Solutions Germany GmbH Maxfeldstr. 5, 90409 N?rnberg, Germany (HRB 36809, AG N?rnberg) Gesch?ftsf?hrer: Felix Imend?rffer From christian.koenig at amd.com Fri Oct 16 09:41:18 2020 From: christian.koenig at amd.com (=?UTF-8?Q?Christian_K=c3=b6nig?=) Date: Fri, 16 Oct 2020 11:41:18 +0200 Subject: [Spice-devel] [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers In-Reply-To: <20201015195204.1745fe7f@linux-uq9g> References: <20201015123806.32416-1-tzimmermann@suse.de> <20201015123806.32416-6-tzimmermann@suse.de> <935d5771-5645-62a6-849c-31e286db1e30@amd.com> <20201015164909.GC401619@phenom.ffwll.local> <20201015195204.1745fe7f@linux-uq9g> Message-ID: <64130e2a-0e45-60da-2929-6378f59bfe97@amd.com> Am 15.10.20 um 19:52 schrieb Thomas Zimmermann: > Hi > > On Thu, 15 Oct 2020 18:49:09 +0200 Daniel Vetter wrote: > >> On Thu, Oct 15, 2020 at 04:08:13PM +0200, Christian K?nig wrote: >>> Am 15.10.20 um 14:38 schrieb Thomas Zimmermann: >>>> The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in >>>> kernel address space. The mapping's address is returned as struct >>>> dma_buf_map. Each function is a simplified version of TTM's existing >>>> kmap code. Both functions respect the memory's location ani/or >>>> writecombine flags. >>>> >>>> On top TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(), >>>> two helpers that convert a GEM object into the TTM BO and forward the >>>> call to TTM's vmap/vunmap. These helpers can be dropped into the rsp >>>> GEM object callbacks. >>>> >>>> v4: >>>> * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers >>>> (Daniel, Christian) >>> Bunch of minor comments below, but over all look very solid to me. >> Yeah I think just duplicating the ttm bo map stuff for vmap is indeed the >> cleanest. And then we can maybe push the combinatorial monster into >> vmwgfx, which I think is the only user after this series. Or perhaps a >> dedicated set of helpers to map an invidual page (again using the >> dma_buf_map stuff). > From a quick look, I'd say it should be possible to have the same interface > for kmap/kunmap as for vmap/vunmap (i.e., parameters are bo and dma-buf-map). > All mapping state can be deduced from this. And struct ttm_bo_kmap_obj can be > killed off entirely. Yes, that would be rather nice to have. Thanks, Christian. 
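A rough sketch of what that unified interface could look like -- these
prototypes only illustrate the idea discussed here and are not an existing
TTM API:

/* Hypothetical: kmap-style mapping of a page range that returns its address
 * through struct dma_buf_map, so struct ttm_bo_kmap_obj is no longer needed. */
int ttm_bo_kmap(struct ttm_buffer_object *bo, unsigned long start_page,
		unsigned long num_pages, struct dma_buf_map *map);
void ttm_bo_kunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map);

The exact parameters (e.g. whether a page range is still passed in) are left
open in this thread.
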
> > Best regards > Thomas > >> I'll let Christian with the details, but at a high level this is >> definitely >> >> Acked-by: Daniel Vetter >> >> Thanks a lot for doing all this. >> -Daniel >> >>>> Signed-off-by: Thomas Zimmermann >>>> --- >>>> drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++ >>>> drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++ >>>> include/drm/drm_gem_ttm_helper.h | 6 +++ >>>> include/drm/ttm/ttm_bo_api.h | 28 +++++++++++ >>>> include/linux/dma-buf-map.h | 20 ++++++++ >>>> 5 files changed, 164 insertions(+) >>>> >>>> diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c >>>> b/drivers/gpu/drm/drm_gem_ttm_helper.c index 0e4fb9ba43ad..db4c14d78a30 >>>> 100644 --- a/drivers/gpu/drm/drm_gem_ttm_helper.c >>>> +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c >>>> @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, >>>> unsigned int indent, } >>>> EXPORT_SYMBOL(drm_gem_ttm_print_info); >>>> +/** >>>> + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object >>>> + * @gem: GEM object. >>>> + * @map: [out] returns the dma-buf mapping. >>>> + * >>>> + * Maps a GEM object with ttm_bo_vmap(). This function can be used as >>>> + * &drm_gem_object_funcs.vmap callback. >>>> + * >>>> + * Returns: >>>> + * 0 on success, or a negative errno code otherwise. >>>> + */ >>>> +int drm_gem_ttm_vmap(struct drm_gem_object *gem, >>>> + struct dma_buf_map *map) >>>> +{ >>>> + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); >>>> + >>>> + return ttm_bo_vmap(bo, map); >>>> + >>>> +} >>>> +EXPORT_SYMBOL(drm_gem_ttm_vmap); >>>> + >>>> +/** >>>> + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object >>>> + * @gem: GEM object. >>>> + * @map: dma-buf mapping. >>>> + * >>>> + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used >>>> as >>>> + * &drm_gem_object_funcs.vmap callback. >>>> + */ >>>> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem, >>>> + struct dma_buf_map *map) >>>> +{ >>>> + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); >>>> + >>>> + ttm_bo_vunmap(bo, map); >>>> +} >>>> +EXPORT_SYMBOL(drm_gem_ttm_vunmap); >>>> + >>>> /** >>>> * drm_gem_ttm_mmap() - mmap &ttm_buffer_object >>>> * @gem: GEM object. >>>> diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c >>>> b/drivers/gpu/drm/ttm/ttm_bo_util.c index bdee4df1f3f2..80c42c774c7d >>>> 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c >>>> +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c >>>> @@ -32,6 +32,7 @@ >>>> #include >>>> #include >>>> #include >>>> +#include >>>> #include >>>> #include >>>> #include >>>> @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map) >>>> } >>>> EXPORT_SYMBOL(ttm_bo_kunmap); >>>> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map) >>>> +{ >>>> + struct ttm_resource *mem = &bo->mem; >>>> + int ret; >>>> + >>>> + ret = ttm_mem_io_reserve(bo->bdev, mem); >>>> + if (ret) >>>> + return ret; >>>> + >>>> + if (mem->bus.is_iomem) { >>>> + void __iomem *vaddr_iomem; >>>> + unsigned long size = bo->num_pages << PAGE_SHIFT; >>> Please use uint64_t here and make sure to cast bo->num_pages before >>> shifting. >>> >>> We have an unit tests of allocating a 8GB BO and that should work on a >>> 32bit machine as well :) >>> >>>> + >>>> + if (mem->bus.addr) >>>> + vaddr_iomem = (void *)(((u8 *)mem->bus.addr)); >>>> + else if (mem->placement & TTM_PL_FLAG_WC) >>> I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new >>> mem->bus.caching enum as replacement. 
>>> >>>> + vaddr_iomem = ioremap_wc(mem->bus.offset, >>>> size); >>>> + else >>>> + vaddr_iomem = ioremap(mem->bus.offset, size); >>>> + >>>> + if (!vaddr_iomem) >>>> + return -ENOMEM; >>>> + >>>> + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem); >>>> + >>>> + } else { >>>> + struct ttm_operation_ctx ctx = { >>>> + .interruptible = false, >>>> + .no_wait_gpu = false >>>> + }; >>>> + struct ttm_tt *ttm = bo->ttm; >>>> + pgprot_t prot; >>>> + void *vaddr; >>>> + >>>> + BUG_ON(!ttm); >>> I think we can drop this, populate will just crash badly anyway. >>> >>>> + >>>> + ret = ttm_tt_populate(bo->bdev, ttm, &ctx); >>>> + if (ret) >>>> + return ret; >>>> + >>>> + /* >>>> + * We need to use vmap to get the desired page >>>> protection >>>> + * or to make the buffer object look contiguous. >>>> + */ >>>> + prot = ttm_io_prot(mem->placement, PAGE_KERNEL); >>> The calling convention has changed on drm-misc-next as well, but should be >>> trivial to adapt. >>> >>> Regards, >>> Christian. >>> >>>> + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot); >>>> + if (!vaddr) >>>> + return -ENOMEM; >>>> + >>>> + dma_buf_map_set_vaddr(map, vaddr); >>>> + } >>>> + >>>> + return 0; >>>> +} >>>> +EXPORT_SYMBOL(ttm_bo_vmap); >>>> + >>>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map >>>> *map) +{ >>>> + if (dma_buf_map_is_null(map)) >>>> + return; >>>> + >>>> + if (map->is_iomem) >>>> + iounmap(map->vaddr_iomem); >>>> + else >>>> + vunmap(map->vaddr); >>>> + dma_buf_map_clear(map); >>>> + >>>> + ttm_mem_io_free(bo->bdev, &bo->mem); >>>> +} >>>> +EXPORT_SYMBOL(ttm_bo_vunmap); >>>> + >>>> static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo, >>>> bool dst_use_tt) >>>> { >>>> diff --git a/include/drm/drm_gem_ttm_helper.h >>>> b/include/drm/drm_gem_ttm_helper.h index 118cef76f84f..7c6d874910b8 >>>> 100644 --- a/include/drm/drm_gem_ttm_helper.h >>>> +++ b/include/drm/drm_gem_ttm_helper.h >>>> @@ -10,11 +10,17 @@ >>>> #include >>>> #include >>>> +struct dma_buf_map; >>>> + >>>> #define drm_gem_ttm_of_gem(gem_obj) \ >>>> container_of(gem_obj, struct ttm_buffer_object, base) >>>> void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int >>>> indent, const struct drm_gem_object *gem); >>>> +int drm_gem_ttm_vmap(struct drm_gem_object *gem, >>>> + struct dma_buf_map *map); >>>> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem, >>>> + struct dma_buf_map *map); >>>> int drm_gem_ttm_mmap(struct drm_gem_object *gem, >>>> struct vm_area_struct *vma); >>>> diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h >>>> index 37102e45e496..2c59a785374c 100644 >>>> --- a/include/drm/ttm/ttm_bo_api.h >>>> +++ b/include/drm/ttm/ttm_bo_api.h >>>> @@ -48,6 +48,8 @@ struct ttm_bo_global; >>>> struct ttm_bo_device; >>>> +struct dma_buf_map; >>>> + >>>> struct drm_mm_node; >>>> struct ttm_placement; >>>> @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, >>>> unsigned long start_page, */ >>>> void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map); >>>> +/** >>>> + * ttm_bo_vmap >>>> + * >>>> + * @bo: The buffer object. >>>> + * @map: pointer to a struct dma_buf_map representing the map. >>>> + * >>>> + * Sets up a kernel virtual mapping, using ioremap or vmap to the >>>> + * data in the buffer object. The parameter @map returns the virtual >>>> + * address as struct dma_buf_map. Unmap the buffer with >>>> ttm_bo_vunmap(). >>>> + * >>>> + * Returns >>>> + * -ENOMEM: Out of memory. >>>> + * -EINVAL: Invalid range. 
>>>> + */ >>>> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); >>>> + >>>> +/** >>>> + * ttm_bo_vunmap >>>> + * >>>> + * @bo: The buffer object. >>>> + * @map: Object describing the map to unmap. >>>> + * >>>> + * Unmaps a kernel map set up by ttm_bo_vmap(). >>>> + */ >>>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map >>>> *map); + >>>> /** >>>> * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object. >>>> * >>>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h >>>> index fd1aba545fdf..2e8bbecb5091 100644 >>>> --- a/include/linux/dma-buf-map.h >>>> +++ b/include/linux/dma-buf-map.h >>>> @@ -45,6 +45,12 @@ >>>> * >>>> * dma_buf_map_set_vaddr(&map. 0xdeadbeaf); >>>> * >>>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem(). >>>> + * >>>> + * .. code-block:: c >>>> + * >>>> + * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); >>>> + * >>>> * Test if a mapping is valid with either dma_buf_map_is_set() or >>>> * dma_buf_map_is_null(). >>>> * >>>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct >>>> dma_buf_map *map, void *vaddr) map->is_iomem = false; >>>> } >>>> +/** >>>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to >>>> an address in I/O memory >>>> + * @map: The dma-buf mapping structure >>>> + * @vaddr_iomem: An I/O-memory address >>>> + * >>>> + * Sets the address and the I/O-memory flag. >>>> + */ >>>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map, >>>> + void __iomem >>>> *vaddr_iomem) +{ >>>> + map->vaddr_iomem = vaddr_iomem; >>>> + map->is_iomem = true; >>>> +} >>>> + >>>> /** >>>> * dma_buf_map_is_equal - Compares two dma-buf mapping structures for >>>> equality >>>> * @lhs: The dma-buf mapping structure > > From sam at ravnborg.org Fri Oct 16 10:08:54 2020 From: sam at ravnborg.org (Sam Ravnborg) Date: Fri, 16 Oct 2020 12:08:54 +0200 Subject: [Spice-devel] [PATCH v4 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces In-Reply-To: <20201015123806.32416-10-tzimmermann@suse.de> References: <20201015123806.32416-1-tzimmermann@suse.de> <20201015123806.32416-10-tzimmermann@suse.de> Message-ID: <20201016100854.GA1042954@ravnborg.org> Hi Thomas. On Thu, Oct 15, 2020 at 02:38:05PM +0200, Thomas Zimmermann wrote: > To do framebuffer updates, one needs memcpy from system memory and a > pointer-increment function. Add both interfaces with documentation. > > Signed-off-by: Thomas Zimmermann Looks good. Reviewed-by: Sam Ravnborg > --- > include/linux/dma-buf-map.h | 72 +++++++++++++++++++++++++++++++------ > 1 file changed, 62 insertions(+), 10 deletions(-) > > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h > index 2e8bbecb5091..6ca0f304dda2 100644 > --- a/include/linux/dma-buf-map.h > +++ b/include/linux/dma-buf-map.h > @@ -32,6 +32,14 @@ > * accessing the buffer. Use the returned instance and the helper functions > * to access the buffer's memory in the correct way. > * > + * The type :c:type:`struct dma_buf_map ` and its helpers are > + * actually independent from the dma-buf infrastructure. When sharing buffers > + * among devices, drivers have to know the location of the memory to access > + * the buffers in a safe way. :c:type:`struct dma_buf_map ` > + * solves this problem for dma-buf and its users. If other drivers or > + * sub-systems require similar functionality, the type could be generalized > + * and moved to a more prominent header file. 
> + * > * Open-coding access to :c:type:`struct dma_buf_map ` is > * considered bad style. Rather then accessing its fields directly, use one > * of the provided helper functions, or implement your own. For example, > @@ -51,6 +59,14 @@ > * > * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); > * > + * Instances of struct dma_buf_map do not have to be cleaned up, but > + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings > + * always refer to system memory. > + * > + * .. code-block:: c > + * > + * dma_buf_map_clear(&map); > + * > * Test if a mapping is valid with either dma_buf_map_is_set() or > * dma_buf_map_is_null(). > * > @@ -73,17 +89,19 @@ > * if (dma_buf_map_is_equal(&sys_map, &io_map)) > * // always false > * > - * Instances of struct dma_buf_map do not have to be cleaned up, but > - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings > - * always refer to system memory. > + * A set up instance of struct dma_buf_map can be used to access or manipulate > + * the buffer memory. Depending on the location of the memory, the provided > + * helpers will pick the correct operations. Data can be copied into the memory > + * with dma_buf_map_memcpy_to(). The address can be manipulated with > + * dma_buf_map_incr(). > * > - * The type :c:type:`struct dma_buf_map ` and its helpers are > - * actually independent from the dma-buf infrastructure. When sharing buffers > - * among devices, drivers have to know the location of the memory to access > - * the buffers in a safe way. :c:type:`struct dma_buf_map ` > - * solves this problem for dma-buf and its users. If other drivers or > - * sub-systems require similar functionality, the type could be generalized > - * and moved to a more prominent header file. > + * .. code-block:: c > + * > + * const void *src = ...; // source buffer > + * size_t len = ...; // length of src > + * > + * dma_buf_map_memcpy_to(&map, src, len); > + * dma_buf_map_incr(&map, len); // go to first byte after the memcpy > */ > > /** > @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map) > } > } > > +/** > + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping > + * @dst: The dma-buf mapping structure > + * @src: The source buffer > + * @len: The number of byte in src > + * > + * Copies data into a dma-buf mapping. The source buffer is in system > + * memory. Depending on the buffer's location, the helper picks the correct > + * method of accessing the memory. > + */ > +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len) > +{ > + if (dst->is_iomem) > + memcpy_toio(dst->vaddr_iomem, src, len); > + else > + memcpy(dst->vaddr, src, len); > +} > + > +/** > + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping > + * @map: The dma-buf mapping structure > + * @incr: The number of bytes to increment > + * > + * Increments the address stored in a dma-buf mapping. Depending on the > + * buffer's location, the correct value will be updated. 
> + */ > +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr) > +{ > + if (map->is_iomem) > + map->vaddr_iomem += incr; > + else > + map->vaddr += incr; > +} > + > #endif /* __DMA_BUF_MAP_H__ */ > -- > 2.28.0 From sam at ravnborg.org Fri Oct 16 10:58:54 2020 From: sam at ravnborg.org (Sam Ravnborg) Date: Fri, 16 Oct 2020 12:58:54 +0200 Subject: [Spice-devel] [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory In-Reply-To: <20201015123806.32416-11-tzimmermann@suse.de> References: <20201015123806.32416-1-tzimmermann@suse.de> <20201015123806.32416-11-tzimmermann@suse.de> Message-ID: <20201016105854.GB1042954@ravnborg.org> Hi Thomas. On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote: > At least sparc64 requires I/O-specific access to framebuffers. This > patch updates the fbdev console accordingly. > > For drivers with direct access to the framebuffer memory, the callback > functions in struct fb_ops test for the type of memory and call the rsp > fb_sys_ of fb_cfb_ functions. > > For drivers that employ a shadow buffer, fbdev's blit function retrieves > the framebuffer address as struct dma_buf_map, and uses dma_buf_map > interfaces to access the buffer. > > The bochs driver on sparc64 uses a workaround to flag the framebuffer as > I/O memory and avoid a HW exception. With the introduction of struct > dma_buf_map, this is not required any longer. The patch removes the rsp > code from both, bochs and fbdev. > > v4: > * move dma_buf_map changes into separate patch (Daniel) > * TODO list: comment on fbdev updates (Daniel) I have been offline for a while so have not followed all the threads on this. So may comments below may well be addressed but I failed to see it. If the point about fb_sync is already addressed/considered then: Reviewed-by: Sam Ravnborg > Signed-off-by: Thomas Zimmermann > --- > Documentation/gpu/todo.rst | 19 ++- > drivers/gpu/drm/bochs/bochs_kms.c | 1 - > drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++-- > include/drm/drm_mode_config.h | 12 -- > 4 files changed, 220 insertions(+), 29 deletions(-) > > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst > index 7e6fc3c04add..638b7f704339 100644 > --- a/Documentation/gpu/todo.rst > +++ b/Documentation/gpu/todo.rst > @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup() > ------------------------------------------------ > > Most drivers can use drm_fbdev_generic_setup(). Driver have to implement > -atomic modesetting and GEM vmap support. Current generic fbdev emulation > -expects the framebuffer in system memory (or system-like memory). > +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation > +expected the framebuffer in system memory or system-like memory. By employing > +struct dma_buf_map, drivers with frambuffers in I/O memory can be supported > +as well. > > Contact: Maintainer of the driver you plan to convert > > Level: Intermediate > > +Reimplement functions in drm_fbdev_fb_ops without fbdev > +------------------------------------------------------- > + > +A number of callback functions in drm_fbdev_fb_ops could benefit from > +being rewritten without dependencies on the fbdev module. Some of the > +helpers could further benefit from using struct dma_buf_map instead of > +raw pointers. 
> + > +Contact: Thomas Zimmermann , Daniel Vetter > + > +Level: Advanced > + > + > drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup > ----------------------------------------------------------------- > > diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c > index 13d0d04c4457..853081d186d5 100644 > --- a/drivers/gpu/drm/bochs/bochs_kms.c > +++ b/drivers/gpu/drm/bochs/bochs_kms.c > @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs) > bochs->dev->mode_config.preferred_depth = 24; > bochs->dev->mode_config.prefer_shadow = 0; > bochs->dev->mode_config.prefer_shadow_fbdev = 1; > - bochs->dev->mode_config.fbdev_use_iomem = true; > bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true; > > bochs->dev->mode_config.funcs = &bochs_mode_funcs; Good to see this workaround gone again! > diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c > index 6212cd7cde1d..462b0c130ebb 100644 > --- a/drivers/gpu/drm/drm_fb_helper.c > +++ b/drivers/gpu/drm/drm_fb_helper.c > @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work) > } > > static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper, > - struct drm_clip_rect *clip) > + struct drm_clip_rect *clip, > + struct dma_buf_map *dst) > { > struct drm_framebuffer *fb = fb_helper->fb; > unsigned int cpp = fb->format->cpp[0]; > size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp; > void *src = fb_helper->fbdev->screen_buffer + offset; > - void *dst = fb_helper->buffer->map.vaddr + offset; > size_t len = (clip->x2 - clip->x1) * cpp; > unsigned int y; > > - for (y = clip->y1; y < clip->y2; y++) { > - if (!fb_helper->dev->mode_config.fbdev_use_iomem) > - memcpy(dst, src, len); > - else > - memcpy_toio((void __iomem *)dst, src, len); > + dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */ > > + for (y = clip->y1; y < clip->y2; y++) { > + dma_buf_map_memcpy_to(dst, src, len); > + dma_buf_map_incr(dst, fb->pitches[0]); > src += fb->pitches[0]; > - dst += fb->pitches[0]; > } > } > > @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work) > ret = drm_client_buffer_vmap(helper->buffer, &map); > if (ret) > return; > - drm_fb_helper_dirty_blit_real(helper, &clip_copy); > + drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map); > } > + > if (helper->fb->funcs->dirty) > helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, > &clip_copy, 1); > @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info, > } > EXPORT_SYMBOL(drm_fb_helper_sys_imageblit); > So far everything looks good. > +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf, > + size_t count, loff_t *ppos) > +{ > + unsigned long p = *ppos; > + u8 *dst; > + u8 __iomem *src; > + int c, err = 0; > + unsigned long total_size; > + unsigned long alloc_size; > + ssize_t ret = 0; > + > + if (info->state != FBINFO_STATE_RUNNING) > + return -EPERM; > + > + total_size = info->screen_size; > + > + if (total_size == 0) > + total_size = info->fix.smem_len; > + > + if (p >= total_size) > + return 0; > + > + if (count >= total_size) > + count = total_size; > + > + if (count + p > total_size) > + count = total_size - p; > + > + src = (u8 __iomem *)(info->screen_base + p); screen_base is a char __iomem * - so this cast looks semi redundant. 
> + > + alloc_size = min(count, PAGE_SIZE); > + > + dst = kmalloc(alloc_size, GFP_KERNEL); > + if (!dst) > + return -ENOMEM; > + Same comment as below about fb_sync. > + while (count) { > + c = min(count, alloc_size); > + > + memcpy_fromio(dst, src, c); > + if (copy_to_user(buf, dst, c)) { > + err = -EFAULT; > + break; > + } > + > + src += c; > + *ppos += c; > + buf += c; > + ret += c; > + count -= c; > + } > + > + kfree(dst); > + > + if (err) > + return err; > + > + return ret; > +} > + > +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf, > + size_t count, loff_t *ppos) > +{ > + unsigned long p = *ppos; > + u8 *src; > + u8 __iomem *dst; > + int c, err = 0; > + unsigned long total_size; > + unsigned long alloc_size; > + ssize_t ret = 0; > + > + if (info->state != FBINFO_STATE_RUNNING) > + return -EPERM; > + > + total_size = info->screen_size; > + > + if (total_size == 0) > + total_size = info->fix.smem_len; > + > + if (p > total_size) > + return -EFBIG; > + > + if (count > total_size) { > + err = -EFBIG; > + count = total_size; > + } > + > + if (count + p > total_size) { > + /* > + * The framebuffer is too small. We do the > + * copy operation, but return an error code > + * afterwards. Taken from fbdev. > + */ > + if (!err) > + err = -ENOSPC; > + count = total_size - p; > + } > + > + alloc_size = min(count, PAGE_SIZE); > + > + src = kmalloc(alloc_size, GFP_KERNEL); > + if (!src) > + return -ENOMEM; > + > + dst = (u8 __iomem *)(info->screen_base + p); > + The fbdev variant call the fb_sync callback here. noveau and gma500 implments the fb_sync callback - but no-one else. > + while (count) { > + c = min(count, alloc_size); > + > + if (copy_from_user(src, buf, c)) { > + err = -EFAULT; > + break; > + } > + memcpy_toio(dst, src, c); When we rewrite this part to use dma_buf_map_memcpy_to() then we can merge the two variants of helper_{sys,cfb}_read()? 
Which is part of the todo - so OK > + > + dst += c; > + *ppos += c; > + buf += c; > + ret += c; > + count -= c; > + } > + > + kfree(src); > + > + if (err) > + return err; > + > + return ret; > +} > + > /** > * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect > * @info: fbdev registered by the helper > @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma) > return -ENODEV; > } > > +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf, > + size_t count, loff_t *ppos) > +{ > + struct drm_fb_helper *fb_helper = info->par; > + struct drm_client_buffer *buffer = fb_helper->buffer; > + > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > + return drm_fb_helper_sys_read(info, buf, count, ppos); > + else > + return drm_fb_helper_cfb_read(info, buf, count, ppos); > +} > + > +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf, > + size_t count, loff_t *ppos) > +{ > + struct drm_fb_helper *fb_helper = info->par; > + struct drm_client_buffer *buffer = fb_helper->buffer; > + > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > + return drm_fb_helper_sys_write(info, buf, count, ppos); > + else > + return drm_fb_helper_cfb_write(info, buf, count, ppos); > +} > + > +static void drm_fbdev_fb_fillrect(struct fb_info *info, > + const struct fb_fillrect *rect) > +{ > + struct drm_fb_helper *fb_helper = info->par; > + struct drm_client_buffer *buffer = fb_helper->buffer; > + > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > + drm_fb_helper_sys_fillrect(info, rect); > + else > + drm_fb_helper_cfb_fillrect(info, rect); > +} > + > +static void drm_fbdev_fb_copyarea(struct fb_info *info, > + const struct fb_copyarea *area) > +{ > + struct drm_fb_helper *fb_helper = info->par; > + struct drm_client_buffer *buffer = fb_helper->buffer; > + > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > + drm_fb_helper_sys_copyarea(info, area); > + else > + drm_fb_helper_cfb_copyarea(info, area); > +} > + > +static void drm_fbdev_fb_imageblit(struct fb_info *info, > + const struct fb_image *image) > +{ > + struct drm_fb_helper *fb_helper = info->par; > + struct drm_client_buffer *buffer = fb_helper->buffer; > + > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > + drm_fb_helper_sys_imageblit(info, image); > + else > + drm_fb_helper_cfb_imageblit(info, image); > +} > + > static const struct fb_ops drm_fbdev_fb_ops = { > .owner = THIS_MODULE, > DRM_FB_HELPER_DEFAULT_OPS, > @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = { > .fb_release = drm_fbdev_fb_release, > .fb_destroy = drm_fbdev_fb_destroy, > .fb_mmap = drm_fbdev_fb_mmap, > - .fb_read = drm_fb_helper_sys_read, > - .fb_write = drm_fb_helper_sys_write, > - .fb_fillrect = drm_fb_helper_sys_fillrect, > - .fb_copyarea = drm_fb_helper_sys_copyarea, > - .fb_imageblit = drm_fb_helper_sys_imageblit, > + .fb_read = drm_fbdev_fb_read, > + .fb_write = drm_fbdev_fb_write, > + .fb_fillrect = drm_fbdev_fb_fillrect, > + .fb_copyarea = drm_fbdev_fb_copyarea, > + .fb_imageblit = drm_fbdev_fb_imageblit, > }; > > static struct fb_deferred_io drm_fbdev_defio = { > diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h > index 5ffbb4ed5b35..ab424ddd7665 100644 > --- a/include/drm/drm_mode_config.h > +++ b/include/drm/drm_mode_config.h > @@ -877,18 +877,6 @@ struct drm_mode_config { > */ > bool prefer_shadow_fbdev; > > - /** > - * @fbdev_use_iomem: > - * > - * Set 
to true if framebuffer reside in iomem. > - * When set to true memcpy_toio() is used when copying the framebuffer in > - * drm_fb_helper.drm_fb_helper_dirty_blit_real(). > - * > - * FIXME: This should be replaced with a per-mapping is_iomem > - * flag (like ttm does), and then used everywhere in fbdev code. > - */ > - bool fbdev_use_iomem; > - > /** > * @quirk_addfb_prefer_xbgr_30bpp: > * > -- > 2.28.0 From sam at ravnborg.org Fri Oct 16 11:31:41 2020 From: sam at ravnborg.org (Sam Ravnborg) Date: Fri, 16 Oct 2020 13:31:41 +0200 Subject: [Spice-devel] [PATCH v4 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces In-Reply-To: <20201015123806.32416-10-tzimmermann@suse.de> References: <20201015123806.32416-1-tzimmermann@suse.de> <20201015123806.32416-10-tzimmermann@suse.de> Message-ID: <20201016113141.GA1125266@ravnborg.org> Hi Thomas. On Thu, Oct 15, 2020 at 02:38:05PM +0200, Thomas Zimmermann wrote: > To do framebuffer updates, one needs memcpy from system memory and a > pointer-increment function. Add both interfaces with documentation. > > Signed-off-by: Thomas Zimmermann > --- > include/linux/dma-buf-map.h | 72 +++++++++++++++++++++++++++++++------ > 1 file changed, 62 insertions(+), 10 deletions(-) > > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h > index 2e8bbecb5091..6ca0f304dda2 100644 > --- a/include/linux/dma-buf-map.h > +++ b/include/linux/dma-buf-map.h > @@ -32,6 +32,14 @@ > * accessing the buffer. Use the returned instance and the helper functions > * to access the buffer's memory in the correct way. > * > + * The type :c:type:`struct dma_buf_map ` and its helpers are > + * actually independent from the dma-buf infrastructure. When sharing buffers > + * among devices, drivers have to know the location of the memory to access > + * the buffers in a safe way. :c:type:`struct dma_buf_map ` > + * solves this problem for dma-buf and its users. If other drivers or > + * sub-systems require similar functionality, the type could be generalized > + * and moved to a more prominent header file. > + * > * Open-coding access to :c:type:`struct dma_buf_map ` is > * considered bad style. Rather then accessing its fields directly, use one > * of the provided helper functions, or implement your own. For example, > @@ -51,6 +59,14 @@ > * > * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); > * > + * Instances of struct dma_buf_map do not have to be cleaned up, but > + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings > + * always refer to system memory. > + * > + * .. code-block:: c > + * > + * dma_buf_map_clear(&map); > + * > * Test if a mapping is valid with either dma_buf_map_is_set() or > * dma_buf_map_is_null(). > * > @@ -73,17 +89,19 @@ > * if (dma_buf_map_is_equal(&sys_map, &io_map)) > * // always false > * > - * Instances of struct dma_buf_map do not have to be cleaned up, but > - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings > - * always refer to system memory. > + * A set up instance of struct dma_buf_map can be used to access or manipulate > + * the buffer memory. Depending on the location of the memory, the provided > + * helpers will pick the correct operations. Data can be copied into the memory > + * with dma_buf_map_memcpy_to(). The address can be manipulated with > + * dma_buf_map_incr(). > * > - * The type :c:type:`struct dma_buf_map ` and its helpers are > - * actually independent from the dma-buf infrastructure. 
When sharing buffers > - * among devices, drivers have to know the location of the memory to access > - * the buffers in a safe way. :c:type:`struct dma_buf_map ` > - * solves this problem for dma-buf and its users. If other drivers or > - * sub-systems require similar functionality, the type could be generalized > - * and moved to a more prominent header file. > + * .. code-block:: c > + * > + * const void *src = ...; // source buffer > + * size_t len = ...; // length of src > + * > + * dma_buf_map_memcpy_to(&map, src, len); > + * dma_buf_map_incr(&map, len); // go to first byte after the memcpy > */ > > /** > @@ -210,4 +228,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map) > } > } > > +/** > + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping > + * @dst: The dma-buf mapping structure > + * @src: The source buffer > + * @len: The number of byte in src > + * > + * Copies data into a dma-buf mapping. The source buffer is in system > + * memory. Depending on the buffer's location, the helper picks the correct > + * method of accessing the memory. > + */ > +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len) > +{ > + if (dst->is_iomem) > + memcpy_toio(dst->vaddr_iomem, src, len); > + else > + memcpy(dst->vaddr, src, len); sparc64 needs "#include " to build as is does not get this via io.h Sam > +} > + > +/** > + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping > + * @map: The dma-buf mapping structure > + * @incr: The number of bytes to increment > + * > + * Increments the address stored in a dma-buf mapping. Depending on the > + * buffer's location, the correct value will be updated. > + */ > +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr) > +{ > + if (map->is_iomem) > + map->vaddr_iomem += incr; > + else > + map->vaddr += incr; > +} > + > #endif /* __DMA_BUF_MAP_H__ */ > -- > 2.28.0 From tzimmermann at suse.de Fri Oct 16 11:34:40 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Fri, 16 Oct 2020 13:34:40 +0200 Subject: [Spice-devel] [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory In-Reply-To: <20201016105854.GB1042954@ravnborg.org> References: <20201015123806.32416-1-tzimmermann@suse.de> <20201015123806.32416-11-tzimmermann@suse.de> <20201016105854.GB1042954@ravnborg.org> Message-ID: <20201016133440.65cadb6d@linux-uq9g> Hi On Fri, 16 Oct 2020 12:58:54 +0200 Sam Ravnborg wrote: > Hi Thomas. > > On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote: > > At least sparc64 requires I/O-specific access to framebuffers. This > > patch updates the fbdev console accordingly. > > > > For drivers with direct access to the framebuffer memory, the callback > > functions in struct fb_ops test for the type of memory and call the rsp > > fb_sys_ of fb_cfb_ functions. > > > > For drivers that employ a shadow buffer, fbdev's blit function retrieves > > the framebuffer address as struct dma_buf_map, and uses dma_buf_map > > interfaces to access the buffer. > > > > The bochs driver on sparc64 uses a workaround to flag the framebuffer as > > I/O memory and avoid a HW exception. With the introduction of struct > > dma_buf_map, this is not required any longer. The patch removes the rsp > > code from both, bochs and fbdev. > > > > v4: > > * move dma_buf_map changes into separate patch (Daniel) > > * TODO list: comment on fbdev updates (Daniel) > > I have been offline for a while so have not followed all the threads on > this. 
So may comments below may well be addressed but I failed to see > it. > > If the point about fb_sync is already addressed/considered then: > Reviewed-by: Sam Ravnborg It has not been brought up yet. See below. > > > > Signed-off-by: Thomas Zimmermann > > --- > > Documentation/gpu/todo.rst | 19 ++- > > drivers/gpu/drm/bochs/bochs_kms.c | 1 - > > drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++-- > > include/drm/drm_mode_config.h | 12 -- > > 4 files changed, 220 insertions(+), 29 deletions(-) > > > > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst > > index 7e6fc3c04add..638b7f704339 100644 > > --- a/Documentation/gpu/todo.rst > > +++ b/Documentation/gpu/todo.rst > > @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup() > > ------------------------------------------------ > > > > Most drivers can use drm_fbdev_generic_setup(). Driver have to implement > > -atomic modesetting and GEM vmap support. Current generic fbdev emulation > > -expects the framebuffer in system memory (or system-like memory). > > +atomic modesetting and GEM vmap support. Historically, generic fbdev > > emulation +expected the framebuffer in system memory or system-like > > memory. By employing +struct dma_buf_map, drivers with frambuffers in I/O > > memory can be supported +as well. > > > > Contact: Maintainer of the driver you plan to convert > > > > Level: Intermediate > > > > +Reimplement functions in drm_fbdev_fb_ops without fbdev > > +------------------------------------------------------- > > + > > +A number of callback functions in drm_fbdev_fb_ops could benefit from > > +being rewritten without dependencies on the fbdev module. Some of the > > +helpers could further benefit from using struct dma_buf_map instead of > > +raw pointers. > > + > > +Contact: Thomas Zimmermann , Daniel Vetter > > + > > +Level: Advanced > > + > > + > > drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup > > ----------------------------------------------------------------- > > > > diff --git a/drivers/gpu/drm/bochs/bochs_kms.c > > b/drivers/gpu/drm/bochs/bochs_kms.c index 13d0d04c4457..853081d186d5 > > 100644 --- a/drivers/gpu/drm/bochs/bochs_kms.c > > +++ b/drivers/gpu/drm/bochs/bochs_kms.c > > @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs) > > bochs->dev->mode_config.preferred_depth = 24; > > bochs->dev->mode_config.prefer_shadow = 0; > > bochs->dev->mode_config.prefer_shadow_fbdev = 1; > > - bochs->dev->mode_config.fbdev_use_iomem = true; > > bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = > > true; > > bochs->dev->mode_config.funcs = &bochs_mode_funcs; > Good to see this workaround gone again! 
> > > diff --git a/drivers/gpu/drm/drm_fb_helper.c > > b/drivers/gpu/drm/drm_fb_helper.c index 6212cd7cde1d..462b0c130ebb 100644 > > --- a/drivers/gpu/drm/drm_fb_helper.c > > +++ b/drivers/gpu/drm/drm_fb_helper.c > > @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct > > work_struct *work) } > > > > static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper > > *fb_helper, > > - struct drm_clip_rect *clip) > > + struct drm_clip_rect *clip, > > + struct dma_buf_map *dst) > > { > > struct drm_framebuffer *fb = fb_helper->fb; > > unsigned int cpp = fb->format->cpp[0]; > > size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp; > > void *src = fb_helper->fbdev->screen_buffer + offset; > > - void *dst = fb_helper->buffer->map.vaddr + offset; > > size_t len = (clip->x2 - clip->x1) * cpp; > > unsigned int y; > > > > - for (y = clip->y1; y < clip->y2; y++) { > > - if (!fb_helper->dev->mode_config.fbdev_use_iomem) > > - memcpy(dst, src, len); > > - else > > - memcpy_toio((void __iomem *)dst, src, len); > > + dma_buf_map_incr(dst, offset); /* go to first pixel within clip > > rect */ > > + for (y = clip->y1; y < clip->y2; y++) { > > + dma_buf_map_memcpy_to(dst, src, len); > > + dma_buf_map_incr(dst, fb->pitches[0]); > > src += fb->pitches[0]; > > - dst += fb->pitches[0]; > > } > > } > > > > @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct > > work_struct *work) ret = drm_client_buffer_vmap(helper->buffer, &map); > > if (ret) > > return; > > - drm_fb_helper_dirty_blit_real(helper, > > &clip_copy); > > + drm_fb_helper_dirty_blit_real(helper, > > &clip_copy, &map); } > > + > > if (helper->fb->funcs->dirty) > > helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, > > &clip_copy, 1); > > @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info > > *info, } > > EXPORT_SYMBOL(drm_fb_helper_sys_imageblit); > > > So far everything looks good. > > > +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user > > *buf, > > + size_t count, loff_t *ppos) > > +{ > > + unsigned long p = *ppos; > > + u8 *dst; > > + u8 __iomem *src; > > + int c, err = 0; > > + unsigned long total_size; > > + unsigned long alloc_size; > > + ssize_t ret = 0; > > + > > + if (info->state != FBINFO_STATE_RUNNING) > > + return -EPERM; > > + > > + total_size = info->screen_size; > > + > > + if (total_size == 0) > > + total_size = info->fix.smem_len; > > + > > + if (p >= total_size) > > + return 0; > > + > > + if (count >= total_size) > > + count = total_size; > > + > > + if (count + p > total_size) > > + count = total_size - p; > > + > > + src = (u8 __iomem *)(info->screen_base + p); > screen_base is a char __iomem * - so this cast looks semi redundant. I took the basic code from fbdev. Maybe there's a reason for the case, otherwise I'll remove it. > > > + > > + alloc_size = min(count, PAGE_SIZE); > > + > > + dst = kmalloc(alloc_size, GFP_KERNEL); > > + if (!dst) > > + return -ENOMEM; > > + > Same comment as below about fb_sync. 
> > > > + while (count) { > > + c = min(count, alloc_size); > > + > > + memcpy_fromio(dst, src, c); > > + if (copy_to_user(buf, dst, c)) { > > + err = -EFAULT; > > + break; > > + } > > + > > + src += c; > > + *ppos += c; > > + buf += c; > > + ret += c; > > + count -= c; > > + } > > + > > + kfree(dst); > > + > > + if (err) > > + return err; > > + > > + return ret; > > +} > > + > > +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char > > __user *buf, > > + size_t count, loff_t *ppos) > > +{ > > + unsigned long p = *ppos; > > + u8 *src; > > + u8 __iomem *dst; > > + int c, err = 0; > > + unsigned long total_size; > > + unsigned long alloc_size; > > + ssize_t ret = 0; > > + > > + if (info->state != FBINFO_STATE_RUNNING) > > + return -EPERM; > > + > > + total_size = info->screen_size; > > + > > + if (total_size == 0) > > + total_size = info->fix.smem_len; > > + > > + if (p > total_size) > > + return -EFBIG; > > + > > + if (count > total_size) { > > + err = -EFBIG; > > + count = total_size; > > + } > > + > > + if (count + p > total_size) { > > + /* > > + * The framebuffer is too small. We do the > > + * copy operation, but return an error code > > + * afterwards. Taken from fbdev. > > + */ > > + if (!err) > > + err = -ENOSPC; > > + count = total_size - p; > > + } > > + > > + alloc_size = min(count, PAGE_SIZE); > > + > > + src = kmalloc(alloc_size, GFP_KERNEL); > > + if (!src) > > + return -ENOMEM; > > + > > + dst = (u8 __iomem *)(info->screen_base + p); > > + > > The fbdev variant call the fb_sync callback here. > noveau and gma500 implments the fb_sync callback - but no-one else. These drivers implement some form of HW acceleration. If they have a HW blit/draw/etc op queued up, they have to wait for it to complete. Otherwise, the copied memory would contain an old state. The fb_sync acts as the fence. Fbdev only uses software copying, so the fb_sync is not required. From what I heard, the HW acceleration is not useful on modern machines. I hope to convert more drivers to generic fbdev after these patches for I/O-memory support have been merged. > > > > + while (count) { > > + c = min(count, alloc_size); > > + > > + if (copy_from_user(src, buf, c)) { > > + err = -EFAULT; > > + break; > > + } > > + memcpy_toio(dst, src, c); > When we rewrite this part to use dma_buf_map_memcpy_to() then we can > merge the two variants of helper_{sys,cfb}_read()? > Which is part of the todo - so OK I'm not sure if dma_buf_map is a good fit here. The I/O-memory function does an additional copy between system memory and I/O memory. Of course, the top and bottom of both functions are similar and could probably be shared. 
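Regarding the fb_sync point above, a hardware-accelerated fbdev driver
typically fences CPU access roughly as sketched below; the driver structure
and wait helper are made up, only the struct fb_ops callback shape is real:

#include <linux/fb.h>
#include <linux/module.h>

struct example_device;					/* hypothetical driver state */
void example_hw_wait_idle(struct example_device *edev);	/* hypothetical */

/* Wait for queued blit/fill/draw operations to finish, so the CPU does not
 * read or overwrite framebuffer contents the engine is still working on. */
static int example_fb_sync(struct fb_info *info)
{
	struct example_device *edev = info->par;

	example_hw_wait_idle(edev);
	return 0;
}

static const struct fb_ops example_fb_ops = {
	.owner   = THIS_MODULE,
	.fb_sync = example_fb_sync,
	/* .fb_read, .fb_write, .fb_imageblit, ... as usual */
};

Since the generic fbdev emulation only ever copies with the CPU, there is no
such pending-engine state to fence, which is why the cfb_read/cfb_write
helpers added here can omit the call.
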
Best regards Thomas > > + > > + dst += c; > > + *ppos += c; > > + buf += c; > > + ret += c; > > + count -= c; > > + } > > + > > + kfree(src); > > + > > + if (err) > > + return err; > > + > > + return ret; > > +} > > + > > /** > > * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect > > * @info: fbdev registered by the helper > > @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, > > struct vm_area_struct *vma) return -ENODEV; > > } > > > > +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf, > > + size_t count, loff_t *ppos) > > +{ > > + struct drm_fb_helper *fb_helper = info->par; > > + struct drm_client_buffer *buffer = fb_helper->buffer; > > + > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > > + return drm_fb_helper_sys_read(info, buf, count, ppos); > > + else > > + return drm_fb_helper_cfb_read(info, buf, count, ppos); > > +} > > + > > +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char > > __user *buf, > > + size_t count, loff_t *ppos) > > +{ > > + struct drm_fb_helper *fb_helper = info->par; > > + struct drm_client_buffer *buffer = fb_helper->buffer; > > + > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > > + return drm_fb_helper_sys_write(info, buf, count, ppos); > > + else > > + return drm_fb_helper_cfb_write(info, buf, count, ppos); > > +} > > + > > +static void drm_fbdev_fb_fillrect(struct fb_info *info, > > + const struct fb_fillrect *rect) > > +{ > > + struct drm_fb_helper *fb_helper = info->par; > > + struct drm_client_buffer *buffer = fb_helper->buffer; > > + > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > > + drm_fb_helper_sys_fillrect(info, rect); > > + else > > + drm_fb_helper_cfb_fillrect(info, rect); > > +} > > + > > +static void drm_fbdev_fb_copyarea(struct fb_info *info, > > + const struct fb_copyarea *area) > > +{ > > + struct drm_fb_helper *fb_helper = info->par; > > + struct drm_client_buffer *buffer = fb_helper->buffer; > > + > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > > + drm_fb_helper_sys_copyarea(info, area); > > + else > > + drm_fb_helper_cfb_copyarea(info, area); > > +} > > + > > +static void drm_fbdev_fb_imageblit(struct fb_info *info, > > + const struct fb_image *image) > > +{ > > + struct drm_fb_helper *fb_helper = info->par; > > + struct drm_client_buffer *buffer = fb_helper->buffer; > > + > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > > + drm_fb_helper_sys_imageblit(info, image); > > + else > > + drm_fb_helper_cfb_imageblit(info, image); > > +} > > + > > static const struct fb_ops drm_fbdev_fb_ops = { > > .owner = THIS_MODULE, > > DRM_FB_HELPER_DEFAULT_OPS, > > @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = { > > .fb_release = drm_fbdev_fb_release, > > .fb_destroy = drm_fbdev_fb_destroy, > > .fb_mmap = drm_fbdev_fb_mmap, > > - .fb_read = drm_fb_helper_sys_read, > > - .fb_write = drm_fb_helper_sys_write, > > - .fb_fillrect = drm_fb_helper_sys_fillrect, > > - .fb_copyarea = drm_fb_helper_sys_copyarea, > > - .fb_imageblit = drm_fb_helper_sys_imageblit, > > + .fb_read = drm_fbdev_fb_read, > > + .fb_write = drm_fbdev_fb_write, > > + .fb_fillrect = drm_fbdev_fb_fillrect, > > + .fb_copyarea = drm_fbdev_fb_copyarea, > > + .fb_imageblit = drm_fbdev_fb_imageblit, > > }; > > > > static struct fb_deferred_io drm_fbdev_defio = { > > diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h > > index 5ffbb4ed5b35..ab424ddd7665 100644 > > 
--- a/include/drm/drm_mode_config.h > > +++ b/include/drm/drm_mode_config.h > > @@ -877,18 +877,6 @@ struct drm_mode_config { > > */ > > bool prefer_shadow_fbdev; > > > > - /** > > - * @fbdev_use_iomem: > > - * > > - * Set to true if framebuffer reside in iomem. > > - * When set to true memcpy_toio() is used when copying the > > framebuffer in > > - * drm_fb_helper.drm_fb_helper_dirty_blit_real(). > > - * > > - * FIXME: This should be replaced with a per-mapping is_iomem > > - * flag (like ttm does), and then used everywhere in fbdev code. > > - */ > > - bool fbdev_use_iomem; > > - > > /** > > * @quirk_addfb_prefer_xbgr_30bpp: > > * > > -- > > 2.28.0 > _______________________________________________ > dri-devel mailing list > dri-devel at lists.freedesktop.org > https://lists.freedesktop.org/mailman/listinfo/dri-devel -- Thomas Zimmermann Graphics Driver Developer SUSE Software Solutions Germany GmbH Maxfeldstr. 5, 90409 N?rnberg, Germany (HRB 36809, AG N?rnberg) Gesch?ftsf?hrer: Felix Imend?rffer From sam at ravnborg.org Fri Oct 16 12:03:47 2020 From: sam at ravnborg.org (Sam Ravnborg) Date: Fri, 16 Oct 2020 14:03:47 +0200 Subject: [Spice-devel] [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory In-Reply-To: <20201015123806.32416-11-tzimmermann@suse.de> References: <20201015123806.32416-1-tzimmermann@suse.de> <20201015123806.32416-11-tzimmermann@suse.de> Message-ID: <20201016120347.GB1125266@ravnborg.org> Hi Thomas. On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote: > At least sparc64 requires I/O-specific access to framebuffers. This > patch updates the fbdev console accordingly. > > For drivers with direct access to the framebuffer memory, the callback > functions in struct fb_ops test for the type of memory and call the rsp > fb_sys_ of fb_cfb_ functions. > > For drivers that employ a shadow buffer, fbdev's blit function retrieves > the framebuffer address as struct dma_buf_map, and uses dma_buf_map > interfaces to access the buffer. > > The bochs driver on sparc64 uses a workaround to flag the framebuffer as > I/O memory and avoid a HW exception. With the introduction of struct > dma_buf_map, this is not required any longer. The patch removes the rsp > code from both, bochs and fbdev. > > v4: > * move dma_buf_map changes into separate patch (Daniel) > * TODO list: comment on fbdev updates (Daniel) > > Signed-off-by: Thomas Zimmermann The original workaround fixed it so we could run qemu with the -nographic option. So I went ahead and tried to run quemu version: v5.0.0-1970-g0b100c8e72-dirty. And with the BOCHS driver built-in. With the following command line: qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -nographic Behaviour was the same before and after applying this patch. (panic due to VFS: Unable to mount root fs on unknown-block(0,0)) So I consider it fixed for real now and not just a workaround. I also tested with: qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -serial stdio and it worked in both cases too. All the comments above so future-me have an easier time finding how to reproduce. 
Tested-by: Sam Ravnborg Sam > --- > Documentation/gpu/todo.rst | 19 ++- > drivers/gpu/drm/bochs/bochs_kms.c | 1 - > drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++-- > include/drm/drm_mode_config.h | 12 -- > 4 files changed, 220 insertions(+), 29 deletions(-) > > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst > index 7e6fc3c04add..638b7f704339 100644 > --- a/Documentation/gpu/todo.rst > +++ b/Documentation/gpu/todo.rst > @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup() > ------------------------------------------------ > > Most drivers can use drm_fbdev_generic_setup(). Driver have to implement > -atomic modesetting and GEM vmap support. Current generic fbdev emulation > -expects the framebuffer in system memory (or system-like memory). > +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation > +expected the framebuffer in system memory or system-like memory. By employing > +struct dma_buf_map, drivers with frambuffers in I/O memory can be supported > +as well. > > Contact: Maintainer of the driver you plan to convert > > Level: Intermediate > > +Reimplement functions in drm_fbdev_fb_ops without fbdev > +------------------------------------------------------- > + > +A number of callback functions in drm_fbdev_fb_ops could benefit from > +being rewritten without dependencies on the fbdev module. Some of the > +helpers could further benefit from using struct dma_buf_map instead of > +raw pointers. > + > +Contact: Thomas Zimmermann , Daniel Vetter > + > +Level: Advanced > + > + > drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup > ----------------------------------------------------------------- > > diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c > index 13d0d04c4457..853081d186d5 100644 > --- a/drivers/gpu/drm/bochs/bochs_kms.c > +++ b/drivers/gpu/drm/bochs/bochs_kms.c > @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs) > bochs->dev->mode_config.preferred_depth = 24; > bochs->dev->mode_config.prefer_shadow = 0; > bochs->dev->mode_config.prefer_shadow_fbdev = 1; > - bochs->dev->mode_config.fbdev_use_iomem = true; > bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true; > > bochs->dev->mode_config.funcs = &bochs_mode_funcs; > diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c > index 6212cd7cde1d..462b0c130ebb 100644 > --- a/drivers/gpu/drm/drm_fb_helper.c > +++ b/drivers/gpu/drm/drm_fb_helper.c > @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work) > } > > static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper, > - struct drm_clip_rect *clip) > + struct drm_clip_rect *clip, > + struct dma_buf_map *dst) > { > struct drm_framebuffer *fb = fb_helper->fb; > unsigned int cpp = fb->format->cpp[0]; > size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp; > void *src = fb_helper->fbdev->screen_buffer + offset; > - void *dst = fb_helper->buffer->map.vaddr + offset; > size_t len = (clip->x2 - clip->x1) * cpp; > unsigned int y; > > - for (y = clip->y1; y < clip->y2; y++) { > - if (!fb_helper->dev->mode_config.fbdev_use_iomem) > - memcpy(dst, src, len); > - else > - memcpy_toio((void __iomem *)dst, src, len); > + dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */ > > + for (y = clip->y1; y < clip->y2; y++) { > + dma_buf_map_memcpy_to(dst, src, len); > + dma_buf_map_incr(dst, fb->pitches[0]); > src += fb->pitches[0]; > - dst += 
fb->pitches[0]; > } > } > > @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work) > ret = drm_client_buffer_vmap(helper->buffer, &map); > if (ret) > return; > - drm_fb_helper_dirty_blit_real(helper, &clip_copy); > + drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map); > } > + > if (helper->fb->funcs->dirty) > helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, > &clip_copy, 1); > @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info *info, > } > EXPORT_SYMBOL(drm_fb_helper_sys_imageblit); > > +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user *buf, > + size_t count, loff_t *ppos) > +{ > + unsigned long p = *ppos; > + u8 *dst; > + u8 __iomem *src; > + int c, err = 0; > + unsigned long total_size; > + unsigned long alloc_size; > + ssize_t ret = 0; > + > + if (info->state != FBINFO_STATE_RUNNING) > + return -EPERM; > + > + total_size = info->screen_size; > + > + if (total_size == 0) > + total_size = info->fix.smem_len; > + > + if (p >= total_size) > + return 0; > + > + if (count >= total_size) > + count = total_size; > + > + if (count + p > total_size) > + count = total_size - p; > + > + src = (u8 __iomem *)(info->screen_base + p); > + > + alloc_size = min(count, PAGE_SIZE); > + > + dst = kmalloc(alloc_size, GFP_KERNEL); > + if (!dst) > + return -ENOMEM; > + > + while (count) { > + c = min(count, alloc_size); > + > + memcpy_fromio(dst, src, c); > + if (copy_to_user(buf, dst, c)) { > + err = -EFAULT; > + break; > + } > + > + src += c; > + *ppos += c; > + buf += c; > + ret += c; > + count -= c; > + } > + > + kfree(dst); > + > + if (err) > + return err; > + > + return ret; > +} > + > +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char __user *buf, > + size_t count, loff_t *ppos) > +{ > + unsigned long p = *ppos; > + u8 *src; > + u8 __iomem *dst; > + int c, err = 0; > + unsigned long total_size; > + unsigned long alloc_size; > + ssize_t ret = 0; > + > + if (info->state != FBINFO_STATE_RUNNING) > + return -EPERM; > + > + total_size = info->screen_size; > + > + if (total_size == 0) > + total_size = info->fix.smem_len; > + > + if (p > total_size) > + return -EFBIG; > + > + if (count > total_size) { > + err = -EFBIG; > + count = total_size; > + } > + > + if (count + p > total_size) { > + /* > + * The framebuffer is too small. We do the > + * copy operation, but return an error code > + * afterwards. Taken from fbdev. 
> + */ > + if (!err) > + err = -ENOSPC; > + count = total_size - p; > + } > + > + alloc_size = min(count, PAGE_SIZE); > + > + src = kmalloc(alloc_size, GFP_KERNEL); > + if (!src) > + return -ENOMEM; > + > + dst = (u8 __iomem *)(info->screen_base + p); > + > + while (count) { > + c = min(count, alloc_size); > + > + if (copy_from_user(src, buf, c)) { > + err = -EFAULT; > + break; > + } > + memcpy_toio(dst, src, c); > + > + dst += c; > + *ppos += c; > + buf += c; > + ret += c; > + count -= c; > + } > + > + kfree(src); > + > + if (err) > + return err; > + > + return ret; > +} > + > /** > * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect > * @info: fbdev registered by the helper > @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma) > return -ENODEV; > } > > +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf, > + size_t count, loff_t *ppos) > +{ > + struct drm_fb_helper *fb_helper = info->par; > + struct drm_client_buffer *buffer = fb_helper->buffer; > + > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > + return drm_fb_helper_sys_read(info, buf, count, ppos); > + else > + return drm_fb_helper_cfb_read(info, buf, count, ppos); > +} > + > +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf, > + size_t count, loff_t *ppos) > +{ > + struct drm_fb_helper *fb_helper = info->par; > + struct drm_client_buffer *buffer = fb_helper->buffer; > + > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > + return drm_fb_helper_sys_write(info, buf, count, ppos); > + else > + return drm_fb_helper_cfb_write(info, buf, count, ppos); > +} > + > +static void drm_fbdev_fb_fillrect(struct fb_info *info, > + const struct fb_fillrect *rect) > +{ > + struct drm_fb_helper *fb_helper = info->par; > + struct drm_client_buffer *buffer = fb_helper->buffer; > + > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > + drm_fb_helper_sys_fillrect(info, rect); > + else > + drm_fb_helper_cfb_fillrect(info, rect); > +} > + > +static void drm_fbdev_fb_copyarea(struct fb_info *info, > + const struct fb_copyarea *area) > +{ > + struct drm_fb_helper *fb_helper = info->par; > + struct drm_client_buffer *buffer = fb_helper->buffer; > + > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > + drm_fb_helper_sys_copyarea(info, area); > + else > + drm_fb_helper_cfb_copyarea(info, area); > +} > + > +static void drm_fbdev_fb_imageblit(struct fb_info *info, > + const struct fb_image *image) > +{ > + struct drm_fb_helper *fb_helper = info->par; > + struct drm_client_buffer *buffer = fb_helper->buffer; > + > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > + drm_fb_helper_sys_imageblit(info, image); > + else > + drm_fb_helper_cfb_imageblit(info, image); > +} > + > static const struct fb_ops drm_fbdev_fb_ops = { > .owner = THIS_MODULE, > DRM_FB_HELPER_DEFAULT_OPS, > @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = { > .fb_release = drm_fbdev_fb_release, > .fb_destroy = drm_fbdev_fb_destroy, > .fb_mmap = drm_fbdev_fb_mmap, > - .fb_read = drm_fb_helper_sys_read, > - .fb_write = drm_fb_helper_sys_write, > - .fb_fillrect = drm_fb_helper_sys_fillrect, > - .fb_copyarea = drm_fb_helper_sys_copyarea, > - .fb_imageblit = drm_fb_helper_sys_imageblit, > + .fb_read = drm_fbdev_fb_read, > + .fb_write = drm_fbdev_fb_write, > + .fb_fillrect = drm_fbdev_fb_fillrect, > + .fb_copyarea = drm_fbdev_fb_copyarea, > + .fb_imageblit = drm_fbdev_fb_imageblit, 
> }; > > static struct fb_deferred_io drm_fbdev_defio = { > diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h > index 5ffbb4ed5b35..ab424ddd7665 100644 > --- a/include/drm/drm_mode_config.h > +++ b/include/drm/drm_mode_config.h > @@ -877,18 +877,6 @@ struct drm_mode_config { > */ > bool prefer_shadow_fbdev; > > - /** > - * @fbdev_use_iomem: > - * > - * Set to true if framebuffer reside in iomem. > - * When set to true memcpy_toio() is used when copying the framebuffer in > - * drm_fb_helper.drm_fb_helper_dirty_blit_real(). > - * > - * FIXME: This should be replaced with a per-mapping is_iomem > - * flag (like ttm does), and then used everywhere in fbdev code. > - */ > - bool fbdev_use_iomem; > - > /** > * @quirk_addfb_prefer_xbgr_30bpp: > * > -- > 2.28.0 From tzimmermann at suse.de Fri Oct 16 12:19:42 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Fri, 16 Oct 2020 14:19:42 +0200 Subject: [Spice-devel] [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory In-Reply-To: <20201016120347.GB1125266@ravnborg.org> References: <20201015123806.32416-1-tzimmermann@suse.de> <20201015123806.32416-11-tzimmermann@suse.de> <20201016120347.GB1125266@ravnborg.org> Message-ID: <20201016141942.111e17f3@linux-uq9g> Hi On Fri, 16 Oct 2020 14:03:47 +0200 Sam Ravnborg wrote: > Hi Thomas. > > On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote: > > At least sparc64 requires I/O-specific access to framebuffers. This > > patch updates the fbdev console accordingly. > > > > For drivers with direct access to the framebuffer memory, the callback > > functions in struct fb_ops test for the type of memory and call the rsp > > fb_sys_ of fb_cfb_ functions. > > > > For drivers that employ a shadow buffer, fbdev's blit function retrieves > > the framebuffer address as struct dma_buf_map, and uses dma_buf_map > > interfaces to access the buffer. > > > > The bochs driver on sparc64 uses a workaround to flag the framebuffer as > > I/O memory and avoid a HW exception. With the introduction of struct > > dma_buf_map, this is not required any longer. The patch removes the rsp > > code from both, bochs and fbdev. > > > > v4: > > * move dma_buf_map changes into separate patch (Daniel) > > * TODO list: comment on fbdev updates (Daniel) > > > > Signed-off-by: Thomas Zimmermann > > The original workaround fixed it so we could run qemu with the > -nographic option. > > So I went ahead and tried to run quemu version: > v5.0.0-1970-g0b100c8e72-dirty. > And with the BOCHS driver built-in. > > With the following command line: > qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -nographic > > Behaviour was the same before and after applying this patch. > (panic due to VFS: Unable to mount root fs on unknown-block(0,0)) > So I consider it fixed for real now and not just a workaround. > > I also tested with: > qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -serial > stdio > > and it worked in both cases too. FTR, you booted a kernel and got graphics output. The error is simply that there was no disk to mount? Best regards Thomas > > All the comments above so future-me have an easier time finding how to > reproduce. 
> > Tested-by: Sam Ravnborg > > Sam > > > --- > > Documentation/gpu/todo.rst | 19 ++- > > drivers/gpu/drm/bochs/bochs_kms.c | 1 - > > drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++-- > > include/drm/drm_mode_config.h | 12 -- > > 4 files changed, 220 insertions(+), 29 deletions(-) > > > > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst > > index 7e6fc3c04add..638b7f704339 100644 > > --- a/Documentation/gpu/todo.rst > > +++ b/Documentation/gpu/todo.rst > > @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup() > > ------------------------------------------------ > > > > Most drivers can use drm_fbdev_generic_setup(). Driver have to implement > > -atomic modesetting and GEM vmap support. Current generic fbdev emulation > > -expects the framebuffer in system memory (or system-like memory). > > +atomic modesetting and GEM vmap support. Historically, generic fbdev > > emulation +expected the framebuffer in system memory or system-like > > memory. By employing +struct dma_buf_map, drivers with frambuffers in I/O > > memory can be supported +as well. > > > > Contact: Maintainer of the driver you plan to convert > > > > Level: Intermediate > > > > +Reimplement functions in drm_fbdev_fb_ops without fbdev > > +------------------------------------------------------- > > + > > +A number of callback functions in drm_fbdev_fb_ops could benefit from > > +being rewritten without dependencies on the fbdev module. Some of the > > +helpers could further benefit from using struct dma_buf_map instead of > > +raw pointers. > > + > > +Contact: Thomas Zimmermann , Daniel Vetter > > + > > +Level: Advanced > > + > > + > > drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup > > ----------------------------------------------------------------- > > > > diff --git a/drivers/gpu/drm/bochs/bochs_kms.c > > b/drivers/gpu/drm/bochs/bochs_kms.c index 13d0d04c4457..853081d186d5 > > 100644 --- a/drivers/gpu/drm/bochs/bochs_kms.c > > +++ b/drivers/gpu/drm/bochs/bochs_kms.c > > @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs) > > bochs->dev->mode_config.preferred_depth = 24; > > bochs->dev->mode_config.prefer_shadow = 0; > > bochs->dev->mode_config.prefer_shadow_fbdev = 1; > > - bochs->dev->mode_config.fbdev_use_iomem = true; > > bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = > > true; > > bochs->dev->mode_config.funcs = &bochs_mode_funcs; > > diff --git a/drivers/gpu/drm/drm_fb_helper.c > > b/drivers/gpu/drm/drm_fb_helper.c index 6212cd7cde1d..462b0c130ebb 100644 > > --- a/drivers/gpu/drm/drm_fb_helper.c > > +++ b/drivers/gpu/drm/drm_fb_helper.c > > @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct > > work_struct *work) } > > > > static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper > > *fb_helper, > > - struct drm_clip_rect *clip) > > + struct drm_clip_rect *clip, > > + struct dma_buf_map *dst) > > { > > struct drm_framebuffer *fb = fb_helper->fb; > > unsigned int cpp = fb->format->cpp[0]; > > size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp; > > void *src = fb_helper->fbdev->screen_buffer + offset; > > - void *dst = fb_helper->buffer->map.vaddr + offset; > > size_t len = (clip->x2 - clip->x1) * cpp; > > unsigned int y; > > > > - for (y = clip->y1; y < clip->y2; y++) { > > - if (!fb_helper->dev->mode_config.fbdev_use_iomem) > > - memcpy(dst, src, len); > > - else > > - memcpy_toio((void __iomem *)dst, src, len); > > + dma_buf_map_incr(dst, offset); /* go to first pixel within clip > > 
rect */ > > + for (y = clip->y1; y < clip->y2; y++) { > > + dma_buf_map_memcpy_to(dst, src, len); > > + dma_buf_map_incr(dst, fb->pitches[0]); > > src += fb->pitches[0]; > > - dst += fb->pitches[0]; > > } > > } > > > > @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct > > work_struct *work) ret = drm_client_buffer_vmap(helper->buffer, &map); > > if (ret) > > return; > > - drm_fb_helper_dirty_blit_real(helper, > > &clip_copy); > > + drm_fb_helper_dirty_blit_real(helper, > > &clip_copy, &map); } > > + > > if (helper->fb->funcs->dirty) > > helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, > > &clip_copy, 1); > > @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info > > *info, } > > EXPORT_SYMBOL(drm_fb_helper_sys_imageblit); > > > > +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user > > *buf, > > + size_t count, loff_t *ppos) > > +{ > > + unsigned long p = *ppos; > > + u8 *dst; > > + u8 __iomem *src; > > + int c, err = 0; > > + unsigned long total_size; > > + unsigned long alloc_size; > > + ssize_t ret = 0; > > + > > + if (info->state != FBINFO_STATE_RUNNING) > > + return -EPERM; > > + > > + total_size = info->screen_size; > > + > > + if (total_size == 0) > > + total_size = info->fix.smem_len; > > + > > + if (p >= total_size) > > + return 0; > > + > > + if (count >= total_size) > > + count = total_size; > > + > > + if (count + p > total_size) > > + count = total_size - p; > > + > > + src = (u8 __iomem *)(info->screen_base + p); > > + > > + alloc_size = min(count, PAGE_SIZE); > > + > > + dst = kmalloc(alloc_size, GFP_KERNEL); > > + if (!dst) > > + return -ENOMEM; > > + > > + while (count) { > > + c = min(count, alloc_size); > > + > > + memcpy_fromio(dst, src, c); > > + if (copy_to_user(buf, dst, c)) { > > + err = -EFAULT; > > + break; > > + } > > + > > + src += c; > > + *ppos += c; > > + buf += c; > > + ret += c; > > + count -= c; > > + } > > + > > + kfree(dst); > > + > > + if (err) > > + return err; > > + > > + return ret; > > +} > > + > > +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char > > __user *buf, > > + size_t count, loff_t *ppos) > > +{ > > + unsigned long p = *ppos; > > + u8 *src; > > + u8 __iomem *dst; > > + int c, err = 0; > > + unsigned long total_size; > > + unsigned long alloc_size; > > + ssize_t ret = 0; > > + > > + if (info->state != FBINFO_STATE_RUNNING) > > + return -EPERM; > > + > > + total_size = info->screen_size; > > + > > + if (total_size == 0) > > + total_size = info->fix.smem_len; > > + > > + if (p > total_size) > > + return -EFBIG; > > + > > + if (count > total_size) { > > + err = -EFBIG; > > + count = total_size; > > + } > > + > > + if (count + p > total_size) { > > + /* > > + * The framebuffer is too small. We do the > > + * copy operation, but return an error code > > + * afterwards. Taken from fbdev. 
> > + */ > > + if (!err) > > + err = -ENOSPC; > > + count = total_size - p; > > + } > > + > > + alloc_size = min(count, PAGE_SIZE); > > + > > + src = kmalloc(alloc_size, GFP_KERNEL); > > + if (!src) > > + return -ENOMEM; > > + > > + dst = (u8 __iomem *)(info->screen_base + p); > > + > > + while (count) { > > + c = min(count, alloc_size); > > + > > + if (copy_from_user(src, buf, c)) { > > + err = -EFAULT; > > + break; > > + } > > + memcpy_toio(dst, src, c); > > + > > + dst += c; > > + *ppos += c; > > + buf += c; > > + ret += c; > > + count -= c; > > + } > > + > > + kfree(src); > > + > > + if (err) > > + return err; > > + > > + return ret; > > +} > > + > > /** > > * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect > > * @info: fbdev registered by the helper > > @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, > > struct vm_area_struct *vma) return -ENODEV; > > } > > > > +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf, > > + size_t count, loff_t *ppos) > > +{ > > + struct drm_fb_helper *fb_helper = info->par; > > + struct drm_client_buffer *buffer = fb_helper->buffer; > > + > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > > + return drm_fb_helper_sys_read(info, buf, count, ppos); > > + else > > + return drm_fb_helper_cfb_read(info, buf, count, ppos); > > +} > > + > > +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char > > __user *buf, > > + size_t count, loff_t *ppos) > > +{ > > + struct drm_fb_helper *fb_helper = info->par; > > + struct drm_client_buffer *buffer = fb_helper->buffer; > > + > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > > + return drm_fb_helper_sys_write(info, buf, count, ppos); > > + else > > + return drm_fb_helper_cfb_write(info, buf, count, ppos); > > +} > > + > > +static void drm_fbdev_fb_fillrect(struct fb_info *info, > > + const struct fb_fillrect *rect) > > +{ > > + struct drm_fb_helper *fb_helper = info->par; > > + struct drm_client_buffer *buffer = fb_helper->buffer; > > + > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > > + drm_fb_helper_sys_fillrect(info, rect); > > + else > > + drm_fb_helper_cfb_fillrect(info, rect); > > +} > > + > > +static void drm_fbdev_fb_copyarea(struct fb_info *info, > > + const struct fb_copyarea *area) > > +{ > > + struct drm_fb_helper *fb_helper = info->par; > > + struct drm_client_buffer *buffer = fb_helper->buffer; > > + > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > > + drm_fb_helper_sys_copyarea(info, area); > > + else > > + drm_fb_helper_cfb_copyarea(info, area); > > +} > > + > > +static void drm_fbdev_fb_imageblit(struct fb_info *info, > > + const struct fb_image *image) > > +{ > > + struct drm_fb_helper *fb_helper = info->par; > > + struct drm_client_buffer *buffer = fb_helper->buffer; > > + > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > > + drm_fb_helper_sys_imageblit(info, image); > > + else > > + drm_fb_helper_cfb_imageblit(info, image); > > +} > > + > > static const struct fb_ops drm_fbdev_fb_ops = { > > .owner = THIS_MODULE, > > DRM_FB_HELPER_DEFAULT_OPS, > > @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = { > > .fb_release = drm_fbdev_fb_release, > > .fb_destroy = drm_fbdev_fb_destroy, > > .fb_mmap = drm_fbdev_fb_mmap, > > - .fb_read = drm_fb_helper_sys_read, > > - .fb_write = drm_fb_helper_sys_write, > > - .fb_fillrect = drm_fb_helper_sys_fillrect, > > - .fb_copyarea = drm_fb_helper_sys_copyarea, > > - 
.fb_imageblit = drm_fb_helper_sys_imageblit, > > + .fb_read = drm_fbdev_fb_read, > > + .fb_write = drm_fbdev_fb_write, > > + .fb_fillrect = drm_fbdev_fb_fillrect, > > + .fb_copyarea = drm_fbdev_fb_copyarea, > > + .fb_imageblit = drm_fbdev_fb_imageblit, > > }; > > > > static struct fb_deferred_io drm_fbdev_defio = { > > diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h > > index 5ffbb4ed5b35..ab424ddd7665 100644 > > --- a/include/drm/drm_mode_config.h > > +++ b/include/drm/drm_mode_config.h > > @@ -877,18 +877,6 @@ struct drm_mode_config { > > */ > > bool prefer_shadow_fbdev; > > > > - /** > > - * @fbdev_use_iomem: > > - * > > - * Set to true if framebuffer reside in iomem. > > - * When set to true memcpy_toio() is used when copying the > > framebuffer in > > - * drm_fb_helper.drm_fb_helper_dirty_blit_real(). > > - * > > - * FIXME: This should be replaced with a per-mapping is_iomem > > - * flag (like ttm does), and then used everywhere in fbdev code. > > - */ > > - bool fbdev_use_iomem; > > - > > /** > > * @quirk_addfb_prefer_xbgr_30bpp: > > * > > -- > > 2.28.0 -- Thomas Zimmermann Graphics Driver Developer SUSE Software Solutions Germany GmbH Maxfeldstr. 5, 90409 N?rnberg, Germany (HRB 36809, AG N?rnberg) Gesch?ftsf?hrer: Felix Imend?rffer From sam at ravnborg.org Fri Oct 16 12:48:50 2020 From: sam at ravnborg.org (Sam Ravnborg) Date: Fri, 16 Oct 2020 14:48:50 +0200 Subject: [Spice-devel] [PATCH v4 10/10] drm/fb_helper: Support framebuffers in I/O memory In-Reply-To: <20201016141942.111e17f3@linux-uq9g> References: <20201015123806.32416-1-tzimmermann@suse.de> <20201015123806.32416-11-tzimmermann@suse.de> <20201016120347.GB1125266@ravnborg.org> <20201016141942.111e17f3@linux-uq9g> Message-ID: <20201016124850.GA1174599@ravnborg.org> On Fri, Oct 16, 2020 at 02:19:42PM +0200, Thomas Zimmermann wrote: > Hi > > On Fri, 16 Oct 2020 14:03:47 +0200 Sam Ravnborg wrote: > > > Hi Thomas. > > > > On Thu, Oct 15, 2020 at 02:38:06PM +0200, Thomas Zimmermann wrote: > > > At least sparc64 requires I/O-specific access to framebuffers. This > > > patch updates the fbdev console accordingly. > > > > > > For drivers with direct access to the framebuffer memory, the callback > > > functions in struct fb_ops test for the type of memory and call the rsp > > > fb_sys_ of fb_cfb_ functions. > > > > > > For drivers that employ a shadow buffer, fbdev's blit function retrieves > > > the framebuffer address as struct dma_buf_map, and uses dma_buf_map > > > interfaces to access the buffer. > > > > > > The bochs driver on sparc64 uses a workaround to flag the framebuffer as > > > I/O memory and avoid a HW exception. With the introduction of struct > > > dma_buf_map, this is not required any longer. The patch removes the rsp > > > code from both, bochs and fbdev. > > > > > > v4: > > > * move dma_buf_map changes into separate patch (Daniel) > > > * TODO list: comment on fbdev updates (Daniel) > > > > > > Signed-off-by: Thomas Zimmermann > > > > The original workaround fixed it so we could run qemu with the > > -nographic option. > > > > So I went ahead and tried to run quemu version: > > v5.0.0-1970-g0b100c8e72-dirty. > > And with the BOCHS driver built-in. > > > > With the following command line: > > qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -nographic > > > > Behaviour was the same before and after applying this patch. > > (panic due to VFS: Unable to mount root fs on unknown-block(0,0)) > > So I consider it fixed for real now and not just a workaround. 
> > > > I also tested with: > > qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -serial > > stdio > > > > and it worked in both cases too. > > FTR, you booted a kernel and got graphics output. The error is simply that > there was no disk to mount? The short version "Yes". The longer version: With "qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -serial stdio" I got graphical output - one penguin. With "qemu-system-sparc64 -m 512 -kernel vmlinux -append console=ttyS0 -nographic" I got no graphical output, as implied by the -nographic option. But the boot continued - where it would panic before when we accessed IO memory as system memory. In both cases I got an error because I had not specified any rootfs, so qemu failed to mount any rootfs. So expected. Sam > > Best regards > Thomas > > > > > All the comments above so future-me have an easier time finding how to > > reproduce. > > > > Tested-by: Sam Ravnborg > > > > Sam > > > > > --- > > > Documentation/gpu/todo.rst | 19 ++- > > > drivers/gpu/drm/bochs/bochs_kms.c | 1 - > > > drivers/gpu/drm/drm_fb_helper.c | 217 ++++++++++++++++++++++++++++-- > > > include/drm/drm_mode_config.h | 12 -- > > > 4 files changed, 220 insertions(+), 29 deletions(-) > > > > > > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst > > > index 7e6fc3c04add..638b7f704339 100644 > > > --- a/Documentation/gpu/todo.rst > > > +++ b/Documentation/gpu/todo.rst > > > @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup() > > > ------------------------------------------------ > > > > > > Most drivers can use drm_fbdev_generic_setup(). Driver have to implement > > > -atomic modesetting and GEM vmap support. Current generic fbdev emulation > > > -expects the framebuffer in system memory (or system-like memory). > > > +atomic modesetting and GEM vmap support. Historically, generic fbdev > > > emulation +expected the framebuffer in system memory or system-like > > > memory. By employing +struct dma_buf_map, drivers with frambuffers in I/O > > > memory can be supported +as well. > > > > > > Contact: Maintainer of the driver you plan to convert > > > > > > Level: Intermediate > > > > > > +Reimplement functions in drm_fbdev_fb_ops without fbdev > > > +------------------------------------------------------- > > > + > > > +A number of callback functions in drm_fbdev_fb_ops could benefit from > > > +being rewritten without dependencies on the fbdev module. Some of the > > > +helpers could further benefit from using struct dma_buf_map instead of > > > +raw pointers. 
> > > + > > > +Contact: Thomas Zimmermann , Daniel Vetter > > > + > > > +Level: Advanced > > > + > > > + > > > drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup > > > ----------------------------------------------------------------- > > > > > > diff --git a/drivers/gpu/drm/bochs/bochs_kms.c > > > b/drivers/gpu/drm/bochs/bochs_kms.c index 13d0d04c4457..853081d186d5 > > > 100644 --- a/drivers/gpu/drm/bochs/bochs_kms.c > > > +++ b/drivers/gpu/drm/bochs/bochs_kms.c > > > @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs) > > > bochs->dev->mode_config.preferred_depth = 24; > > > bochs->dev->mode_config.prefer_shadow = 0; > > > bochs->dev->mode_config.prefer_shadow_fbdev = 1; > > > - bochs->dev->mode_config.fbdev_use_iomem = true; > > > bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = > > > true; > > > bochs->dev->mode_config.funcs = &bochs_mode_funcs; > > > diff --git a/drivers/gpu/drm/drm_fb_helper.c > > > b/drivers/gpu/drm/drm_fb_helper.c index 6212cd7cde1d..462b0c130ebb 100644 > > > --- a/drivers/gpu/drm/drm_fb_helper.c > > > +++ b/drivers/gpu/drm/drm_fb_helper.c > > > @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct > > > work_struct *work) } > > > > > > static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper > > > *fb_helper, > > > - struct drm_clip_rect *clip) > > > + struct drm_clip_rect *clip, > > > + struct dma_buf_map *dst) > > > { > > > struct drm_framebuffer *fb = fb_helper->fb; > > > unsigned int cpp = fb->format->cpp[0]; > > > size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp; > > > void *src = fb_helper->fbdev->screen_buffer + offset; > > > - void *dst = fb_helper->buffer->map.vaddr + offset; > > > size_t len = (clip->x2 - clip->x1) * cpp; > > > unsigned int y; > > > > > > - for (y = clip->y1; y < clip->y2; y++) { > > > - if (!fb_helper->dev->mode_config.fbdev_use_iomem) > > > - memcpy(dst, src, len); > > > - else > > > - memcpy_toio((void __iomem *)dst, src, len); > > > + dma_buf_map_incr(dst, offset); /* go to first pixel within clip > > > rect */ > > > + for (y = clip->y1; y < clip->y2; y++) { > > > + dma_buf_map_memcpy_to(dst, src, len); > > > + dma_buf_map_incr(dst, fb->pitches[0]); > > > src += fb->pitches[0]; > > > - dst += fb->pitches[0]; > > > } > > > } > > > > > > @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct > > > work_struct *work) ret = drm_client_buffer_vmap(helper->buffer, &map); > > > if (ret) > > > return; > > > - drm_fb_helper_dirty_blit_real(helper, > > > &clip_copy); > > > + drm_fb_helper_dirty_blit_real(helper, > > > &clip_copy, &map); } > > > + > > > if (helper->fb->funcs->dirty) > > > helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, > > > &clip_copy, 1); > > > @@ -755,6 +754,136 @@ void drm_fb_helper_sys_imageblit(struct fb_info > > > *info, } > > > EXPORT_SYMBOL(drm_fb_helper_sys_imageblit); > > > > > > +static ssize_t drm_fb_helper_cfb_read(struct fb_info *info, char __user > > > *buf, > > > + size_t count, loff_t *ppos) > > > +{ > > > + unsigned long p = *ppos; > > > + u8 *dst; > > > + u8 __iomem *src; > > > + int c, err = 0; > > > + unsigned long total_size; > > > + unsigned long alloc_size; > > > + ssize_t ret = 0; > > > + > > > + if (info->state != FBINFO_STATE_RUNNING) > > > + return -EPERM; > > > + > > > + total_size = info->screen_size; > > > + > > > + if (total_size == 0) > > > + total_size = info->fix.smem_len; > > > + > > > + if (p >= total_size) > > > + return 0; > > > + > > > + if (count >= total_size) > > > + count = total_size; > > > + > > > + 
if (count + p > total_size) > > > + count = total_size - p; > > > + > > > + src = (u8 __iomem *)(info->screen_base + p); > > > + > > > + alloc_size = min(count, PAGE_SIZE); > > > + > > > + dst = kmalloc(alloc_size, GFP_KERNEL); > > > + if (!dst) > > > + return -ENOMEM; > > > + > > > + while (count) { > > > + c = min(count, alloc_size); > > > + > > > + memcpy_fromio(dst, src, c); > > > + if (copy_to_user(buf, dst, c)) { > > > + err = -EFAULT; > > > + break; > > > + } > > > + > > > + src += c; > > > + *ppos += c; > > > + buf += c; > > > + ret += c; > > > + count -= c; > > > + } > > > + > > > + kfree(dst); > > > + > > > + if (err) > > > + return err; > > > + > > > + return ret; > > > +} > > > + > > > +static ssize_t drm_fb_helper_cfb_write(struct fb_info *info, const char > > > __user *buf, > > > + size_t count, loff_t *ppos) > > > +{ > > > + unsigned long p = *ppos; > > > + u8 *src; > > > + u8 __iomem *dst; > > > + int c, err = 0; > > > + unsigned long total_size; > > > + unsigned long alloc_size; > > > + ssize_t ret = 0; > > > + > > > + if (info->state != FBINFO_STATE_RUNNING) > > > + return -EPERM; > > > + > > > + total_size = info->screen_size; > > > + > > > + if (total_size == 0) > > > + total_size = info->fix.smem_len; > > > + > > > + if (p > total_size) > > > + return -EFBIG; > > > + > > > + if (count > total_size) { > > > + err = -EFBIG; > > > + count = total_size; > > > + } > > > + > > > + if (count + p > total_size) { > > > + /* > > > + * The framebuffer is too small. We do the > > > + * copy operation, but return an error code > > > + * afterwards. Taken from fbdev. > > > + */ > > > + if (!err) > > > + err = -ENOSPC; > > > + count = total_size - p; > > > + } > > > + > > > + alloc_size = min(count, PAGE_SIZE); > > > + > > > + src = kmalloc(alloc_size, GFP_KERNEL); > > > + if (!src) > > > + return -ENOMEM; > > > + > > > + dst = (u8 __iomem *)(info->screen_base + p); > > > + > > > + while (count) { > > > + c = min(count, alloc_size); > > > + > > > + if (copy_from_user(src, buf, c)) { > > > + err = -EFAULT; > > > + break; > > > + } > > > + memcpy_toio(dst, src, c); > > > + > > > + dst += c; > > > + *ppos += c; > > > + buf += c; > > > + ret += c; > > > + count -= c; > > > + } > > > + > > > + kfree(src); > > > + > > > + if (err) > > > + return err; > > > + > > > + return ret; > > > +} > > > + > > > /** > > > * drm_fb_helper_cfb_fillrect - wrapper around cfb_fillrect > > > * @info: fbdev registered by the helper > > > @@ -2027,6 +2156,66 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, > > > struct vm_area_struct *vma) return -ENODEV; > > > } > > > > > > +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf, > > > + size_t count, loff_t *ppos) > > > +{ > > > + struct drm_fb_helper *fb_helper = info->par; > > > + struct drm_client_buffer *buffer = fb_helper->buffer; > > > + > > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > > > + return drm_fb_helper_sys_read(info, buf, count, ppos); > > > + else > > > + return drm_fb_helper_cfb_read(info, buf, count, ppos); > > > +} > > > + > > > +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char > > > __user *buf, > > > + size_t count, loff_t *ppos) > > > +{ > > > + struct drm_fb_helper *fb_helper = info->par; > > > + struct drm_client_buffer *buffer = fb_helper->buffer; > > > + > > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > > > + return drm_fb_helper_sys_write(info, buf, count, ppos); > > > + else > > > + return drm_fb_helper_cfb_write(info, buf, count, ppos); > 
> > +} > > > + > > > +static void drm_fbdev_fb_fillrect(struct fb_info *info, > > > + const struct fb_fillrect *rect) > > > +{ > > > + struct drm_fb_helper *fb_helper = info->par; > > > + struct drm_client_buffer *buffer = fb_helper->buffer; > > > + > > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > > > + drm_fb_helper_sys_fillrect(info, rect); > > > + else > > > + drm_fb_helper_cfb_fillrect(info, rect); > > > +} > > > + > > > +static void drm_fbdev_fb_copyarea(struct fb_info *info, > > > + const struct fb_copyarea *area) > > > +{ > > > + struct drm_fb_helper *fb_helper = info->par; > > > + struct drm_client_buffer *buffer = fb_helper->buffer; > > > + > > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > > > + drm_fb_helper_sys_copyarea(info, area); > > > + else > > > + drm_fb_helper_cfb_copyarea(info, area); > > > +} > > > + > > > +static void drm_fbdev_fb_imageblit(struct fb_info *info, > > > + const struct fb_image *image) > > > +{ > > > + struct drm_fb_helper *fb_helper = info->par; > > > + struct drm_client_buffer *buffer = fb_helper->buffer; > > > + > > > + if (drm_fbdev_use_shadow_fb(fb_helper) || !buffer->map.is_iomem) > > > + drm_fb_helper_sys_imageblit(info, image); > > > + else > > > + drm_fb_helper_cfb_imageblit(info, image); > > > +} > > > + > > > static const struct fb_ops drm_fbdev_fb_ops = { > > > .owner = THIS_MODULE, > > > DRM_FB_HELPER_DEFAULT_OPS, > > > @@ -2034,11 +2223,11 @@ static const struct fb_ops drm_fbdev_fb_ops = { > > > .fb_release = drm_fbdev_fb_release, > > > .fb_destroy = drm_fbdev_fb_destroy, > > > .fb_mmap = drm_fbdev_fb_mmap, > > > - .fb_read = drm_fb_helper_sys_read, > > > - .fb_write = drm_fb_helper_sys_write, > > > - .fb_fillrect = drm_fb_helper_sys_fillrect, > > > - .fb_copyarea = drm_fb_helper_sys_copyarea, > > > - .fb_imageblit = drm_fb_helper_sys_imageblit, > > > + .fb_read = drm_fbdev_fb_read, > > > + .fb_write = drm_fbdev_fb_write, > > > + .fb_fillrect = drm_fbdev_fb_fillrect, > > > + .fb_copyarea = drm_fbdev_fb_copyarea, > > > + .fb_imageblit = drm_fbdev_fb_imageblit, > > > }; > > > > > > static struct fb_deferred_io drm_fbdev_defio = { > > > diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h > > > index 5ffbb4ed5b35..ab424ddd7665 100644 > > > --- a/include/drm/drm_mode_config.h > > > +++ b/include/drm/drm_mode_config.h > > > @@ -877,18 +877,6 @@ struct drm_mode_config { > > > */ > > > bool prefer_shadow_fbdev; > > > > > > - /** > > > - * @fbdev_use_iomem: > > > - * > > > - * Set to true if framebuffer reside in iomem. > > > - * When set to true memcpy_toio() is used when copying the > > > framebuffer in > > > - * drm_fb_helper.drm_fb_helper_dirty_blit_real(). > > > - * > > > - * FIXME: This should be replaced with a per-mapping is_iomem > > > - * flag (like ttm does), and then used everywhere in fbdev code. > > > - */ > > > - bool fbdev_use_iomem; > > > - > > > /** > > > * @quirk_addfb_prefer_xbgr_30bpp: > > > * > > > -- > > > 2.28.0 > > > > -- > Thomas Zimmermann > Graphics Driver Developer > SUSE Software Solutions Germany GmbH > Maxfeldstr. 
5, 90409 N?rnberg, Germany > (HRB 36809, AG N?rnberg) > Gesch?ftsf?hrer: Felix Imend?rffer From tzimmermann at suse.de Mon Oct 19 09:08:51 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Mon, 19 Oct 2020 11:08:51 +0200 Subject: [Spice-devel] [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers In-Reply-To: <935d5771-5645-62a6-849c-31e286db1e30@amd.com> References: <20201015123806.32416-1-tzimmermann@suse.de> <20201015123806.32416-6-tzimmermann@suse.de> <935d5771-5645-62a6-849c-31e286db1e30@amd.com> Message-ID: <87c7c342-88dc-9a36-31f7-dae6edd34626@suse.de> Hi Christian On 15.10.20 16:08, Christian K?nig wrote: > Am 15.10.20 um 14:38 schrieb Thomas Zimmermann: >> The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel >> address space. The mapping's address is returned as struct dma_buf_map. >> Each function is a simplified version of TTM's existing kmap code. Both >> functions respect the memory's location ani/or writecombine flags. >> >> On top TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(), >> two helpers that convert a GEM object into the TTM BO and forward the >> call >> to TTM's vmap/vunmap. These helpers can be dropped into the rsp GEM >> object >> callbacks. >> >> v4: >> ????* drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel, >> ????? Christian) > > Bunch of minor comments below, but over all look very solid to me. > >> >> Signed-off-by: Thomas Zimmermann >> --- >> ? drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++ >> ? drivers/gpu/drm/ttm/ttm_bo_util.c??? | 72 ++++++++++++++++++++++++++++ >> ? include/drm/drm_gem_ttm_helper.h???? |? 6 +++ >> ? include/drm/ttm/ttm_bo_api.h???????? | 28 +++++++++++ >> ? include/linux/dma-buf-map.h????????? | 20 ++++++++ >> ? 5 files changed, 164 insertions(+) >> >> diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c >> b/drivers/gpu/drm/drm_gem_ttm_helper.c >> index 0e4fb9ba43ad..db4c14d78a30 100644 >> --- a/drivers/gpu/drm/drm_gem_ttm_helper.c >> +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c >> @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, >> unsigned int indent, >> ? } >> ? EXPORT_SYMBOL(drm_gem_ttm_print_info); >> ? +/** >> + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object >> + * @gem: GEM object. >> + * @map: [out] returns the dma-buf mapping. >> + * >> + * Maps a GEM object with ttm_bo_vmap(). This function can be used as >> + * &drm_gem_object_funcs.vmap callback. >> + * >> + * Returns: >> + * 0 on success, or a negative errno code otherwise. >> + */ >> +int drm_gem_ttm_vmap(struct drm_gem_object *gem, >> +???????????? struct dma_buf_map *map) >> +{ >> +??? struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); >> + >> +??? return ttm_bo_vmap(bo, map); >> + >> +} >> +EXPORT_SYMBOL(drm_gem_ttm_vmap); >> + >> +/** >> + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object >> + * @gem: GEM object. >> + * @map: dma-buf mapping. >> + * >> + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be >> used as >> + * &drm_gem_object_funcs.vmap callback. >> + */ >> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem, >> +??????????? struct dma_buf_map *map) >> +{ >> +??? struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); >> + >> +??? ttm_bo_vunmap(bo, map); >> +} >> +EXPORT_SYMBOL(drm_gem_ttm_vunmap); >> + >> ? /** >> ?? * drm_gem_ttm_mmap() - mmap &ttm_buffer_object >> ?? * @gem: GEM object. 
>> diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c >> b/drivers/gpu/drm/ttm/ttm_bo_util.c >> index bdee4df1f3f2..80c42c774c7d 100644 >> --- a/drivers/gpu/drm/ttm/ttm_bo_util.c >> +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c >> @@ -32,6 +32,7 @@ >> ? #include >> ? #include >> ? #include >> +#include >> ? #include >> ? #include >> ? #include >> @@ -526,6 +527,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map) >> ? } >> ? EXPORT_SYMBOL(ttm_bo_kunmap); >> ? +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map) >> +{ >> +??? struct ttm_resource *mem = &bo->mem; >> +??? int ret; >> + >> +??? ret = ttm_mem_io_reserve(bo->bdev, mem); >> +??? if (ret) >> +??????? return ret; >> + >> +??? if (mem->bus.is_iomem) { >> +??????? void __iomem *vaddr_iomem; >> +??????? unsigned long size = bo->num_pages << PAGE_SHIFT; > > Please use uint64_t here and make sure to cast bo->num_pages before > shifting. I thought the rule of thumb is to use u64 in source code. Yet TTM only uses uint*_t types. Is there anything special about TTM? > > We have an unit tests of allocating a 8GB BO and that should work on a > 32bit machine as well :) > >> + >> +??????? if (mem->bus.addr) >> +??????????? vaddr_iomem = (void *)(((u8 *)mem->bus.addr)); I after reading the patch again, I realized that this is the 'ttm_bo_map_premapped' case and it's missing from _vunmap(). I see two options here: ignore this case in _vunmap(), or do an ioremap() unconditionally. Which one is preferable? Best regards Thomas >> +??????? else if (mem->placement & TTM_PL_FLAG_WC) > > I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new > mem->bus.caching enum as replacement. > >> +??????????? vaddr_iomem = ioremap_wc(mem->bus.offset, size); >> +??????? else >> +??????????? vaddr_iomem = ioremap(mem->bus.offset, size); >> + >> +??????? if (!vaddr_iomem) >> +??????????? return -ENOMEM; >> + >> +??????? dma_buf_map_set_vaddr_iomem(map, vaddr_iomem); >> + >> +??? } else { >> +??????? struct ttm_operation_ctx ctx = { >> +??????????? .interruptible = false, >> +??????????? .no_wait_gpu = false >> +??????? }; >> +??????? struct ttm_tt *ttm = bo->ttm; >> +??????? pgprot_t prot; >> +??????? void *vaddr; >> + >> +??????? BUG_ON(!ttm); > > I think we can drop this, populate will just crash badly anyway. > >> + >> +??????? ret = ttm_tt_populate(bo->bdev, ttm, &ctx); >> +??????? if (ret) >> +??????????? return ret; >> + >> +??????? /* >> +???????? * We need to use vmap to get the desired page protection >> +???????? * or to make the buffer object look contiguous. >> +???????? */ >> +??????? prot = ttm_io_prot(mem->placement, PAGE_KERNEL); > > The calling convention has changed on drm-misc-next as well, but should > be trivial to adapt. > > Regards, > Christian. > >> +??????? vaddr = vmap(ttm->pages, bo->num_pages, 0, prot); >> +??????? if (!vaddr) >> +??????????? return -ENOMEM; >> + >> +??????? dma_buf_map_set_vaddr(map, vaddr); >> +??? } >> + >> +??? return 0; >> +} >> +EXPORT_SYMBOL(ttm_bo_vmap); >> + >> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map >> *map) >> +{ >> +??? if (dma_buf_map_is_null(map)) >> +??????? return; >> + >> +??? if (map->is_iomem) >> +??????? iounmap(map->vaddr_iomem); >> +??? else >> +??????? vunmap(map->vaddr); >> +??? dma_buf_map_clear(map); >> + >> +??? ttm_mem_io_free(bo->bdev, &bo->mem); >> +} >> +EXPORT_SYMBOL(ttm_bo_vunmap); >> + >> ? static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo, >> ?????????????????? bool dst_use_tt) >> ? 
{ >> diff --git a/include/drm/drm_gem_ttm_helper.h >> b/include/drm/drm_gem_ttm_helper.h >> index 118cef76f84f..7c6d874910b8 100644 >> --- a/include/drm/drm_gem_ttm_helper.h >> +++ b/include/drm/drm_gem_ttm_helper.h >> @@ -10,11 +10,17 @@ >> ? #include >> ? #include >> ? +struct dma_buf_map; >> + >> ? #define drm_gem_ttm_of_gem(gem_obj) \ >> ????? container_of(gem_obj, struct ttm_buffer_object, base) >> ? ? void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int >> indent, >> ????????????????? const struct drm_gem_object *gem); >> +int drm_gem_ttm_vmap(struct drm_gem_object *gem, >> +???????????? struct dma_buf_map *map); >> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem, >> +??????????? struct dma_buf_map *map); >> ? int drm_gem_ttm_mmap(struct drm_gem_object *gem, >> ?????????????? struct vm_area_struct *vma); >> ? diff --git a/include/drm/ttm/ttm_bo_api.h >> b/include/drm/ttm/ttm_bo_api.h >> index 37102e45e496..2c59a785374c 100644 >> --- a/include/drm/ttm/ttm_bo_api.h >> +++ b/include/drm/ttm/ttm_bo_api.h >> @@ -48,6 +48,8 @@ struct ttm_bo_global; >> ? ? struct ttm_bo_device; >> ? +struct dma_buf_map; >> + >> ? struct drm_mm_node; >> ? ? struct ttm_placement; >> @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, >> unsigned long start_page, >> ?? */ >> ? void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map); >> ? +/** >> + * ttm_bo_vmap >> + * >> + * @bo: The buffer object. >> + * @map: pointer to a struct dma_buf_map representing the map. >> + * >> + * Sets up a kernel virtual mapping, using ioremap or vmap to the >> + * data in the buffer object. The parameter @map returns the virtual >> + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap(). >> + * >> + * Returns >> + * -ENOMEM: Out of memory. >> + * -EINVAL: Invalid range. >> + */ >> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); >> + >> +/** >> + * ttm_bo_vunmap >> + * >> + * @bo: The buffer object. >> + * @map: Object describing the map to unmap. >> + * >> + * Unmaps a kernel map set up by ttm_bo_vmap(). >> + */ >> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map >> *map); >> + >> ? /** >> ?? * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object. >> ?? * >> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h >> index fd1aba545fdf..2e8bbecb5091 100644 >> --- a/include/linux/dma-buf-map.h >> +++ b/include/linux/dma-buf-map.h >> @@ -45,6 +45,12 @@ >> ?? * >> ?? *??? dma_buf_map_set_vaddr(&map. 0xdeadbeaf); >> ?? * >> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem(). >> + * >> + * .. code-block:: c >> + * >> + *??? dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); >> + * >> ?? * Test if a mapping is valid with either dma_buf_map_is_set() or >> ?? * dma_buf_map_is_null(). >> ?? * >> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct >> dma_buf_map *map, void *vaddr) >> ????? map->is_iomem = false; >> ? } >> ? +/** >> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to >> an address in I/O memory >> + * @map:??????? The dma-buf mapping structure >> + * @vaddr_iomem:??? An I/O-memory address >> + * >> + * Sets the address and the I/O-memory flag. >> + */ >> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map, >> +?????????????????????????? void __iomem *vaddr_iomem) >> +{ >> +??? map->vaddr_iomem = vaddr_iomem; >> +??? map->is_iomem = true; >> +} >> + >> ? /** >> ?? * dma_buf_map_is_equal - Compares two dma-buf mapping structures >> for equality >> ?? 
* @lhs:??? The dma-buf mapping structure > > _______________________________________________ > dri-devel mailing list > dri-devel at lists.freedesktop.org > https://lists.freedesktop.org/mailman/listinfo/dri-devel -- Thomas Zimmermann Graphics Driver Developer SUSE Software Solutions Germany GmbH Maxfeldstr. 5, 90409 N?rnberg, Germany (HRB 36809, AG N?rnberg) Gesch?ftsf?hrer: Felix Imend?rffer From christian.koenig at amd.com Mon Oct 19 09:45:05 2020 From: christian.koenig at amd.com (=?UTF-8?Q?Christian_K=c3=b6nig?=) Date: Mon, 19 Oct 2020 11:45:05 +0200 Subject: [Spice-devel] [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers In-Reply-To: <87c7c342-88dc-9a36-31f7-dae6edd34626@suse.de> References: <20201015123806.32416-1-tzimmermann@suse.de> <20201015123806.32416-6-tzimmermann@suse.de> <935d5771-5645-62a6-849c-31e286db1e30@amd.com> <87c7c342-88dc-9a36-31f7-dae6edd34626@suse.de> Message-ID: <9236f51c-c1fa-dadc-c7cc-d9d0c09251d1@amd.com> Hi Thomas, [SNIP] >>> ? +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map) >>> +{ >>> +??? struct ttm_resource *mem = &bo->mem; >>> +??? int ret; >>> + >>> +??? ret = ttm_mem_io_reserve(bo->bdev, mem); >>> +??? if (ret) >>> +??????? return ret; >>> + >>> +??? if (mem->bus.is_iomem) { >>> +??????? void __iomem *vaddr_iomem; >>> +??????? unsigned long size = bo->num_pages << PAGE_SHIFT; >> Please use uint64_t here and make sure to cast bo->num_pages before >> shifting. > I thought the rule of thumb is to use u64 in source code. Yet TTM only > uses uint*_t types. Is there anything special about TTM? My last status is that you can use both and my personal preference is to use the uint*_t types because they are part of a higher level standard. >> We have an unit tests of allocating a 8GB BO and that should work on a >> 32bit machine as well :) >> >>> + >>> +??????? if (mem->bus.addr) >>> +??????????? vaddr_iomem = (void *)(((u8 *)mem->bus.addr)); > I after reading the patch again, I realized that this is the > 'ttm_bo_map_premapped' case and it's missing from _vunmap(). I see two > options here: ignore this case in _vunmap(), or do an ioremap() > unconditionally. Which one is preferable? ioremap would be very very bad, so we should just do nothing. Thanks, Christian. > > Best regards > Thomas > >>> +??????? else if (mem->placement & TTM_PL_FLAG_WC) >> I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new >> mem->bus.caching enum as replacement. >> >>> +??????????? vaddr_iomem = ioremap_wc(mem->bus.offset, size); >>> +??????? else >>> +??????????? vaddr_iomem = ioremap(mem->bus.offset, size); >>> + >>> +??????? if (!vaddr_iomem) >>> +??????????? return -ENOMEM; >>> + >>> +??????? dma_buf_map_set_vaddr_iomem(map, vaddr_iomem); >>> + >>> +??? } else { >>> +??????? struct ttm_operation_ctx ctx = { >>> +??????????? .interruptible = false, >>> +??????????? .no_wait_gpu = false >>> +??????? }; >>> +??????? struct ttm_tt *ttm = bo->ttm; >>> +??????? pgprot_t prot; >>> +??????? void *vaddr; >>> + >>> +??????? BUG_ON(!ttm); >> I think we can drop this, populate will just crash badly anyway. >> >>> + >>> +??????? ret = ttm_tt_populate(bo->bdev, ttm, &ctx); >>> +??????? if (ret) >>> +??????????? return ret; >>> + >>> +??????? /* >>> +???????? * We need to use vmap to get the desired page protection >>> +???????? * or to make the buffer object look contiguous. >>> +???????? */ >>> +??????? 
prot = ttm_io_prot(mem->placement, PAGE_KERNEL); >> The calling convention has changed on drm-misc-next as well, but should >> be trivial to adapt. >> >> Regards, >> Christian. >> >>> +??????? vaddr = vmap(ttm->pages, bo->num_pages, 0, prot); >>> +??????? if (!vaddr) >>> +??????????? return -ENOMEM; >>> + >>> +??????? dma_buf_map_set_vaddr(map, vaddr); >>> +??? } >>> + >>> +??? return 0; >>> +} >>> +EXPORT_SYMBOL(ttm_bo_vmap); >>> + >>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map >>> *map) >>> +{ >>> +??? if (dma_buf_map_is_null(map)) >>> +??????? return; >>> + >>> +??? if (map->is_iomem) >>> +??????? iounmap(map->vaddr_iomem); >>> +??? else >>> +??????? vunmap(map->vaddr); >>> +??? dma_buf_map_clear(map); >>> + >>> +??? ttm_mem_io_free(bo->bdev, &bo->mem); >>> +} >>> +EXPORT_SYMBOL(ttm_bo_vunmap); >>> + >>> ? static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo, >>> ?????????????????? bool dst_use_tt) >>> ? { >>> diff --git a/include/drm/drm_gem_ttm_helper.h >>> b/include/drm/drm_gem_ttm_helper.h >>> index 118cef76f84f..7c6d874910b8 100644 >>> --- a/include/drm/drm_gem_ttm_helper.h >>> +++ b/include/drm/drm_gem_ttm_helper.h >>> @@ -10,11 +10,17 @@ >>> ? #include >>> ? #include >>> ? +struct dma_buf_map; >>> + >>> ? #define drm_gem_ttm_of_gem(gem_obj) \ >>> ????? container_of(gem_obj, struct ttm_buffer_object, base) >>> ? ? void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int >>> indent, >>> ????????????????? const struct drm_gem_object *gem); >>> +int drm_gem_ttm_vmap(struct drm_gem_object *gem, >>> +???????????? struct dma_buf_map *map); >>> +void drm_gem_ttm_vunmap(struct drm_gem_object *gem, >>> +??????????? struct dma_buf_map *map); >>> ? int drm_gem_ttm_mmap(struct drm_gem_object *gem, >>> ?????????????? struct vm_area_struct *vma); >>> ? diff --git a/include/drm/ttm/ttm_bo_api.h >>> b/include/drm/ttm/ttm_bo_api.h >>> index 37102e45e496..2c59a785374c 100644 >>> --- a/include/drm/ttm/ttm_bo_api.h >>> +++ b/include/drm/ttm/ttm_bo_api.h >>> @@ -48,6 +48,8 @@ struct ttm_bo_global; >>> ? ? struct ttm_bo_device; >>> ? +struct dma_buf_map; >>> + >>> ? struct drm_mm_node; >>> ? ? struct ttm_placement; >>> @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, >>> unsigned long start_page, >>> ?? */ >>> ? void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map); >>> ? +/** >>> + * ttm_bo_vmap >>> + * >>> + * @bo: The buffer object. >>> + * @map: pointer to a struct dma_buf_map representing the map. >>> + * >>> + * Sets up a kernel virtual mapping, using ioremap or vmap to the >>> + * data in the buffer object. The parameter @map returns the virtual >>> + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap(). >>> + * >>> + * Returns >>> + * -ENOMEM: Out of memory. >>> + * -EINVAL: Invalid range. >>> + */ >>> +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); >>> + >>> +/** >>> + * ttm_bo_vunmap >>> + * >>> + * @bo: The buffer object. >>> + * @map: Object describing the map to unmap. >>> + * >>> + * Unmaps a kernel map set up by ttm_bo_vmap(). >>> + */ >>> +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map >>> *map); >>> + >>> ? /** >>> ?? * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object. >>> ?? * >>> diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h >>> index fd1aba545fdf..2e8bbecb5091 100644 >>> --- a/include/linux/dma-buf-map.h >>> +++ b/include/linux/dma-buf-map.h >>> @@ -45,6 +45,12 @@ >>> ?? * >>> ?? *??? dma_buf_map_set_vaddr(&map. 0xdeadbeaf); >>> ?? 
* >>> + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem(). >>> + * >>> + * .. code-block:: c >>> + * >>> + *??? dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); >>> + * >>> ?? * Test if a mapping is valid with either dma_buf_map_is_set() or >>> ?? * dma_buf_map_is_null(). >>> ?? * >>> @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct >>> dma_buf_map *map, void *vaddr) >>> ????? map->is_iomem = false; >>> ? } >>> ? +/** >>> + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to >>> an address in I/O memory >>> + * @map:??????? The dma-buf mapping structure >>> + * @vaddr_iomem:??? An I/O-memory address >>> + * >>> + * Sets the address and the I/O-memory flag. >>> + */ >>> +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map, >>> +?????????????????????????? void __iomem *vaddr_iomem) >>> +{ >>> +??? map->vaddr_iomem = vaddr_iomem; >>> +??? map->is_iomem = true; >>> +} >>> + >>> ? /** >>> ?? * dma_buf_map_is_equal - Compares two dma-buf mapping structures >>> for equality >>> ?? * @lhs:??? The dma-buf mapping structure >> _______________________________________________ >> dri-devel mailing list >> dri-devel at lists.freedesktop.org >> https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Fdri-devel&data=04%7C01%7Cchristian.koenig%40amd.com%7C07bc68af3c6440b5be8d08d8740e9b32%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637386953433558595%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=RlGCmjzyZERvqfnl4kA1bEHez5bkLf3F9OlKi2ybDAM%3D&reserved=0 From sp200606 at gmail.com Mon Oct 19 15:14:30 2020 From: sp200606 at gmail.com (=?UTF-8?Q?St=C3=A9phane_POGGI?=) Date: Mon, 19 Oct 2020 17:14:30 +0200 Subject: [Spice-devel] wMaxPacketSize Message-ID: Hello, While trying to redirect USB Barco ClickShare, I get an error message : GSpice-CRITICAL usbredirhost error received interrupt out packet is larger than wMaxPacketSize Is there a way to increase the wMaxPacketSize on QEMU ? Host OS : Arch Linux (up-to-date) Guest OS : Windows 10 (up-to-date) Thanks a lot, sp -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel at ffwll.ch Mon Oct 19 15:46:42 2020 From: daniel at ffwll.ch (Daniel Vetter) Date: Mon, 19 Oct 2020 17:46:42 +0200 Subject: [Spice-devel] [PATCH v4 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers In-Reply-To: <9236f51c-c1fa-dadc-c7cc-d9d0c09251d1@amd.com> References: <20201015123806.32416-1-tzimmermann@suse.de> <20201015123806.32416-6-tzimmermann@suse.de> <935d5771-5645-62a6-849c-31e286db1e30@amd.com> <87c7c342-88dc-9a36-31f7-dae6edd34626@suse.de> <9236f51c-c1fa-dadc-c7cc-d9d0c09251d1@amd.com> Message-ID: <20201019154642.GF401619@phenom.ffwll.local> On Mon, Oct 19, 2020 at 11:45:05AM +0200, Christian K?nig wrote: > Hi Thomas, > > [SNIP] > > > > ? +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map) > > > > +{ > > > > +??? struct ttm_resource *mem = &bo->mem; > > > > +??? int ret; > > > > + > > > > +??? ret = ttm_mem_io_reserve(bo->bdev, mem); > > > > +??? if (ret) > > > > +??????? return ret; > > > > + > > > > +??? if (mem->bus.is_iomem) { > > > > +??????? void __iomem *vaddr_iomem; > > > > +??????? unsigned long size = bo->num_pages << PAGE_SHIFT; > > > Please use uint64_t here and make sure to cast bo->num_pages before > > > shifting. > > I thought the rule of thumb is to use u64 in source code. Yet TTM only > > uses uint*_t types. 
Is there anything special about TTM? > > My last status is that you can use both and my personal preference is to use > the uint*_t types because they are part of a higher level standard. Yeah the only hard rule is that in uapi headers you need to use the __u64 and similar typedefs, to avoid cluttering the namespace for unrelated stuff in userspace. In the kernel c99 types are perfectly fine, and I think slowly on the rise. -Daniel > > > > We have an unit tests of allocating a 8GB BO and that should work on a > > > 32bit machine as well :) > > > > > > > + > > > > +??????? if (mem->bus.addr) > > > > +??????????? vaddr_iomem = (void *)(((u8 *)mem->bus.addr)); > > I after reading the patch again, I realized that this is the > > 'ttm_bo_map_premapped' case and it's missing from _vunmap(). I see two > > options here: ignore this case in _vunmap(), or do an ioremap() > > unconditionally. Which one is preferable? > > ioremap would be very very bad, so we should just do nothing. > > Thanks, > Christian. > > > > > Best regards > > Thomas > > > > > > +??????? else if (mem->placement & TTM_PL_FLAG_WC) > > > I've just nuked the TTM_PL_FLAG_WC flag in drm-misc-next. There is a new > > > mem->bus.caching enum as replacement. > > > > > > > +??????????? vaddr_iomem = ioremap_wc(mem->bus.offset, size); > > > > +??????? else > > > > +??????????? vaddr_iomem = ioremap(mem->bus.offset, size); > > > > + > > > > +??????? if (!vaddr_iomem) > > > > +??????????? return -ENOMEM; > > > > + > > > > +??????? dma_buf_map_set_vaddr_iomem(map, vaddr_iomem); > > > > + > > > > +??? } else { > > > > +??????? struct ttm_operation_ctx ctx = { > > > > +??????????? .interruptible = false, > > > > +??????????? .no_wait_gpu = false > > > > +??????? }; > > > > +??????? struct ttm_tt *ttm = bo->ttm; > > > > +??????? pgprot_t prot; > > > > +??????? void *vaddr; > > > > + > > > > +??????? BUG_ON(!ttm); > > > I think we can drop this, populate will just crash badly anyway. > > > > > > > + > > > > +??????? ret = ttm_tt_populate(bo->bdev, ttm, &ctx); > > > > +??????? if (ret) > > > > +??????????? return ret; > > > > + > > > > +??????? /* > > > > +???????? * We need to use vmap to get the desired page protection > > > > +???????? * or to make the buffer object look contiguous. > > > > +???????? */ > > > > +??????? prot = ttm_io_prot(mem->placement, PAGE_KERNEL); > > > The calling convention has changed on drm-misc-next as well, but should > > > be trivial to adapt. > > > > > > Regards, > > > Christian. > > > > > > > +??????? vaddr = vmap(ttm->pages, bo->num_pages, 0, prot); > > > > +??????? if (!vaddr) > > > > +??????????? return -ENOMEM; > > > > + > > > > +??????? dma_buf_map_set_vaddr(map, vaddr); > > > > +??? } > > > > + > > > > +??? return 0; > > > > +} > > > > +EXPORT_SYMBOL(ttm_bo_vmap); > > > > + > > > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map > > > > *map) > > > > +{ > > > > +??? if (dma_buf_map_is_null(map)) > > > > +??????? return; > > > > + > > > > +??? if (map->is_iomem) > > > > +??????? iounmap(map->vaddr_iomem); > > > > +??? else > > > > +??????? vunmap(map->vaddr); > > > > +??? dma_buf_map_clear(map); > > > > + > > > > +??? ttm_mem_io_free(bo->bdev, &bo->mem); > > > > +} > > > > +EXPORT_SYMBOL(ttm_bo_vunmap); > > > > + > > > > ? static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo, > > > > ?????????????????? bool dst_use_tt) > > > > ? 
{ > > > > diff --git a/include/drm/drm_gem_ttm_helper.h > > > > b/include/drm/drm_gem_ttm_helper.h > > > > index 118cef76f84f..7c6d874910b8 100644 > > > > --- a/include/drm/drm_gem_ttm_helper.h > > > > +++ b/include/drm/drm_gem_ttm_helper.h > > > > @@ -10,11 +10,17 @@ > > > > ? #include > > > > ? #include > > > > ? +struct dma_buf_map; > > > > + > > > > ? #define drm_gem_ttm_of_gem(gem_obj) \ > > > > ????? container_of(gem_obj, struct ttm_buffer_object, base) > > > > ? ? void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int > > > > indent, > > > > ????????????????? const struct drm_gem_object *gem); > > > > +int drm_gem_ttm_vmap(struct drm_gem_object *gem, > > > > +???????????? struct dma_buf_map *map); > > > > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem, > > > > +??????????? struct dma_buf_map *map); > > > > ? int drm_gem_ttm_mmap(struct drm_gem_object *gem, > > > > ?????????????? struct vm_area_struct *vma); > > > > ? diff --git a/include/drm/ttm/ttm_bo_api.h > > > > b/include/drm/ttm/ttm_bo_api.h > > > > index 37102e45e496..2c59a785374c 100644 > > > > --- a/include/drm/ttm/ttm_bo_api.h > > > > +++ b/include/drm/ttm/ttm_bo_api.h > > > > @@ -48,6 +48,8 @@ struct ttm_bo_global; > > > > ? ? struct ttm_bo_device; > > > > ? +struct dma_buf_map; > > > > + > > > > ? struct drm_mm_node; > > > > ? ? struct ttm_placement; > > > > @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, > > > > unsigned long start_page, > > > > ?? */ > > > > ? void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map); > > > > ? +/** > > > > + * ttm_bo_vmap > > > > + * > > > > + * @bo: The buffer object. > > > > + * @map: pointer to a struct dma_buf_map representing the map. > > > > + * > > > > + * Sets up a kernel virtual mapping, using ioremap or vmap to the > > > > + * data in the buffer object. The parameter @map returns the virtual > > > > + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap(). > > > > + * > > > > + * Returns > > > > + * -ENOMEM: Out of memory. > > > > + * -EINVAL: Invalid range. > > > > + */ > > > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); > > > > + > > > > +/** > > > > + * ttm_bo_vunmap > > > > + * > > > > + * @bo: The buffer object. > > > > + * @map: Object describing the map to unmap. > > > > + * > > > > + * Unmaps a kernel map set up by ttm_bo_vmap(). > > > > + */ > > > > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map > > > > *map); > > > > + > > > > ? /** > > > > ?? * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object. > > > > ?? * > > > > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h > > > > index fd1aba545fdf..2e8bbecb5091 100644 > > > > --- a/include/linux/dma-buf-map.h > > > > +++ b/include/linux/dma-buf-map.h > > > > @@ -45,6 +45,12 @@ > > > > ?? * > > > > ?? *??? dma_buf_map_set_vaddr(&map. 0xdeadbeaf); > > > > ?? * > > > > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem(). > > > > + * > > > > + * .. code-block:: c > > > > + * > > > > + *??? dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); > > > > + * > > > > ?? * Test if a mapping is valid with either dma_buf_map_is_set() or > > > > ?? * dma_buf_map_is_null(). > > > > ?? * > > > > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct > > > > dma_buf_map *map, void *vaddr) > > > > ????? map->is_iomem = false; > > > > ? } > > > > ? +/** > > > > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to > > > > an address in I/O memory > > > > + * @map:??????? 
The dma-buf mapping structure > > > > + * @vaddr_iomem:??? An I/O-memory address > > > > + * > > > > + * Sets the address and the I/O-memory flag. > > > > + */ > > > > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map, > > > > +?????????????????????????? void __iomem *vaddr_iomem) > > > > +{ > > > > +??? map->vaddr_iomem = vaddr_iomem; > > > > +??? map->is_iomem = true; > > > > +} > > > > + > > > > ? /** > > > > ?? * dma_buf_map_is_equal - Compares two dma-buf mapping structures > > > > for equality > > > > ?? * @lhs:??? The dma-buf mapping structure > > > _______________________________________________ > > > dri-devel mailing list > > > dri-devel at lists.freedesktop.org > > > https://nam11.safelinks.protection.outlook.com/?url=https%3A%2F%2Flists.freedesktop.org%2Fmailman%2Flistinfo%2Fdri-devel&data=04%7C01%7Cchristian.koenig%40amd.com%7C07bc68af3c6440b5be8d08d8740e9b32%7C3dd8961fe4884e608e11a82d994e183d%7C0%7C0%7C637386953433558595%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=RlGCmjzyZERvqfnl4kA1bEHez5bkLf3F9OlKi2ybDAM%3D&reserved=0 > -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch From trix at redhat.com Mon Oct 19 16:31:15 2020 From: trix at redhat.com (trix at redhat.com) Date: Mon, 19 Oct 2020 09:31:15 -0700 Subject: [Spice-devel] [PATCH] drm: remove unneeded break Message-ID: <20201019163115.25814-1-trix@redhat.com> From: Tom Rix A break is not needed if it is preceded by a return or break Signed-off-by: Tom Rix --- drivers/gpu/drm/mgag200/mgag200_mode.c | 5 ----- drivers/gpu/drm/nouveau/nvkm/subdev/bios/pll.c | 1 - drivers/gpu/drm/nouveau/nvkm/subdev/clk/mcp77.c | 3 --- drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramnv50.c | 1 - drivers/gpu/drm/nouveau/nvkm/subdev/top/gk104.c | 1 - drivers/gpu/drm/qxl/qxl_ioctl.c | 1 - 6 files changed, 12 deletions(-) diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c index 38672f9e5c4f..bbe4e60dfd08 100644 --- a/drivers/gpu/drm/mgag200/mgag200_mode.c +++ b/drivers/gpu/drm/mgag200/mgag200_mode.c @@ -794,21 +794,16 @@ static int mgag200_crtc_set_plls(struct mga_device *mdev, long clock) case G200_SE_A: case G200_SE_B: return mga_g200se_set_plls(mdev, clock); - break; case G200_WB: case G200_EW3: return mga_g200wb_set_plls(mdev, clock); - break; case G200_EV: return mga_g200ev_set_plls(mdev, clock); - break; case G200_EH: case G200_EH3: return mga_g200eh_set_plls(mdev, clock); - break; case G200_ER: return mga_g200er_set_plls(mdev, clock); - break; } misc = RREG8(MGA_MISC_IN); diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/pll.c b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/pll.c index 350f10a3de37..2ec84b8a3b3a 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/bios/pll.c +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/bios/pll.c @@ -123,7 +123,6 @@ pll_map(struct nvkm_bios *bios) case NV_20: case NV_30: return nv04_pll_mapping; - break; case NV_40: return nv40_pll_mapping; case NV_50: diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/clk/mcp77.c b/drivers/gpu/drm/nouveau/nvkm/subdev/clk/mcp77.c index efa50274df97..4884eb4a9221 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/clk/mcp77.c +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/clk/mcp77.c @@ -140,17 +140,14 @@ mcp77_clk_read(struct nvkm_clk *base, enum nv_clk_src src) break; case nv_clk_src_mem: return 0; - break; case nv_clk_src_vdec: P = (read_div(clk) & 0x00000700) >> 8; switch (mast & 0x00400000) { case 0x00400000: return nvkm_clk_read(&clk->base, 
nv_clk_src_core) >> P; - break; default: return 500000 >> P; - break; } break; default: diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramnv50.c b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramnv50.c index 2ccb4b6be153..7b1eb44ff3da 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramnv50.c +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/fb/ramnv50.c @@ -171,7 +171,6 @@ nv50_ram_timing_read(struct nv50_ram *ram, u32 *timing) break; default: return -ENOSYS; - break; } T(WR) = ((timing[1] >> 24) & 0xff) - 1 - T(CWL); diff --git a/drivers/gpu/drm/nouveau/nvkm/subdev/top/gk104.c b/drivers/gpu/drm/nouveau/nvkm/subdev/top/gk104.c index e01746ce9fc4..1156634533f9 100644 --- a/drivers/gpu/drm/nouveau/nvkm/subdev/top/gk104.c +++ b/drivers/gpu/drm/nouveau/nvkm/subdev/top/gk104.c @@ -90,7 +90,6 @@ gk104_top_oneinit(struct nvkm_top *top) case 0x00000010: B_(NVDEC ); break; case 0x00000013: B_(CE ); break; case 0x00000014: C_(GSP ); break; - break; default: break; } diff --git a/drivers/gpu/drm/qxl/qxl_ioctl.c b/drivers/gpu/drm/qxl/qxl_ioctl.c index 5cea6eea72ab..2072ddc9549c 100644 --- a/drivers/gpu/drm/qxl/qxl_ioctl.c +++ b/drivers/gpu/drm/qxl/qxl_ioctl.c @@ -160,7 +160,6 @@ static int qxl_process_single_command(struct qxl_device *qdev, default: DRM_DEBUG("Only draw commands in execbuffers\n"); return -EINVAL; - break; } if (cmd->command_size > PAGE_SIZE - sizeof(union qxl_release_info)) -- 2.18.1 From christian.koenig at amd.com Tue Oct 20 13:39:35 2020 From: christian.koenig at amd.com (=?UTF-8?Q?Christian_K=c3=b6nig?=) Date: Tue, 20 Oct 2020 15:39:35 +0200 Subject: [Spice-devel] [PATCH v5 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers In-Reply-To: <20201020122046.31167-6-tzimmermann@suse.de> References: <20201020122046.31167-1-tzimmermann@suse.de> <20201020122046.31167-6-tzimmermann@suse.de> Message-ID: <0d936127-ff9b-ace1-97b8-bdbc01921a65@amd.com> Am 20.10.20 um 14:20 schrieb Thomas Zimmermann: > The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel > address space. The mapping's address is returned as struct dma_buf_map. > Each function is a simplified version of TTM's existing kmap code. Both > functions respect the memory's location ani/or writecombine flags. > > On top TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(), > two helpers that convert a GEM object into the TTM BO and forward the call > to TTM's vmap/vunmap. These helpers can be dropped into the rsp GEM object > callbacks. 
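As a rough sketch of what "dropped into the GEM object callbacks" means in practice: the pattern below mirrors the amdgpu conversion in patch 6 of this series; "foo" is a placeholder driver name and the driver's remaining callbacks are omitted, so only drm_gem_ttm_vmap()/drm_gem_ttm_vunmap() are taken from the patch.

#include <drm/drm_gem.h>
#include <drm/drm_gem_ttm_helper.h>

static const struct drm_gem_object_funcs foo_gem_object_funcs = {
        /* ... free, open, close, export, mmap callbacks as before ... */
        .vmap   = drm_gem_ttm_vmap,     /* maps the TTM BO and fills a struct dma_buf_map */
        .vunmap = drm_gem_ttm_vunmap,   /* tears that mapping down again */
};

The driver then needs no hand-rolled vmap path of its own; TTM decides whether the mapping ends up in system or I/O memory.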
> > v5: > * use size_t for storing mapping size (Christian) > * ignore premapped memory areas correctly in ttm_bo_vunmap() > * rebase onto latest TTM interfaces (Christian) > * remove BUG() from ttm_bo_vmap() (Christian) > v4: > * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel, > Christian) > > Signed-off-by: Thomas Zimmermann > Acked-by: Daniel Vetter > Tested-by: Sam Ravnborg Reviewed-by: Christian K?nig > --- > drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++ > drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++ > include/drm/drm_gem_ttm_helper.h | 6 +++ > include/drm/ttm/ttm_bo_api.h | 28 +++++++++++ > include/linux/dma-buf-map.h | 20 ++++++++ > 5 files changed, 164 insertions(+) > > diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c > index 0e4fb9ba43ad..db4c14d78a30 100644 > --- a/drivers/gpu/drm/drm_gem_ttm_helper.c > +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c > @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent, > } > EXPORT_SYMBOL(drm_gem_ttm_print_info); > > +/** > + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object > + * @gem: GEM object. > + * @map: [out] returns the dma-buf mapping. > + * > + * Maps a GEM object with ttm_bo_vmap(). This function can be used as > + * &drm_gem_object_funcs.vmap callback. > + * > + * Returns: > + * 0 on success, or a negative errno code otherwise. > + */ > +int drm_gem_ttm_vmap(struct drm_gem_object *gem, > + struct dma_buf_map *map) > +{ > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); > + > + return ttm_bo_vmap(bo, map); > + > +} > +EXPORT_SYMBOL(drm_gem_ttm_vmap); > + > +/** > + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object > + * @gem: GEM object. > + * @map: dma-buf mapping. > + * > + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as > + * &drm_gem_object_funcs.vmap callback. > + */ > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem, > + struct dma_buf_map *map) > +{ > + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); > + > + ttm_bo_vunmap(bo, map); > +} > +EXPORT_SYMBOL(drm_gem_ttm_vunmap); > + > /** > * drm_gem_ttm_mmap() - mmap &ttm_buffer_object > * @gem: GEM object. 
> diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c > index ba7ab5ed85d0..5c79418405ea 100644 > --- a/drivers/gpu/drm/ttm/ttm_bo_util.c > +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c > @@ -32,6 +32,7 @@ > #include > #include > #include > +#include > #include > #include > #include > @@ -527,6 +528,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map) > } > EXPORT_SYMBOL(ttm_bo_kunmap); > > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map) > +{ > + struct ttm_resource *mem = &bo->mem; > + int ret; > + > + ret = ttm_mem_io_reserve(bo->bdev, mem); > + if (ret) > + return ret; > + > + if (mem->bus.is_iomem) { > + void __iomem *vaddr_iomem; > + size_t size = bo->num_pages << PAGE_SHIFT; > + > + if (mem->bus.addr) > + vaddr_iomem = (void __iomem *)mem->bus.addr; > + else if (mem->bus.caching == ttm_write_combined) > + vaddr_iomem = ioremap_wc(mem->bus.offset, size); > + else > + vaddr_iomem = ioremap(mem->bus.offset, size); > + > + if (!vaddr_iomem) > + return -ENOMEM; > + > + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem); > + > + } else { > + struct ttm_operation_ctx ctx = { > + .interruptible = false, > + .no_wait_gpu = false > + }; > + struct ttm_tt *ttm = bo->ttm; > + pgprot_t prot; > + void *vaddr; > + > + ret = ttm_tt_populate(bo->bdev, ttm, &ctx); > + if (ret) > + return ret; > + > + /* > + * We need to use vmap to get the desired page protection > + * or to make the buffer object look contiguous. > + */ > + prot = ttm_io_prot(bo, mem, PAGE_KERNEL); > + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot); > + if (!vaddr) > + return -ENOMEM; > + > + dma_buf_map_set_vaddr(map, vaddr); > + } > + > + return 0; > +} > +EXPORT_SYMBOL(ttm_bo_vmap); > + > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map) > +{ > + struct ttm_resource *mem = &bo->mem; > + > + if (dma_buf_map_is_null(map)) > + return; > + > + if (!map->is_iomem) > + vunmap(map->vaddr); > + else if (!mem->bus.addr) > + iounmap(map->vaddr_iomem); > + dma_buf_map_clear(map); > + > + ttm_mem_io_free(bo->bdev, &bo->mem); > +} > +EXPORT_SYMBOL(ttm_bo_vunmap); > + > static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo, > bool dst_use_tt) > { > diff --git a/include/drm/drm_gem_ttm_helper.h b/include/drm/drm_gem_ttm_helper.h > index 118cef76f84f..7c6d874910b8 100644 > --- a/include/drm/drm_gem_ttm_helper.h > +++ b/include/drm/drm_gem_ttm_helper.h > @@ -10,11 +10,17 @@ > #include > #include > > +struct dma_buf_map; > + > #define drm_gem_ttm_of_gem(gem_obj) \ > container_of(gem_obj, struct ttm_buffer_object, base) > > void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent, > const struct drm_gem_object *gem); > +int drm_gem_ttm_vmap(struct drm_gem_object *gem, > + struct dma_buf_map *map); > +void drm_gem_ttm_vunmap(struct drm_gem_object *gem, > + struct dma_buf_map *map); > int drm_gem_ttm_mmap(struct drm_gem_object *gem, > struct vm_area_struct *vma); > > diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h > index 37102e45e496..2c59a785374c 100644 > --- a/include/drm/ttm/ttm_bo_api.h > +++ b/include/drm/ttm/ttm_bo_api.h > @@ -48,6 +48,8 @@ struct ttm_bo_global; > > struct ttm_bo_device; > > +struct dma_buf_map; > + > struct drm_mm_node; > > struct ttm_placement; > @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, unsigned long start_page, > */ > void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map); > > +/** > + * ttm_bo_vmap > + * > + * @bo: The buffer object. 
> + * @map: pointer to a struct dma_buf_map representing the map. > + * > + * Sets up a kernel virtual mapping, using ioremap or vmap to the > + * data in the buffer object. The parameter @map returns the virtual > + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap(). > + * > + * Returns > + * -ENOMEM: Out of memory. > + * -EINVAL: Invalid range. > + */ > +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); > + > +/** > + * ttm_bo_vunmap > + * > + * @bo: The buffer object. > + * @map: Object describing the map to unmap. > + * > + * Unmaps a kernel map set up by ttm_bo_vmap(). > + */ > +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); > + > /** > * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object. > * > diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h > index fd1aba545fdf..2e8bbecb5091 100644 > --- a/include/linux/dma-buf-map.h > +++ b/include/linux/dma-buf-map.h > @@ -45,6 +45,12 @@ > * > * dma_buf_map_set_vaddr(&map. 0xdeadbeaf); > * > + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem(). > + * > + * .. code-block:: c > + * > + * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); > + * > * Test if a mapping is valid with either dma_buf_map_is_set() or > * dma_buf_map_is_null(). > * > @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr) > map->is_iomem = false; > } > > +/** > + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an address in I/O memory > + * @map: The dma-buf mapping structure > + * @vaddr_iomem: An I/O-memory address > + * > + * Sets the address and the I/O-memory flag. > + */ > +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map, > + void __iomem *vaddr_iomem) > +{ > + map->vaddr_iomem = vaddr_iomem; > + map->is_iomem = true; > +} > + > /** > * dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality > * @lhs: The dma-buf mapping structure From tzimmermann at suse.de Tue Oct 20 12:20:39 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Tue, 20 Oct 2020 14:20:39 +0200 Subject: [Spice-devel] [PATCH v5 03/10] drm/etnaviv: Remove empty etnaviv_gem_prime_vunmap() In-Reply-To: <20201020122046.31167-1-tzimmermann@suse.de> References: <20201020122046.31167-1-tzimmermann@suse.de> Message-ID: <20201020122046.31167-4-tzimmermann@suse.de> The function etnaviv_gem_prime_vunmap() is empty. Remove it before changing the interface to use struct drm_buf_map. 
Signed-off-by: Thomas Zimmermann Acked-by: Christian K?nig Tested-by: Sam Ravnborg --- drivers/gpu/drm/etnaviv/etnaviv_drv.h | 1 - drivers/gpu/drm/etnaviv/etnaviv_gem.c | 1 - drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 5 ----- 3 files changed, 7 deletions(-) diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h index 914f0867ff71..9682c26d89bb 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h @@ -52,7 +52,6 @@ int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma); int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset); struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj); void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj); -void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); int etnaviv_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev, diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c index 67d9a2b9ea6a..bbd235473645 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c @@ -571,7 +571,6 @@ static const struct drm_gem_object_funcs etnaviv_gem_object_funcs = { .unpin = etnaviv_gem_prime_unpin, .get_sg_table = etnaviv_gem_prime_get_sg_table, .vmap = etnaviv_gem_prime_vmap, - .vunmap = etnaviv_gem_prime_vunmap, .vm_ops = &vm_ops, }; diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c index 135fbff6fecf..a6d9932a32ae 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c @@ -27,11 +27,6 @@ void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj) return etnaviv_gem_vmap(obj); } -void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) -{ - /* TODO msm_gem_vunmap() */ -} - int etnaviv_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma) { -- 2.28.0 From tzimmermann at suse.de Tue Oct 20 12:20:38 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Tue, 20 Oct 2020 14:20:38 +0200 Subject: [Spice-devel] [PATCH v5 02/10] drm/cma-helper: Remove empty drm_gem_cma_prime_vunmap() In-Reply-To: <20201020122046.31167-1-tzimmermann@suse.de> References: <20201020122046.31167-1-tzimmermann@suse.de> Message-ID: <20201020122046.31167-3-tzimmermann@suse.de> The function drm_gem_cma_prime_vunmap() is empty. Remove it before changing the interface to use struct drm_buf_map. Signed-off-by: Thomas Zimmermann Reviewed-by: Christian K?nig Tested-by: Sam Ravnborg --- drivers/gpu/drm/drm_gem_cma_helper.c | 17 ----------------- drivers/gpu/drm/vc4/vc4_bo.c | 1 - include/drm/drm_gem_cma_helper.h | 1 - 3 files changed, 19 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c index 2165633c9b9e..d527485ea0b7 100644 --- a/drivers/gpu/drm/drm_gem_cma_helper.c +++ b/drivers/gpu/drm/drm_gem_cma_helper.c @@ -537,23 +537,6 @@ void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj) } EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap); -/** - * drm_gem_cma_prime_vunmap - unmap a CMA GEM object from the kernel's virtual - * address space - * @obj: GEM object - * @vaddr: kernel virtual address where the CMA GEM object was mapped - * - * This function removes a buffer exported via DRM PRIME from the kernel's - * virtual address space. This is a no-op because CMA buffers cannot be - * unmapped from kernel space. 
Drivers using the CMA helpers should set this - * as their &drm_gem_object_funcs.vunmap callback. - */ -void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr) -{ - /* Nothing to do */ -} -EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vunmap); - static const struct drm_gem_object_funcs drm_gem_cma_default_funcs = { .free = drm_gem_cma_free_object, .print_info = drm_gem_cma_print_info, diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c index f432278173cd..557f0d1e6437 100644 --- a/drivers/gpu/drm/vc4/vc4_bo.c +++ b/drivers/gpu/drm/vc4/vc4_bo.c @@ -387,7 +387,6 @@ static const struct drm_gem_object_funcs vc4_gem_object_funcs = { .export = vc4_prime_export, .get_sg_table = drm_gem_cma_prime_get_sg_table, .vmap = vc4_prime_vmap, - .vunmap = drm_gem_cma_prime_vunmap, .vm_ops = &vc4_vm_ops, }; diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h index 2bfa2502607a..a064b0d1c480 100644 --- a/include/drm/drm_gem_cma_helper.h +++ b/include/drm/drm_gem_cma_helper.h @@ -104,7 +104,6 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev, int drm_gem_cma_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj); -void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr); struct drm_gem_object * drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size); -- 2.28.0 From tzimmermann at suse.de Tue Oct 20 12:20:36 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Tue, 20 Oct 2020 14:20:36 +0200 Subject: [Spice-devel] [PATCH v5 00/10] Support GEM object mappings from I/O memory Message-ID: <20201020122046.31167-1-tzimmermann@suse.de> DRM's fbdev console uses regular load and store operations to update framebuffer memory. The bochs driver on sparc64 requires the use of I/O-specific load and store operations. We have a workaround, but need a long-term solution to the problem. This patchset changes GEM's vmap/vunmap interfaces to forward pointers of type struct dma_buf_map and updates the generic fbdev emulation to use them correctly. This enables I/O-memory operations on all framebuffers that require and support them. Patches #1 to #4 prepare VRAM helpers and drivers. Next is the update of the GEM vmap functions. Patch #5 adds vmap and vunmap that is usable with TTM-based GEM drivers, and patch #6 updates GEM's vmap/vunmap callback to forward instances of type struct dma_buf_map. While the patch touches many files throughout the DRM modules, the applied changes are mostly trivial interface fixes. Several TTM-based GEM drivers now use the new vmap code. Patch #7 updates GEM's internal vmap/vunmap functions to forward struct dma_buf_map. With struct dma_buf_map propagated through the layers, patches #8 to #10 convert DRM clients and generic fbdev emulation to use it. Updating the fbdev framebuffer will select the correct functions, either for system or I/O memory. 
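The gist of the change, as a hand-written sketch rather than code from the patches: once a mapping is described by struct dma_buf_map, a caller selects the accessor that matches the memory location instead of dereferencing a raw pointer. Only the struct fields come from the series; the helper name is made up for illustration.

#include <linux/dma-buf-map.h>
#include <linux/io.h>
#include <linux/string.h>

/* Illustrative only: copy one line of pixels into a mapped framebuffer. */
static void foo_fb_copy_line(struct dma_buf_map *dst, const void *src, size_t len)
{
        if (dst->is_iomem)
                memcpy_toio(dst->vaddr_iomem, src, len);        /* buffer in I/O memory, e.g. VRAM */
        else
                memcpy(dst->vaddr, src, len);                   /* buffer in system memory */
}

This is exactly the decision the sparc64/bochs case needs: plain loads and stores do not work on such an I/O-memory framebuffer, so that branch has to go through the I/O accessors.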
v5: * rebase onto latest TTM changes (Chrsitian) * support TTM premapped memory correctly (Christian) * implement fb_read/fb_write internally (Sam, Daniel) * cleanups v4: * provide TTM vmap/vunmap plus GEM helpers and convert drivers over (Christian, Daniel) * remove several empty functions * more TODOs and documentation (Daniel) v3: * recreate the whole patchset on top of struct dma_buf_map v2: * RFC patchset Thomas Zimmermann (10): drm/vram-helper: Remove invariant parameters from internal kmap function drm/cma-helper: Remove empty drm_gem_cma_prime_vunmap() drm/etnaviv: Remove empty etnaviv_gem_prime_vunmap() drm/exynos: Remove empty exynos_drm_gem_prime_{vmap,vunmap}() drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends drm/gem: Update internal GEM vmap/vunmap interfaces to use struct dma_buf_map drm/gem: Store client buffer mappings as struct dma_buf_map dma-buf-map: Add memcpy and pointer-increment interfaces drm/fb_helper: Support framebuffers in I/O memory Documentation/gpu/todo.rst | 37 ++- drivers/gpu/drm/Kconfig | 2 + drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 36 --- drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h | 2 - drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 5 +- drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 1 - drivers/gpu/drm/ast/ast_cursor.c | 27 +-- drivers/gpu/drm/ast/ast_drv.h | 7 +- drivers/gpu/drm/bochs/bochs_kms.c | 1 - drivers/gpu/drm/drm_client.c | 38 +-- drivers/gpu/drm/drm_fb_helper.c | 248 ++++++++++++++++++-- drivers/gpu/drm/drm_gem.c | 29 ++- drivers/gpu/drm/drm_gem_cma_helper.c | 27 +-- drivers/gpu/drm/drm_gem_shmem_helper.c | 48 ++-- drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++ drivers/gpu/drm/drm_gem_vram_helper.c | 117 +++++---- drivers/gpu/drm/drm_internal.h | 5 +- drivers/gpu/drm/drm_prime.c | 14 +- drivers/gpu/drm/etnaviv/etnaviv_drv.h | 3 +- drivers/gpu/drm/etnaviv/etnaviv_gem.c | 1 - drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 12 +- drivers/gpu/drm/exynos/exynos_drm_gem.c | 12 - drivers/gpu/drm/exynos/exynos_drm_gem.h | 2 - drivers/gpu/drm/lima/lima_gem.c | 6 +- drivers/gpu/drm/lima/lima_sched.c | 11 +- drivers/gpu/drm/mgag200/mgag200_mode.c | 10 +- drivers/gpu/drm/nouveau/Kconfig | 1 + drivers/gpu/drm/nouveau/nouveau_bo.h | 2 - drivers/gpu/drm/nouveau/nouveau_gem.c | 6 +- drivers/gpu/drm/nouveau/nouveau_gem.h | 2 - drivers/gpu/drm/nouveau/nouveau_prime.c | 20 -- drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 14 +- drivers/gpu/drm/qxl/qxl_display.c | 11 +- drivers/gpu/drm/qxl/qxl_draw.c | 14 +- drivers/gpu/drm/qxl/qxl_drv.h | 11 +- drivers/gpu/drm/qxl/qxl_object.c | 31 ++- drivers/gpu/drm/qxl/qxl_object.h | 2 +- drivers/gpu/drm/qxl/qxl_prime.c | 12 +- drivers/gpu/drm/radeon/radeon.h | 1 - drivers/gpu/drm/radeon/radeon_gem.c | 7 +- drivers/gpu/drm/radeon/radeon_prime.c | 20 -- drivers/gpu/drm/rockchip/rockchip_drm_gem.c | 22 +- drivers/gpu/drm/rockchip/rockchip_drm_gem.h | 4 +- drivers/gpu/drm/tiny/cirrus.c | 10 +- drivers/gpu/drm/tiny/gm12u320.c | 10 +- drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++ drivers/gpu/drm/udl/udl_modeset.c | 8 +- drivers/gpu/drm/vboxvideo/vbox_mode.c | 11 +- drivers/gpu/drm/vc4/vc4_bo.c | 7 +- drivers/gpu/drm/vc4/vc4_drv.h | 2 +- drivers/gpu/drm/vgem/vgem_drv.c | 16 +- drivers/gpu/drm/vkms/vkms_plane.c | 15 +- drivers/gpu/drm/vkms/vkms_writeback.c | 22 +- drivers/gpu/drm/xen/xen_drm_front_gem.c | 18 +- drivers/gpu/drm/xen/xen_drm_front_gem.h | 6 +- include/drm/drm_client.h | 7 +- include/drm/drm_gem.h | 5 +- include/drm/drm_gem_cma_helper.h | 3 +- 
include/drm/drm_gem_shmem_helper.h | 4 +- include/drm/drm_gem_ttm_helper.h | 6 + include/drm/drm_gem_vram_helper.h | 14 +- include/drm/drm_mode_config.h | 12 - include/drm/ttm/ttm_bo_api.h | 28 +++ include/linux/dma-buf-map.h | 93 +++++++- 64 files changed, 852 insertions(+), 436 deletions(-) -- 2.28.0 From tzimmermann at suse.de Tue Oct 20 12:20:40 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Tue, 20 Oct 2020 14:20:40 +0200 Subject: [Spice-devel] [PATCH v5 04/10] drm/exynos: Remove empty exynos_drm_gem_prime_{vmap, vunmap}() In-Reply-To: <20201020122046.31167-1-tzimmermann@suse.de> References: <20201020122046.31167-1-tzimmermann@suse.de> Message-ID: <20201020122046.31167-5-tzimmermann@suse.de> The functions exynos_drm_gem_prime_{vmap,vunmap}() are empty. Remove them before changing the interface to use struct drm_buf_map. As a side effect of removing drm_gem_prime_vmap(), the error code changes from ENOMEM to EOPNOTSUPP. Signed-off-by: Thomas Zimmermann Acked-by: Christian K?nig Tested-by: Sam Ravnborg --- drivers/gpu/drm/exynos/exynos_drm_gem.c | 12 ------------ drivers/gpu/drm/exynos/exynos_drm_gem.h | 2 -- 2 files changed, 14 deletions(-) diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c index e7a6eb96f692..13a35623ac04 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_gem.c +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c @@ -137,8 +137,6 @@ static const struct vm_operations_struct exynos_drm_gem_vm_ops = { static const struct drm_gem_object_funcs exynos_drm_gem_object_funcs = { .free = exynos_drm_gem_free_object, .get_sg_table = exynos_drm_gem_prime_get_sg_table, - .vmap = exynos_drm_gem_prime_vmap, - .vunmap = exynos_drm_gem_prime_vunmap, .vm_ops = &exynos_drm_gem_vm_ops, }; @@ -471,16 +469,6 @@ exynos_drm_gem_prime_import_sg_table(struct drm_device *dev, return &exynos_gem->base; } -void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj) -{ - return NULL; -} - -void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) -{ - /* Nothing to do */ -} - int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma) { diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.h b/drivers/gpu/drm/exynos/exynos_drm_gem.h index 74e926abeff0..a23272fb96fb 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_gem.h +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.h @@ -107,8 +107,6 @@ struct drm_gem_object * exynos_drm_gem_prime_import_sg_table(struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sgt); -void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj); -void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); -- 2.28.0 From tzimmermann at suse.de Tue Oct 20 12:20:41 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Tue, 20 Oct 2020 14:20:41 +0200 Subject: [Spice-devel] [PATCH v5 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers In-Reply-To: <20201020122046.31167-1-tzimmermann@suse.de> References: <20201020122046.31167-1-tzimmermann@suse.de> Message-ID: <20201020122046.31167-6-tzimmermann@suse.de> The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel address space. The mapping's address is returned as struct dma_buf_map. Each function is a simplified version of TTM's existing kmap code. Both functions respect the memory's location ani/or writecombine flags. 
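A minimal usage sketch (the caller and the memset are made up for illustration; only ttm_bo_vmap(), ttm_bo_vunmap() and struct dma_buf_map come from this patch):

#include <linux/dma-buf-map.h>
#include <linux/io.h>
#include <linux/string.h>
#include <drm/ttm/ttm_bo_api.h>

static int foo_clear_bo(struct ttm_buffer_object *bo, size_t size)
{
        struct dma_buf_map map;
        int ret;

        ret = ttm_bo_vmap(bo, &map);    /* fills map.vaddr or map.vaddr_iomem */
        if (ret)
                return ret;

        if (map.is_iomem)
                memset_io(map.vaddr_iomem, 0, size);
        else
                memset(map.vaddr, 0, size);

        ttm_bo_vunmap(bo, &map);        /* unmaps and clears the mapping descriptor */
        return 0;
}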
On top TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(), two helpers that convert a GEM object into the TTM BO and forward the call to TTM's vmap/vunmap. These helpers can be dropped into the rsp GEM object callbacks. v5: * use size_t for storing mapping size (Christian) * ignore premapped memory areas correctly in ttm_bo_vunmap() * rebase onto latest TTM interfaces (Christian) * remove BUG() from ttm_bo_vmap() (Christian) v4: * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel, Christian) Signed-off-by: Thomas Zimmermann Acked-by: Daniel Vetter Tested-by: Sam Ravnborg --- drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++ drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++ include/drm/drm_gem_ttm_helper.h | 6 +++ include/drm/ttm/ttm_bo_api.h | 28 +++++++++++ include/linux/dma-buf-map.h | 20 ++++++++ 5 files changed, 164 insertions(+) diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c index 0e4fb9ba43ad..db4c14d78a30 100644 --- a/drivers/gpu/drm/drm_gem_ttm_helper.c +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent, } EXPORT_SYMBOL(drm_gem_ttm_print_info); +/** + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object + * @gem: GEM object. + * @map: [out] returns the dma-buf mapping. + * + * Maps a GEM object with ttm_bo_vmap(). This function can be used as + * &drm_gem_object_funcs.vmap callback. + * + * Returns: + * 0 on success, or a negative errno code otherwise. + */ +int drm_gem_ttm_vmap(struct drm_gem_object *gem, + struct dma_buf_map *map) +{ + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); + + return ttm_bo_vmap(bo, map); + +} +EXPORT_SYMBOL(drm_gem_ttm_vmap); + +/** + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object + * @gem: GEM object. + * @map: dma-buf mapping. + * + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as + * &drm_gem_object_funcs.vmap callback. + */ +void drm_gem_ttm_vunmap(struct drm_gem_object *gem, + struct dma_buf_map *map) +{ + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); + + ttm_bo_vunmap(bo, map); +} +EXPORT_SYMBOL(drm_gem_ttm_vunmap); + /** * drm_gem_ttm_mmap() - mmap &ttm_buffer_object * @gem: GEM object. 
diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c index ba7ab5ed85d0..5c79418405ea 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c @@ -32,6 +32,7 @@ #include #include #include +#include #include #include #include @@ -527,6 +528,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map) } EXPORT_SYMBOL(ttm_bo_kunmap); +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map) +{ + struct ttm_resource *mem = &bo->mem; + int ret; + + ret = ttm_mem_io_reserve(bo->bdev, mem); + if (ret) + return ret; + + if (mem->bus.is_iomem) { + void __iomem *vaddr_iomem; + size_t size = bo->num_pages << PAGE_SHIFT; + + if (mem->bus.addr) + vaddr_iomem = (void __iomem *)mem->bus.addr; + else if (mem->bus.caching == ttm_write_combined) + vaddr_iomem = ioremap_wc(mem->bus.offset, size); + else + vaddr_iomem = ioremap(mem->bus.offset, size); + + if (!vaddr_iomem) + return -ENOMEM; + + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem); + + } else { + struct ttm_operation_ctx ctx = { + .interruptible = false, + .no_wait_gpu = false + }; + struct ttm_tt *ttm = bo->ttm; + pgprot_t prot; + void *vaddr; + + ret = ttm_tt_populate(bo->bdev, ttm, &ctx); + if (ret) + return ret; + + /* + * We need to use vmap to get the desired page protection + * or to make the buffer object look contiguous. + */ + prot = ttm_io_prot(bo, mem, PAGE_KERNEL); + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot); + if (!vaddr) + return -ENOMEM; + + dma_buf_map_set_vaddr(map, vaddr); + } + + return 0; +} +EXPORT_SYMBOL(ttm_bo_vmap); + +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map) +{ + struct ttm_resource *mem = &bo->mem; + + if (dma_buf_map_is_null(map)) + return; + + if (!map->is_iomem) + vunmap(map->vaddr); + else if (!mem->bus.addr) + iounmap(map->vaddr_iomem); + dma_buf_map_clear(map); + + ttm_mem_io_free(bo->bdev, &bo->mem); +} +EXPORT_SYMBOL(ttm_bo_vunmap); + static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo, bool dst_use_tt) { diff --git a/include/drm/drm_gem_ttm_helper.h b/include/drm/drm_gem_ttm_helper.h index 118cef76f84f..7c6d874910b8 100644 --- a/include/drm/drm_gem_ttm_helper.h +++ b/include/drm/drm_gem_ttm_helper.h @@ -10,11 +10,17 @@ #include #include +struct dma_buf_map; + #define drm_gem_ttm_of_gem(gem_obj) \ container_of(gem_obj, struct ttm_buffer_object, base) void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent, const struct drm_gem_object *gem); +int drm_gem_ttm_vmap(struct drm_gem_object *gem, + struct dma_buf_map *map); +void drm_gem_ttm_vunmap(struct drm_gem_object *gem, + struct dma_buf_map *map); int drm_gem_ttm_mmap(struct drm_gem_object *gem, struct vm_area_struct *vma); diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h index 37102e45e496..2c59a785374c 100644 --- a/include/drm/ttm/ttm_bo_api.h +++ b/include/drm/ttm/ttm_bo_api.h @@ -48,6 +48,8 @@ struct ttm_bo_global; struct ttm_bo_device; +struct dma_buf_map; + struct drm_mm_node; struct ttm_placement; @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, unsigned long start_page, */ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map); +/** + * ttm_bo_vmap + * + * @bo: The buffer object. + * @map: pointer to a struct dma_buf_map representing the map. + * + * Sets up a kernel virtual mapping, using ioremap or vmap to the + * data in the buffer object. The parameter @map returns the virtual + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap(). 
+ * + * Returns + * -ENOMEM: Out of memory. + * -EINVAL: Invalid range. + */ +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); + +/** + * ttm_bo_vunmap + * + * @bo: The buffer object. + * @map: Object describing the map to unmap. + * + * Unmaps a kernel map set up by ttm_bo_vmap(). + */ +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); + /** * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object. * diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h index fd1aba545fdf..2e8bbecb5091 100644 --- a/include/linux/dma-buf-map.h +++ b/include/linux/dma-buf-map.h @@ -45,6 +45,12 @@ * * dma_buf_map_set_vaddr(&map. 0xdeadbeaf); * + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem(). + * + * .. code-block:: c + * + * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); + * * Test if a mapping is valid with either dma_buf_map_is_set() or * dma_buf_map_is_null(). * @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr) map->is_iomem = false; } +/** + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an address in I/O memory + * @map: The dma-buf mapping structure + * @vaddr_iomem: An I/O-memory address + * + * Sets the address and the I/O-memory flag. + */ +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map, + void __iomem *vaddr_iomem) +{ + map->vaddr_iomem = vaddr_iomem; + map->is_iomem = true; +} + /** * dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality * @lhs: The dma-buf mapping structure -- 2.28.0 From tzimmermann at suse.de Tue Oct 20 12:20:42 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Tue, 20 Oct 2020 14:20:42 +0200 Subject: [Spice-devel] [PATCH v5 06/10] drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends In-Reply-To: <20201020122046.31167-1-tzimmermann@suse.de> References: <20201020122046.31167-1-tzimmermann@suse.de> Message-ID: <20201020122046.31167-7-tzimmermann@suse.de> This patch replaces the vmap/vunmap's use of raw pointers in GEM object functions with instances of struct dma_buf_map. GEM backends are converted as well. For most of them, this simply changes the returned type. TTM-based drivers now return information about the location of the memory, either system or I/O memory. GEM VRAM helpers and qxl now use ttm_bo_vmap() et al. Amdgpu, nouveau and radeon use drm_gem_ttm_vmap() et al instead of implementing their own vmap callbacks. v5: * update vkms after switch to shmem v4: * use ttm_bo_vmap(), drm_gem_ttm_vmap(), et al. 
(Daniel, Christian) * fix a trailing { in drm_gem_vmap() * remove several empty functions instead of converting them (Daniel) * comment uses of raw pointers with a TODO (Daniel) * TODO list: convert more helpers to use struct dma_buf_map Signed-off-by: Thomas Zimmermann Acked-by: Christian K?nig Tested-by: Sam Ravnborg --- Documentation/gpu/todo.rst | 18 ++++ drivers/gpu/drm/Kconfig | 2 + drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 36 ------- drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h | 2 - drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 5 +- drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 1 - drivers/gpu/drm/ast/ast_cursor.c | 27 +++-- drivers/gpu/drm/ast/ast_drv.h | 7 +- drivers/gpu/drm/drm_gem.c | 23 +++-- drivers/gpu/drm/drm_gem_cma_helper.c | 10 +- drivers/gpu/drm/drm_gem_shmem_helper.c | 48 +++++---- drivers/gpu/drm/drm_gem_vram_helper.c | 107 ++++++++++---------- drivers/gpu/drm/etnaviv/etnaviv_drv.h | 2 +- drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 9 +- drivers/gpu/drm/lima/lima_gem.c | 6 +- drivers/gpu/drm/lima/lima_sched.c | 11 +- drivers/gpu/drm/mgag200/mgag200_mode.c | 10 +- drivers/gpu/drm/nouveau/Kconfig | 1 + drivers/gpu/drm/nouveau/nouveau_bo.h | 2 - drivers/gpu/drm/nouveau/nouveau_gem.c | 6 +- drivers/gpu/drm/nouveau/nouveau_gem.h | 2 - drivers/gpu/drm/nouveau/nouveau_prime.c | 20 ---- drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 14 +-- drivers/gpu/drm/qxl/qxl_display.c | 11 +- drivers/gpu/drm/qxl/qxl_draw.c | 14 ++- drivers/gpu/drm/qxl/qxl_drv.h | 11 +- drivers/gpu/drm/qxl/qxl_object.c | 31 +++--- drivers/gpu/drm/qxl/qxl_object.h | 2 +- drivers/gpu/drm/qxl/qxl_prime.c | 12 +-- drivers/gpu/drm/radeon/radeon.h | 1 - drivers/gpu/drm/radeon/radeon_gem.c | 7 +- drivers/gpu/drm/radeon/radeon_prime.c | 20 ---- drivers/gpu/drm/rockchip/rockchip_drm_gem.c | 22 ++-- drivers/gpu/drm/rockchip/rockchip_drm_gem.h | 4 +- drivers/gpu/drm/tiny/cirrus.c | 10 +- drivers/gpu/drm/tiny/gm12u320.c | 10 +- drivers/gpu/drm/udl/udl_modeset.c | 8 +- drivers/gpu/drm/vboxvideo/vbox_mode.c | 11 +- drivers/gpu/drm/vc4/vc4_bo.c | 6 +- drivers/gpu/drm/vc4/vc4_drv.h | 2 +- drivers/gpu/drm/vgem/vgem_drv.c | 16 ++- drivers/gpu/drm/vkms/vkms_plane.c | 15 ++- drivers/gpu/drm/vkms/vkms_writeback.c | 22 ++-- drivers/gpu/drm/xen/xen_drm_front_gem.c | 18 ++-- drivers/gpu/drm/xen/xen_drm_front_gem.h | 6 +- include/drm/drm_gem.h | 5 +- include/drm/drm_gem_cma_helper.h | 2 +- include/drm/drm_gem_shmem_helper.h | 4 +- include/drm/drm_gem_vram_helper.h | 14 +-- 49 files changed, 345 insertions(+), 308 deletions(-) diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst index 700637e25ecd..7e6fc3c04add 100644 --- a/Documentation/gpu/todo.rst +++ b/Documentation/gpu/todo.rst @@ -446,6 +446,24 @@ Contact: Ville Syrj?l?, Daniel Vetter Level: Intermediate +Use struct dma_buf_map throughout codebase +------------------------------------------ + +Pointers to shared device memory are stored in struct dma_buf_map. Each +instance knows whether it refers to system or I/O memory. Most of the DRM-wide +interface have been converted to use struct dma_buf_map, but implementations +often still use raw pointers. + +The task is to use struct dma_buf_map where it makes sense. + +* Memory managers should use struct dma_buf_map for dma-buf-imported buffers. +* TTM might benefit from using struct dma_buf_map internally. +* Framebuffer copying and blitting helpers should operate on struct dma_buf_map. 
+ +Contact: Thomas Zimmermann , Christian K?nig, Daniel Vetter + +Level: Intermediate + Core refactorings ================= diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig index 32257189e09b..e479b04e955e 100644 --- a/drivers/gpu/drm/Kconfig +++ b/drivers/gpu/drm/Kconfig @@ -239,6 +239,7 @@ config DRM_RADEON select FW_LOADER select DRM_KMS_HELPER select DRM_TTM + select DRM_TTM_HELPER select POWER_SUPPLY select HWMON select BACKLIGHT_CLASS_DEVICE @@ -259,6 +260,7 @@ config DRM_AMDGPU select DRM_KMS_HELPER select DRM_SCHED select DRM_TTM + select DRM_TTM_HELPER select POWER_SUPPLY select HWMON select BACKLIGHT_CLASS_DEVICE diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c index 5b465ab774d1..e5919efca870 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c @@ -41,42 +41,6 @@ #include #include -/** - * amdgpu_gem_prime_vmap - &dma_buf_ops.vmap implementation - * @obj: GEM BO - * - * Sets up an in-kernel virtual mapping of the BO's memory. - * - * Returns: - * The virtual address of the mapping or an error pointer. - */ -void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj) -{ - struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj); - int ret; - - ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, - &bo->dma_buf_vmap); - if (ret) - return ERR_PTR(ret); - - return bo->dma_buf_vmap.virtual; -} - -/** - * amdgpu_gem_prime_vunmap - &dma_buf_ops.vunmap implementation - * @obj: GEM BO - * @vaddr: Virtual address (unused) - * - * Tears down the in-kernel virtual mapping of the BO's memory. - */ -void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) -{ - struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj); - - ttm_bo_kunmap(&bo->dma_buf_vmap); -} - /** * amdgpu_gem_prime_mmap - &drm_driver.gem_prime_mmap implementation * @obj: GEM BO diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h index 2c5c84a06bb9..39b5b9616fd8 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h @@ -31,8 +31,6 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev, struct dma_buf *dma_buf); bool amdgpu_dmabuf_is_xgmi_accessible(struct amdgpu_device *adev, struct amdgpu_bo *bo); -void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj); -void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); int amdgpu_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c index be08a63ef58c..576659827e74 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c @@ -33,6 +33,7 @@ #include #include +#include #include "amdgpu.h" #include "amdgpu_display.h" @@ -220,8 +221,8 @@ static const struct drm_gem_object_funcs amdgpu_gem_object_funcs = { .open = amdgpu_gem_object_open, .close = amdgpu_gem_object_close, .export = amdgpu_gem_prime_export, - .vmap = amdgpu_gem_prime_vmap, - .vunmap = amdgpu_gem_prime_vunmap, + .vmap = drm_gem_ttm_vmap, + .vunmap = drm_gem_ttm_vunmap, }; /* diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h index 132e5f955180..01296ef0d673 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h @@ -100,7 +100,6 @@ struct amdgpu_bo { struct amdgpu_bo *parent; struct amdgpu_bo *shadow; - struct ttm_bo_kmap_obj dma_buf_vmap; 
struct amdgpu_mn *mn; diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c index e0f4613918ad..742d43a7edf4 100644 --- a/drivers/gpu/drm/ast/ast_cursor.c +++ b/drivers/gpu/drm/ast/ast_cursor.c @@ -39,7 +39,7 @@ static void ast_cursor_fini(struct ast_private *ast) for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) { gbo = ast->cursor.gbo[i]; - drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]); + drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]); drm_gem_vram_unpin(gbo); drm_gem_vram_put(gbo); } @@ -60,7 +60,7 @@ int ast_cursor_init(struct ast_private *ast) struct drm_device *dev = &ast->base; size_t size, i; struct drm_gem_vram_object *gbo; - void __iomem *vaddr; + struct dma_buf_map map; int ret; size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE); @@ -77,16 +77,15 @@ int ast_cursor_init(struct ast_private *ast) drm_gem_vram_put(gbo); goto err_drm_gem_vram_put; } - vaddr = drm_gem_vram_vmap(gbo); - if (IS_ERR(vaddr)) { - ret = PTR_ERR(vaddr); + ret = drm_gem_vram_vmap(gbo, &map); + if (ret) { drm_gem_vram_unpin(gbo); drm_gem_vram_put(gbo); goto err_drm_gem_vram_put; } ast->cursor.gbo[i] = gbo; - ast->cursor.vaddr[i] = vaddr; + ast->cursor.map[i] = map; } return drmm_add_action_or_reset(dev, ast_cursor_release, NULL); @@ -95,7 +94,7 @@ int ast_cursor_init(struct ast_private *ast) while (i) { --i; gbo = ast->cursor.gbo[i]; - drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]); + drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]); drm_gem_vram_unpin(gbo); drm_gem_vram_put(gbo); } @@ -170,6 +169,7 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb) { struct drm_device *dev = &ast->base; struct drm_gem_vram_object *gbo; + struct dma_buf_map map; int ret; void *src; void __iomem *dst; @@ -183,18 +183,17 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb) ret = drm_gem_vram_pin(gbo, 0); if (ret) return ret; - src = drm_gem_vram_vmap(gbo); - if (IS_ERR(src)) { - ret = PTR_ERR(src); + ret = drm_gem_vram_vmap(gbo, &map); + if (ret) goto err_drm_gem_vram_unpin; - } + src = map.vaddr; /* TODO: Use mapping abstraction properly */ - dst = ast->cursor.vaddr[ast->cursor.next_index]; + dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem; /* do data transfer to cursor BO */ update_cursor_image(dst, src, fb->width, fb->height); - drm_gem_vram_vunmap(gbo, src); + drm_gem_vram_vunmap(gbo, &map); drm_gem_vram_unpin(gbo); return 0; @@ -257,7 +256,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y, u8 __iomem *sig; u8 jreg; - dst = ast->cursor.vaddr[ast->cursor.next_index]; + dst = ast->cursor.map[ast->cursor.next_index].vaddr; sig = dst + AST_HWC_SIZE; writel(x, sig + AST_HWC_SIGNATURE_X); diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h index 467049ca8430..f963141dd851 100644 --- a/drivers/gpu/drm/ast/ast_drv.h +++ b/drivers/gpu/drm/ast/ast_drv.h @@ -28,10 +28,11 @@ #ifndef __AST_DRV_H__ #define __AST_DRV_H__ -#include -#include +#include #include #include +#include +#include #include #include @@ -131,7 +132,7 @@ struct ast_private { struct { struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM]; - void __iomem *vaddr[AST_DEFAULT_HWC_NUM]; + struct dma_buf_map map[AST_DEFAULT_HWC_NUM]; unsigned int next_index; } cursor; diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c index 1da67d34e55d..a89ad4570e3c 100644 --- a/drivers/gpu/drm/drm_gem.c +++ b/drivers/gpu/drm/drm_gem.c @@ -36,6 +36,7 @@ #include #include #include +#include #include #include @@ -1207,26 +1208,30 @@ void 
drm_gem_unpin(struct drm_gem_object *obj) void *drm_gem_vmap(struct drm_gem_object *obj) { - void *vaddr; + struct dma_buf_map map; + int ret; - if (obj->funcs->vmap) - vaddr = obj->funcs->vmap(obj); - else - vaddr = ERR_PTR(-EOPNOTSUPP); + if (!obj->funcs->vmap) + return ERR_PTR(-EOPNOTSUPP); - if (!vaddr) - vaddr = ERR_PTR(-ENOMEM); + ret = obj->funcs->vmap(obj, &map); + if (ret) + return ERR_PTR(ret); + else if (dma_buf_map_is_null(&map)) + return ERR_PTR(-ENOMEM); - return vaddr; + return map.vaddr; } void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr) { + struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr); + if (!vaddr) return; if (obj->funcs->vunmap) - obj->funcs->vunmap(obj, vaddr); + obj->funcs->vunmap(obj, &map); } /** diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c index d527485ea0b7..b57e3e9222f0 100644 --- a/drivers/gpu/drm/drm_gem_cma_helper.c +++ b/drivers/gpu/drm/drm_gem_cma_helper.c @@ -519,6 +519,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap); * drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual * address space * @obj: GEM object + * @map: Returns the kernel virtual address of the CMA GEM object's backing + * store. * * This function maps a buffer exported via DRM PRIME into the kernel's * virtual address space. Since the CMA buffers are already mapped into the @@ -527,13 +529,15 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap); * driver's &drm_gem_object_funcs.vmap callback. * * Returns: - * The kernel virtual address of the CMA GEM object's backing store. + * 0 on success, or a negative error code otherwise. */ -void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj) +int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj); - return cma_obj->vaddr; + dma_buf_map_set_vaddr(map, cma_obj->vaddr); + + return 0; } EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap); diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index fb11df7aced5..5553f58f68f3 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -258,19 +258,25 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj) } EXPORT_SYMBOL(drm_gem_shmem_unpin); -static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem) +static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map) { struct drm_gem_object *obj = &shmem->base; - struct dma_buf_map map; int ret = 0; - if (shmem->vmap_use_count++ > 0) - return shmem->vaddr; + if (shmem->vmap_use_count++ > 0) { + dma_buf_map_set_vaddr(map, shmem->vaddr); + return 0; + } if (obj->import_attach) { - ret = dma_buf_vmap(obj->import_attach->dmabuf, &map); - if (!ret) - shmem->vaddr = map.vaddr; + ret = dma_buf_vmap(obj->import_attach->dmabuf, map); + if (!ret) { + if (WARN_ON(map->is_iomem)) { + ret = -EIO; + goto err_put_pages; + } + shmem->vaddr = map->vaddr; + } } else { pgprot_t prot = PAGE_KERNEL; @@ -284,6 +290,8 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem) VM_MAP, prot); if (!shmem->vaddr) ret = -ENOMEM; + else + dma_buf_map_set_vaddr(map, shmem->vaddr); } if (ret) { @@ -291,7 +299,7 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem) goto err_put_pages; } - return shmem->vaddr; + return 0; err_put_pages: if (!obj->import_attach) @@ -299,12 +307,14 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem) err_zero_use: 
shmem->vmap_use_count = 0; - return ERR_PTR(ret); + return ret; } /* * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object * @shmem: shmem GEM object + * @map: Returns the kernel virtual address of the SHMEM GEM object's backing + * store. * * This function makes sure that a contiguous kernel virtual address mapping * exists for the buffer backing the shmem GEM object. @@ -318,26 +328,25 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem) * Returns: * 0 on success or a negative error code on failure. */ -void *drm_gem_shmem_vmap(struct drm_gem_object *obj) +int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); - void *vaddr; int ret; ret = mutex_lock_interruptible(&shmem->vmap_lock); if (ret) - return ERR_PTR(ret); - vaddr = drm_gem_shmem_vmap_locked(shmem); + return ret; + ret = drm_gem_shmem_vmap_locked(shmem, map); mutex_unlock(&shmem->vmap_lock); - return vaddr; + return ret; } EXPORT_SYMBOL(drm_gem_shmem_vmap); -static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem) +static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem, + struct dma_buf_map *map) { struct drm_gem_object *obj = &shmem->base; - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(shmem->vaddr); if (WARN_ON_ONCE(!shmem->vmap_use_count)) return; @@ -346,7 +355,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem) return; if (obj->import_attach) - dma_buf_vunmap(obj->import_attach->dmabuf, &map); + dma_buf_vunmap(obj->import_attach->dmabuf, map); else vunmap(shmem->vaddr); @@ -357,6 +366,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem) /* * drm_gem_shmem_vunmap - Unmap a virtual mapping fo a shmem GEM object * @shmem: shmem GEM object + * @map: Kernel virtual address where the SHMEM GEM object was mapped * * This function cleans up a kernel virtual address mapping acquired by * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to @@ -366,12 +376,12 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem) * also be called by drivers directly, in which case it will hide the * differences between dma-buf imported and natively allocated objects. */ -void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr) +void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); mutex_lock(&shmem->vmap_lock); - drm_gem_shmem_vunmap_locked(shmem); + drm_gem_shmem_vunmap_locked(shmem, map); mutex_unlock(&shmem->vmap_lock); } EXPORT_SYMBOL(drm_gem_shmem_vunmap); diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c index bfc059945e31..96fbca6c2e5d 100644 --- a/drivers/gpu/drm/drm_gem_vram_helper.c +++ b/drivers/gpu/drm/drm_gem_vram_helper.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0-or-later +#include #include #include @@ -113,8 +114,8 @@ static void drm_gem_vram_cleanup(struct drm_gem_vram_object *gbo) * up; only release the GEM object. 
*/ - WARN_ON(gbo->kmap_use_count); - WARN_ON(gbo->kmap.virtual); + WARN_ON(gbo->vmap_use_count); + WARN_ON(dma_buf_map_is_set(&gbo->map)); drm_gem_object_release(&gbo->bo.base); } @@ -379,29 +380,37 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo) } EXPORT_SYMBOL(drm_gem_vram_unpin); -static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo) +static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo, + struct dma_buf_map *map) { int ret; - struct ttm_bo_kmap_obj *kmap = &gbo->kmap; - bool is_iomem; - if (gbo->kmap_use_count > 0) + if (gbo->vmap_use_count > 0) goto out; - ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap); + ret = ttm_bo_vmap(&gbo->bo, &gbo->map); if (ret) - return ERR_PTR(ret); + return ret; out: - ++gbo->kmap_use_count; - return ttm_kmap_obj_virtual(kmap, &is_iomem); + ++gbo->vmap_use_count; + *map = gbo->map; + + return 0; } -static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo) +static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo, + struct dma_buf_map *map) { - if (WARN_ON_ONCE(!gbo->kmap_use_count)) + struct drm_device *dev = gbo->bo.base.dev; + + if (drm_WARN_ON_ONCE(dev, !gbo->vmap_use_count)) return; - if (--gbo->kmap_use_count > 0) + + if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&gbo->map, map))) + return; /* BUG: map not mapped from this BO */ + + if (--gbo->vmap_use_count > 0) return; /* @@ -415,7 +424,9 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo) /** * drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address * space - * @gbo: The GEM VRAM object to map + * @gbo: The GEM VRAM object to map + * @map: Returns the kernel virtual address of the VRAM GEM object's backing + * store. * * The vmap function pins a GEM VRAM object to its current location, either * system or video memory, and maps its buffer into kernel address space. @@ -424,48 +435,44 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo) * unmap and unpin the GEM VRAM object. * * Returns: - * The buffer's virtual address on success, or - * an ERR_PTR()-encoded error code otherwise. + * 0 on success, or a negative error code otherwise. */ -void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo) +int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map) { int ret; - void *base; ret = ttm_bo_reserve(&gbo->bo, true, false, NULL); if (ret) - return ERR_PTR(ret); + return ret; ret = drm_gem_vram_pin_locked(gbo, 0); if (ret) goto err_ttm_bo_unreserve; - base = drm_gem_vram_kmap_locked(gbo); - if (IS_ERR(base)) { - ret = PTR_ERR(base); + ret = drm_gem_vram_kmap_locked(gbo, map); + if (ret) goto err_drm_gem_vram_unpin_locked; - } ttm_bo_unreserve(&gbo->bo); - return base; + return 0; err_drm_gem_vram_unpin_locked: drm_gem_vram_unpin_locked(gbo); err_ttm_bo_unreserve: ttm_bo_unreserve(&gbo->bo); - return ERR_PTR(ret); + return ret; } EXPORT_SYMBOL(drm_gem_vram_vmap); /** * drm_gem_vram_vunmap() - Unmaps and unpins a GEM VRAM object - * @gbo: The GEM VRAM object to unmap - * @vaddr: The mapping's base address as returned by drm_gem_vram_vmap() + * @gbo: The GEM VRAM object to unmap + * @map: Kernel virtual address where the VRAM GEM object was mapped * * A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See * the documentation for drm_gem_vram_vmap() for more information. 
*/ -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr) +void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map) { int ret; @@ -473,7 +480,7 @@ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr) if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret)) return; - drm_gem_vram_kunmap_locked(gbo); + drm_gem_vram_kunmap_locked(gbo, map); drm_gem_vram_unpin_locked(gbo); ttm_bo_unreserve(&gbo->bo); @@ -564,15 +571,13 @@ static void drm_gem_vram_bo_driver_move_notify(struct drm_gem_vram_object *gbo, bool evict, struct ttm_resource *new_mem) { - struct ttm_bo_kmap_obj *kmap = &gbo->kmap; + struct ttm_buffer_object *bo = &gbo->bo; + struct drm_device *dev = bo->base.dev; - if (WARN_ON_ONCE(gbo->kmap_use_count)) + if (drm_WARN_ON_ONCE(dev, gbo->vmap_use_count)) return; - if (!kmap->virtual) - return; - ttm_bo_kunmap(kmap); - kmap->virtual = NULL; + ttm_bo_vunmap(bo, &gbo->map); } static int drm_gem_vram_bo_driver_move(struct drm_gem_vram_object *gbo, @@ -829,37 +834,33 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem) } /** - * drm_gem_vram_object_vmap() - \ - Implements &struct drm_gem_object_funcs.vmap - * @gem: The GEM object to map + * drm_gem_vram_object_vmap() - + * Implements &struct drm_gem_object_funcs.vmap + * @gem: The GEM object to map + * @map: Returns the kernel virtual address of the VRAM GEM object's backing + * store. * * Returns: - * The buffers virtual address on success, or - * NULL otherwise. + * 0 on success, or a negative error code otherwise. */ -static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem) +static int drm_gem_vram_object_vmap(struct drm_gem_object *gem, struct dma_buf_map *map) { struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem); - void *base; - base = drm_gem_vram_vmap(gbo); - if (IS_ERR(base)) - return NULL; - return base; + return drm_gem_vram_vmap(gbo, map); } /** - * drm_gem_vram_object_vunmap() - \ - Implements &struct drm_gem_object_funcs.vunmap - * @gem: The GEM object to unmap - * @vaddr: The mapping's base address + * drm_gem_vram_object_vunmap() - + * Implements &struct drm_gem_object_funcs.vunmap + * @gem: The GEM object to unmap + * @map: Kernel virtual address where the VRAM GEM object was mapped */ -static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, - void *vaddr) +static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, struct dma_buf_map *map) { struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem); - drm_gem_vram_vunmap(gbo, vaddr); + drm_gem_vram_vunmap(gbo, map); } /* diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h index 9682c26d89bb..f5be627e1de0 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h @@ -51,7 +51,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data, int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma); int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset); struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj); -void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj); +int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); int etnaviv_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev, diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c index 
a6d9932a32ae..bc2543dd987d 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c @@ -22,9 +22,14 @@ struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj) return drm_prime_pages_to_sg(obj->dev, etnaviv_obj->pages, npages); } -void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj) +int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) { - return etnaviv_gem_vmap(obj); + void *vaddr = etnaviv_gem_vmap(obj); + if (!vaddr) + return -ENOMEM; + dma_buf_map_set_vaddr(map, vaddr); + + return 0; } int etnaviv_gem_prime_mmap(struct drm_gem_object *obj, diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c index 11223fe348df..832e5280a6ed 100644 --- a/drivers/gpu/drm/lima/lima_gem.c +++ b/drivers/gpu/drm/lima/lima_gem.c @@ -182,14 +182,14 @@ static int lima_gem_pin(struct drm_gem_object *obj) return drm_gem_shmem_pin(obj); } -static void *lima_gem_vmap(struct drm_gem_object *obj) +static int lima_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct lima_bo *bo = to_lima_bo(obj); if (bo->heap_size) - return ERR_PTR(-EINVAL); + return -EINVAL; - return drm_gem_shmem_vmap(obj); + return drm_gem_shmem_vmap(obj, map); } static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma) diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c index dc6df9e9a40d..a070a85f8f36 100644 --- a/drivers/gpu/drm/lima/lima_sched.c +++ b/drivers/gpu/drm/lima/lima_sched.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 OR MIT /* Copyright 2017-2019 Qiang Yu */ +#include #include #include #include @@ -303,6 +304,8 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task) struct lima_dump_chunk_buffer *buffer_chunk; u32 size, task_size, mem_size; int i; + struct dma_buf_map map; + int ret; mutex_lock(&dev->error_task_list_lock); @@ -388,15 +391,15 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task) } else { buffer_chunk->size = lima_bo_size(bo); - data = drm_gem_shmem_vmap(&bo->base.base); - if (IS_ERR_OR_NULL(data)) { + ret = drm_gem_shmem_vmap(&bo->base.base, &map); + if (ret) { kvfree(et); goto out; } - memcpy(buffer_chunk + 1, data, buffer_chunk->size); + memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size); - drm_gem_shmem_vunmap(&bo->base.base, data); + drm_gem_shmem_vunmap(&bo->base.base, &map); } buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size; diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c index 38672f9e5c4f..8ef76769b97f 100644 --- a/drivers/gpu/drm/mgag200/mgag200_mode.c +++ b/drivers/gpu/drm/mgag200/mgag200_mode.c @@ -9,6 +9,7 @@ */ #include +#include #include #include @@ -1556,15 +1557,18 @@ mgag200_handle_damage(struct mga_device *mdev, struct drm_framebuffer *fb, struct drm_rect *clip) { struct drm_device *dev = &mdev->base; + struct dma_buf_map map; void *vmap; + int ret; - vmap = drm_gem_shmem_vmap(fb->obj[0]); - if (drm_WARN_ON(dev, !vmap)) + ret = drm_gem_shmem_vmap(fb->obj[0], &map); + if (drm_WARN_ON(dev, ret)) return; /* BUG: SHMEM BO should always be vmapped */ + vmap = map.vaddr; /* TODO: Use mapping abstraction properly */ drm_fb_memcpy_dstclip(mdev->vram, vmap, fb, clip); - drm_gem_shmem_vunmap(fb->obj[0], vmap); + drm_gem_shmem_vunmap(fb->obj[0], &map); /* Always scanout image at VRAM offset 0 */ mgag200_set_startadd(mdev, (u32)0); diff --git a/drivers/gpu/drm/nouveau/Kconfig 
b/drivers/gpu/drm/nouveau/Kconfig index 5dec1e5694b7..9436310d0854 100644 --- a/drivers/gpu/drm/nouveau/Kconfig +++ b/drivers/gpu/drm/nouveau/Kconfig @@ -6,6 +6,7 @@ config DRM_NOUVEAU select FW_LOADER select DRM_KMS_HELPER select DRM_TTM + select DRM_TTM_HELPER select BACKLIGHT_CLASS_DEVICE if DRM_NOUVEAU_BACKLIGHT select ACPI_VIDEO if ACPI && X86 && BACKLIGHT_CLASS_DEVICE && INPUT select X86_PLATFORM_DEVICES if ACPI && X86 diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.h b/drivers/gpu/drm/nouveau/nouveau_bo.h index 641ef6298a0e..6045b85a762a 100644 --- a/drivers/gpu/drm/nouveau/nouveau_bo.h +++ b/drivers/gpu/drm/nouveau/nouveau_bo.h @@ -39,8 +39,6 @@ struct nouveau_bo { unsigned mode; struct nouveau_drm_tile *tile; - - struct ttm_bo_kmap_obj dma_buf_vmap; }; static inline struct nouveau_bo * diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c index 9a421c3949de..f942b526b0a5 100644 --- a/drivers/gpu/drm/nouveau/nouveau_gem.c +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c @@ -24,6 +24,8 @@ * */ +#include + #include "nouveau_drv.h" #include "nouveau_dma.h" #include "nouveau_fence.h" @@ -176,8 +178,8 @@ const struct drm_gem_object_funcs nouveau_gem_object_funcs = { .pin = nouveau_gem_prime_pin, .unpin = nouveau_gem_prime_unpin, .get_sg_table = nouveau_gem_prime_get_sg_table, - .vmap = nouveau_gem_prime_vmap, - .vunmap = nouveau_gem_prime_vunmap, + .vmap = drm_gem_ttm_vmap, + .vunmap = drm_gem_ttm_vunmap, }; int diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h index b35c180322e2..3b919c7c931c 100644 --- a/drivers/gpu/drm/nouveau/nouveau_gem.h +++ b/drivers/gpu/drm/nouveau/nouveau_gem.h @@ -37,7 +37,5 @@ extern void nouveau_gem_prime_unpin(struct drm_gem_object *); extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *); extern struct drm_gem_object *nouveau_gem_prime_import_sg_table( struct drm_device *, struct dma_buf_attachment *, struct sg_table *); -extern void *nouveau_gem_prime_vmap(struct drm_gem_object *); -extern void nouveau_gem_prime_vunmap(struct drm_gem_object *, void *); #endif diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c index a8264aebf3d4..2f16b5249283 100644 --- a/drivers/gpu/drm/nouveau/nouveau_prime.c +++ b/drivers/gpu/drm/nouveau/nouveau_prime.c @@ -35,26 +35,6 @@ struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *obj) return drm_prime_pages_to_sg(obj->dev, nvbo->bo.ttm->pages, npages); } -void *nouveau_gem_prime_vmap(struct drm_gem_object *obj) -{ - struct nouveau_bo *nvbo = nouveau_gem_object(obj); - int ret; - - ret = ttm_bo_kmap(&nvbo->bo, 0, nvbo->bo.num_pages, - &nvbo->dma_buf_vmap); - if (ret) - return ERR_PTR(ret); - - return nvbo->dma_buf_vmap.virtual; -} - -void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) -{ - struct nouveau_bo *nvbo = nouveau_gem_object(obj); - - ttm_bo_kunmap(&nvbo->dma_buf_vmap); -} - struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sg) diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c index fdbc8d949135..5ab03d605f57 100644 --- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c +++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c @@ -5,6 +5,7 @@ #include #include #include +#include #include #include #include @@ -72,6 +73,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev, { struct 
panfrost_file_priv *user = file_priv->driver_priv; struct panfrost_perfcnt *perfcnt = pfdev->perfcnt; + struct dma_buf_map map; struct drm_gem_shmem_object *bo; u32 cfg, as; int ret; @@ -103,11 +105,10 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev, goto err_close_bo; } - perfcnt->buf = drm_gem_shmem_vmap(&bo->base); - if (IS_ERR(perfcnt->buf)) { - ret = PTR_ERR(perfcnt->buf); + ret = drm_gem_shmem_vmap(&bo->base, &map); + if (ret) goto err_put_mapping; - } + perfcnt->buf = map.vaddr; /* * Invalidate the cache and clear the counters to start from a fresh @@ -163,7 +164,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev, return 0; err_vunmap: - drm_gem_shmem_vunmap(&bo->base, perfcnt->buf); + drm_gem_shmem_vunmap(&bo->base, &map); err_put_mapping: panfrost_gem_mapping_put(perfcnt->mapping); err_close_bo: @@ -180,6 +181,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev, { struct panfrost_file_priv *user = file_priv->driver_priv; struct panfrost_perfcnt *perfcnt = pfdev->perfcnt; + struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(perfcnt->buf); if (user != perfcnt->user) return -EINVAL; @@ -192,7 +194,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev, GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF)); perfcnt->user = NULL; - drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, perfcnt->buf); + drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, &map); perfcnt->buf = NULL; panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv); panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu); diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c index 45fd76e04bdc..e165fa9b2089 100644 --- a/drivers/gpu/drm/qxl/qxl_display.c +++ b/drivers/gpu/drm/qxl/qxl_display.c @@ -25,6 +25,7 @@ #include #include +#include #include #include @@ -581,6 +582,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane, struct drm_gem_object *obj; struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL; int ret; + struct dma_buf_map user_map; + struct dma_buf_map cursor_map; void *user_ptr; int size = 64*64*4; @@ -595,9 +598,10 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane, user_bo = gem_to_qxl_bo(obj); /* pinning is done in the prepare/cleanup framevbuffer */ - ret = qxl_bo_kmap(user_bo, &user_ptr); + ret = qxl_bo_kmap(user_bo, &user_map); if (ret) goto out_free_release; + user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction properly */ ret = qxl_alloc_bo_reserved(qdev, release, sizeof(struct qxl_cursor) + size, @@ -613,7 +617,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane, if (ret) goto out_unpin; - ret = qxl_bo_kmap(cursor_bo, (void **)&cursor); + ret = qxl_bo_kmap(cursor_bo, &cursor_map); if (ret) goto out_backoff; @@ -1133,6 +1137,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev) { int ret; struct drm_gem_object *gobj; + struct dma_buf_map map; int monitors_config_size = sizeof(struct qxl_monitors_config) + qxl_num_crtc * sizeof(struct qxl_head); @@ -1149,7 +1154,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev) if (ret) return ret; - qxl_bo_kmap(qdev->monitors_config_bo, NULL); + qxl_bo_kmap(qdev->monitors_config_bo, &map); qdev->monitors_config = qdev->monitors_config_bo->kptr; qdev->ram_header->monitors_config = diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c index 3599db096973..7b7acb910780 100644 --- a/drivers/gpu/drm/qxl/qxl_draw.c +++ 
b/drivers/gpu/drm/qxl/qxl_draw.c @@ -20,6 +20,8 @@ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */ +#include + #include #include "qxl_drv.h" @@ -42,13 +44,15 @@ static struct qxl_rect *drawable_set_clipping(struct qxl_device *qdev, unsigned int num_clips, struct qxl_bo *clips_bo) { + struct dma_buf_map map; struct qxl_clip_rects *dev_clips; int ret; - ret = qxl_bo_kmap(clips_bo, (void **)&dev_clips); - if (ret) { + ret = qxl_bo_kmap(clips_bo, &map); + if (ret) return NULL; - } + dev_clips = map.vaddr; /* TODO: Use mapping abstraction properly */ + dev_clips->num_rects = num_clips; dev_clips->chunk.next_chunk = 0; dev_clips->chunk.prev_chunk = 0; @@ -142,6 +146,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev, int stride = fb->pitches[0]; /* depth is not actually interesting, we don't mask with it */ int depth = fb->format->cpp[0] * 8; + struct dma_buf_map surface_map; uint8_t *surface_base; struct qxl_release *release; struct qxl_bo *clips_bo; @@ -197,9 +202,10 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev, if (ret) goto out_release_backoff; - ret = qxl_bo_kmap(bo, (void **)&surface_base); + ret = qxl_bo_kmap(bo, &surface_map); if (ret) goto out_release_backoff; + surface_base = surface_map.vaddr; /* TODO: Use mapping abstraction properly */ ret = qxl_image_init(qdev, release, dimage, surface_base, left - dumb_shadow_offset, diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h index 3602e8b34189..eb437fea5d9e 100644 --- a/drivers/gpu/drm/qxl/qxl_drv.h +++ b/drivers/gpu/drm/qxl/qxl_drv.h @@ -30,6 +30,7 @@ * Definitions taken from spice-protocol, plus kernel driver specific bits. */ +#include #include #include #include @@ -50,6 +51,8 @@ #include "qxl_dev.h" +struct dma_buf_map; + #define DRIVER_AUTHOR "Dave Airlie" #define DRIVER_NAME "qxl" @@ -79,7 +82,7 @@ struct qxl_bo { /* Protected by tbo.reserved */ struct ttm_place placements[3]; struct ttm_placement placement; - struct ttm_bo_kmap_obj kmap; + struct dma_buf_map map; void *kptr; unsigned int map_count; int type; @@ -335,7 +338,6 @@ int qxl_gem_object_open(struct drm_gem_object *obj, struct drm_file *file_priv); void qxl_gem_object_close(struct drm_gem_object *obj, struct drm_file *file_priv); void qxl_bo_force_delete(struct qxl_device *qdev); -int qxl_bo_kmap(struct qxl_bo *bo, void **ptr); /* qxl_dumb.c */ int qxl_mode_dumb_create(struct drm_file *file_priv, @@ -445,8 +447,9 @@ struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj); struct drm_gem_object *qxl_gem_prime_import_sg_table( struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sgt); -void *qxl_gem_prime_vmap(struct drm_gem_object *obj); -void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); +int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); +void qxl_gem_prime_vunmap(struct drm_gem_object *obj, + struct dma_buf_map *map); int qxl_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c index 547d46c14d56..ceebc5881f68 100644 --- a/drivers/gpu/drm/qxl/qxl_object.c +++ b/drivers/gpu/drm/qxl/qxl_object.c @@ -23,10 +23,12 @@ * Alon Levy */ +#include +#include + #include "qxl_drv.h" #include "qxl_object.h" -#include static void qxl_ttm_bo_destroy(struct ttm_buffer_object *tbo) { struct qxl_bo *bo; @@ -152,24 +154,27 @@ int qxl_bo_create(struct qxl_device *qdev, return 0; } -int qxl_bo_kmap(struct qxl_bo *bo, void **ptr) +int qxl_bo_kmap(struct 
qxl_bo *bo, struct dma_buf_map *map) { - bool is_iomem; int r; if (bo->kptr) { - if (ptr) - *ptr = bo->kptr; bo->map_count++; - return 0; + goto out; } - r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap); + r = ttm_bo_vmap(&bo->tbo, &bo->map); if (r) return r; - bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem); - if (ptr) - *ptr = bo->kptr; bo->map_count = 1; + + /* TODO: Remove kptr in favor of map everywhere. */ + if (bo->map.is_iomem) + bo->kptr = (void *)bo->map.vaddr_iomem; + else + bo->kptr = bo->map.vaddr; + +out: + *map = bo->map; return 0; } @@ -180,6 +185,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, void *rptr; int ret; struct io_mapping *map; + struct dma_buf_map bo_map; if (bo->tbo.mem.mem_type == TTM_PL_VRAM) map = qdev->vram_mapping; @@ -196,9 +202,10 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, return rptr; } - ret = qxl_bo_kmap(bo, &rptr); + ret = qxl_bo_kmap(bo, &bo_map); if (ret) return NULL; + rptr = bo_map.vaddr; /* TODO: Use mapping abstraction properly */ rptr += page_offset * PAGE_SIZE; return rptr; @@ -212,7 +219,7 @@ void qxl_bo_kunmap(struct qxl_bo *bo) if (bo->map_count > 0) return; bo->kptr = NULL; - ttm_bo_kunmap(&bo->kmap); + ttm_bo_vunmap(&bo->tbo, &bo->map); } void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h index 09a5c818324d..ebf24c9d2bf2 100644 --- a/drivers/gpu/drm/qxl/qxl_object.h +++ b/drivers/gpu/drm/qxl/qxl_object.h @@ -63,7 +63,7 @@ extern int qxl_bo_create(struct qxl_device *qdev, bool kernel, bool pinned, u32 domain, struct qxl_surface *surf, struct qxl_bo **bo_ptr); -extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr); +extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map); extern void qxl_bo_kunmap(struct qxl_bo *bo); void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset); void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map); diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c index 7d3816fca5a8..4aa949799446 100644 --- a/drivers/gpu/drm/qxl/qxl_prime.c +++ b/drivers/gpu/drm/qxl/qxl_prime.c @@ -54,20 +54,20 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table( return ERR_PTR(-ENOSYS); } -void *qxl_gem_prime_vmap(struct drm_gem_object *obj) +int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct qxl_bo *bo = gem_to_qxl_bo(obj); - void *ptr; int ret; - ret = qxl_bo_kmap(bo, &ptr); + ret = qxl_bo_kmap(bo, map); if (ret < 0) - return ERR_PTR(ret); + return ret; - return ptr; + return 0; } -void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) +void qxl_gem_prime_vunmap(struct drm_gem_object *obj, + struct dma_buf_map *map) { struct qxl_bo *bo = gem_to_qxl_bo(obj); diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h index 5d54bccebd4d..44cb5ee6fc20 100644 --- a/drivers/gpu/drm/radeon/radeon.h +++ b/drivers/gpu/drm/radeon/radeon.h @@ -509,7 +509,6 @@ struct radeon_bo { /* Constant after initialization */ struct radeon_device *rdev; - struct ttm_bo_kmap_obj dma_buf_vmap; pid_t pid; #ifdef CONFIG_MMU_NOTIFIER diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c index 0ccd7213e41f..d2876ce3bc9e 100644 --- a/drivers/gpu/drm/radeon/radeon_gem.c +++ b/drivers/gpu/drm/radeon/radeon_gem.c @@ -31,6 +31,7 @@ #include #include #include +#include #include #include "radeon.h" @@ -40,8 +41,6 @@ struct dma_buf 
*radeon_gem_prime_export(struct drm_gem_object *gobj, struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj); int radeon_gem_prime_pin(struct drm_gem_object *obj); void radeon_gem_prime_unpin(struct drm_gem_object *obj); -void *radeon_gem_prime_vmap(struct drm_gem_object *obj); -void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); static const struct drm_gem_object_funcs radeon_gem_object_funcs; @@ -235,8 +234,8 @@ static const struct drm_gem_object_funcs radeon_gem_object_funcs = { .pin = radeon_gem_prime_pin, .unpin = radeon_gem_prime_unpin, .get_sg_table = radeon_gem_prime_get_sg_table, - .vmap = radeon_gem_prime_vmap, - .vunmap = radeon_gem_prime_vunmap, + .vmap = drm_gem_ttm_vmap, + .vunmap = drm_gem_ttm_vunmap, }; /* diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c index b9de0e51c0be..088d39a51c0d 100644 --- a/drivers/gpu/drm/radeon/radeon_prime.c +++ b/drivers/gpu/drm/radeon/radeon_prime.c @@ -39,26 +39,6 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj) return drm_prime_pages_to_sg(obj->dev, bo->tbo.ttm->pages, npages); } -void *radeon_gem_prime_vmap(struct drm_gem_object *obj) -{ - struct radeon_bo *bo = gem_to_radeon_bo(obj); - int ret; - - ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, - &bo->dma_buf_vmap); - if (ret) - return ERR_PTR(ret); - - return bo->dma_buf_vmap.virtual; -} - -void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) -{ - struct radeon_bo *bo = gem_to_radeon_bo(obj); - - ttm_bo_kunmap(&bo->dma_buf_vmap); -} - struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sg) diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c index 7d5ebb10323b..7971f57436dd 100644 --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c @@ -532,26 +532,32 @@ rockchip_gem_prime_import_sg_table(struct drm_device *drm, return ERR_PTR(ret); } -void *rockchip_gem_prime_vmap(struct drm_gem_object *obj) +int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj); - if (rk_obj->pages) - return vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP, - pgprot_writecombine(PAGE_KERNEL)); + if (rk_obj->pages) { + void *vaddr = vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP, + pgprot_writecombine(PAGE_KERNEL)); + if (!vaddr) + return -ENOMEM; + dma_buf_map_set_vaddr(map, vaddr); + return 0; + } if (rk_obj->dma_attrs & DMA_ATTR_NO_KERNEL_MAPPING) - return NULL; + return -ENOMEM; + dma_buf_map_set_vaddr(map, rk_obj->kvaddr); - return rk_obj->kvaddr; + return 0; } -void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) +void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj); if (rk_obj->pages) { - vunmap(vaddr); + vunmap(map->vaddr); return; } diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h index 7ffc541bea07..5a70a56cd406 100644 --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h @@ -31,8 +31,8 @@ struct drm_gem_object * rockchip_gem_prime_import_sg_table(struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sg); -void *rockchip_gem_prime_vmap(struct drm_gem_object *obj); -void 
rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); +int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); +void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map); /* drm driver mmap file operations */ int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma); diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c index 744a8e337e41..c02e35ed6e76 100644 --- a/drivers/gpu/drm/tiny/cirrus.c +++ b/drivers/gpu/drm/tiny/cirrus.c @@ -17,6 +17,7 @@ */ #include +#include #include #include @@ -314,6 +315,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb, struct drm_rect *rect) { struct cirrus_device *cirrus = to_cirrus(fb->dev); + struct dma_buf_map map; void *vmap; int idx, ret; @@ -321,10 +323,10 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb, if (!drm_dev_enter(&cirrus->dev, &idx)) goto out; - ret = -ENOMEM; - vmap = drm_gem_shmem_vmap(fb->obj[0]); - if (!vmap) + ret = drm_gem_shmem_vmap(fb->obj[0], &map); + if (ret) goto out_dev_exit; + vmap = map.vaddr; /* TODO: Use mapping abstraction properly */ if (cirrus->cpp == fb->format->cpp[0]) drm_fb_memcpy_dstclip(cirrus->vram, @@ -343,7 +345,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb, else WARN_ON_ONCE("cpp mismatch"); - drm_gem_shmem_vunmap(fb->obj[0], vmap); + drm_gem_shmem_vunmap(fb->obj[0], &map); ret = 0; out_dev_exit: diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c index cc397671f689..12a890cea6e9 100644 --- a/drivers/gpu/drm/tiny/gm12u320.c +++ b/drivers/gpu/drm/tiny/gm12u320.c @@ -248,6 +248,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320) { int block, dst_offset, len, remain, ret, x1, x2, y1, y2; struct drm_framebuffer *fb; + struct dma_buf_map map; void *vaddr; u8 *src; @@ -262,11 +263,12 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320) y1 = gm12u320->fb_update.rect.y1; y2 = gm12u320->fb_update.rect.y2; - vaddr = drm_gem_shmem_vmap(fb->obj[0]); - if (IS_ERR(vaddr)) { - GM12U320_ERR("failed to vmap fb: %ld\n", PTR_ERR(vaddr)); + ret = drm_gem_shmem_vmap(fb->obj[0], &map); + if (ret) { + GM12U320_ERR("failed to vmap fb: %d\n", ret); goto put_fb; } + vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */ if (fb->obj[0]->import_attach) { ret = dma_buf_begin_cpu_access( @@ -318,7 +320,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320) GM12U320_ERR("dma_buf_end_cpu_access err: %d\n", ret); } vunmap: - drm_gem_shmem_vunmap(fb->obj[0], vaddr); + drm_gem_shmem_vunmap(fb->obj[0], &map); put_fb: drm_framebuffer_put(fb); gm12u320->fb_update.fb = NULL; diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c index fef43f4e3bac..42eeba1dfdbf 100644 --- a/drivers/gpu/drm/udl/udl_modeset.c +++ b/drivers/gpu/drm/udl/udl_modeset.c @@ -276,6 +276,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y, struct urb *urb; struct drm_rect clip; int log_bpp; + struct dma_buf_map map; void *vaddr; ret = udl_log_cpp(fb->format->cpp[0]); @@ -296,11 +297,12 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y, return ret; } - vaddr = drm_gem_shmem_vmap(fb->obj[0]); - if (IS_ERR(vaddr)) { + ret = drm_gem_shmem_vmap(fb->obj[0], &map); + if (ret) { DRM_ERROR("failed to vmap fb\n"); goto out_dma_buf_end_cpu_access; } + vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */ urb = udl_get_urb(dev); if (!urb) @@ -333,7 +335,7 @@ 
static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y, ret = 0; out_drm_gem_shmem_vunmap: - drm_gem_shmem_vunmap(fb->obj[0], vaddr); + drm_gem_shmem_vunmap(fb->obj[0], &map); out_dma_buf_end_cpu_access: if (import_attach) { tmp_ret = dma_buf_end_cpu_access(import_attach->dmabuf, diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c index 931c55126148..f268fb258c83 100644 --- a/drivers/gpu/drm/vboxvideo/vbox_mode.c +++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c @@ -9,6 +9,8 @@ * Michael Thayer */ + +#include #include #include @@ -384,6 +386,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane, u32 height = plane->state->crtc_h; size_t data_size, mask_size; u32 flags; + struct dma_buf_map map; + int ret; u8 *src; /* @@ -397,8 +401,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane, vbox_crtc->cursor_enabled = true; - src = drm_gem_vram_vmap(gbo); - if (IS_ERR(src)) { + ret = drm_gem_vram_vmap(gbo, &map); + if (ret) { /* * BUG: we should have pinned the BO in prepare_fb(). */ @@ -406,6 +410,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane, DRM_WARN("Could not map cursor bo, skipping update\n"); return; } + src = map.vaddr; /* TODO: Use mapping abstraction properly */ /* * The mask must be calculated based on the alpha @@ -416,7 +421,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane, data_size = width * height * 4 + mask_size; copy_cursor_image(src, vbox->cursor_data, width, height, mask_size); - drm_gem_vram_vunmap(gbo, src); + drm_gem_vram_vunmap(gbo, &map); flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE | VBOX_MOUSE_POINTER_ALPHA; diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c index 557f0d1e6437..f290a9a942dc 100644 --- a/drivers/gpu/drm/vc4/vc4_bo.c +++ b/drivers/gpu/drm/vc4/vc4_bo.c @@ -785,16 +785,16 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma) return drm_gem_cma_prime_mmap(obj, vma); } -void *vc4_prime_vmap(struct drm_gem_object *obj) +int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct vc4_bo *bo = to_vc4_bo(obj); if (bo->validated_shader) { DRM_DEBUG("mmaping of shader BOs not allowed.\n"); - return ERR_PTR(-EINVAL); + return -EINVAL; } - return drm_gem_cma_prime_vmap(obj); + return drm_gem_cma_prime_vmap(obj, map); } struct drm_gem_object * diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h index cc79b1aaa878..904f2c36c963 100644 --- a/drivers/gpu/drm/vc4/vc4_drv.h +++ b/drivers/gpu/drm/vc4/vc4_drv.h @@ -806,7 +806,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sgt); -void *vc4_prime_vmap(struct drm_gem_object *obj); +int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); int vc4_bo_cache_init(struct drm_device *dev); void vc4_bo_cache_destroy(struct drm_device *dev); int vc4_bo_inc_usecnt(struct vc4_bo *bo); diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c index fa54a6d1403d..b2aa26e1e4a2 100644 --- a/drivers/gpu/drm/vgem/vgem_drv.c +++ b/drivers/gpu/drm/vgem/vgem_drv.c @@ -361,24 +361,30 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev, return &obj->base; } -static void *vgem_prime_vmap(struct drm_gem_object *obj) +static int vgem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct 
drm_vgem_gem_object *bo = to_vgem_bo(obj); long n_pages = obj->size >> PAGE_SHIFT; struct page **pages; + void *vaddr; pages = vgem_pin_pages(bo); if (IS_ERR(pages)) - return NULL; + return PTR_ERR(pages); + + vaddr = vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL)); + if (!vaddr) + return -ENOMEM; + dma_buf_map_set_vaddr(map, vaddr); - return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL)); + return 0; } -static void vgem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) +static void vgem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct drm_vgem_gem_object *bo = to_vgem_bo(obj); - vunmap(vaddr); + vunmap(map->vaddr); vgem_unpin_pages(bo); } diff --git a/drivers/gpu/drm/vkms/vkms_plane.c b/drivers/gpu/drm/vkms/vkms_plane.c index 9890137bcb8d..0824327cc860 100644 --- a/drivers/gpu/drm/vkms/vkms_plane.c +++ b/drivers/gpu/drm/vkms/vkms_plane.c @@ -1,5 +1,7 @@ // SPDX-License-Identifier: GPL-2.0+ +#include + #include #include #include @@ -146,15 +148,16 @@ static int vkms_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state) { struct drm_gem_object *gem_obj; - void *vaddr; + struct dma_buf_map map; + int ret; if (!state->fb) return 0; gem_obj = drm_gem_fb_get_obj(state->fb, 0); - vaddr = drm_gem_shmem_vmap(gem_obj); - if (IS_ERR(vaddr)) - DRM_ERROR("vmap failed: %li\n", PTR_ERR(vaddr)); + ret = drm_gem_shmem_vmap(gem_obj, &map); + if (ret) + DRM_ERROR("vmap failed: %d\n", ret); return drm_gem_fb_prepare_fb(plane, state); } @@ -164,13 +167,15 @@ static void vkms_cleanup_fb(struct drm_plane *plane, { struct drm_gem_object *gem_obj; struct drm_gem_shmem_object *shmem_obj; + struct dma_buf_map map; if (!old_state->fb) return; gem_obj = drm_gem_fb_get_obj(old_state->fb, 0); shmem_obj = to_drm_gem_shmem_obj(drm_gem_fb_get_obj(old_state->fb, 0)); - drm_gem_shmem_vunmap(gem_obj, shmem_obj->vaddr); + dma_buf_map_set_vaddr(&map, shmem_obj->vaddr); + drm_gem_shmem_vunmap(gem_obj, &map); } static const struct drm_plane_helper_funcs vkms_primary_helper_funcs = { diff --git a/drivers/gpu/drm/vkms/vkms_writeback.c b/drivers/gpu/drm/vkms/vkms_writeback.c index 26b903926872..67f80ab1e85f 100644 --- a/drivers/gpu/drm/vkms/vkms_writeback.c +++ b/drivers/gpu/drm/vkms/vkms_writeback.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0+ -#include "vkms_drv.h" +#include + #include #include #include @@ -8,6 +9,8 @@ #include #include +#include "vkms_drv.h" + static const u32 vkms_wb_formats[] = { DRM_FORMAT_XRGB8888, }; @@ -65,19 +68,20 @@ static int vkms_wb_prepare_job(struct drm_writeback_connector *wb_connector, struct drm_writeback_job *job) { struct drm_gem_object *gem_obj; - void *vaddr; + struct dma_buf_map map; + int ret; if (!job->fb) return 0; gem_obj = drm_gem_fb_get_obj(job->fb, 0); - vaddr = drm_gem_shmem_vmap(gem_obj); - if (IS_ERR(vaddr)) { - DRM_ERROR("vmap failed: %li\n", PTR_ERR(vaddr)); - return PTR_ERR(vaddr); + ret = drm_gem_shmem_vmap(gem_obj, &map); + if (ret) { + DRM_ERROR("vmap failed: %d\n", ret); + return ret; } - job->priv = vaddr; + job->priv = map.vaddr; return 0; } @@ -87,12 +91,14 @@ static void vkms_wb_cleanup_job(struct drm_writeback_connector *connector, { struct drm_gem_object *gem_obj; struct vkms_device *vkmsdev; + struct dma_buf_map map; if (!job->fb) return; gem_obj = drm_gem_fb_get_obj(job->fb, 0); - drm_gem_shmem_vunmap(gem_obj, job->priv); + dma_buf_map_set_vaddr(&map, job->priv); + drm_gem_shmem_vunmap(gem_obj, &map); vkmsdev = drm_device_to_vkms_device(gem_obj->dev); vkms_set_composer(&vkmsdev->output, false); 
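The vkms and panfrost hunks above both wrap a previously stored raw kernel
address in a struct dma_buf_map before handing it to the new vunmap
interfaces. A minimal sketch of that idiom, with a hypothetical function name
and assuming a shmem-backed object, could look like this:

#include <linux/dma-buf-map.h>

#include <drm/drm_gem_shmem_helper.h>

/* Hypothetical cleanup path; vaddr was obtained and stored earlier. */
static void example_cleanup(struct drm_gem_object *obj, void *vaddr)
{
        struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);

        /* dma_buf_map_set_vaddr(&map, vaddr) would set up the same mapping. */
        drm_gem_shmem_vunmap(obj, &map);
}

Both initializers mark the mapping as system memory; which one fits better is
mostly a question of whether the struct can be filled in at declaration time.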
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c index 4f34ef34ba60..74db5a840bed 100644 --- a/drivers/gpu/drm/xen/xen_drm_front_gem.c +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c @@ -290,22 +290,28 @@ int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma) return gem_mmap_obj(xen_obj, vma); } -void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj) +int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, struct dma_buf_map *map) { struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj); + void *vaddr; if (!xen_obj->pages) - return NULL; + return -ENOMEM; /* Please see comment in gem_mmap_obj on mapping and attributes. */ - return vmap(xen_obj->pages, xen_obj->num_pages, - VM_MAP, PAGE_KERNEL); + vaddr = vmap(xen_obj->pages, xen_obj->num_pages, + VM_MAP, PAGE_KERNEL); + if (!vaddr) + return -ENOMEM; + dma_buf_map_set_vaddr(map, vaddr); + + return 0; } void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj, - void *vaddr) + struct dma_buf_map *map) { - vunmap(vaddr); + vunmap(map->vaddr); } int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj, diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h index a39675fa31b2..a4e67d0a149c 100644 --- a/drivers/gpu/drm/xen/xen_drm_front_gem.h +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h @@ -12,6 +12,7 @@ #define __XEN_DRM_FRONT_GEM_H struct dma_buf_attachment; +struct dma_buf_map; struct drm_device; struct drm_gem_object; struct file; @@ -34,10 +35,11 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj); int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma); -void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj); +int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, + struct dma_buf_map *map); void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj, - void *vaddr); + struct dma_buf_map *map); int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj, struct vm_area_struct *vma); diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h index c38dd35da00b..5e6daa1c982f 100644 --- a/include/drm/drm_gem.h +++ b/include/drm/drm_gem.h @@ -39,6 +39,7 @@ #include +struct dma_buf_map; struct drm_gem_object; /** @@ -138,7 +139,7 @@ struct drm_gem_object_funcs { * * This callback is optional. */ - void *(*vmap)(struct drm_gem_object *obj); + int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map); /** * @vunmap: @@ -148,7 +149,7 @@ struct drm_gem_object_funcs { * * This callback is optional. 
*/ - void (*vunmap)(struct drm_gem_object *obj, void *vaddr); + void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map); /** * @mmap: diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h index a064b0d1c480..caf98b9cf4b4 100644 --- a/include/drm/drm_gem_cma_helper.h +++ b/include/drm/drm_gem_cma_helper.h @@ -103,7 +103,7 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev, struct sg_table *sgt); int drm_gem_cma_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); -void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj); +int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); struct drm_gem_object * drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size); diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h index 5381f0c8cf6f..3449a0353fe0 100644 --- a/include/drm/drm_gem_shmem_helper.h +++ b/include/drm/drm_gem_shmem_helper.h @@ -113,8 +113,8 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem); void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem); int drm_gem_shmem_pin(struct drm_gem_object *obj); void drm_gem_shmem_unpin(struct drm_gem_object *obj); -void *drm_gem_shmem_vmap(struct drm_gem_object *obj); -void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr); +int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); +void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map); int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv); diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h index 128f88174d32..c0d28ba0f5c9 100644 --- a/include/drm/drm_gem_vram_helper.h +++ b/include/drm/drm_gem_vram_helper.h @@ -10,6 +10,7 @@ #include #include +#include #include /* for container_of() */ struct drm_mode_create_dumb; @@ -29,9 +30,8 @@ struct vm_area_struct; /** * struct drm_gem_vram_object - GEM object backed by VRAM - * @gem: GEM object * @bo: TTM buffer object - * @kmap: Mapping information for @bo + * @map: Mapping information for @bo * @placement: TTM placement information. Supported placements are \ %TTM_PL_VRAM and %TTM_PL_SYSTEM * @placements: TTM placement information. @@ -50,15 +50,15 @@ struct vm_area_struct; */ struct drm_gem_vram_object { struct ttm_buffer_object bo; - struct ttm_bo_kmap_obj kmap; + struct dma_buf_map map; /** - * @kmap_use_count: + * @vmap_use_count: * * Reference count on the virtual address. * The address are un-mapped when the count reaches zero. 
*/ - unsigned int kmap_use_count; + unsigned int vmap_use_count; /* Supported placements are %TTM_PL_VRAM and %TTM_PL_SYSTEM */ struct ttm_placement placement; @@ -97,8 +97,8 @@ u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo); s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo); int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag); int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo); -void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo); -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr); +int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map); +void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map); int drm_gem_vram_fill_create_dumb(struct drm_file *file, struct drm_device *dev, -- 2.28.0 From tzimmermann at suse.de Tue Oct 20 12:20:45 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Tue, 20 Oct 2020 14:20:45 +0200 Subject: [Spice-devel] [PATCH v5 09/10] dma-buf-map: Add memcpy and pointer-increment interfaces In-Reply-To: <20201020122046.31167-1-tzimmermann@suse.de> References: <20201020122046.31167-1-tzimmermann@suse.de> Message-ID: <20201020122046.31167-10-tzimmermann@suse.de> To do framebuffer updates, one needs memcpy from system memory and a pointer-increment function. Add both interfaces with documentation. v5: * include to build on sparc64 (Sam) Signed-off-by: Thomas Zimmermann Reviewed-by: Sam Ravnborg Tested-by: Sam Ravnborg --- include/linux/dma-buf-map.h | 73 ++++++++++++++++++++++++++++++++----- 1 file changed, 63 insertions(+), 10 deletions(-) diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h index 2e8bbecb5091..583a3a1f9447 100644 --- a/include/linux/dma-buf-map.h +++ b/include/linux/dma-buf-map.h @@ -7,6 +7,7 @@ #define __DMA_BUF_MAP_H__ #include +#include /** * DOC: overview @@ -32,6 +33,14 @@ * accessing the buffer. Use the returned instance and the helper functions * to access the buffer's memory in the correct way. * + * The type :c:type:`struct dma_buf_map ` and its helpers are + * actually independent from the dma-buf infrastructure. When sharing buffers + * among devices, drivers have to know the location of the memory to access + * the buffers in a safe way. :c:type:`struct dma_buf_map ` + * solves this problem for dma-buf and its users. If other drivers or + * sub-systems require similar functionality, the type could be generalized + * and moved to a more prominent header file. + * * Open-coding access to :c:type:`struct dma_buf_map ` is * considered bad style. Rather then accessing its fields directly, use one * of the provided helper functions, or implement your own. For example, @@ -51,6 +60,14 @@ * * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); * + * Instances of struct dma_buf_map do not have to be cleaned up, but + * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings + * always refer to system memory. + * + * .. code-block:: c + * + * dma_buf_map_clear(&map); + * * Test if a mapping is valid with either dma_buf_map_is_set() or * dma_buf_map_is_null(). * @@ -73,17 +90,19 @@ * if (dma_buf_map_is_equal(&sys_map, &io_map)) * // always false * - * Instances of struct dma_buf_map do not have to be cleaned up, but - * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings - * always refer to system memory. + * A set up instance of struct dma_buf_map can be used to access or manipulate + * the buffer memory. 
Depending on the location of the memory, the provided + * helpers will pick the correct operations. Data can be copied into the memory + * with dma_buf_map_memcpy_to(). The address can be manipulated with + * dma_buf_map_incr(). * - * The type :c:type:`struct dma_buf_map ` and its helpers are - * actually independent from the dma-buf infrastructure. When sharing buffers - * among devices, drivers have to know the location of the memory to access - * the buffers in a safe way. :c:type:`struct dma_buf_map ` - * solves this problem for dma-buf and its users. If other drivers or - * sub-systems require similar functionality, the type could be generalized - * and moved to a more prominent header file. + * .. code-block:: c + * + * const void *src = ...; // source buffer + * size_t len = ...; // length of src + * + * dma_buf_map_memcpy_to(&map, src, len); + * dma_buf_map_incr(&map, len); // go to first byte after the memcpy */ /** @@ -210,4 +229,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map) } } +/** + * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping + * @dst: The dma-buf mapping structure + * @src: The source buffer + * @len: The number of byte in src + * + * Copies data into a dma-buf mapping. The source buffer is in system + * memory. Depending on the buffer's location, the helper picks the correct + * method of accessing the memory. + */ +static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len) +{ + if (dst->is_iomem) + memcpy_toio(dst->vaddr_iomem, src, len); + else + memcpy(dst->vaddr, src, len); +} + +/** + * dma_buf_map_incr - Increments the address stored in a dma-buf mapping + * @map: The dma-buf mapping structure + * @incr: The number of bytes to increment + * + * Increments the address stored in a dma-buf mapping. Depending on the + * buffer's location, the correct value will be updated. + */ +static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr) +{ + if (map->is_iomem) + map->vaddr_iomem += incr; + else + map->vaddr += incr; +} + #endif /* __DMA_BUF_MAP_H__ */ -- 2.28.0 From tzimmermann at suse.de Tue Oct 20 12:20:44 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Tue, 20 Oct 2020 14:20:44 +0200 Subject: [Spice-devel] [PATCH v5 08/10] drm/gem: Store client buffer mappings as struct dma_buf_map In-Reply-To: <20201020122046.31167-1-tzimmermann@suse.de> References: <20201020122046.31167-1-tzimmermann@suse.de> Message-ID: <20201020122046.31167-9-tzimmermann@suse.de> Kernel DRM clients now store their framebuffer address in an instance of struct dma_buf_map. Depending on the buffer's location, the address refers to system or I/O memory. Callers of drm_client_buffer_vmap() receive a copy of the value in the call's supplied arguments. It can be accessed and modified with dma_buf_map interfaces. 
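For illustration only, a minimal sketch of the resulting calling convention; the function name is made up, the client buffer is assumed to exist already, and the dma_buf_map helpers from the previous patch are used for the actual access:

#include <linux/dma-buf-map.h>

#include <drm/drm_client.h>

/*
 * Purely illustrative; the function and its caller are hypothetical.
 * Copies pixel data into a client buffer regardless of whether the
 * buffer lives in system or I/O memory.
 */
static int example_fill_client_buffer(struct drm_client_buffer *buffer,
                                      const void *data, size_t len)
{
        struct dma_buf_map map; /* receives a copy of the internal mapping */
        int ret;

        ret = drm_client_buffer_vmap(buffer, &map);
        if (ret)
                return ret;

        /* picks memcpy() or memcpy_toio() depending on map.is_iomem */
        dma_buf_map_memcpy_to(&map, data, len);

        drm_client_buffer_vunmap(buffer);

        return 0;
}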
Signed-off-by: Thomas Zimmermann Reviewed-by: Daniel Vetter Tested-by: Sam Ravnborg --- drivers/gpu/drm/drm_client.c | 34 +++++++++++++++++++-------------- drivers/gpu/drm/drm_fb_helper.c | 23 +++++++++++++--------- include/drm/drm_client.h | 7 ++++--- 3 files changed, 38 insertions(+), 26 deletions(-) diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c index ac0082bed966..fe573acf1067 100644 --- a/drivers/gpu/drm/drm_client.c +++ b/drivers/gpu/drm/drm_client.c @@ -235,7 +235,7 @@ static void drm_client_buffer_delete(struct drm_client_buffer *buffer) { struct drm_device *dev = buffer->client->dev; - drm_gem_vunmap(buffer->gem, buffer->vaddr); + drm_gem_vunmap(buffer->gem, &buffer->map); if (buffer->gem) drm_gem_object_put(buffer->gem); @@ -291,25 +291,31 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u /** * drm_client_buffer_vmap - Map DRM client buffer into address space * @buffer: DRM client buffer + * @map_copy: Returns the mapped memory's address * * This function maps a client buffer into kernel address space. If the - * buffer is already mapped, it returns the mapping's address. + * buffer is already mapped, it returns the existing mapping's address. * * Client buffer mappings are not ref'counted. Each call to * drm_client_buffer_vmap() should be followed by a call to * drm_client_buffer_vunmap(); or the client buffer should be mapped * throughout its lifetime. * + * The returned address is a copy of the internal value. In contrast to + * other vmap interfaces, you don't need it for the client's vunmap + * function. So you can modify it at will during blit and draw operations. + * * Returns: - * The mapped memory's address + * 0 on success, or a negative errno code otherwise. */ -void *drm_client_buffer_vmap(struct drm_client_buffer *buffer) +int +drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map_copy) { - struct dma_buf_map map; + struct dma_buf_map *map = &buffer->map; int ret; - if (buffer->vaddr) - return buffer->vaddr; + if (dma_buf_map_is_set(map)) + goto out; /* * FIXME: The dependency on GEM here isn't required, we could @@ -319,13 +325,14 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer) * fd_install step out of the driver backend hooks, to make that * final step optional for internal users. 
*/ - ret = drm_gem_vmap(buffer->gem, &map); + ret = drm_gem_vmap(buffer->gem, map); if (ret) - return ERR_PTR(ret); + return ret; - buffer->vaddr = map.vaddr; +out: + *map_copy = *map; - return map.vaddr; + return 0; } EXPORT_SYMBOL(drm_client_buffer_vmap); @@ -339,10 +346,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap); */ void drm_client_buffer_vunmap(struct drm_client_buffer *buffer) { - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr); + struct dma_buf_map *map = &buffer->map; - drm_gem_vunmap(buffer->gem, &map); - buffer->vaddr = NULL; + drm_gem_vunmap(buffer->gem, map); } EXPORT_SYMBOL(drm_client_buffer_vunmap); diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c index c2f72bb6afb1..6212cd7cde1d 100644 --- a/drivers/gpu/drm/drm_fb_helper.c +++ b/drivers/gpu/drm/drm_fb_helper.c @@ -378,7 +378,7 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper, unsigned int cpp = fb->format->cpp[0]; size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp; void *src = fb_helper->fbdev->screen_buffer + offset; - void *dst = fb_helper->buffer->vaddr + offset; + void *dst = fb_helper->buffer->map.vaddr + offset; size_t len = (clip->x2 - clip->x1) * cpp; unsigned int y; @@ -400,7 +400,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work) struct drm_clip_rect *clip = &helper->dirty_clip; struct drm_clip_rect clip_copy; unsigned long flags; - void *vaddr; + struct dma_buf_map map; + int ret; spin_lock_irqsave(&helper->dirty_lock, flags); clip_copy = *clip; @@ -413,8 +414,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work) /* Generic fbdev uses a shadow buffer */ if (helper->buffer) { - vaddr = drm_client_buffer_vmap(helper->buffer); - if (IS_ERR(vaddr)) + ret = drm_client_buffer_vmap(helper->buffer, &map); + if (ret) return; drm_fb_helper_dirty_blit_real(helper, &clip_copy); } @@ -2060,7 +2061,8 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper, struct drm_framebuffer *fb; struct fb_info *fbi; u32 format; - void *vaddr; + struct dma_buf_map map; + int ret; drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n", sizes->surface_width, sizes->surface_height, @@ -2096,11 +2098,14 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper, fb_deferred_io_init(fbi); } else { /* buffer is mapped for HW framebuffer */ - vaddr = drm_client_buffer_vmap(fb_helper->buffer); - if (IS_ERR(vaddr)) - return PTR_ERR(vaddr); + ret = drm_client_buffer_vmap(fb_helper->buffer, &map); + if (ret) + return ret; + if (map.is_iomem) + fbi->screen_base = map.vaddr_iomem; + else + fbi->screen_buffer = map.vaddr; - fbi->screen_buffer = vaddr; /* Shamelessly leak the physical address to user-space */ #if IS_ENABLED(CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM) if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0) diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h index 7aaea665bfc2..f07f2fb02e75 100644 --- a/include/drm/drm_client.h +++ b/include/drm/drm_client.h @@ -3,6 +3,7 @@ #ifndef _DRM_CLIENT_H_ #define _DRM_CLIENT_H_ +#include #include #include #include @@ -141,9 +142,9 @@ struct drm_client_buffer { struct drm_gem_object *gem; /** - * @vaddr: Virtual address for the buffer + * @map: Virtual address for the buffer */ - void *vaddr; + struct dma_buf_map map; /** * @fb: DRM framebuffer @@ -155,7 +156,7 @@ struct drm_client_buffer * drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format); void drm_client_framebuffer_delete(struct drm_client_buffer 
*buffer); int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect); -void *drm_client_buffer_vmap(struct drm_client_buffer *buffer); +int drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map); void drm_client_buffer_vunmap(struct drm_client_buffer *buffer); int drm_client_modeset_create(struct drm_client_dev *client); -- 2.28.0 From tzimmermann at suse.de Tue Oct 20 12:20:37 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Tue, 20 Oct 2020 14:20:37 +0200 Subject: [Spice-devel] [PATCH v5 01/10] drm/vram-helper: Remove invariant parameters from internal kmap function In-Reply-To: <20201020122046.31167-1-tzimmermann@suse.de> References: <20201020122046.31167-1-tzimmermann@suse.de> Message-ID: <20201020122046.31167-2-tzimmermann@suse.de> The parameters map and is_iomem are always of the same value. Removed them to prepares the function for conversion to struct dma_buf_map. v4: * don't check for !kmap->virtual; will always be false Signed-off-by: Thomas Zimmermann Reviewed-by: Daniel Vetter Reviewed-by: Christian K?nig Tested-by: Sam Ravnborg --- drivers/gpu/drm/drm_gem_vram_helper.c | 18 ++++-------------- 1 file changed, 4 insertions(+), 14 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c index 7aeb5daf2805..bfc059945e31 100644 --- a/drivers/gpu/drm/drm_gem_vram_helper.c +++ b/drivers/gpu/drm/drm_gem_vram_helper.c @@ -379,32 +379,22 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo) } EXPORT_SYMBOL(drm_gem_vram_unpin); -static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo, - bool map, bool *is_iomem) +static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo) { int ret; struct ttm_bo_kmap_obj *kmap = &gbo->kmap; + bool is_iomem; if (gbo->kmap_use_count > 0) goto out; - if (kmap->virtual || !map) - goto out; - ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap); if (ret) return ERR_PTR(ret); out: - if (!kmap->virtual) { - if (is_iomem) - *is_iomem = false; - return NULL; /* not mapped; don't increment ref */ - } ++gbo->kmap_use_count; - if (is_iomem) - return ttm_kmap_obj_virtual(kmap, is_iomem); - return kmap->virtual; + return ttm_kmap_obj_virtual(kmap, &is_iomem); } static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo) @@ -449,7 +439,7 @@ void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo) ret = drm_gem_vram_pin_locked(gbo, 0); if (ret) goto err_ttm_bo_unreserve; - base = drm_gem_vram_kmap_locked(gbo, true, NULL); + base = drm_gem_vram_kmap_locked(gbo); if (IS_ERR(base)) { ret = PTR_ERR(base); goto err_drm_gem_vram_unpin_locked; -- 2.28.0 From tzimmermann at suse.de Tue Oct 20 12:20:43 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Tue, 20 Oct 2020 14:20:43 +0200 Subject: [Spice-devel] [PATCH v5 07/10] drm/gem: Update internal GEM vmap/vunmap interfaces to use struct dma_buf_map In-Reply-To: <20201020122046.31167-1-tzimmermann@suse.de> References: <20201020122046.31167-1-tzimmermann@suse.de> Message-ID: <20201020122046.31167-8-tzimmermann@suse.de> GEM's vmap and vunmap interfaces now wrap memory pointers in struct dma_buf_map. 
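A rough sketch of what a DRM-core caller looks like after the conversion (illustrative only; drm_gem_vmap()/drm_gem_vunmap() remain internal to the DRM core, and the example function is hypothetical):

#include <linux/dma-buf-map.h>
#include <linux/io.h>
#include <linux/string.h>

#include <drm/drm_gem.h>

/* Hypothetical caller, error handling trimmed. */
static int example_copy_to_gem_object(struct drm_gem_object *obj,
                                      const void *src, size_t len)
{
        struct dma_buf_map map;
        int ret;

        ret = drm_gem_vmap(obj, &map); /* sets map.vaddr or map.vaddr_iomem */
        if (ret)
                return ret;

        if (map.is_iomem)
                memcpy_toio(map.vaddr_iomem, src, len);
        else
                memcpy(map.vaddr, src, len);

        drm_gem_vunmap(obj, &map); /* also clears the mapping */

        return 0;
}

Compared with the old void * return value, the struct makes the memory's location explicit at every call site.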
Signed-off-by: Thomas Zimmermann Reviewed-by: Daniel Vetter Tested-by: Sam Ravnborg --- drivers/gpu/drm/drm_client.c | 18 +++++++++++------- drivers/gpu/drm/drm_gem.c | 26 +++++++++++++------------- drivers/gpu/drm/drm_internal.h | 5 +++-- drivers/gpu/drm/drm_prime.c | 14 ++++---------- 4 files changed, 31 insertions(+), 32 deletions(-) diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c index 495f47d23d87..ac0082bed966 100644 --- a/drivers/gpu/drm/drm_client.c +++ b/drivers/gpu/drm/drm_client.c @@ -3,6 +3,7 @@ * Copyright 2018 Noralf Tr?nnes */ +#include #include #include #include @@ -304,7 +305,8 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u */ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer) { - void *vaddr; + struct dma_buf_map map; + int ret; if (buffer->vaddr) return buffer->vaddr; @@ -317,13 +319,13 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer) * fd_install step out of the driver backend hooks, to make that * final step optional for internal users. */ - vaddr = drm_gem_vmap(buffer->gem); - if (IS_ERR(vaddr)) - return vaddr; + ret = drm_gem_vmap(buffer->gem, &map); + if (ret) + return ERR_PTR(ret); - buffer->vaddr = vaddr; + buffer->vaddr = map.vaddr; - return vaddr; + return map.vaddr; } EXPORT_SYMBOL(drm_client_buffer_vmap); @@ -337,7 +339,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap); */ void drm_client_buffer_vunmap(struct drm_client_buffer *buffer) { - drm_gem_vunmap(buffer->gem, buffer->vaddr); + struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr); + + drm_gem_vunmap(buffer->gem, &map); buffer->vaddr = NULL; } EXPORT_SYMBOL(drm_client_buffer_vunmap); diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c index a89ad4570e3c..4d5fff4bd821 100644 --- a/drivers/gpu/drm/drm_gem.c +++ b/drivers/gpu/drm/drm_gem.c @@ -1206,32 +1206,32 @@ void drm_gem_unpin(struct drm_gem_object *obj) obj->funcs->unpin(obj); } -void *drm_gem_vmap(struct drm_gem_object *obj) +int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) { - struct dma_buf_map map; int ret; if (!obj->funcs->vmap) - return ERR_PTR(-EOPNOTSUPP); + return -EOPNOTSUPP; - ret = obj->funcs->vmap(obj, &map); + ret = obj->funcs->vmap(obj, map); if (ret) - return ERR_PTR(ret); - else if (dma_buf_map_is_null(&map)) - return ERR_PTR(-ENOMEM); + return ret; + else if (dma_buf_map_is_null(map)) + return -ENOMEM; - return map.vaddr; + return 0; } -void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr) +void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) { - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr); - - if (!vaddr) + if (dma_buf_map_is_null(map)) return; if (obj->funcs->vunmap) - obj->funcs->vunmap(obj, &map); + obj->funcs->vunmap(obj, map); + + /* Always set the mapping to NULL. Callers may rely on this. 
*/ + dma_buf_map_clear(map); } /** diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h index b65865c630b0..58832d75a9bd 100644 --- a/drivers/gpu/drm/drm_internal.h +++ b/drivers/gpu/drm/drm_internal.h @@ -33,6 +33,7 @@ struct dentry; struct dma_buf; +struct dma_buf_map; struct drm_connector; struct drm_crtc; struct drm_framebuffer; @@ -187,8 +188,8 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent, int drm_gem_pin(struct drm_gem_object *obj); void drm_gem_unpin(struct drm_gem_object *obj); -void *drm_gem_vmap(struct drm_gem_object *obj); -void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr); +int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); +void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map); /* drm_debugfs.c drm_debugfs_crc.c */ #if defined(CONFIG_DEBUG_FS) diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c index 89e2a2496734..cb8fbeeb731b 100644 --- a/drivers/gpu/drm/drm_prime.c +++ b/drivers/gpu/drm/drm_prime.c @@ -667,21 +667,15 @@ EXPORT_SYMBOL(drm_gem_unmap_dma_buf); * * Sets up a kernel virtual mapping. This can be used as the &dma_buf_ops.vmap * callback. Calls into &drm_gem_object_funcs.vmap for device specific handling. + * The kernel virtual address is returned in map. * - * Returns the kernel virtual address or NULL on failure. + * Returns 0 on success or a negative errno code otherwise. */ int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map) { struct drm_gem_object *obj = dma_buf->priv; - void *vaddr; - vaddr = drm_gem_vmap(obj); - if (IS_ERR(vaddr)) - return PTR_ERR(vaddr); - - dma_buf_map_set_vaddr(map, vaddr); - - return 0; + return drm_gem_vmap(obj, map); } EXPORT_SYMBOL(drm_gem_dmabuf_vmap); @@ -697,7 +691,7 @@ void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct dma_buf_map *map) { struct drm_gem_object *obj = dma_buf->priv; - drm_gem_vunmap(obj, map->vaddr); + drm_gem_vunmap(obj, map); } EXPORT_SYMBOL(drm_gem_dmabuf_vunmap); -- 2.28.0 From tzimmermann at suse.de Tue Oct 20 12:20:46 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Tue, 20 Oct 2020 14:20:46 +0200 Subject: [Spice-devel] [PATCH v5 10/10] drm/fb_helper: Support framebuffers in I/O memory In-Reply-To: <20201020122046.31167-1-tzimmermann@suse.de> References: <20201020122046.31167-1-tzimmermann@suse.de> Message-ID: <20201020122046.31167-11-tzimmermann@suse.de> At least sparc64 requires I/O-specific access to framebuffers. This patch updates the fbdev console accordingly. For drivers with direct access to the framebuffer memory, the callback functions in struct fb_ops test for the type of memory and call the rsp fb_sys_ of fb_cfb_ functions. Read and write operations are implemented internally by DRM's fbdev helper. For drivers that employ a shadow buffer, fbdev's blit function retrieves the framebuffer address as struct dma_buf_map, and uses dma_buf_map interfaces to access the buffer. The bochs driver on sparc64 uses a workaround to flag the framebuffer as I/O memory and avoid a HW exception. With the introduction of struct dma_buf_map, this is not required any longer. The patch removes the rsp code from both, bochs and fbdev. 
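Condensed for illustration, the per-callback dispatch added by the diff below boils down to the following sketch, which conceptually lives in drm_fb_helper.c (the drm_fbdev_use_iomem() helper from the patch is inlined here and the function name is made up):

#include <linux/fb.h>

#include <drm/drm_client.h>
#include <drm/drm_fb_helper.h>

/* Condensed sketch of the dispatch added below; not the actual patch code. */
static void example_fbdev_fb_fillrect(struct fb_info *info,
                                      const struct fb_fillrect *rect)
{
        struct drm_fb_helper *fb_helper = info->par;
        struct drm_client_buffer *buffer = fb_helper->buffer;

        if (!drm_fbdev_use_shadow_fb(fb_helper) && buffer->map.is_iomem)
                drm_fb_helper_cfb_fillrect(info, rect); /* I/O memory */
        else
                drm_fb_helper_sys_fillrect(info, rect); /* system memory */
}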
v5: * implement fb_read/fb_write internally (Daniel, Sam) v4: * move dma_buf_map changes into separate patch (Daniel) * TODO list: comment on fbdev updates (Daniel) Signed-off-by: Thomas Zimmermann Tested-by: Sam Ravnborg --- Documentation/gpu/todo.rst | 19 ++- drivers/gpu/drm/bochs/bochs_kms.c | 1 - drivers/gpu/drm/drm_fb_helper.c | 227 ++++++++++++++++++++++++++++-- include/drm/drm_mode_config.h | 12 -- 4 files changed, 230 insertions(+), 29 deletions(-) diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst index 7e6fc3c04add..638b7f704339 100644 --- a/Documentation/gpu/todo.rst +++ b/Documentation/gpu/todo.rst @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup() ------------------------------------------------ Most drivers can use drm_fbdev_generic_setup(). Driver have to implement -atomic modesetting and GEM vmap support. Current generic fbdev emulation -expects the framebuffer in system memory (or system-like memory). +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation +expected the framebuffer in system memory or system-like memory. By employing +struct dma_buf_map, drivers with frambuffers in I/O memory can be supported +as well. Contact: Maintainer of the driver you plan to convert Level: Intermediate +Reimplement functions in drm_fbdev_fb_ops without fbdev +------------------------------------------------------- + +A number of callback functions in drm_fbdev_fb_ops could benefit from +being rewritten without dependencies on the fbdev module. Some of the +helpers could further benefit from using struct dma_buf_map instead of +raw pointers. + +Contact: Thomas Zimmermann , Daniel Vetter + +Level: Advanced + + drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup ----------------------------------------------------------------- diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c index 13d0d04c4457..853081d186d5 100644 --- a/drivers/gpu/drm/bochs/bochs_kms.c +++ b/drivers/gpu/drm/bochs/bochs_kms.c @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs) bochs->dev->mode_config.preferred_depth = 24; bochs->dev->mode_config.prefer_shadow = 0; bochs->dev->mode_config.prefer_shadow_fbdev = 1; - bochs->dev->mode_config.fbdev_use_iomem = true; bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true; bochs->dev->mode_config.funcs = &bochs_mode_funcs; diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c index 6212cd7cde1d..1d3180841778 100644 --- a/drivers/gpu/drm/drm_fb_helper.c +++ b/drivers/gpu/drm/drm_fb_helper.c @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work) } static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper, - struct drm_clip_rect *clip) + struct drm_clip_rect *clip, + struct dma_buf_map *dst) { struct drm_framebuffer *fb = fb_helper->fb; unsigned int cpp = fb->format->cpp[0]; size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp; void *src = fb_helper->fbdev->screen_buffer + offset; - void *dst = fb_helper->buffer->map.vaddr + offset; size_t len = (clip->x2 - clip->x1) * cpp; unsigned int y; - for (y = clip->y1; y < clip->y2; y++) { - if (!fb_helper->dev->mode_config.fbdev_use_iomem) - memcpy(dst, src, len); - else - memcpy_toio((void __iomem *)dst, src, len); + dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */ + for (y = clip->y1; y < clip->y2; y++) { + dma_buf_map_memcpy_to(dst, src, len); + dma_buf_map_incr(dst, fb->pitches[0]); src += 
fb->pitches[0]; - dst += fb->pitches[0]; } } @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work) ret = drm_client_buffer_vmap(helper->buffer, &map); if (ret) return; - drm_fb_helper_dirty_blit_real(helper, &clip_copy); + drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map); } + if (helper->fb->funcs->dirty) helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, &clip_copy, 1); @@ -2027,6 +2026,206 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma) return -ENODEV; } +static bool drm_fbdev_use_iomem(struct fb_info *info) +{ + struct drm_fb_helper *fb_helper = info->par; + struct drm_client_buffer *buffer = fb_helper->buffer; + + return !drm_fbdev_use_shadow_fb(fb_helper) && buffer->map.is_iomem; +} + +static ssize_t fb_read_screen_base(struct fb_info *info, char __user *buf, size_t count, + loff_t pos) +{ + const char __iomem *src = info->screen_base + pos; + size_t alloc_size = min(count, PAGE_SIZE); + ssize_t ret = 0; + char *tmp; + + tmp = kmalloc(alloc_size, GFP_KERNEL); + if (!tmp) + return -ENOMEM; + + while (count) { + size_t c = min(count, alloc_size); + + memcpy_fromio(tmp, src, c); + if (copy_to_user(buf, tmp, c)) { + ret = -EFAULT; + break; + } + + src += c; + buf += c; + ret += c; + count -= c; + } + + kfree(tmp); + + return ret; +} + +static ssize_t fb_read_screen_buffer(struct fb_info *info, char __user *buf, size_t count, + loff_t pos) +{ + const char *src = info->screen_buffer + pos; + + if (copy_to_user(buf, src, count)) + return -EFAULT; + + return count; +} + +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf, + size_t count, loff_t *ppos) +{ + loff_t pos = *ppos; + size_t total_size; + ssize_t ret; + + if (info->state != FBINFO_STATE_RUNNING) + return -EPERM; + + if (info->screen_size) + total_size = info->screen_size; + else + total_size = info->fix.smem_len; + + if (pos >= total_size) + return 0; + if (count >= total_size) + count = total_size; + if (total_size - count < pos) + count = total_size - pos; + + if (drm_fbdev_use_iomem(info)) + ret = fb_read_screen_base(info, buf, count, pos); + else + ret = fb_read_screen_buffer(info, buf, count, pos); + + if (ret > 0) + *ppos = ret; + + return ret; +} + +static ssize_t fb_write_screen_base(struct fb_info *info, const char __user *buf, size_t count, + loff_t pos) +{ + char __iomem *dst = info->screen_base + pos; + size_t alloc_size = min(count, PAGE_SIZE); + ssize_t ret = 0; + u8 *tmp; + + tmp = kmalloc(alloc_size, GFP_KERNEL); + if (!tmp) + return -ENOMEM; + + while (count) { + size_t c = min(count, alloc_size); + + if (copy_from_user(tmp, buf, c)) { + ret = -EFAULT; + break; + } + memcpy_toio(dst, tmp, c); + + dst += c; + buf += c; + ret += c; + count -= c; + } + + kfree(tmp); + + return ret; +} + +static ssize_t fb_write_screen_buffer(struct fb_info *info, const char __user *buf, size_t count, + loff_t pos) +{ + char *dst = info->screen_buffer + pos; + + if (copy_from_user(dst, buf, count)) + return -EFAULT; + + return count; +} + +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf, + size_t count, loff_t *ppos) +{ + loff_t pos = *ppos; + size_t total_size; + ssize_t ret; + int err; + + if (info->state != FBINFO_STATE_RUNNING) + return -EPERM; + + if (info->screen_size) + total_size = info->screen_size; + else + total_size = info->fix.smem_len; + + if (pos > total_size) + return -EFBIG; + if (count > total_size) { + err = -EFBIG; + count = total_size; + } + if (total_size - count < pos) { + if (!err) + err = 
-ENOSPC; + count = total_size - pos; + } + + /* + * Copy to framebuffer even if we already logged an error. Emulates + * the behavior of the original fbdev implementation. + */ + if (drm_fbdev_use_iomem(info)) + ret = fb_write_screen_base(info, buf, count, pos); + else + ret = fb_write_screen_buffer(info, buf, count, pos); + + if (ret > 0) + *ppos = ret; + + if (err) + return err; + + return ret; +} + +static void drm_fbdev_fb_fillrect(struct fb_info *info, + const struct fb_fillrect *rect) +{ + if (drm_fbdev_use_iomem(info)) + drm_fb_helper_cfb_fillrect(info, rect); + else + drm_fb_helper_sys_fillrect(info, rect); +} + +static void drm_fbdev_fb_copyarea(struct fb_info *info, + const struct fb_copyarea *area) +{ + if (drm_fbdev_use_iomem(info)) + drm_fb_helper_cfb_copyarea(info, area); + else + drm_fb_helper_sys_copyarea(info, area); +} + +static void drm_fbdev_fb_imageblit(struct fb_info *info, + const struct fb_image *image) +{ + if (drm_fbdev_use_iomem(info)) + drm_fb_helper_cfb_imageblit(info, image); + else + drm_fb_helper_sys_imageblit(info, image); +} + static const struct fb_ops drm_fbdev_fb_ops = { .owner = THIS_MODULE, DRM_FB_HELPER_DEFAULT_OPS, @@ -2034,11 +2233,11 @@ static const struct fb_ops drm_fbdev_fb_ops = { .fb_release = drm_fbdev_fb_release, .fb_destroy = drm_fbdev_fb_destroy, .fb_mmap = drm_fbdev_fb_mmap, - .fb_read = drm_fb_helper_sys_read, - .fb_write = drm_fb_helper_sys_write, - .fb_fillrect = drm_fb_helper_sys_fillrect, - .fb_copyarea = drm_fb_helper_sys_copyarea, - .fb_imageblit = drm_fb_helper_sys_imageblit, + .fb_read = drm_fbdev_fb_read, + .fb_write = drm_fbdev_fb_write, + .fb_fillrect = drm_fbdev_fb_fillrect, + .fb_copyarea = drm_fbdev_fb_copyarea, + .fb_imageblit = drm_fbdev_fb_imageblit, }; static struct fb_deferred_io drm_fbdev_defio = { diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h index 5ffbb4ed5b35..ab424ddd7665 100644 --- a/include/drm/drm_mode_config.h +++ b/include/drm/drm_mode_config.h @@ -877,18 +877,6 @@ struct drm_mode_config { */ bool prefer_shadow_fbdev; - /** - * @fbdev_use_iomem: - * - * Set to true if framebuffer reside in iomem. - * When set to true memcpy_toio() is used when copying the framebuffer in - * drm_fb_helper.drm_fb_helper_dirty_blit_real(). - * - * FIXME: This should be replaced with a per-mapping is_iomem - * flag (like ttm does), and then used everywhere in fbdev code. - */ - bool fbdev_use_iomem; - /** * @quirk_addfb_prefer_xbgr_30bpp: * -- 2.28.0 From joe at perches.com Tue Oct 20 18:42:42 2020 From: joe at perches.com (Joe Perches) Date: Tue, 20 Oct 2020 11:42:42 -0700 Subject: [Spice-devel] [RFC] treewide: cleanup unreachable breaks In-Reply-To: References: <20201017160928.12698-1-trix@redhat.com> <20201018054332.GB593954@kroah.com> Message-ID: <3bc5c2e3b3edc22a4d167ec807ecdaaf8dcda76d.camel@perches.com> On Mon, 2020-10-19 at 12:42 -0700, Nick Desaulniers wrote: > On Sat, Oct 17, 2020 at 10:43 PM Greg KH wrote: > > On Sat, Oct 17, 2020 at 09:09:28AM -0700, trix at redhat.com wrote: > > > From: Tom Rix > > > > > > This is a upcoming change to clean up a new warning treewide. > > > I am wondering if the change could be one mega patch (see below) or > > > normal patch per file about 100 patches or somewhere half way by collecting > > > early acks. > > > > Please break it up into one-patch-per-subsystem, like normal, and get it > > merged that way. > > > > Sending us a patch, without even a diffstat to review, isn't going to > > get you very far... 
> > Tom, > If you're able to automate this cleanup, I suggest checking in a > script that can be run on a directory. Then for each subsystem you > can say in your commit "I ran scripts/fix_whatever.py on this subdir." > Then others can help you drive the tree wide cleanup. Then we can > enable -Wunreachable-code-break either by default, or W=2 right now > might be a good idea. > > Ah, George (gbiv@, cc'ed), did an analysis recently of > `-Wunreachable-code-loop-increment`, `-Wunreachable-code-break`, and > `-Wunreachable-code-return` for Android userspace. From the review: > ``` > Spoilers: of these, it seems useful to turn on > -Wunreachable-code-loop-increment and -Wunreachable-code-return by > default for Android > ... > While these conventions about always having break arguably became > obsolete when we enabled -Wfallthrough, my sample turned up zero > potential bugs caught by this warning, and we'd need to put a lot of > effort into getting a clean tree. So this warning doesn't seem to be > worth it. > ``` > Looks like there's an order of magnitude of `-Wunreachable-code-break` > than the other two. > > We probably should add all 3 to W=2 builds (wrapped in cc-option). > I've filed https://github.com/ClangBuiltLinux/linux/issues/1180 to > follow up on. I suggest using W=1 as people that are doing cleanups generally use that and not W=123 or any other style. Every other use of W= is still quite noisy and these code warnings are relatively trivially to fix up. From daniel at ffwll.ch Thu Oct 22 08:05:34 2020 From: daniel at ffwll.ch (Daniel Vetter) Date: Thu, 22 Oct 2020 10:05:34 +0200 Subject: [Spice-devel] [PATCH v5 10/10] drm/fb_helper: Support framebuffers in I/O memory In-Reply-To: <20201020122046.31167-11-tzimmermann@suse.de> References: <20201020122046.31167-1-tzimmermann@suse.de> <20201020122046.31167-11-tzimmermann@suse.de> Message-ID: <20201022080534.GT401619@phenom.ffwll.local> On Tue, Oct 20, 2020 at 02:20:46PM +0200, Thomas Zimmermann wrote: > At least sparc64 requires I/O-specific access to framebuffers. This > patch updates the fbdev console accordingly. > > For drivers with direct access to the framebuffer memory, the callback > functions in struct fb_ops test for the type of memory and call the rsp > fb_sys_ of fb_cfb_ functions. Read and write operations are implemented > internally by DRM's fbdev helper. > > For drivers that employ a shadow buffer, fbdev's blit function retrieves > the framebuffer address as struct dma_buf_map, and uses dma_buf_map > interfaces to access the buffer. > > The bochs driver on sparc64 uses a workaround to flag the framebuffer as > I/O memory and avoid a HW exception. With the introduction of struct > dma_buf_map, this is not required any longer. The patch removes the rsp > code from both, bochs and fbdev. 
> > v5: > * implement fb_read/fb_write internally (Daniel, Sam) > v4: > * move dma_buf_map changes into separate patch (Daniel) > * TODO list: comment on fbdev updates (Daniel) > > Signed-off-by: Thomas Zimmermann > Tested-by: Sam Ravnborg > --- > Documentation/gpu/todo.rst | 19 ++- > drivers/gpu/drm/bochs/bochs_kms.c | 1 - > drivers/gpu/drm/drm_fb_helper.c | 227 ++++++++++++++++++++++++++++-- > include/drm/drm_mode_config.h | 12 -- > 4 files changed, 230 insertions(+), 29 deletions(-) > > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst > index 7e6fc3c04add..638b7f704339 100644 > --- a/Documentation/gpu/todo.rst > +++ b/Documentation/gpu/todo.rst > @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup() > ------------------------------------------------ > > Most drivers can use drm_fbdev_generic_setup(). Driver have to implement > -atomic modesetting and GEM vmap support. Current generic fbdev emulation > -expects the framebuffer in system memory (or system-like memory). > +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation > +expected the framebuffer in system memory or system-like memory. By employing > +struct dma_buf_map, drivers with frambuffers in I/O memory can be supported > +as well. > > Contact: Maintainer of the driver you plan to convert > > Level: Intermediate > > +Reimplement functions in drm_fbdev_fb_ops without fbdev > +------------------------------------------------------- > + > +A number of callback functions in drm_fbdev_fb_ops could benefit from > +being rewritten without dependencies on the fbdev module. Some of the > +helpers could further benefit from using struct dma_buf_map instead of > +raw pointers. > + > +Contact: Thomas Zimmermann , Daniel Vetter > + > +Level: Advanced > + > + > drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup > ----------------------------------------------------------------- > > diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c > index 13d0d04c4457..853081d186d5 100644 > --- a/drivers/gpu/drm/bochs/bochs_kms.c > +++ b/drivers/gpu/drm/bochs/bochs_kms.c > @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs) > bochs->dev->mode_config.preferred_depth = 24; > bochs->dev->mode_config.prefer_shadow = 0; > bochs->dev->mode_config.prefer_shadow_fbdev = 1; > - bochs->dev->mode_config.fbdev_use_iomem = true; > bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true; > > bochs->dev->mode_config.funcs = &bochs_mode_funcs; > diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c > index 6212cd7cde1d..1d3180841778 100644 > --- a/drivers/gpu/drm/drm_fb_helper.c > +++ b/drivers/gpu/drm/drm_fb_helper.c > @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work) > } > > static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper, > - struct drm_clip_rect *clip) > + struct drm_clip_rect *clip, > + struct dma_buf_map *dst) > { > struct drm_framebuffer *fb = fb_helper->fb; > unsigned int cpp = fb->format->cpp[0]; > size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp; > void *src = fb_helper->fbdev->screen_buffer + offset; > - void *dst = fb_helper->buffer->map.vaddr + offset; > size_t len = (clip->x2 - clip->x1) * cpp; > unsigned int y; > > - for (y = clip->y1; y < clip->y2; y++) { > - if (!fb_helper->dev->mode_config.fbdev_use_iomem) > - memcpy(dst, src, len); > - else > - memcpy_toio((void __iomem *)dst, src, len); > + dma_buf_map_incr(dst, offset); /* 
go to first pixel within clip rect */ > > + for (y = clip->y1; y < clip->y2; y++) { > + dma_buf_map_memcpy_to(dst, src, len); > + dma_buf_map_incr(dst, fb->pitches[0]); > src += fb->pitches[0]; > - dst += fb->pitches[0]; > } > } > > @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work) > ret = drm_client_buffer_vmap(helper->buffer, &map); > if (ret) > return; > - drm_fb_helper_dirty_blit_real(helper, &clip_copy); > + drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map); > } > + > if (helper->fb->funcs->dirty) > helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, > &clip_copy, 1); > @@ -2027,6 +2026,206 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma) > return -ENODEV; > } > > +static bool drm_fbdev_use_iomem(struct fb_info *info) > +{ > + struct drm_fb_helper *fb_helper = info->par; > + struct drm_client_buffer *buffer = fb_helper->buffer; > + > + return !drm_fbdev_use_shadow_fb(fb_helper) && buffer->map.is_iomem; > +} > + > +static ssize_t fb_read_screen_base(struct fb_info *info, char __user *buf, size_t count, > + loff_t pos) > +{ > + const char __iomem *src = info->screen_base + pos; Maybe a bit much a bikeshed, but I'd write this in terms of drm objects, like the dirty_blit function, using the dma_buf_map (instead of the fb_info parameter). And then instead of screen_base and screen_buffer suffixes give them _mem and _iomem suffixes. Same for write below. Or I'm not quite understanding why we do it like this here - I don't think this code will be used outside of the generic fbdev code, so we can always assume that drm_fb_helper->buffer is set up. The other thing I think we need is some minimal testcases to make sure. The fbtest tool used way back seems to have disappeared, I couldn't find a copy of the source anywhere anymore. 
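To make the first suggestion a bit more concrete, here is a purely illustrative sketch of what an I/O-memory read helper written against the dma_buf_map could look like; the name is invented, it mirrors fb_read_screen_base() above, and it assumes it would sit in drm_fb_helper.c, which already pulls in the needed headers:

/* Illustrative only, not a proposed implementation. */
static ssize_t example_fb_read_iomem(const struct dma_buf_map *map, loff_t pos,
                                     char __user *buf, size_t count)
{
        const char __iomem *src = map->vaddr_iomem;
        size_t alloc_size = min(count, PAGE_SIZE);
        ssize_t ret = 0;
        char *tmp;

        tmp = kmalloc(alloc_size, GFP_KERNEL);
        if (!tmp)
                return -ENOMEM;

        src += pos;

        while (count) {
                size_t c = min(count, alloc_size);

                memcpy_fromio(tmp, src, c);
                if (copy_to_user(buf, tmp, c)) {
                        ret = -EFAULT;
                        break;
                }

                src += c;
                buf += c;
                ret += c;
                count -= c;
        }

        kfree(tmp);

        return ret;
}

A matching _mem variant would then just be a copy_to_user() from map->vaddr.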
With all that: Acked-by: Daniel Vetter Cheers, Daniel > + size_t alloc_size = min(count, PAGE_SIZE); > + ssize_t ret = 0; > + char *tmp; > + > + tmp = kmalloc(alloc_size, GFP_KERNEL); > + if (!tmp) > + return -ENOMEM; > + > + while (count) { > + size_t c = min(count, alloc_size); > + > + memcpy_fromio(tmp, src, c); > + if (copy_to_user(buf, tmp, c)) { > + ret = -EFAULT; > + break; > + } > + > + src += c; > + buf += c; > + ret += c; > + count -= c; > + } > + > + kfree(tmp); > + > + return ret; > +} > + > +static ssize_t fb_read_screen_buffer(struct fb_info *info, char __user *buf, size_t count, > + loff_t pos) > +{ > + const char *src = info->screen_buffer + pos; > + > + if (copy_to_user(buf, src, count)) > + return -EFAULT; > + > + return count; > +} > + > +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf, > + size_t count, loff_t *ppos) > +{ > + loff_t pos = *ppos; > + size_t total_size; > + ssize_t ret; > + > + if (info->state != FBINFO_STATE_RUNNING) > + return -EPERM; > + > + if (info->screen_size) > + total_size = info->screen_size; > + else > + total_size = info->fix.smem_len; > + > + if (pos >= total_size) > + return 0; > + if (count >= total_size) > + count = total_size; > + if (total_size - count < pos) > + count = total_size - pos; > + > + if (drm_fbdev_use_iomem(info)) > + ret = fb_read_screen_base(info, buf, count, pos); > + else > + ret = fb_read_screen_buffer(info, buf, count, pos); > + > + if (ret > 0) > + *ppos = ret; > + > + return ret; > +} > + > +static ssize_t fb_write_screen_base(struct fb_info *info, const char __user *buf, size_t count, > + loff_t pos) > +{ > + char __iomem *dst = info->screen_base + pos; > + size_t alloc_size = min(count, PAGE_SIZE); > + ssize_t ret = 0; > + u8 *tmp; > + > + tmp = kmalloc(alloc_size, GFP_KERNEL); > + if (!tmp) > + return -ENOMEM; > + > + while (count) { > + size_t c = min(count, alloc_size); > + > + if (copy_from_user(tmp, buf, c)) { > + ret = -EFAULT; > + break; > + } > + memcpy_toio(dst, tmp, c); > + > + dst += c; > + buf += c; > + ret += c; > + count -= c; > + } > + > + kfree(tmp); > + > + return ret; > +} > + > +static ssize_t fb_write_screen_buffer(struct fb_info *info, const char __user *buf, size_t count, > + loff_t pos) > +{ > + char *dst = info->screen_buffer + pos; > + > + if (copy_from_user(dst, buf, count)) > + return -EFAULT; > + > + return count; > +} > + > +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf, > + size_t count, loff_t *ppos) > +{ > + loff_t pos = *ppos; > + size_t total_size; > + ssize_t ret; > + int err; > + > + if (info->state != FBINFO_STATE_RUNNING) > + return -EPERM; > + > + if (info->screen_size) > + total_size = info->screen_size; > + else > + total_size = info->fix.smem_len; > + > + if (pos > total_size) > + return -EFBIG; > + if (count > total_size) { > + err = -EFBIG; > + count = total_size; > + } > + if (total_size - count < pos) { > + if (!err) > + err = -ENOSPC; > + count = total_size - pos; > + } > + > + /* > + * Copy to framebuffer even if we already logged an error. Emulates > + * the behavior of the original fbdev implementation. 
> + */ > + if (drm_fbdev_use_iomem(info)) > + ret = fb_write_screen_base(info, buf, count, pos); > + else > + ret = fb_write_screen_buffer(info, buf, count, pos); > + > + if (ret > 0) > + *ppos = ret; > + > + if (err) > + return err; > + > + return ret; > +} > + > +static void drm_fbdev_fb_fillrect(struct fb_info *info, > + const struct fb_fillrect *rect) > +{ > + if (drm_fbdev_use_iomem(info)) > + drm_fb_helper_cfb_fillrect(info, rect); > + else > + drm_fb_helper_sys_fillrect(info, rect); > +} > + > +static void drm_fbdev_fb_copyarea(struct fb_info *info, > + const struct fb_copyarea *area) > +{ > + if (drm_fbdev_use_iomem(info)) > + drm_fb_helper_cfb_copyarea(info, area); > + else > + drm_fb_helper_sys_copyarea(info, area); > +} > + > +static void drm_fbdev_fb_imageblit(struct fb_info *info, > + const struct fb_image *image) > +{ > + if (drm_fbdev_use_iomem(info)) > + drm_fb_helper_cfb_imageblit(info, image); > + else > + drm_fb_helper_sys_imageblit(info, image); > +} > + > static const struct fb_ops drm_fbdev_fb_ops = { > .owner = THIS_MODULE, > DRM_FB_HELPER_DEFAULT_OPS, > @@ -2034,11 +2233,11 @@ static const struct fb_ops drm_fbdev_fb_ops = { > .fb_release = drm_fbdev_fb_release, > .fb_destroy = drm_fbdev_fb_destroy, > .fb_mmap = drm_fbdev_fb_mmap, > - .fb_read = drm_fb_helper_sys_read, > - .fb_write = drm_fb_helper_sys_write, > - .fb_fillrect = drm_fb_helper_sys_fillrect, > - .fb_copyarea = drm_fb_helper_sys_copyarea, > - .fb_imageblit = drm_fb_helper_sys_imageblit, > + .fb_read = drm_fbdev_fb_read, > + .fb_write = drm_fbdev_fb_write, > + .fb_fillrect = drm_fbdev_fb_fillrect, > + .fb_copyarea = drm_fbdev_fb_copyarea, > + .fb_imageblit = drm_fbdev_fb_imageblit, > }; > > static struct fb_deferred_io drm_fbdev_defio = { > diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h > index 5ffbb4ed5b35..ab424ddd7665 100644 > --- a/include/drm/drm_mode_config.h > +++ b/include/drm/drm_mode_config.h > @@ -877,18 +877,6 @@ struct drm_mode_config { > */ > bool prefer_shadow_fbdev; > > - /** > - * @fbdev_use_iomem: > - * > - * Set to true if framebuffer reside in iomem. > - * When set to true memcpy_toio() is used when copying the framebuffer in > - * drm_fb_helper.drm_fb_helper_dirty_blit_real(). > - * > - * FIXME: This should be replaced with a per-mapping is_iomem > - * flag (like ttm does), and then used everywhere in fbdev code. > - */ > - bool fbdev_use_iomem; > - > /** > * @quirk_addfb_prefer_xbgr_30bpp: > * > -- > 2.28.0 > -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch From daniel at ffwll.ch Thu Oct 22 08:51:35 2020 From: daniel at ffwll.ch (Daniel Vetter) Date: Thu, 22 Oct 2020 10:51:35 +0200 Subject: [Spice-devel] [PATCH v5 10/10] drm/fb_helper: Support framebuffers in I/O memory In-Reply-To: <794e6ab4-041b-55f9-e95e-55ef0526edd5@suse.de> References: <20201020122046.31167-1-tzimmermann@suse.de> <20201020122046.31167-11-tzimmermann@suse.de> <20201022080534.GT401619@phenom.ffwll.local> <794e6ab4-041b-55f9-e95e-55ef0526edd5@suse.de> Message-ID: <20201022085135.GV401619@phenom.ffwll.local> On Thu, Oct 22, 2020 at 10:37:56AM +0200, Thomas Zimmermann wrote: > Hi > > On 22.10.20 10:05, Daniel Vetter wrote: > > On Tue, Oct 20, 2020 at 02:20:46PM +0200, Thomas Zimmermann wrote: > >> At least sparc64 requires I/O-specific access to framebuffers. This > >> patch updates the fbdev console accordingly. 
> >> > >> For drivers with direct access to the framebuffer memory, the callback > >> functions in struct fb_ops test for the type of memory and call the rsp > >> fb_sys_ of fb_cfb_ functions. Read and write operations are implemented > >> internally by DRM's fbdev helper. > >> > >> For drivers that employ a shadow buffer, fbdev's blit function retrieves > >> the framebuffer address as struct dma_buf_map, and uses dma_buf_map > >> interfaces to access the buffer. > >> > >> The bochs driver on sparc64 uses a workaround to flag the framebuffer as > >> I/O memory and avoid a HW exception. With the introduction of struct > >> dma_buf_map, this is not required any longer. The patch removes the rsp > >> code from both, bochs and fbdev. > >> > >> v5: > >> * implement fb_read/fb_write internally (Daniel, Sam) > >> v4: > >> * move dma_buf_map changes into separate patch (Daniel) > >> * TODO list: comment on fbdev updates (Daniel) > >> > >> Signed-off-by: Thomas Zimmermann > >> Tested-by: Sam Ravnborg > >> --- > >> Documentation/gpu/todo.rst | 19 ++- > >> drivers/gpu/drm/bochs/bochs_kms.c | 1 - > >> drivers/gpu/drm/drm_fb_helper.c | 227 ++++++++++++++++++++++++++++-- > >> include/drm/drm_mode_config.h | 12 -- > >> 4 files changed, 230 insertions(+), 29 deletions(-) > >> > >> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst > >> index 7e6fc3c04add..638b7f704339 100644 > >> --- a/Documentation/gpu/todo.rst > >> +++ b/Documentation/gpu/todo.rst > >> @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup() > >> ------------------------------------------------ > >> > >> Most drivers can use drm_fbdev_generic_setup(). Driver have to implement > >> -atomic modesetting and GEM vmap support. Current generic fbdev emulation > >> -expects the framebuffer in system memory (or system-like memory). > >> +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation > >> +expected the framebuffer in system memory or system-like memory. By employing > >> +struct dma_buf_map, drivers with frambuffers in I/O memory can be supported > >> +as well. > >> > >> Contact: Maintainer of the driver you plan to convert > >> > >> Level: Intermediate > >> > >> +Reimplement functions in drm_fbdev_fb_ops without fbdev > >> +------------------------------------------------------- > >> + > >> +A number of callback functions in drm_fbdev_fb_ops could benefit from > >> +being rewritten without dependencies on the fbdev module. Some of the > >> +helpers could further benefit from using struct dma_buf_map instead of > >> +raw pointers. 
> >> + > >> +Contact: Thomas Zimmermann , Daniel Vetter > >> + > >> +Level: Advanced > >> + > >> + > >> drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup > >> ----------------------------------------------------------------- > >> > >> diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c > >> index 13d0d04c4457..853081d186d5 100644 > >> --- a/drivers/gpu/drm/bochs/bochs_kms.c > >> +++ b/drivers/gpu/drm/bochs/bochs_kms.c > >> @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs) > >> bochs->dev->mode_config.preferred_depth = 24; > >> bochs->dev->mode_config.prefer_shadow = 0; > >> bochs->dev->mode_config.prefer_shadow_fbdev = 1; > >> - bochs->dev->mode_config.fbdev_use_iomem = true; > >> bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true; > >> > >> bochs->dev->mode_config.funcs = &bochs_mode_funcs; > >> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c > >> index 6212cd7cde1d..1d3180841778 100644 > >> --- a/drivers/gpu/drm/drm_fb_helper.c > >> +++ b/drivers/gpu/drm/drm_fb_helper.c > >> @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work) > >> } > >> > >> static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper, > >> - struct drm_clip_rect *clip) > >> + struct drm_clip_rect *clip, > >> + struct dma_buf_map *dst) > >> { > >> struct drm_framebuffer *fb = fb_helper->fb; > >> unsigned int cpp = fb->format->cpp[0]; > >> size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp; > >> void *src = fb_helper->fbdev->screen_buffer + offset; > >> - void *dst = fb_helper->buffer->map.vaddr + offset; > >> size_t len = (clip->x2 - clip->x1) * cpp; > >> unsigned int y; > >> > >> - for (y = clip->y1; y < clip->y2; y++) { > >> - if (!fb_helper->dev->mode_config.fbdev_use_iomem) > >> - memcpy(dst, src, len); > >> - else > >> - memcpy_toio((void __iomem *)dst, src, len); > >> + dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */ > >> > >> + for (y = clip->y1; y < clip->y2; y++) { > >> + dma_buf_map_memcpy_to(dst, src, len); > >> + dma_buf_map_incr(dst, fb->pitches[0]); > >> src += fb->pitches[0]; > >> - dst += fb->pitches[0]; > >> } > >> } > >> > >> @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work) > >> ret = drm_client_buffer_vmap(helper->buffer, &map); > >> if (ret) > >> return; > >> - drm_fb_helper_dirty_blit_real(helper, &clip_copy); > >> + drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map); > >> } > >> + > >> if (helper->fb->funcs->dirty) > >> helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, > >> &clip_copy, 1); > >> @@ -2027,6 +2026,206 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma) > >> return -ENODEV; > >> } > >> > >> +static bool drm_fbdev_use_iomem(struct fb_info *info) > >> +{ > >> + struct drm_fb_helper *fb_helper = info->par; > >> + struct drm_client_buffer *buffer = fb_helper->buffer; > >> + > >> + return !drm_fbdev_use_shadow_fb(fb_helper) && buffer->map.is_iomem; > >> +} > >> + > >> +static ssize_t fb_read_screen_base(struct fb_info *info, char __user *buf, size_t count, > >> + loff_t pos) > >> +{ > >> + const char __iomem *src = info->screen_base + pos; > > > > Maybe a bit much a bikeshed, but I'd write this in terms of drm objects, > > like the dirty_blit function, using the dma_buf_map (instead of the > > fb_info parameter). And then instead of > > screen_base and screen_buffer suffixes give them _mem and _iomem suffixes. 
> > Screen_buffer can be a shadow buffer. Until the blit worker (see > drm_fb_helper_dirty_work() ) completes, it might be more up to date than > the real buffer that's stored in the client. > > The orignal fbdev code supported an fb_sync callback to synchronize with > outstanding screen updates (e.g., HW blit ops), but fb_sync is just > overhead here. Copying from screen_buffer or screen_base always returns > the most up-to-date image. > > > > > Same for write below. Or I'm not quite understanding why we do it like > > this here - I don't think this code will be used outside of the generic > > fbdev code, so we can always assume that drm_fb_helper->buffer is set up. > > It's similar as in the read case. If we write to the client's buffer, an > outstanding blit worker could write the now-outdated shadow buffer over > the user's newly written framebuffer data. > > Thinking about it, we might want to schedule the blit worker at the end > of each fb_write, so that the data makes it into the HW buffer in time. Hm ok, makes some sense. I think there's some potential for cleanup if we add a dma_buf_map drm_fb_helper->uapi_map which points at the right thing always. That could then also the drm_fbdev_use_iomem() helper and make this all look really neat. But maybe a follow up clean up patch, if you're bored. As-is: Reviewed-by: Daniel Vetter While looking at this I also noticed a potential small issue in an earlier patch. > > The other thing I think we need is some minimal testcases to make sure. > > The fbtest tool used way back seems to have disappeared, I couldn't find > > a copy of the source anywhere anymore. > > As discussed on IRC, I'll add some testcase to the igt test. I'll share > the link here when done. > > Best regards > Thomas > > > > > With all that: Acked-by: Daniel Vetter > > > > Cheers, Daniel > > > >> + size_t alloc_size = min(count, PAGE_SIZE); > >> + ssize_t ret = 0; > >> + char *tmp; > >> + > >> + tmp = kmalloc(alloc_size, GFP_KERNEL); > >> + if (!tmp) > >> + return -ENOMEM; > >> + > >> + while (count) { > >> + size_t c = min(count, alloc_size); > >> + > >> + memcpy_fromio(tmp, src, c); > >> + if (copy_to_user(buf, tmp, c)) { > >> + ret = -EFAULT; > >> + break; > >> + } > >> + > >> + src += c; > >> + buf += c; > >> + ret += c; > >> + count -= c; > >> + } > >> + > >> + kfree(tmp); > >> + > >> + return ret; > >> +} > >> + > >> +static ssize_t fb_read_screen_buffer(struct fb_info *info, char __user *buf, size_t count, > >> + loff_t pos) > >> +{ > >> + const char *src = info->screen_buffer + pos; > >> + > >> + if (copy_to_user(buf, src, count)) > >> + return -EFAULT; > >> + > >> + return count; > >> +} > >> + > >> +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf, > >> + size_t count, loff_t *ppos) > >> +{ > >> + loff_t pos = *ppos; > >> + size_t total_size; > >> + ssize_t ret; > >> + > >> + if (info->state != FBINFO_STATE_RUNNING) > >> + return -EPERM; > >> + > >> + if (info->screen_size) > >> + total_size = info->screen_size; > >> + else > >> + total_size = info->fix.smem_len; > >> + > >> + if (pos >= total_size) > >> + return 0; > >> + if (count >= total_size) > >> + count = total_size; > >> + if (total_size - count < pos) > >> + count = total_size - pos; > >> + > >> + if (drm_fbdev_use_iomem(info)) > >> + ret = fb_read_screen_base(info, buf, count, pos); > >> + else > >> + ret = fb_read_screen_buffer(info, buf, count, pos); > >> + > >> + if (ret > 0) > >> + *ppos = ret; > >> + > >> + return ret; > >> +} > >> + > >> +static ssize_t 
fb_write_screen_base(struct fb_info *info, const char __user *buf, size_t count, > >> + loff_t pos) > >> +{ > >> + char __iomem *dst = info->screen_base + pos; > >> + size_t alloc_size = min(count, PAGE_SIZE); > >> + ssize_t ret = 0; > >> + u8 *tmp; > >> + > >> + tmp = kmalloc(alloc_size, GFP_KERNEL); > >> + if (!tmp) > >> + return -ENOMEM; > >> + > >> + while (count) { > >> + size_t c = min(count, alloc_size); > >> + > >> + if (copy_from_user(tmp, buf, c)) { > >> + ret = -EFAULT; > >> + break; > >> + } > >> + memcpy_toio(dst, tmp, c); > >> + > >> + dst += c; > >> + buf += c; > >> + ret += c; > >> + count -= c; > >> + } > >> + > >> + kfree(tmp); > >> + > >> + return ret; > >> +} > >> + > >> +static ssize_t fb_write_screen_buffer(struct fb_info *info, const char __user *buf, size_t count, > >> + loff_t pos) > >> +{ > >> + char *dst = info->screen_buffer + pos; > >> + > >> + if (copy_from_user(dst, buf, count)) > >> + return -EFAULT; > >> + > >> + return count; > >> +} > >> + > >> +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf, > >> + size_t count, loff_t *ppos) > >> +{ > >> + loff_t pos = *ppos; > >> + size_t total_size; > >> + ssize_t ret; > >> + int err; > >> + > >> + if (info->state != FBINFO_STATE_RUNNING) > >> + return -EPERM; > >> + > >> + if (info->screen_size) > >> + total_size = info->screen_size; > >> + else > >> + total_size = info->fix.smem_len; > >> + > >> + if (pos > total_size) > >> + return -EFBIG; > >> + if (count > total_size) { > >> + err = -EFBIG; > >> + count = total_size; > >> + } > >> + if (total_size - count < pos) { > >> + if (!err) > >> + err = -ENOSPC; > >> + count = total_size - pos; > >> + } > >> + > >> + /* > >> + * Copy to framebuffer even if we already logged an error. Emulates > >> + * the behavior of the original fbdev implementation. 
> >> + */ > >> + if (drm_fbdev_use_iomem(info)) > >> + ret = fb_write_screen_base(info, buf, count, pos); > >> + else > >> + ret = fb_write_screen_buffer(info, buf, count, pos); > >> + > >> + if (ret > 0) > >> + *ppos = ret; > >> + > >> + if (err) > >> + return err; > >> + > >> + return ret; > >> +} > >> + > >> +static void drm_fbdev_fb_fillrect(struct fb_info *info, > >> + const struct fb_fillrect *rect) > >> +{ > >> + if (drm_fbdev_use_iomem(info)) > >> + drm_fb_helper_cfb_fillrect(info, rect); > >> + else > >> + drm_fb_helper_sys_fillrect(info, rect); > >> +} > >> + > >> +static void drm_fbdev_fb_copyarea(struct fb_info *info, > >> + const struct fb_copyarea *area) > >> +{ > >> + if (drm_fbdev_use_iomem(info)) > >> + drm_fb_helper_cfb_copyarea(info, area); > >> + else > >> + drm_fb_helper_sys_copyarea(info, area); > >> +} > >> + > >> +static void drm_fbdev_fb_imageblit(struct fb_info *info, > >> + const struct fb_image *image) > >> +{ > >> + if (drm_fbdev_use_iomem(info)) > >> + drm_fb_helper_cfb_imageblit(info, image); > >> + else > >> + drm_fb_helper_sys_imageblit(info, image); > >> +} > >> + > >> static const struct fb_ops drm_fbdev_fb_ops = { > >> .owner = THIS_MODULE, > >> DRM_FB_HELPER_DEFAULT_OPS, > >> @@ -2034,11 +2233,11 @@ static const struct fb_ops drm_fbdev_fb_ops = { > >> .fb_release = drm_fbdev_fb_release, > >> .fb_destroy = drm_fbdev_fb_destroy, > >> .fb_mmap = drm_fbdev_fb_mmap, > >> - .fb_read = drm_fb_helper_sys_read, > >> - .fb_write = drm_fb_helper_sys_write, > >> - .fb_fillrect = drm_fb_helper_sys_fillrect, > >> - .fb_copyarea = drm_fb_helper_sys_copyarea, > >> - .fb_imageblit = drm_fb_helper_sys_imageblit, > >> + .fb_read = drm_fbdev_fb_read, > >> + .fb_write = drm_fbdev_fb_write, > >> + .fb_fillrect = drm_fbdev_fb_fillrect, > >> + .fb_copyarea = drm_fbdev_fb_copyarea, > >> + .fb_imageblit = drm_fbdev_fb_imageblit, > >> }; > >> > >> static struct fb_deferred_io drm_fbdev_defio = { > >> diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h > >> index 5ffbb4ed5b35..ab424ddd7665 100644 > >> --- a/include/drm/drm_mode_config.h > >> +++ b/include/drm/drm_mode_config.h > >> @@ -877,18 +877,6 @@ struct drm_mode_config { > >> */ > >> bool prefer_shadow_fbdev; > >> > >> - /** > >> - * @fbdev_use_iomem: > >> - * > >> - * Set to true if framebuffer reside in iomem. > >> - * When set to true memcpy_toio() is used when copying the framebuffer in > >> - * drm_fb_helper.drm_fb_helper_dirty_blit_real(). > >> - * > >> - * FIXME: This should be replaced with a per-mapping is_iomem > >> - * flag (like ttm does), and then used everywhere in fbdev code. > >> - */ > >> - bool fbdev_use_iomem; > >> - > >> /** > >> * @quirk_addfb_prefer_xbgr_30bpp: > >> * > >> -- > >> 2.28.0 > >> > > > > -- > Thomas Zimmermann > Graphics Driver Developer > SUSE Software Solutions Germany GmbH > Maxfeldstr. 
5, 90409 N?rnberg, Germany > (HRB 36809, AG N?rnberg) > Gesch?ftsf?hrer: Felix Imend?rffer -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch From daniel at ffwll.ch Thu Oct 22 10:21:07 2020 From: daniel at ffwll.ch (Daniel Vetter) Date: Thu, 22 Oct 2020 12:21:07 +0200 Subject: [Spice-devel] [PATCH v5 08/10] drm/gem: Store client buffer mappings as struct dma_buf_map In-Reply-To: References: <20201020122046.31167-1-tzimmermann@suse.de> <20201020122046.31167-9-tzimmermann@suse.de> <20201022084919.GU401619@phenom.ffwll.local> Message-ID: On Thu, Oct 22, 2020 at 11:18 AM Thomas Zimmermann wrote: > > Hi > > On 22.10.20 10:49, Daniel Vetter wrote: > > On Tue, Oct 20, 2020 at 02:20:44PM +0200, Thomas Zimmermann wrote: > >> Kernel DRM clients now store their framebuffer address in an instance > >> of struct dma_buf_map. Depending on the buffer's location, the address > >> refers to system or I/O memory. > >> > >> Callers of drm_client_buffer_vmap() receive a copy of the value in > >> the call's supplied arguments. It can be accessed and modified with > >> dma_buf_map interfaces. > >> > >> Signed-off-by: Thomas Zimmermann > >> Reviewed-by: Daniel Vetter > >> Tested-by: Sam Ravnborg > >> --- > >> drivers/gpu/drm/drm_client.c | 34 +++++++++++++++++++-------------- > >> drivers/gpu/drm/drm_fb_helper.c | 23 +++++++++++++--------- > >> include/drm/drm_client.h | 7 ++++--- > >> 3 files changed, 38 insertions(+), 26 deletions(-) > >> > >> diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c > >> index ac0082bed966..fe573acf1067 100644 > >> --- a/drivers/gpu/drm/drm_client.c > >> +++ b/drivers/gpu/drm/drm_client.c > >> @@ -235,7 +235,7 @@ static void drm_client_buffer_delete(struct drm_client_buffer *buffer) > >> { > >> struct drm_device *dev = buffer->client->dev; > >> > >> - drm_gem_vunmap(buffer->gem, buffer->vaddr); > >> + drm_gem_vunmap(buffer->gem, &buffer->map); > >> > >> if (buffer->gem) > >> drm_gem_object_put(buffer->gem); > >> @@ -291,25 +291,31 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u > >> /** > >> * drm_client_buffer_vmap - Map DRM client buffer into address space > >> * @buffer: DRM client buffer > >> + * @map_copy: Returns the mapped memory's address > >> * > >> * This function maps a client buffer into kernel address space. If the > >> - * buffer is already mapped, it returns the mapping's address. > >> + * buffer is already mapped, it returns the existing mapping's address. > >> * > >> * Client buffer mappings are not ref'counted. Each call to > >> * drm_client_buffer_vmap() should be followed by a call to > >> * drm_client_buffer_vunmap(); or the client buffer should be mapped > >> * throughout its lifetime. > >> * > >> + * The returned address is a copy of the internal value. In contrast to > >> + * other vmap interfaces, you don't need it for the client's vunmap > >> + * function. So you can modify it at will during blit and draw operations. > >> + * > >> * Returns: > >> - * The mapped memory's address > >> + * 0 on success, or a negative errno code otherwise. 
> >> */ > >> -void *drm_client_buffer_vmap(struct drm_client_buffer *buffer) > >> +int > >> +drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map_copy) > >> { > >> - struct dma_buf_map map; > >> + struct dma_buf_map *map = &buffer->map; > >> int ret; > >> > >> - if (buffer->vaddr) > >> - return buffer->vaddr; > >> + if (dma_buf_map_is_set(map)) > >> + goto out; > >> > >> /* > >> * FIXME: The dependency on GEM here isn't required, we could > >> @@ -319,13 +325,14 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer) > >> * fd_install step out of the driver backend hooks, to make that > >> * final step optional for internal users. > >> */ > >> - ret = drm_gem_vmap(buffer->gem, &map); > >> + ret = drm_gem_vmap(buffer->gem, map); > >> if (ret) > >> - return ERR_PTR(ret); > >> + return ret; > >> > >> - buffer->vaddr = map.vaddr; > >> +out: > >> + *map_copy = *map; > >> > >> - return map.vaddr; > >> + return 0; > >> } > >> EXPORT_SYMBOL(drm_client_buffer_vmap); > >> > >> @@ -339,10 +346,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap); > >> */ > >> void drm_client_buffer_vunmap(struct drm_client_buffer *buffer) > >> { > >> - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr); > >> + struct dma_buf_map *map = &buffer->map; > >> > >> - drm_gem_vunmap(buffer->gem, &map); > >> - buffer->vaddr = NULL; > >> + drm_gem_vunmap(buffer->gem, map); > >> } > >> EXPORT_SYMBOL(drm_client_buffer_vunmap); > >> > >> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c > >> index c2f72bb6afb1..6212cd7cde1d 100644 > >> --- a/drivers/gpu/drm/drm_fb_helper.c > >> +++ b/drivers/gpu/drm/drm_fb_helper.c > >> @@ -378,7 +378,7 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper, > >> unsigned int cpp = fb->format->cpp[0]; > >> size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp; > >> void *src = fb_helper->fbdev->screen_buffer + offset; > >> - void *dst = fb_helper->buffer->vaddr + offset; > >> + void *dst = fb_helper->buffer->map.vaddr + offset; > >> size_t len = (clip->x2 - clip->x1) * cpp; > >> unsigned int y; > >> > >> @@ -400,7 +400,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work) > >> struct drm_clip_rect *clip = &helper->dirty_clip; > >> struct drm_clip_rect clip_copy; > >> unsigned long flags; > >> - void *vaddr; > >> + struct dma_buf_map map; > >> + int ret; > >> > >> spin_lock_irqsave(&helper->dirty_lock, flags); > >> clip_copy = *clip; > >> @@ -413,8 +414,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work) > >> > >> /* Generic fbdev uses a shadow buffer */ > >> if (helper->buffer) { > >> - vaddr = drm_client_buffer_vmap(helper->buffer); > >> - if (IS_ERR(vaddr)) > >> + ret = drm_client_buffer_vmap(helper->buffer, &map); > >> + if (ret) > >> return; > >> drm_fb_helper_dirty_blit_real(helper, &clip_copy); > >> } > >> @@ -2060,7 +2061,8 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper, > >> struct drm_framebuffer *fb; > >> struct fb_info *fbi; > >> u32 format; > >> - void *vaddr; > >> + struct dma_buf_map map; > >> + int ret; > >> > >> drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n", > >> sizes->surface_width, sizes->surface_height, > >> @@ -2096,11 +2098,14 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper, > >> fb_deferred_io_init(fbi); > >> } else { > >> /* buffer is mapped for HW framebuffer */ > >> - vaddr = drm_client_buffer_vmap(fb_helper->buffer); > >> - if (IS_ERR(vaddr)) > >> - return PTR_ERR(vaddr); > >> 
+ ret = drm_client_buffer_vmap(fb_helper->buffer, &map); > >> + if (ret) > >> + return ret; > >> + if (map.is_iomem) > >> + fbi->screen_base = map.vaddr_iomem; > >> + else > >> + fbi->screen_buffer = map.vaddr; > >> > >> - fbi->screen_buffer = vaddr; > >> /* Shamelessly leak the physical address to user-space */ > >> #if IS_ENABLED(CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM) > >> if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0) > > > > Just noticed a tiny thing here: I think this needs to be patched to only > > set smem_start when the map is _not_ iomem. Since virt_to_page isn't > > defined on iomem at all. > > > > I guess it'd be neat if we can set this for iomem too, but I have no idea > > how to convert an iomem pointer back to a bus_addr_t ... > > Not that I disagree, but that should be reviewed by the right people. > The commit at 4be9bd10e22d ("drm/fb_helper: Allow leaking fbdev > smem_start") appears to work around specific userspace drivers. It's for soc drivers, which all use either shmem or cma helpers, so all system memory. Which means your patch here doesn't break anything. But we need to make sure that if someone enables this it doesn't blow up at least when used on a device where we map iomem. -Daniel > Best regards > Thomas > > > > > Cheers, Daniel > > > >> diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h > >> index 7aaea665bfc2..f07f2fb02e75 100644 > >> --- a/include/drm/drm_client.h > >> +++ b/include/drm/drm_client.h > >> @@ -3,6 +3,7 @@ > >> #ifndef _DRM_CLIENT_H_ > >> #define _DRM_CLIENT_H_ > >> > >> +#include > >> #include > >> #include > >> #include > >> @@ -141,9 +142,9 @@ struct drm_client_buffer { > >> struct drm_gem_object *gem; > >> > >> /** > >> - * @vaddr: Virtual address for the buffer > >> + * @map: Virtual address for the buffer > >> */ > >> - void *vaddr; > >> + struct dma_buf_map map; > >> > >> /** > >> * @fb: DRM framebuffer > >> @@ -155,7 +156,7 @@ struct drm_client_buffer * > >> drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format); > >> void drm_client_framebuffer_delete(struct drm_client_buffer *buffer); > >> int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect); > >> -void *drm_client_buffer_vmap(struct drm_client_buffer *buffer); > >> +int drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map); > >> void drm_client_buffer_vunmap(struct drm_client_buffer *buffer); > >> > >> int drm_client_modeset_create(struct drm_client_dev *client); > >> -- > >> 2.28.0 > >> > > > > -- > Thomas Zimmermann > Graphics Driver Developer > SUSE Software Solutions Germany GmbH > Maxfeldstr. 5, 90409 N?rnberg, Germany > (HRB 36809, AG N?rnberg) > Gesch?ftsf?hrer: Felix Imend?rffer -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch From daniel at ffwll.ch Thu Oct 22 08:49:19 2020 From: daniel at ffwll.ch (Daniel Vetter) Date: Thu, 22 Oct 2020 10:49:19 +0200 Subject: [Spice-devel] [PATCH v5 08/10] drm/gem: Store client buffer mappings as struct dma_buf_map In-Reply-To: <20201020122046.31167-9-tzimmermann@suse.de> References: <20201020122046.31167-1-tzimmermann@suse.de> <20201020122046.31167-9-tzimmermann@suse.de> Message-ID: <20201022084919.GU401619@phenom.ffwll.local> On Tue, Oct 20, 2020 at 02:20:44PM +0200, Thomas Zimmermann wrote: > Kernel DRM clients now store their framebuffer address in an instance > of struct dma_buf_map. Depending on the buffer's location, the address > refers to system or I/O memory. 
> > Callers of drm_client_buffer_vmap() receive a copy of the value in > the call's supplied arguments. It can be accessed and modified with > dma_buf_map interfaces. > > Signed-off-by: Thomas Zimmermann > Reviewed-by: Daniel Vetter > Tested-by: Sam Ravnborg > --- > drivers/gpu/drm/drm_client.c | 34 +++++++++++++++++++-------------- > drivers/gpu/drm/drm_fb_helper.c | 23 +++++++++++++--------- > include/drm/drm_client.h | 7 ++++--- > 3 files changed, 38 insertions(+), 26 deletions(-) > > diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c > index ac0082bed966..fe573acf1067 100644 > --- a/drivers/gpu/drm/drm_client.c > +++ b/drivers/gpu/drm/drm_client.c > @@ -235,7 +235,7 @@ static void drm_client_buffer_delete(struct drm_client_buffer *buffer) > { > struct drm_device *dev = buffer->client->dev; > > - drm_gem_vunmap(buffer->gem, buffer->vaddr); > + drm_gem_vunmap(buffer->gem, &buffer->map); > > if (buffer->gem) > drm_gem_object_put(buffer->gem); > @@ -291,25 +291,31 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u > /** > * drm_client_buffer_vmap - Map DRM client buffer into address space > * @buffer: DRM client buffer > + * @map_copy: Returns the mapped memory's address > * > * This function maps a client buffer into kernel address space. If the > - * buffer is already mapped, it returns the mapping's address. > + * buffer is already mapped, it returns the existing mapping's address. > * > * Client buffer mappings are not ref'counted. Each call to > * drm_client_buffer_vmap() should be followed by a call to > * drm_client_buffer_vunmap(); or the client buffer should be mapped > * throughout its lifetime. > * > + * The returned address is a copy of the internal value. In contrast to > + * other vmap interfaces, you don't need it for the client's vunmap > + * function. So you can modify it at will during blit and draw operations. > + * > * Returns: > - * The mapped memory's address > + * 0 on success, or a negative errno code otherwise. > */ > -void *drm_client_buffer_vmap(struct drm_client_buffer *buffer) > +int > +drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map_copy) > { > - struct dma_buf_map map; > + struct dma_buf_map *map = &buffer->map; > int ret; > > - if (buffer->vaddr) > - return buffer->vaddr; > + if (dma_buf_map_is_set(map)) > + goto out; > > /* > * FIXME: The dependency on GEM here isn't required, we could > @@ -319,13 +325,14 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer) > * fd_install step out of the driver backend hooks, to make that > * final step optional for internal users. 
> */ > - ret = drm_gem_vmap(buffer->gem, &map); > + ret = drm_gem_vmap(buffer->gem, map); > if (ret) > - return ERR_PTR(ret); > + return ret; > > - buffer->vaddr = map.vaddr; > +out: > + *map_copy = *map; > > - return map.vaddr; > + return 0; > } > EXPORT_SYMBOL(drm_client_buffer_vmap); > > @@ -339,10 +346,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap); > */ > void drm_client_buffer_vunmap(struct drm_client_buffer *buffer) > { > - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr); > + struct dma_buf_map *map = &buffer->map; > > - drm_gem_vunmap(buffer->gem, &map); > - buffer->vaddr = NULL; > + drm_gem_vunmap(buffer->gem, map); > } > EXPORT_SYMBOL(drm_client_buffer_vunmap); > > diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c > index c2f72bb6afb1..6212cd7cde1d 100644 > --- a/drivers/gpu/drm/drm_fb_helper.c > +++ b/drivers/gpu/drm/drm_fb_helper.c > @@ -378,7 +378,7 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper, > unsigned int cpp = fb->format->cpp[0]; > size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp; > void *src = fb_helper->fbdev->screen_buffer + offset; > - void *dst = fb_helper->buffer->vaddr + offset; > + void *dst = fb_helper->buffer->map.vaddr + offset; > size_t len = (clip->x2 - clip->x1) * cpp; > unsigned int y; > > @@ -400,7 +400,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work) > struct drm_clip_rect *clip = &helper->dirty_clip; > struct drm_clip_rect clip_copy; > unsigned long flags; > - void *vaddr; > + struct dma_buf_map map; > + int ret; > > spin_lock_irqsave(&helper->dirty_lock, flags); > clip_copy = *clip; > @@ -413,8 +414,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work) > > /* Generic fbdev uses a shadow buffer */ > if (helper->buffer) { > - vaddr = drm_client_buffer_vmap(helper->buffer); > - if (IS_ERR(vaddr)) > + ret = drm_client_buffer_vmap(helper->buffer, &map); > + if (ret) > return; > drm_fb_helper_dirty_blit_real(helper, &clip_copy); > } > @@ -2060,7 +2061,8 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper, > struct drm_framebuffer *fb; > struct fb_info *fbi; > u32 format; > - void *vaddr; > + struct dma_buf_map map; > + int ret; > > drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n", > sizes->surface_width, sizes->surface_height, > @@ -2096,11 +2098,14 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper, > fb_deferred_io_init(fbi); > } else { > /* buffer is mapped for HW framebuffer */ > - vaddr = drm_client_buffer_vmap(fb_helper->buffer); > - if (IS_ERR(vaddr)) > - return PTR_ERR(vaddr); > + ret = drm_client_buffer_vmap(fb_helper->buffer, &map); > + if (ret) > + return ret; > + if (map.is_iomem) > + fbi->screen_base = map.vaddr_iomem; > + else > + fbi->screen_buffer = map.vaddr; > > - fbi->screen_buffer = vaddr; > /* Shamelessly leak the physical address to user-space */ > #if IS_ENABLED(CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM) > if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0) Just noticed a tiny thing here: I think this needs to be patched to only set smem_start when the map is _not_ iomem. Since virt_to_page isn't defined on iomem at all. I guess it'd be neat if we can set this for iomem too, but I have no idea how to convert an iomem pointer back to a bus_addr_t ... 
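For illustration, a rough sketch of the guard suggested above, written against the probe code quoted in this hunk. It assumes the smem_start assignment keeps using page_to_phys(virt_to_page(...)) as in the existing helper; that part is not visible in the quote, so treat it as an assumption rather than the actual fix:

    /*
     * Sketch only: pick the fbdev pointer from the dma_buf_map and leak
     * smem_start only for system memory, where virt_to_page() is
     * well-defined. I/O memory gets no smem_start at all.
     */
    if (map.is_iomem) {
            fbi->screen_base = map.vaddr_iomem;
    } else {
            fbi->screen_buffer = map.vaddr;
#if IS_ENABLED(CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM)
            if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0)
                    fbi->fix.smem_start =
                            page_to_phys(virt_to_page(fbi->screen_buffer));
#endif
    }

Whether an equivalent leak can be offered for I/O memory is left open here, for the reason given above: there is no obvious way back from an __iomem pointer to a bus address.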
Cheers, Daniel > diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h > index 7aaea665bfc2..f07f2fb02e75 100644 > --- a/include/drm/drm_client.h > +++ b/include/drm/drm_client.h > @@ -3,6 +3,7 @@ > #ifndef _DRM_CLIENT_H_ > #define _DRM_CLIENT_H_ > > +#include > #include > #include > #include > @@ -141,9 +142,9 @@ struct drm_client_buffer { > struct drm_gem_object *gem; > > /** > - * @vaddr: Virtual address for the buffer > + * @map: Virtual address for the buffer > */ > - void *vaddr; > + struct dma_buf_map map; > > /** > * @fb: DRM framebuffer > @@ -155,7 +156,7 @@ struct drm_client_buffer * > drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format); > void drm_client_framebuffer_delete(struct drm_client_buffer *buffer); > int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect); > -void *drm_client_buffer_vmap(struct drm_client_buffer *buffer); > +int drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map); > void drm_client_buffer_vunmap(struct drm_client_buffer *buffer); > > int drm_client_modeset_create(struct drm_client_dev *client); > -- > 2.28.0 > -- Daniel Vetter Software Engineer, Intel Corporation http://blog.ffwll.ch From tzimmermann at suse.de Thu Oct 22 08:37:56 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Thu, 22 Oct 2020 10:37:56 +0200 Subject: [Spice-devel] [PATCH v5 10/10] drm/fb_helper: Support framebuffers in I/O memory In-Reply-To: <20201022080534.GT401619@phenom.ffwll.local> References: <20201020122046.31167-1-tzimmermann@suse.de> <20201020122046.31167-11-tzimmermann@suse.de> <20201022080534.GT401619@phenom.ffwll.local> Message-ID: <794e6ab4-041b-55f9-e95e-55ef0526edd5@suse.de> Hi On 22.10.20 10:05, Daniel Vetter wrote: > On Tue, Oct 20, 2020 at 02:20:46PM +0200, Thomas Zimmermann wrote: >> At least sparc64 requires I/O-specific access to framebuffers. This >> patch updates the fbdev console accordingly. >> >> For drivers with direct access to the framebuffer memory, the callback >> functions in struct fb_ops test for the type of memory and call the rsp >> fb_sys_ of fb_cfb_ functions. Read and write operations are implemented >> internally by DRM's fbdev helper. >> >> For drivers that employ a shadow buffer, fbdev's blit function retrieves >> the framebuffer address as struct dma_buf_map, and uses dma_buf_map >> interfaces to access the buffer. >> >> The bochs driver on sparc64 uses a workaround to flag the framebuffer as >> I/O memory and avoid a HW exception. With the introduction of struct >> dma_buf_map, this is not required any longer. The patch removes the rsp >> code from both, bochs and fbdev. 
>> >> v5: >> * implement fb_read/fb_write internally (Daniel, Sam) >> v4: >> * move dma_buf_map changes into separate patch (Daniel) >> * TODO list: comment on fbdev updates (Daniel) >> >> Signed-off-by: Thomas Zimmermann >> Tested-by: Sam Ravnborg >> --- >> Documentation/gpu/todo.rst | 19 ++- >> drivers/gpu/drm/bochs/bochs_kms.c | 1 - >> drivers/gpu/drm/drm_fb_helper.c | 227 ++++++++++++++++++++++++++++-- >> include/drm/drm_mode_config.h | 12 -- >> 4 files changed, 230 insertions(+), 29 deletions(-) >> >> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst >> index 7e6fc3c04add..638b7f704339 100644 >> --- a/Documentation/gpu/todo.rst >> +++ b/Documentation/gpu/todo.rst >> @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup() >> ------------------------------------------------ >> >> Most drivers can use drm_fbdev_generic_setup(). Driver have to implement >> -atomic modesetting and GEM vmap support. Current generic fbdev emulation >> -expects the framebuffer in system memory (or system-like memory). >> +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation >> +expected the framebuffer in system memory or system-like memory. By employing >> +struct dma_buf_map, drivers with frambuffers in I/O memory can be supported >> +as well. >> >> Contact: Maintainer of the driver you plan to convert >> >> Level: Intermediate >> >> +Reimplement functions in drm_fbdev_fb_ops without fbdev >> +------------------------------------------------------- >> + >> +A number of callback functions in drm_fbdev_fb_ops could benefit from >> +being rewritten without dependencies on the fbdev module. Some of the >> +helpers could further benefit from using struct dma_buf_map instead of >> +raw pointers. >> + >> +Contact: Thomas Zimmermann , Daniel Vetter >> + >> +Level: Advanced >> + >> + >> drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup >> ----------------------------------------------------------------- >> >> diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c >> index 13d0d04c4457..853081d186d5 100644 >> --- a/drivers/gpu/drm/bochs/bochs_kms.c >> +++ b/drivers/gpu/drm/bochs/bochs_kms.c >> @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs) >> bochs->dev->mode_config.preferred_depth = 24; >> bochs->dev->mode_config.prefer_shadow = 0; >> bochs->dev->mode_config.prefer_shadow_fbdev = 1; >> - bochs->dev->mode_config.fbdev_use_iomem = true; >> bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true; >> >> bochs->dev->mode_config.funcs = &bochs_mode_funcs; >> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c >> index 6212cd7cde1d..1d3180841778 100644 >> --- a/drivers/gpu/drm/drm_fb_helper.c >> +++ b/drivers/gpu/drm/drm_fb_helper.c >> @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work) >> } >> >> static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper, >> - struct drm_clip_rect *clip) >> + struct drm_clip_rect *clip, >> + struct dma_buf_map *dst) >> { >> struct drm_framebuffer *fb = fb_helper->fb; >> unsigned int cpp = fb->format->cpp[0]; >> size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp; >> void *src = fb_helper->fbdev->screen_buffer + offset; >> - void *dst = fb_helper->buffer->map.vaddr + offset; >> size_t len = (clip->x2 - clip->x1) * cpp; >> unsigned int y; >> >> - for (y = clip->y1; y < clip->y2; y++) { >> - if (!fb_helper->dev->mode_config.fbdev_use_iomem) >> - memcpy(dst, src, len); >> - 
else >> - memcpy_toio((void __iomem *)dst, src, len); >> + dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */ >> >> + for (y = clip->y1; y < clip->y2; y++) { >> + dma_buf_map_memcpy_to(dst, src, len); >> + dma_buf_map_incr(dst, fb->pitches[0]); >> src += fb->pitches[0]; >> - dst += fb->pitches[0]; >> } >> } >> >> @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work) >> ret = drm_client_buffer_vmap(helper->buffer, &map); >> if (ret) >> return; >> - drm_fb_helper_dirty_blit_real(helper, &clip_copy); >> + drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map); >> } >> + >> if (helper->fb->funcs->dirty) >> helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, >> &clip_copy, 1); >> @@ -2027,6 +2026,206 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma) >> return -ENODEV; >> } >> >> +static bool drm_fbdev_use_iomem(struct fb_info *info) >> +{ >> + struct drm_fb_helper *fb_helper = info->par; >> + struct drm_client_buffer *buffer = fb_helper->buffer; >> + >> + return !drm_fbdev_use_shadow_fb(fb_helper) && buffer->map.is_iomem; >> +} >> + >> +static ssize_t fb_read_screen_base(struct fb_info *info, char __user *buf, size_t count, >> + loff_t pos) >> +{ >> + const char __iomem *src = info->screen_base + pos; > > Maybe a bit much a bikeshed, but I'd write this in terms of drm objects, > like the dirty_blit function, using the dma_buf_map (instead of the > fb_info parameter). And then instead of > screen_base and screen_buffer suffixes give them _mem and _iomem suffixes. Screen_buffer can be a shadow buffer. Until the blit worker (see drm_fb_helper_dirty_work() ) completes, it might be more up to date than the real buffer that's stored in the client. The orignal fbdev code supported an fb_sync callback to synchronize with outstanding screen updates (e.g., HW blit ops), but fb_sync is just overhead here. Copying from screen_buffer or screen_base always returns the most up-to-date image. > > Same for write below. Or I'm not quite understanding why we do it like > this here - I don't think this code will be used outside of the generic > fbdev code, so we can always assume that drm_fb_helper->buffer is set up. It's similar as in the read case. If we write to the client's buffer, an outstanding blit worker could write the now-outdated shadow buffer over the user's newly written framebuffer data. Thinking about it, we might want to schedule the blit worker at the end of each fb_write, so that the data makes it into the HW buffer in time. > > The other thing I think we need is some minimal testcases to make sure. > The fbtest tool used way back seems to have disappeared, I couldn't find > a copy of the source anywhere anymore. As discussed on IRC, I'll add some testcase to the igt test. I'll share the link here when done. 
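A rough sketch of that last idea, not part of the posted series: the write path could kick the shadow-buffer flush itself, similar to what the existing sys_ write helper does. The wrapper name below is made up, and it assumes the file-local drm_fb_helper_dirty() scheduling helper is usable at this point:

    /*
     * Hypothetical follow-up: schedule the dirty/blit worker after a
     * successful write, so the new data reaches the HW buffer promptly
     * instead of waiting for the next unrelated damage event.
     */
    static ssize_t drm_fbdev_fb_write_and_flush(struct fb_info *info,
                                                const char __user *buf,
                                                size_t count, loff_t *ppos)
    {
            ssize_t ret = drm_fbdev_fb_write(info, buf, count, ppos);

            if (ret > 0)
                    drm_fb_helper_dirty(info, 0, 0,
                                        info->var.xres, info->var.yres);

            return ret;
    }

Full-screen damage keeps the sketch simple; a refined version would translate *ppos and ret into a clip rectangle.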
Best regards Thomas > > With all that: Acked-by: Daniel Vetter > > Cheers, Daniel > >> + size_t alloc_size = min(count, PAGE_SIZE); >> + ssize_t ret = 0; >> + char *tmp; >> + >> + tmp = kmalloc(alloc_size, GFP_KERNEL); >> + if (!tmp) >> + return -ENOMEM; >> + >> + while (count) { >> + size_t c = min(count, alloc_size); >> + >> + memcpy_fromio(tmp, src, c); >> + if (copy_to_user(buf, tmp, c)) { >> + ret = -EFAULT; >> + break; >> + } >> + >> + src += c; >> + buf += c; >> + ret += c; >> + count -= c; >> + } >> + >> + kfree(tmp); >> + >> + return ret; >> +} >> + >> +static ssize_t fb_read_screen_buffer(struct fb_info *info, char __user *buf, size_t count, >> + loff_t pos) >> +{ >> + const char *src = info->screen_buffer + pos; >> + >> + if (copy_to_user(buf, src, count)) >> + return -EFAULT; >> + >> + return count; >> +} >> + >> +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf, >> + size_t count, loff_t *ppos) >> +{ >> + loff_t pos = *ppos; >> + size_t total_size; >> + ssize_t ret; >> + >> + if (info->state != FBINFO_STATE_RUNNING) >> + return -EPERM; >> + >> + if (info->screen_size) >> + total_size = info->screen_size; >> + else >> + total_size = info->fix.smem_len; >> + >> + if (pos >= total_size) >> + return 0; >> + if (count >= total_size) >> + count = total_size; >> + if (total_size - count < pos) >> + count = total_size - pos; >> + >> + if (drm_fbdev_use_iomem(info)) >> + ret = fb_read_screen_base(info, buf, count, pos); >> + else >> + ret = fb_read_screen_buffer(info, buf, count, pos); >> + >> + if (ret > 0) >> + *ppos = ret; >> + >> + return ret; >> +} >> + >> +static ssize_t fb_write_screen_base(struct fb_info *info, const char __user *buf, size_t count, >> + loff_t pos) >> +{ >> + char __iomem *dst = info->screen_base + pos; >> + size_t alloc_size = min(count, PAGE_SIZE); >> + ssize_t ret = 0; >> + u8 *tmp; >> + >> + tmp = kmalloc(alloc_size, GFP_KERNEL); >> + if (!tmp) >> + return -ENOMEM; >> + >> + while (count) { >> + size_t c = min(count, alloc_size); >> + >> + if (copy_from_user(tmp, buf, c)) { >> + ret = -EFAULT; >> + break; >> + } >> + memcpy_toio(dst, tmp, c); >> + >> + dst += c; >> + buf += c; >> + ret += c; >> + count -= c; >> + } >> + >> + kfree(tmp); >> + >> + return ret; >> +} >> + >> +static ssize_t fb_write_screen_buffer(struct fb_info *info, const char __user *buf, size_t count, >> + loff_t pos) >> +{ >> + char *dst = info->screen_buffer + pos; >> + >> + if (copy_from_user(dst, buf, count)) >> + return -EFAULT; >> + >> + return count; >> +} >> + >> +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf, >> + size_t count, loff_t *ppos) >> +{ >> + loff_t pos = *ppos; >> + size_t total_size; >> + ssize_t ret; >> + int err; >> + >> + if (info->state != FBINFO_STATE_RUNNING) >> + return -EPERM; >> + >> + if (info->screen_size) >> + total_size = info->screen_size; >> + else >> + total_size = info->fix.smem_len; >> + >> + if (pos > total_size) >> + return -EFBIG; >> + if (count > total_size) { >> + err = -EFBIG; >> + count = total_size; >> + } >> + if (total_size - count < pos) { >> + if (!err) >> + err = -ENOSPC; >> + count = total_size - pos; >> + } >> + >> + /* >> + * Copy to framebuffer even if we already logged an error. Emulates >> + * the behavior of the original fbdev implementation. 
>> + */ >> + if (drm_fbdev_use_iomem(info)) >> + ret = fb_write_screen_base(info, buf, count, pos); >> + else >> + ret = fb_write_screen_buffer(info, buf, count, pos); >> + >> + if (ret > 0) >> + *ppos = ret; >> + >> + if (err) >> + return err; >> + >> + return ret; >> +} >> + >> +static void drm_fbdev_fb_fillrect(struct fb_info *info, >> + const struct fb_fillrect *rect) >> +{ >> + if (drm_fbdev_use_iomem(info)) >> + drm_fb_helper_cfb_fillrect(info, rect); >> + else >> + drm_fb_helper_sys_fillrect(info, rect); >> +} >> + >> +static void drm_fbdev_fb_copyarea(struct fb_info *info, >> + const struct fb_copyarea *area) >> +{ >> + if (drm_fbdev_use_iomem(info)) >> + drm_fb_helper_cfb_copyarea(info, area); >> + else >> + drm_fb_helper_sys_copyarea(info, area); >> +} >> + >> +static void drm_fbdev_fb_imageblit(struct fb_info *info, >> + const struct fb_image *image) >> +{ >> + if (drm_fbdev_use_iomem(info)) >> + drm_fb_helper_cfb_imageblit(info, image); >> + else >> + drm_fb_helper_sys_imageblit(info, image); >> +} >> + >> static const struct fb_ops drm_fbdev_fb_ops = { >> .owner = THIS_MODULE, >> DRM_FB_HELPER_DEFAULT_OPS, >> @@ -2034,11 +2233,11 @@ static const struct fb_ops drm_fbdev_fb_ops = { >> .fb_release = drm_fbdev_fb_release, >> .fb_destroy = drm_fbdev_fb_destroy, >> .fb_mmap = drm_fbdev_fb_mmap, >> - .fb_read = drm_fb_helper_sys_read, >> - .fb_write = drm_fb_helper_sys_write, >> - .fb_fillrect = drm_fb_helper_sys_fillrect, >> - .fb_copyarea = drm_fb_helper_sys_copyarea, >> - .fb_imageblit = drm_fb_helper_sys_imageblit, >> + .fb_read = drm_fbdev_fb_read, >> + .fb_write = drm_fbdev_fb_write, >> + .fb_fillrect = drm_fbdev_fb_fillrect, >> + .fb_copyarea = drm_fbdev_fb_copyarea, >> + .fb_imageblit = drm_fbdev_fb_imageblit, >> }; >> >> static struct fb_deferred_io drm_fbdev_defio = { >> diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h >> index 5ffbb4ed5b35..ab424ddd7665 100644 >> --- a/include/drm/drm_mode_config.h >> +++ b/include/drm/drm_mode_config.h >> @@ -877,18 +877,6 @@ struct drm_mode_config { >> */ >> bool prefer_shadow_fbdev; >> >> - /** >> - * @fbdev_use_iomem: >> - * >> - * Set to true if framebuffer reside in iomem. >> - * When set to true memcpy_toio() is used when copying the framebuffer in >> - * drm_fb_helper.drm_fb_helper_dirty_blit_real(). >> - * >> - * FIXME: This should be replaced with a per-mapping is_iomem >> - * flag (like ttm does), and then used everywhere in fbdev code. >> - */ >> - bool fbdev_use_iomem; >> - >> /** >> * @quirk_addfb_prefer_xbgr_30bpp: >> * >> -- >> 2.28.0 >> > -- Thomas Zimmermann Graphics Driver Developer SUSE Software Solutions Germany GmbH Maxfeldstr. 5, 90409 N?rnberg, Germany (HRB 36809, AG N?rnberg) Gesch?ftsf?hrer: Felix Imend?rffer From tzimmermann at suse.de Thu Oct 22 09:18:40 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Thu, 22 Oct 2020 11:18:40 +0200 Subject: [Spice-devel] [PATCH v5 08/10] drm/gem: Store client buffer mappings as struct dma_buf_map In-Reply-To: <20201022084919.GU401619@phenom.ffwll.local> References: <20201020122046.31167-1-tzimmermann@suse.de> <20201020122046.31167-9-tzimmermann@suse.de> <20201022084919.GU401619@phenom.ffwll.local> Message-ID: Hi On 22.10.20 10:49, Daniel Vetter wrote: > On Tue, Oct 20, 2020 at 02:20:44PM +0200, Thomas Zimmermann wrote: >> Kernel DRM clients now store their framebuffer address in an instance >> of struct dma_buf_map. Depending on the buffer's location, the address >> refers to system or I/O memory. 
>> >> Callers of drm_client_buffer_vmap() receive a copy of the value in >> the call's supplied arguments. It can be accessed and modified with >> dma_buf_map interfaces. >> >> Signed-off-by: Thomas Zimmermann >> Reviewed-by: Daniel Vetter >> Tested-by: Sam Ravnborg >> --- >> drivers/gpu/drm/drm_client.c | 34 +++++++++++++++++++-------------- >> drivers/gpu/drm/drm_fb_helper.c | 23 +++++++++++++--------- >> include/drm/drm_client.h | 7 ++++--- >> 3 files changed, 38 insertions(+), 26 deletions(-) >> >> diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c >> index ac0082bed966..fe573acf1067 100644 >> --- a/drivers/gpu/drm/drm_client.c >> +++ b/drivers/gpu/drm/drm_client.c >> @@ -235,7 +235,7 @@ static void drm_client_buffer_delete(struct drm_client_buffer *buffer) >> { >> struct drm_device *dev = buffer->client->dev; >> >> - drm_gem_vunmap(buffer->gem, buffer->vaddr); >> + drm_gem_vunmap(buffer->gem, &buffer->map); >> >> if (buffer->gem) >> drm_gem_object_put(buffer->gem); >> @@ -291,25 +291,31 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u >> /** >> * drm_client_buffer_vmap - Map DRM client buffer into address space >> * @buffer: DRM client buffer >> + * @map_copy: Returns the mapped memory's address >> * >> * This function maps a client buffer into kernel address space. If the >> - * buffer is already mapped, it returns the mapping's address. >> + * buffer is already mapped, it returns the existing mapping's address. >> * >> * Client buffer mappings are not ref'counted. Each call to >> * drm_client_buffer_vmap() should be followed by a call to >> * drm_client_buffer_vunmap(); or the client buffer should be mapped >> * throughout its lifetime. >> * >> + * The returned address is a copy of the internal value. In contrast to >> + * other vmap interfaces, you don't need it for the client's vunmap >> + * function. So you can modify it at will during blit and draw operations. >> + * >> * Returns: >> - * The mapped memory's address >> + * 0 on success, or a negative errno code otherwise. >> */ >> -void *drm_client_buffer_vmap(struct drm_client_buffer *buffer) >> +int >> +drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map_copy) >> { >> - struct dma_buf_map map; >> + struct dma_buf_map *map = &buffer->map; >> int ret; >> >> - if (buffer->vaddr) >> - return buffer->vaddr; >> + if (dma_buf_map_is_set(map)) >> + goto out; >> >> /* >> * FIXME: The dependency on GEM here isn't required, we could >> @@ -319,13 +325,14 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer) >> * fd_install step out of the driver backend hooks, to make that >> * final step optional for internal users. 
>> */ >> - ret = drm_gem_vmap(buffer->gem, &map); >> + ret = drm_gem_vmap(buffer->gem, map); >> if (ret) >> - return ERR_PTR(ret); >> + return ret; >> >> - buffer->vaddr = map.vaddr; >> +out: >> + *map_copy = *map; >> >> - return map.vaddr; >> + return 0; >> } >> EXPORT_SYMBOL(drm_client_buffer_vmap); >> >> @@ -339,10 +346,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap); >> */ >> void drm_client_buffer_vunmap(struct drm_client_buffer *buffer) >> { >> - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr); >> + struct dma_buf_map *map = &buffer->map; >> >> - drm_gem_vunmap(buffer->gem, &map); >> - buffer->vaddr = NULL; >> + drm_gem_vunmap(buffer->gem, map); >> } >> EXPORT_SYMBOL(drm_client_buffer_vunmap); >> >> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c >> index c2f72bb6afb1..6212cd7cde1d 100644 >> --- a/drivers/gpu/drm/drm_fb_helper.c >> +++ b/drivers/gpu/drm/drm_fb_helper.c >> @@ -378,7 +378,7 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper, >> unsigned int cpp = fb->format->cpp[0]; >> size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp; >> void *src = fb_helper->fbdev->screen_buffer + offset; >> - void *dst = fb_helper->buffer->vaddr + offset; >> + void *dst = fb_helper->buffer->map.vaddr + offset; >> size_t len = (clip->x2 - clip->x1) * cpp; >> unsigned int y; >> >> @@ -400,7 +400,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work) >> struct drm_clip_rect *clip = &helper->dirty_clip; >> struct drm_clip_rect clip_copy; >> unsigned long flags; >> - void *vaddr; >> + struct dma_buf_map map; >> + int ret; >> >> spin_lock_irqsave(&helper->dirty_lock, flags); >> clip_copy = *clip; >> @@ -413,8 +414,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work) >> >> /* Generic fbdev uses a shadow buffer */ >> if (helper->buffer) { >> - vaddr = drm_client_buffer_vmap(helper->buffer); >> - if (IS_ERR(vaddr)) >> + ret = drm_client_buffer_vmap(helper->buffer, &map); >> + if (ret) >> return; >> drm_fb_helper_dirty_blit_real(helper, &clip_copy); >> } >> @@ -2060,7 +2061,8 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper, >> struct drm_framebuffer *fb; >> struct fb_info *fbi; >> u32 format; >> - void *vaddr; >> + struct dma_buf_map map; >> + int ret; >> >> drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n", >> sizes->surface_width, sizes->surface_height, >> @@ -2096,11 +2098,14 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper, >> fb_deferred_io_init(fbi); >> } else { >> /* buffer is mapped for HW framebuffer */ >> - vaddr = drm_client_buffer_vmap(fb_helper->buffer); >> - if (IS_ERR(vaddr)) >> - return PTR_ERR(vaddr); >> + ret = drm_client_buffer_vmap(fb_helper->buffer, &map); >> + if (ret) >> + return ret; >> + if (map.is_iomem) >> + fbi->screen_base = map.vaddr_iomem; >> + else >> + fbi->screen_buffer = map.vaddr; >> >> - fbi->screen_buffer = vaddr; >> /* Shamelessly leak the physical address to user-space */ >> #if IS_ENABLED(CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM) >> if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0) > > Just noticed a tiny thing here: I think this needs to be patched to only > set smem_start when the map is _not_ iomem. Since virt_to_page isn't > defined on iomem at all. > > I guess it'd be neat if we can set this for iomem too, but I have no idea > how to convert an iomem pointer back to a bus_addr_t ... Not that I disagree, but that should be reviewed by the right people. 
The commit at 4be9bd10e22d ("drm/fb_helper: Allow leaking fbdev smem_start") appears to work around specific userspace drivers. Best regards Thomas > > Cheers, Daniel > >> diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h >> index 7aaea665bfc2..f07f2fb02e75 100644 >> --- a/include/drm/drm_client.h >> +++ b/include/drm/drm_client.h >> @@ -3,6 +3,7 @@ >> #ifndef _DRM_CLIENT_H_ >> #define _DRM_CLIENT_H_ >> >> +#include >> #include >> #include >> #include >> @@ -141,9 +142,9 @@ struct drm_client_buffer { >> struct drm_gem_object *gem; >> >> /** >> - * @vaddr: Virtual address for the buffer >> + * @map: Virtual address for the buffer >> */ >> - void *vaddr; >> + struct dma_buf_map map; >> >> /** >> * @fb: DRM framebuffer >> @@ -155,7 +156,7 @@ struct drm_client_buffer * >> drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format); >> void drm_client_framebuffer_delete(struct drm_client_buffer *buffer); >> int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect); >> -void *drm_client_buffer_vmap(struct drm_client_buffer *buffer); >> +int drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map); >> void drm_client_buffer_vunmap(struct drm_client_buffer *buffer); >> >> int drm_client_modeset_create(struct drm_client_dev *client); >> -- >> 2.28.0 >> > -- Thomas Zimmermann Graphics Driver Developer SUSE Software Solutions Germany GmbH Maxfeldstr. 5, 90409 N?rnberg, Germany (HRB 36809, AG N?rnberg) Gesch?ftsf?hrer: Felix Imend?rffer From franklee1973 at 163.com Fri Oct 23 17:20:52 2020 From: franklee1973 at 163.com (hshsh) Date: Sat, 24 Oct 2020 01:20:52 +0800 (CST) Subject: [Spice-devel] how to get surface screen-shot in spice-server Message-ID: <6a34253e.27.175567a3b1e.Coremail.franklee1973@163.com> Hi, spice gurus: I am a spice developer in my company which devle with desktop cloud computing. In spice-0.12.4 we can get surface screen-shot in red_worker.c by adding this line in red_process_commands(): surface_flush(worker, surface_id, &rect); Function surface_flush flush undraw image to surface, then we get the screen-shot by reading surface address. But in spice-0.14.3 we can not get proper screen-shot, by adding this line in red_process_display(): display_channel_current_flush(worker->display_channel, surface_id); We get screen-shot that flicker with white bars, I do not know why. Much appreciation for any reply! regards Frank | NamePosition franklee1973 at 163.com Organization: Address: Telephone: Cellphone: | ????????????????????? ???? | -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ??.vcf Type: text/x-vcard Size: 231 bytes Desc: not available URL: From sam at ravnborg.org Sat Oct 24 20:38:38 2020 From: sam at ravnborg.org (Sam Ravnborg) Date: Sat, 24 Oct 2020 22:38:38 +0200 Subject: [Spice-devel] [PATCH v5 10/10] drm/fb_helper: Support framebuffers in I/O memory In-Reply-To: <20201020122046.31167-11-tzimmermann@suse.de> References: <20201020122046.31167-1-tzimmermann@suse.de> <20201020122046.31167-11-tzimmermann@suse.de> Message-ID: <20201024203838.GB93644@ravnborg.org> Hi Thomas. On Tue, Oct 20, 2020 at 02:20:46PM +0200, Thomas Zimmermann wrote: > At least sparc64 requires I/O-specific access to framebuffers. This > patch updates the fbdev console accordingly. 
> > For drivers with direct access to the framebuffer memory, the callback > functions in struct fb_ops test for the type of memory and call the rsp > fb_sys_ of fb_cfb_ functions. Read and write operations are implemented > internally by DRM's fbdev helper. > > For drivers that employ a shadow buffer, fbdev's blit function retrieves > the framebuffer address as struct dma_buf_map, and uses dma_buf_map > interfaces to access the buffer. > > The bochs driver on sparc64 uses a workaround to flag the framebuffer as > I/O memory and avoid a HW exception. With the introduction of struct > dma_buf_map, this is not required any longer. The patch removes the rsp > code from both, bochs and fbdev. > > v5: > * implement fb_read/fb_write internally (Daniel, Sam) > v4: > * move dma_buf_map changes into separate patch (Daniel) > * TODO list: comment on fbdev updates (Daniel) > > Signed-off-by: Thomas Zimmermann > Tested-by: Sam Ravnborg Reviewed-by: Sam Ravnborg But see a few comments below on naming for you to consider. Sam > --- > Documentation/gpu/todo.rst | 19 ++- > drivers/gpu/drm/bochs/bochs_kms.c | 1 - > drivers/gpu/drm/drm_fb_helper.c | 227 ++++++++++++++++++++++++++++-- > include/drm/drm_mode_config.h | 12 -- > 4 files changed, 230 insertions(+), 29 deletions(-) > > diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst > index 7e6fc3c04add..638b7f704339 100644 > --- a/Documentation/gpu/todo.rst > +++ b/Documentation/gpu/todo.rst > @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup() > ------------------------------------------------ > > Most drivers can use drm_fbdev_generic_setup(). Driver have to implement > -atomic modesetting and GEM vmap support. Current generic fbdev emulation > -expects the framebuffer in system memory (or system-like memory). > +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation > +expected the framebuffer in system memory or system-like memory. By employing > +struct dma_buf_map, drivers with frambuffers in I/O memory can be supported > +as well. > > Contact: Maintainer of the driver you plan to convert > > Level: Intermediate > > +Reimplement functions in drm_fbdev_fb_ops without fbdev > +------------------------------------------------------- > + > +A number of callback functions in drm_fbdev_fb_ops could benefit from > +being rewritten without dependencies on the fbdev module. Some of the > +helpers could further benefit from using struct dma_buf_map instead of > +raw pointers. 
> + > +Contact: Thomas Zimmermann , Daniel Vetter > + > +Level: Advanced > + > + > drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup > ----------------------------------------------------------------- > > diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c > index 13d0d04c4457..853081d186d5 100644 > --- a/drivers/gpu/drm/bochs/bochs_kms.c > +++ b/drivers/gpu/drm/bochs/bochs_kms.c > @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs) > bochs->dev->mode_config.preferred_depth = 24; > bochs->dev->mode_config.prefer_shadow = 0; > bochs->dev->mode_config.prefer_shadow_fbdev = 1; > - bochs->dev->mode_config.fbdev_use_iomem = true; > bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true; > > bochs->dev->mode_config.funcs = &bochs_mode_funcs; > diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c > index 6212cd7cde1d..1d3180841778 100644 > --- a/drivers/gpu/drm/drm_fb_helper.c > +++ b/drivers/gpu/drm/drm_fb_helper.c > @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work) > } > > static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper, > - struct drm_clip_rect *clip) > + struct drm_clip_rect *clip, > + struct dma_buf_map *dst) > { > struct drm_framebuffer *fb = fb_helper->fb; > unsigned int cpp = fb->format->cpp[0]; > size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp; > void *src = fb_helper->fbdev->screen_buffer + offset; > - void *dst = fb_helper->buffer->map.vaddr + offset; > size_t len = (clip->x2 - clip->x1) * cpp; > unsigned int y; > > - for (y = clip->y1; y < clip->y2; y++) { > - if (!fb_helper->dev->mode_config.fbdev_use_iomem) > - memcpy(dst, src, len); > - else > - memcpy_toio((void __iomem *)dst, src, len); > + dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */ > > + for (y = clip->y1; y < clip->y2; y++) { > + dma_buf_map_memcpy_to(dst, src, len); > + dma_buf_map_incr(dst, fb->pitches[0]); > src += fb->pitches[0]; > - dst += fb->pitches[0]; > } > } > > @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work) > ret = drm_client_buffer_vmap(helper->buffer, &map); > if (ret) > return; > - drm_fb_helper_dirty_blit_real(helper, &clip_copy); > + drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map); > } > + > if (helper->fb->funcs->dirty) > helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, > &clip_copy, 1); > @@ -2027,6 +2026,206 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma) > return -ENODEV; > } > > +static bool drm_fbdev_use_iomem(struct fb_info *info) > +{ > + struct drm_fb_helper *fb_helper = info->par; > + struct drm_client_buffer *buffer = fb_helper->buffer; > + > + return !drm_fbdev_use_shadow_fb(fb_helper) && buffer->map.is_iomem; > +} > + > +static ssize_t fb_read_screen_base(struct fb_info *info, char __user *buf, size_t count, > + loff_t pos) The naming here confused me - a name like: fb_read_iomem() would have helped me more. With the current naming I shall remember that the screen_base member is the iomem pointer. > +{ > + const char __iomem *src = info->screen_base + pos; > + size_t alloc_size = min(count, PAGE_SIZE); > + ssize_t ret = 0; > + char *tmp; > + > + tmp = kmalloc(alloc_size, GFP_KERNEL); > + if (!tmp) > + return -ENOMEM; > + I looked around and could not find other places where we copy from iomem to mem to usermem in chunks of PAGE_SIZE. 
> + while (count) { > + size_t c = min(count, alloc_size); > + > + memcpy_fromio(tmp, src, c); > + if (copy_to_user(buf, tmp, c)) { > + ret = -EFAULT; > + break; > + } > + > + src += c; > + buf += c; > + ret += c; > + count -= c; > + } > + > + kfree(tmp); > + > + return ret; > +} > + > +static ssize_t fb_read_screen_buffer(struct fb_info *info, char __user *buf, size_t count, > + loff_t pos) And fb_read_sysmem() here. > +{ > + const char *src = info->screen_buffer + pos; > + > + if (copy_to_user(buf, src, count)) > + return -EFAULT; > + > + return count; > +} > + > +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf, > + size_t count, loff_t *ppos) > +{ > + loff_t pos = *ppos; > + size_t total_size; > + ssize_t ret; > + > + if (info->state != FBINFO_STATE_RUNNING) > + return -EPERM; > + > + if (info->screen_size) > + total_size = info->screen_size; > + else > + total_size = info->fix.smem_len; > + > + if (pos >= total_size) > + return 0; > + if (count >= total_size) > + count = total_size; > + if (total_size - count < pos) > + count = total_size - pos; > + > + if (drm_fbdev_use_iomem(info)) > + ret = fb_read_screen_base(info, buf, count, pos); > + else > + ret = fb_read_screen_buffer(info, buf, count, pos); > + > + if (ret > 0) > + *ppos = ret; > + > + return ret; > +} > + > +static ssize_t fb_write_screen_base(struct fb_info *info, const char __user *buf, size_t count, > + loff_t pos) fb_write_iomem() > +{ > + char __iomem *dst = info->screen_base + pos; > + size_t alloc_size = min(count, PAGE_SIZE); > + ssize_t ret = 0; > + u8 *tmp; > + > + tmp = kmalloc(alloc_size, GFP_KERNEL); > + if (!tmp) > + return -ENOMEM; > + > + while (count) { > + size_t c = min(count, alloc_size); > + > + if (copy_from_user(tmp, buf, c)) { > + ret = -EFAULT; > + break; > + } > + memcpy_toio(dst, tmp, c); > + > + dst += c; > + buf += c; > + ret += c; > + count -= c; > + } > + > + kfree(tmp); > + > + return ret; > +} > + > +static ssize_t fb_write_screen_buffer(struct fb_info *info, const char __user *buf, size_t count, > + loff_t pos) fb_write_sysmem() > +{ > + char *dst = info->screen_buffer + pos; > + > + if (copy_from_user(dst, buf, count)) > + return -EFAULT; > + > + return count; > +} > + > +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf, > + size_t count, loff_t *ppos) > +{ > + loff_t pos = *ppos; > + size_t total_size; > + ssize_t ret; > + int err; > + > + if (info->state != FBINFO_STATE_RUNNING) > + return -EPERM; > + > + if (info->screen_size) > + total_size = info->screen_size; > + else > + total_size = info->fix.smem_len; > + > + if (pos > total_size) > + return -EFBIG; > + if (count > total_size) { > + err = -EFBIG; > + count = total_size; > + } > + if (total_size - count < pos) { > + if (!err) > + err = -ENOSPC; > + count = total_size - pos; > + } > + > + /* > + * Copy to framebuffer even if we already logged an error. Emulates > + * the behavior of the original fbdev implementation. 
> + */ > + if (drm_fbdev_use_iomem(info)) > + ret = fb_write_screen_base(info, buf, count, pos); > + else > + ret = fb_write_screen_buffer(info, buf, count, pos); > + > + if (ret > 0) > + *ppos = ret; > + > + if (err) > + return err; > + > + return ret; > +} > + > +static void drm_fbdev_fb_fillrect(struct fb_info *info, > + const struct fb_fillrect *rect) > +{ > + if (drm_fbdev_use_iomem(info)) > + drm_fb_helper_cfb_fillrect(info, rect); > + else > + drm_fb_helper_sys_fillrect(info, rect); > +} > + > +static void drm_fbdev_fb_copyarea(struct fb_info *info, > + const struct fb_copyarea *area) > +{ > + if (drm_fbdev_use_iomem(info)) > + drm_fb_helper_cfb_copyarea(info, area); > + else > + drm_fb_helper_sys_copyarea(info, area); > +} > + > +static void drm_fbdev_fb_imageblit(struct fb_info *info, > + const struct fb_image *image) > +{ > + if (drm_fbdev_use_iomem(info)) > + drm_fb_helper_cfb_imageblit(info, image); > + else > + drm_fb_helper_sys_imageblit(info, image); > +} > + > static const struct fb_ops drm_fbdev_fb_ops = { > .owner = THIS_MODULE, > DRM_FB_HELPER_DEFAULT_OPS, > @@ -2034,11 +2233,11 @@ static const struct fb_ops drm_fbdev_fb_ops = { > .fb_release = drm_fbdev_fb_release, > .fb_destroy = drm_fbdev_fb_destroy, > .fb_mmap = drm_fbdev_fb_mmap, > - .fb_read = drm_fb_helper_sys_read, > - .fb_write = drm_fb_helper_sys_write, > - .fb_fillrect = drm_fb_helper_sys_fillrect, > - .fb_copyarea = drm_fb_helper_sys_copyarea, > - .fb_imageblit = drm_fb_helper_sys_imageblit, > + .fb_read = drm_fbdev_fb_read, > + .fb_write = drm_fbdev_fb_write, > + .fb_fillrect = drm_fbdev_fb_fillrect, > + .fb_copyarea = drm_fbdev_fb_copyarea, > + .fb_imageblit = drm_fbdev_fb_imageblit, > }; > > static struct fb_deferred_io drm_fbdev_defio = { > diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h > index 5ffbb4ed5b35..ab424ddd7665 100644 > --- a/include/drm/drm_mode_config.h > +++ b/include/drm/drm_mode_config.h > @@ -877,18 +877,6 @@ struct drm_mode_config { > */ > bool prefer_shadow_fbdev; > > - /** > - * @fbdev_use_iomem: > - * > - * Set to true if framebuffer reside in iomem. > - * When set to true memcpy_toio() is used when copying the framebuffer in > - * drm_fb_helper.drm_fb_helper_dirty_blit_real(). > - * > - * FIXME: This should be replaced with a per-mapping is_iomem > - * flag (like ttm does), and then used everywhere in fbdev code. > - */ > - bool fbdev_use_iomem; > - > /** > * @quirk_addfb_prefer_xbgr_30bpp: > * > -- > 2.28.0 From tzimmermann at suse.de Mon Oct 26 07:50:00 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Mon, 26 Oct 2020 08:50:00 +0100 Subject: [Spice-devel] [PATCH v5 10/10] drm/fb_helper: Support framebuffers in I/O memory In-Reply-To: <20201024203838.GB93644@ravnborg.org> References: <20201020122046.31167-1-tzimmermann@suse.de> <20201020122046.31167-11-tzimmermann@suse.de> <20201024203838.GB93644@ravnborg.org> Message-ID: Hi Am 24.10.20 um 22:38 schrieb Sam Ravnborg: > Hi Thomas. > > On Tue, Oct 20, 2020 at 02:20:46PM +0200, Thomas Zimmermann wrote: >> At least sparc64 requires I/O-specific access to framebuffers. This >> patch updates the fbdev console accordingly. >> >> For drivers with direct access to the framebuffer memory, the callback >> functions in struct fb_ops test for the type of memory and call the rsp >> fb_sys_ of fb_cfb_ functions. Read and write operations are implemented >> internally by DRM's fbdev helper. 
>> >> For drivers that employ a shadow buffer, fbdev's blit function retrieves >> the framebuffer address as struct dma_buf_map, and uses dma_buf_map >> interfaces to access the buffer. >> >> The bochs driver on sparc64 uses a workaround to flag the framebuffer as >> I/O memory and avoid a HW exception. With the introduction of struct >> dma_buf_map, this is not required any longer. The patch removes the rsp >> code from both, bochs and fbdev. >> >> v5: >> * implement fb_read/fb_write internally (Daniel, Sam) >> v4: >> * move dma_buf_map changes into separate patch (Daniel) >> * TODO list: comment on fbdev updates (Daniel) >> >> Signed-off-by: Thomas Zimmermann >> Tested-by: Sam Ravnborg > Reviewed-by: Sam Ravnborg > > But see a few comments below on naming for you to consider. > > Sam > >> --- >> Documentation/gpu/todo.rst | 19 ++- >> drivers/gpu/drm/bochs/bochs_kms.c | 1 - >> drivers/gpu/drm/drm_fb_helper.c | 227 ++++++++++++++++++++++++++++-- >> include/drm/drm_mode_config.h | 12 -- >> 4 files changed, 230 insertions(+), 29 deletions(-) >> >> diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst >> index 7e6fc3c04add..638b7f704339 100644 >> --- a/Documentation/gpu/todo.rst >> +++ b/Documentation/gpu/todo.rst >> @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup() >> ------------------------------------------------ >> >> Most drivers can use drm_fbdev_generic_setup(). Driver have to implement >> -atomic modesetting and GEM vmap support. Current generic fbdev emulation >> -expects the framebuffer in system memory (or system-like memory). >> +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation >> +expected the framebuffer in system memory or system-like memory. By employing >> +struct dma_buf_map, drivers with frambuffers in I/O memory can be supported >> +as well. >> >> Contact: Maintainer of the driver you plan to convert >> >> Level: Intermediate >> >> +Reimplement functions in drm_fbdev_fb_ops without fbdev >> +------------------------------------------------------- >> + >> +A number of callback functions in drm_fbdev_fb_ops could benefit from >> +being rewritten without dependencies on the fbdev module. Some of the >> +helpers could further benefit from using struct dma_buf_map instead of >> +raw pointers. 
>> + >> +Contact: Thomas Zimmermann , Daniel Vetter >> + >> +Level: Advanced >> + >> + >> drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup >> ----------------------------------------------------------------- >> >> diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c >> index 13d0d04c4457..853081d186d5 100644 >> --- a/drivers/gpu/drm/bochs/bochs_kms.c >> +++ b/drivers/gpu/drm/bochs/bochs_kms.c >> @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs) >> bochs->dev->mode_config.preferred_depth = 24; >> bochs->dev->mode_config.prefer_shadow = 0; >> bochs->dev->mode_config.prefer_shadow_fbdev = 1; >> - bochs->dev->mode_config.fbdev_use_iomem = true; >> bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true; >> >> bochs->dev->mode_config.funcs = &bochs_mode_funcs; >> diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c >> index 6212cd7cde1d..1d3180841778 100644 >> --- a/drivers/gpu/drm/drm_fb_helper.c >> +++ b/drivers/gpu/drm/drm_fb_helper.c >> @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work) >> } >> >> static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper, >> - struct drm_clip_rect *clip) >> + struct drm_clip_rect *clip, >> + struct dma_buf_map *dst) >> { >> struct drm_framebuffer *fb = fb_helper->fb; >> unsigned int cpp = fb->format->cpp[0]; >> size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp; >> void *src = fb_helper->fbdev->screen_buffer + offset; >> - void *dst = fb_helper->buffer->map.vaddr + offset; >> size_t len = (clip->x2 - clip->x1) * cpp; >> unsigned int y; >> >> - for (y = clip->y1; y < clip->y2; y++) { >> - if (!fb_helper->dev->mode_config.fbdev_use_iomem) >> - memcpy(dst, src, len); >> - else >> - memcpy_toio((void __iomem *)dst, src, len); >> + dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */ >> >> + for (y = clip->y1; y < clip->y2; y++) { >> + dma_buf_map_memcpy_to(dst, src, len); >> + dma_buf_map_incr(dst, fb->pitches[0]); >> src += fb->pitches[0]; >> - dst += fb->pitches[0]; >> } >> } >> >> @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work) >> ret = drm_client_buffer_vmap(helper->buffer, &map); >> if (ret) >> return; >> - drm_fb_helper_dirty_blit_real(helper, &clip_copy); >> + drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map); >> } >> + >> if (helper->fb->funcs->dirty) >> helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, >> &clip_copy, 1); >> @@ -2027,6 +2026,206 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma) >> return -ENODEV; >> } >> >> +static bool drm_fbdev_use_iomem(struct fb_info *info) >> +{ >> + struct drm_fb_helper *fb_helper = info->par; >> + struct drm_client_buffer *buffer = fb_helper->buffer; >> + >> + return !drm_fbdev_use_shadow_fb(fb_helper) && buffer->map.is_iomem; >> +} >> + >> +static ssize_t fb_read_screen_base(struct fb_info *info, char __user *buf, size_t count, >> + loff_t pos) > The naming here confused me - a name like: > fb_read_iomem() would have helped me more. > With the current naming I shall remember that the screen_base member is > the iomem pointer. Yeah, true. In terms of naming, I was undecided. I was thinking about adopting a naming similar to what you describe, but OTOH we don't use sysmem anywhere in the code. I thought about adopting fbdev's conention of using _sys_ and _cfb_. But that would make sensein the local context. 
> >> +{ >> + const char __iomem *src = info->screen_base + pos; >> + size_t alloc_size = min(count, PAGE_SIZE); >> + ssize_t ret = 0; >> + char *tmp; >> + >> + tmp = kmalloc(alloc_size, GFP_KERNEL); >> + if (!tmp) >> + return -ENOMEM; >> + > > I looked around and could not find other places where > we copy from iomem to mem to usermem in chunks of PAGE_SIZE. I took this pattern from fbdev's original implementation. I think it's done to work nicely with kmalloc. Best regards Thomas > >> + while (count) { >> + size_t c = min(count, alloc_size); >> + >> + memcpy_fromio(tmp, src, c); >> + if (copy_to_user(buf, tmp, c)) { >> + ret = -EFAULT; >> + break; >> + } >> + >> + src += c; >> + buf += c; >> + ret += c; >> + count -= c; >> + } >> + >> + kfree(tmp); >> + >> + return ret; >> +} >> + >> +static ssize_t fb_read_screen_buffer(struct fb_info *info, char __user *buf, size_t count, >> + loff_t pos) > And fb_read_sysmem() here. > >> +{ >> + const char *src = info->screen_buffer + pos; >> + >> + if (copy_to_user(buf, src, count)) >> + return -EFAULT; >> + >> + return count; >> +} >> + >> +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf, >> + size_t count, loff_t *ppos) >> +{ >> + loff_t pos = *ppos; >> + size_t total_size; >> + ssize_t ret; >> + >> + if (info->state != FBINFO_STATE_RUNNING) >> + return -EPERM; >> + >> + if (info->screen_size) >> + total_size = info->screen_size; >> + else >> + total_size = info->fix.smem_len; >> + >> + if (pos >= total_size) >> + return 0; >> + if (count >= total_size) >> + count = total_size; >> + if (total_size - count < pos) >> + count = total_size - pos; >> + >> + if (drm_fbdev_use_iomem(info)) >> + ret = fb_read_screen_base(info, buf, count, pos); >> + else >> + ret = fb_read_screen_buffer(info, buf, count, pos); >> + >> + if (ret > 0) >> + *ppos = ret; >> + >> + return ret; >> +} >> + >> +static ssize_t fb_write_screen_base(struct fb_info *info, const char __user *buf, size_t count, >> + loff_t pos) > > fb_write_iomem() > >> +{ >> + char __iomem *dst = info->screen_base + pos; >> + size_t alloc_size = min(count, PAGE_SIZE); >> + ssize_t ret = 0; >> + u8 *tmp; >> + >> + tmp = kmalloc(alloc_size, GFP_KERNEL); >> + if (!tmp) >> + return -ENOMEM; >> + >> + while (count) { >> + size_t c = min(count, alloc_size); >> + >> + if (copy_from_user(tmp, buf, c)) { >> + ret = -EFAULT; >> + break; >> + } >> + memcpy_toio(dst, tmp, c); >> + >> + dst += c; >> + buf += c; >> + ret += c; >> + count -= c; >> + } >> + >> + kfree(tmp); >> + >> + return ret; >> +} >> + >> +static ssize_t fb_write_screen_buffer(struct fb_info *info, const char __user *buf, size_t count, >> + loff_t pos) > fb_write_sysmem() > >> +{ >> + char *dst = info->screen_buffer + pos; >> + >> + if (copy_from_user(dst, buf, count)) >> + return -EFAULT; >> + >> + return count; >> +} >> + >> +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf, >> + size_t count, loff_t *ppos) >> +{ >> + loff_t pos = *ppos; >> + size_t total_size; >> + ssize_t ret; >> + int err; >> + >> + if (info->state != FBINFO_STATE_RUNNING) >> + return -EPERM; >> + >> + if (info->screen_size) >> + total_size = info->screen_size; >> + else >> + total_size = info->fix.smem_len; >> + >> + if (pos > total_size) >> + return -EFBIG; >> + if (count > total_size) { >> + err = -EFBIG; >> + count = total_size; >> + } >> + if (total_size - count < pos) { >> + if (!err) >> + err = -ENOSPC; >> + count = total_size - pos; >> + } >> + >> + /* >> + * Copy to framebuffer even if we already logged an error. 
Emulates >> + * the behavior of the original fbdev implementation. >> + */ >> + if (drm_fbdev_use_iomem(info)) >> + ret = fb_write_screen_base(info, buf, count, pos); >> + else >> + ret = fb_write_screen_buffer(info, buf, count, pos); >> + >> + if (ret > 0) >> + *ppos = ret; >> + >> + if (err) >> + return err; >> + >> + return ret; >> +} >> + >> +static void drm_fbdev_fb_fillrect(struct fb_info *info, >> + const struct fb_fillrect *rect) >> +{ >> + if (drm_fbdev_use_iomem(info)) >> + drm_fb_helper_cfb_fillrect(info, rect); >> + else >> + drm_fb_helper_sys_fillrect(info, rect); >> +} >> + >> +static void drm_fbdev_fb_copyarea(struct fb_info *info, >> + const struct fb_copyarea *area) >> +{ >> + if (drm_fbdev_use_iomem(info)) >> + drm_fb_helper_cfb_copyarea(info, area); >> + else >> + drm_fb_helper_sys_copyarea(info, area); >> +} >> + >> +static void drm_fbdev_fb_imageblit(struct fb_info *info, >> + const struct fb_image *image) >> +{ >> + if (drm_fbdev_use_iomem(info)) >> + drm_fb_helper_cfb_imageblit(info, image); >> + else >> + drm_fb_helper_sys_imageblit(info, image); >> +} >> + >> static const struct fb_ops drm_fbdev_fb_ops = { >> .owner = THIS_MODULE, >> DRM_FB_HELPER_DEFAULT_OPS, >> @@ -2034,11 +2233,11 @@ static const struct fb_ops drm_fbdev_fb_ops = { >> .fb_release = drm_fbdev_fb_release, >> .fb_destroy = drm_fbdev_fb_destroy, >> .fb_mmap = drm_fbdev_fb_mmap, >> - .fb_read = drm_fb_helper_sys_read, >> - .fb_write = drm_fb_helper_sys_write, >> - .fb_fillrect = drm_fb_helper_sys_fillrect, >> - .fb_copyarea = drm_fb_helper_sys_copyarea, >> - .fb_imageblit = drm_fb_helper_sys_imageblit, >> + .fb_read = drm_fbdev_fb_read, >> + .fb_write = drm_fbdev_fb_write, >> + .fb_fillrect = drm_fbdev_fb_fillrect, >> + .fb_copyarea = drm_fbdev_fb_copyarea, >> + .fb_imageblit = drm_fbdev_fb_imageblit, >> }; >> >> static struct fb_deferred_io drm_fbdev_defio = { >> diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h >> index 5ffbb4ed5b35..ab424ddd7665 100644 >> --- a/include/drm/drm_mode_config.h >> +++ b/include/drm/drm_mode_config.h >> @@ -877,18 +877,6 @@ struct drm_mode_config { >> */ >> bool prefer_shadow_fbdev; >> >> - /** >> - * @fbdev_use_iomem: >> - * >> - * Set to true if framebuffer reside in iomem. >> - * When set to true memcpy_toio() is used when copying the framebuffer in >> - * drm_fb_helper.drm_fb_helper_dirty_blit_real(). >> - * >> - * FIXME: This should be replaced with a per-mapping is_iomem >> - * flag (like ttm does), and then used everywhere in fbdev code. >> - */ >> - bool fbdev_use_iomem; >> - >> /** >> * @quirk_addfb_prefer_xbgr_30bpp: >> * >> -- >> 2.28.0 > _______________________________________________ > dri-devel mailing list > dri-devel at lists.freedesktop.org > https://lists.freedesktop.org/mailman/listinfo/dri-devel > -- Thomas Zimmermann Graphics Driver Developer SUSE Software Solutions Germany GmbH Maxfeldstr. 5, 90409 N?rnberg, Germany (HRB 36809, AG N?rnberg) Gesch?ftsf?hrer: Felix Imend?rffer From fziglio at redhat.com Mon Oct 26 09:59:58 2020 From: fziglio at redhat.com (Frediano Ziglio) Date: Mon, 26 Oct 2020 05:59:58 -0400 (EDT) Subject: [Spice-devel] how to get surface screen-shot in spice-server In-Reply-To: <6a34253e.27.175567a3b1e.Coremail.franklee1973@163.com> References: <6a34253e.27.175567a3b1e.Coremail.franklee1973@163.com> Message-ID: <1600792948.5398821.1603706398660.JavaMail.zimbra@redhat.com> > Hi, spice gurus: > I am a spice developer in my company which devle with desktop cloud > computing. 
> In spice-0.12.4 we can get surface screen-shot in red_worker.c by adding this > line in red_process_commands(): > surface_flush(worker, surface_id, &rect); > Function surface_flush flush undraw image to surface, then we get the > screen-shot by reading surface address. > But in spice-0.14.3 we can not get proper screen-shot, by adding this line in > red_process_display(): > display_channel_current_flush(worker->display_channel, surface_id); > We get screen-shot that flicker with white bars, I do not know why. > Much appreciation for any reply! > regards > Frank Hi, you should use display_channel_draw. Frediano From fziglio at redhat.com Wed Oct 28 16:08:35 2020 From: fziglio at redhat.com (Frediano Ziglio) Date: Wed, 28 Oct 2020 12:08:35 -0400 (EDT) Subject: [Spice-devel] ANNOUNCE spice-server 0.14.91 release candidate In-Reply-To: <1798225862.5721507.1603901176982.JavaMail.zimbra@redhat.com> Message-ID: <176547150.5721914.1603901315064.JavaMail.zimbra@redhat.com> Hey everyone, I just cut a new release candidate in the 0.14.x stable series. If you find any bugs or regressions, please report them in our issue tracker: https://gitlab.freedesktop.org/groups/spice/-/issues. See also https://gitlab.freedesktop.org/spice/spice/-/tags/v0.14.91. Major Changes in 0.14.91: ========================= **IMPORTANT** 0.14.91 is the first release candidate for the stable 0.15.x series. While some bugs might still be present, it should be reasonably stable. If you are looking for stability for daily use, please keep using the latest 0.14.x release. * Support UNIX abstract sockets * Fix some potential thread race condition in RedClient * Many cleanups in the code * Improve migration test script * Update in protocol documentation * Improve Meson build * Removed CELT support * Update CI * Removed QXLWorker definition, it was deprecated 6 years ago * Fix some compatibility with MacOS * Fix some compatibility with Windows * Move the project to C++ * Some fixes for SASL dealing with WebDAV * Fix minor Coverity reports * Add Doxygen support, manually built with "make doxy" * Support more mouse buttons (up to 16 buttons) * CVE-2020-14355 multiple buffer overflow vulnerabilities in QUIC decoding code https://www.spice-space.org/download/releases/spice-0.14.91.tar.bz2 Kind Regards, Frediano From tzimmermann at suse.de Wed Oct 28 19:35:12 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Wed, 28 Oct 2020 20:35:12 +0100 Subject: [Spice-devel] [PATCH v6 01/10] drm/vram-helper: Remove invariant parameters from internal kmap function In-Reply-To: <20201028193521.2489-1-tzimmermann@suse.de> References: <20201028193521.2489-1-tzimmermann@suse.de> Message-ID: <20201028193521.2489-2-tzimmermann@suse.de> The parameters map and is_iomem are always of the same value. Removed them to prepares the function for conversion to struct dma_buf_map. 
v4: * don't check for !kmap->virtual; will always be false Signed-off-by: Thomas Zimmermann Reviewed-by: Daniel Vetter Reviewed-by: Christian K?nig Tested-by: Sam Ravnborg --- drivers/gpu/drm/drm_gem_vram_helper.c | 18 ++++-------------- 1 file changed, 4 insertions(+), 14 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c index 9da823eb0edd..f445b84c43c4 100644 --- a/drivers/gpu/drm/drm_gem_vram_helper.c +++ b/drivers/gpu/drm/drm_gem_vram_helper.c @@ -379,32 +379,22 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo) } EXPORT_SYMBOL(drm_gem_vram_unpin); -static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo, - bool map, bool *is_iomem) +static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo) { int ret; struct ttm_bo_kmap_obj *kmap = &gbo->kmap; + bool is_iomem; if (gbo->kmap_use_count > 0) goto out; - if (kmap->virtual || !map) - goto out; - ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap); if (ret) return ERR_PTR(ret); out: - if (!kmap->virtual) { - if (is_iomem) - *is_iomem = false; - return NULL; /* not mapped; don't increment ref */ - } ++gbo->kmap_use_count; - if (is_iomem) - return ttm_kmap_obj_virtual(kmap, is_iomem); - return kmap->virtual; + return ttm_kmap_obj_virtual(kmap, &is_iomem); } static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo) @@ -449,7 +439,7 @@ void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo) ret = drm_gem_vram_pin_locked(gbo, 0); if (ret) goto err_ttm_bo_unreserve; - base = drm_gem_vram_kmap_locked(gbo, true, NULL); + base = drm_gem_vram_kmap_locked(gbo); if (IS_ERR(base)) { ret = PTR_ERR(base); goto err_drm_gem_vram_unpin_locked; -- 2.29.0 From tzimmermann at suse.de Wed Oct 28 19:35:13 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Wed, 28 Oct 2020 20:35:13 +0100 Subject: [Spice-devel] [PATCH v6 02/10] drm/cma-helper: Remove empty drm_gem_cma_prime_vunmap() In-Reply-To: <20201028193521.2489-1-tzimmermann@suse.de> References: <20201028193521.2489-1-tzimmermann@suse.de> Message-ID: <20201028193521.2489-3-tzimmermann@suse.de> The function drm_gem_cma_prime_vunmap() is empty. Remove it before changing the interface to use struct drm_buf_map. Signed-off-by: Thomas Zimmermann Reviewed-by: Christian K?nig Tested-by: Sam Ravnborg --- drivers/gpu/drm/drm_gem_cma_helper.c | 17 ----------------- drivers/gpu/drm/vc4/vc4_bo.c | 1 - include/drm/drm_gem_cma_helper.h | 1 - 3 files changed, 19 deletions(-) diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c index 2165633c9b9e..d527485ea0b7 100644 --- a/drivers/gpu/drm/drm_gem_cma_helper.c +++ b/drivers/gpu/drm/drm_gem_cma_helper.c @@ -537,23 +537,6 @@ void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj) } EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap); -/** - * drm_gem_cma_prime_vunmap - unmap a CMA GEM object from the kernel's virtual - * address space - * @obj: GEM object - * @vaddr: kernel virtual address where the CMA GEM object was mapped - * - * This function removes a buffer exported via DRM PRIME from the kernel's - * virtual address space. This is a no-op because CMA buffers cannot be - * unmapped from kernel space. Drivers using the CMA helpers should set this - * as their &drm_gem_object_funcs.vunmap callback. 
- */ -void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr) -{ - /* Nothing to do */ -} -EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vunmap); - static const struct drm_gem_object_funcs drm_gem_cma_default_funcs = { .free = drm_gem_cma_free_object, .print_info = drm_gem_cma_print_info, diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c index f432278173cd..557f0d1e6437 100644 --- a/drivers/gpu/drm/vc4/vc4_bo.c +++ b/drivers/gpu/drm/vc4/vc4_bo.c @@ -387,7 +387,6 @@ static const struct drm_gem_object_funcs vc4_gem_object_funcs = { .export = vc4_prime_export, .get_sg_table = drm_gem_cma_prime_get_sg_table, .vmap = vc4_prime_vmap, - .vunmap = drm_gem_cma_prime_vunmap, .vm_ops = &vc4_vm_ops, }; diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h index 2bfa2502607a..a064b0d1c480 100644 --- a/include/drm/drm_gem_cma_helper.h +++ b/include/drm/drm_gem_cma_helper.h @@ -104,7 +104,6 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev, int drm_gem_cma_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj); -void drm_gem_cma_prime_vunmap(struct drm_gem_object *obj, void *vaddr); struct drm_gem_object * drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size); -- 2.29.0 From tzimmermann at suse.de Wed Oct 28 19:35:18 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Wed, 28 Oct 2020 20:35:18 +0100 Subject: [Spice-devel] [PATCH v6 07/10] drm/gem: Update internal GEM vmap/vunmap interfaces to use struct dma_buf_map In-Reply-To: <20201028193521.2489-1-tzimmermann@suse.de> References: <20201028193521.2489-1-tzimmermann@suse.de> Message-ID: <20201028193521.2489-8-tzimmermann@suse.de> GEM's vmap and vunmap interfaces now wrap memory pointers in struct dma_buf_map. Signed-off-by: Thomas Zimmermann Reviewed-by: Daniel Vetter Tested-by: Sam Ravnborg --- drivers/gpu/drm/drm_client.c | 18 +++++++++++------- drivers/gpu/drm/drm_gem.c | 26 +++++++++++++------------- drivers/gpu/drm/drm_internal.h | 5 +++-- drivers/gpu/drm/drm_prime.c | 14 ++++---------- 4 files changed, 31 insertions(+), 32 deletions(-) diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c index 495f47d23d87..ac0082bed966 100644 --- a/drivers/gpu/drm/drm_client.c +++ b/drivers/gpu/drm/drm_client.c @@ -3,6 +3,7 @@ * Copyright 2018 Noralf Tr?nnes */ +#include #include #include #include @@ -304,7 +305,8 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u */ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer) { - void *vaddr; + struct dma_buf_map map; + int ret; if (buffer->vaddr) return buffer->vaddr; @@ -317,13 +319,13 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer) * fd_install step out of the driver backend hooks, to make that * final step optional for internal users. 
*/ - vaddr = drm_gem_vmap(buffer->gem); - if (IS_ERR(vaddr)) - return vaddr; + ret = drm_gem_vmap(buffer->gem, &map); + if (ret) + return ERR_PTR(ret); - buffer->vaddr = vaddr; + buffer->vaddr = map.vaddr; - return vaddr; + return map.vaddr; } EXPORT_SYMBOL(drm_client_buffer_vmap); @@ -337,7 +339,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap); */ void drm_client_buffer_vunmap(struct drm_client_buffer *buffer) { - drm_gem_vunmap(buffer->gem, buffer->vaddr); + struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr); + + drm_gem_vunmap(buffer->gem, &map); buffer->vaddr = NULL; } EXPORT_SYMBOL(drm_client_buffer_vunmap); diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c index a89ad4570e3c..4d5fff4bd821 100644 --- a/drivers/gpu/drm/drm_gem.c +++ b/drivers/gpu/drm/drm_gem.c @@ -1206,32 +1206,32 @@ void drm_gem_unpin(struct drm_gem_object *obj) obj->funcs->unpin(obj); } -void *drm_gem_vmap(struct drm_gem_object *obj) +int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) { - struct dma_buf_map map; int ret; if (!obj->funcs->vmap) - return ERR_PTR(-EOPNOTSUPP); + return -EOPNOTSUPP; - ret = obj->funcs->vmap(obj, &map); + ret = obj->funcs->vmap(obj, map); if (ret) - return ERR_PTR(ret); - else if (dma_buf_map_is_null(&map)) - return ERR_PTR(-ENOMEM); + return ret; + else if (dma_buf_map_is_null(map)) + return -ENOMEM; - return map.vaddr; + return 0; } -void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr) +void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) { - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr); - - if (!vaddr) + if (dma_buf_map_is_null(map)) return; if (obj->funcs->vunmap) - obj->funcs->vunmap(obj, &map); + obj->funcs->vunmap(obj, map); + + /* Always set the mapping to NULL. Callers may rely on this. */ + dma_buf_map_clear(map); } /** diff --git a/drivers/gpu/drm/drm_internal.h b/drivers/gpu/drm/drm_internal.h index 2bdac3557765..81d386b5b92a 100644 --- a/drivers/gpu/drm/drm_internal.h +++ b/drivers/gpu/drm/drm_internal.h @@ -33,6 +33,7 @@ struct dentry; struct dma_buf; +struct dma_buf_map; struct drm_connector; struct drm_crtc; struct drm_framebuffer; @@ -187,8 +188,8 @@ void drm_gem_print_info(struct drm_printer *p, unsigned int indent, int drm_gem_pin(struct drm_gem_object *obj); void drm_gem_unpin(struct drm_gem_object *obj); -void *drm_gem_vmap(struct drm_gem_object *obj); -void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr); +int drm_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); +void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map); /* drm_debugfs.c drm_debugfs_crc.c */ #if defined(CONFIG_DEBUG_FS) diff --git a/drivers/gpu/drm/drm_prime.c b/drivers/gpu/drm/drm_prime.c index 89e2a2496734..cb8fbeeb731b 100644 --- a/drivers/gpu/drm/drm_prime.c +++ b/drivers/gpu/drm/drm_prime.c @@ -667,21 +667,15 @@ EXPORT_SYMBOL(drm_gem_unmap_dma_buf); * * Sets up a kernel virtual mapping. This can be used as the &dma_buf_ops.vmap * callback. Calls into &drm_gem_object_funcs.vmap for device specific handling. + * The kernel virtual address is returned in map. * - * Returns the kernel virtual address or NULL on failure. + * Returns 0 on success or a negative errno code otherwise. 
*/ int drm_gem_dmabuf_vmap(struct dma_buf *dma_buf, struct dma_buf_map *map) { struct drm_gem_object *obj = dma_buf->priv; - void *vaddr; - vaddr = drm_gem_vmap(obj); - if (IS_ERR(vaddr)) - return PTR_ERR(vaddr); - - dma_buf_map_set_vaddr(map, vaddr); - - return 0; + return drm_gem_vmap(obj, map); } EXPORT_SYMBOL(drm_gem_dmabuf_vmap); @@ -697,7 +691,7 @@ void drm_gem_dmabuf_vunmap(struct dma_buf *dma_buf, struct dma_buf_map *map) { struct drm_gem_object *obj = dma_buf->priv; - drm_gem_vunmap(obj, map->vaddr); + drm_gem_vunmap(obj, map); } EXPORT_SYMBOL(drm_gem_dmabuf_vunmap); -- 2.29.0 From tzimmermann at suse.de Wed Oct 28 19:35:19 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Wed, 28 Oct 2020 20:35:19 +0100 Subject: [Spice-devel] [PATCH v6 08/10] drm/gem: Store client buffer mappings as struct dma_buf_map In-Reply-To: <20201028193521.2489-1-tzimmermann@suse.de> References: <20201028193521.2489-1-tzimmermann@suse.de> Message-ID: <20201028193521.2489-9-tzimmermann@suse.de> Kernel DRM clients now store their framebuffer address in an instance of struct dma_buf_map. Depending on the buffer's location, the address refers to system or I/O memory. Callers of drm_client_buffer_vmap() receive a copy of the value in the call's supplied arguments. It can be accessed and modified with dma_buf_map interfaces. v6: * don't call page_to_phys() on framebuffers in I/O memory; warn instead (Daniel) Signed-off-by: Thomas Zimmermann Reviewed-by: Daniel Vetter Tested-by: Sam Ravnborg --- drivers/gpu/drm/drm_client.c | 34 +++++++++++++++++++-------------- drivers/gpu/drm/drm_fb_helper.c | 32 ++++++++++++++++++++----------- include/drm/drm_client.h | 7 ++++--- 3 files changed, 45 insertions(+), 28 deletions(-) diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c index ac0082bed966..fe573acf1067 100644 --- a/drivers/gpu/drm/drm_client.c +++ b/drivers/gpu/drm/drm_client.c @@ -235,7 +235,7 @@ static void drm_client_buffer_delete(struct drm_client_buffer *buffer) { struct drm_device *dev = buffer->client->dev; - drm_gem_vunmap(buffer->gem, buffer->vaddr); + drm_gem_vunmap(buffer->gem, &buffer->map); if (buffer->gem) drm_gem_object_put(buffer->gem); @@ -291,25 +291,31 @@ drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, u /** * drm_client_buffer_vmap - Map DRM client buffer into address space * @buffer: DRM client buffer + * @map_copy: Returns the mapped memory's address * * This function maps a client buffer into kernel address space. If the - * buffer is already mapped, it returns the mapping's address. + * buffer is already mapped, it returns the existing mapping's address. * * Client buffer mappings are not ref'counted. Each call to * drm_client_buffer_vmap() should be followed by a call to * drm_client_buffer_vunmap(); or the client buffer should be mapped * throughout its lifetime. * + * The returned address is a copy of the internal value. In contrast to + * other vmap interfaces, you don't need it for the client's vunmap + * function. So you can modify it at will during blit and draw operations. + * * Returns: - * The mapped memory's address + * 0 on success, or a negative errno code otherwise. 
*/ -void *drm_client_buffer_vmap(struct drm_client_buffer *buffer) +int +drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map_copy) { - struct dma_buf_map map; + struct dma_buf_map *map = &buffer->map; int ret; - if (buffer->vaddr) - return buffer->vaddr; + if (dma_buf_map_is_set(map)) + goto out; /* * FIXME: The dependency on GEM here isn't required, we could @@ -319,13 +325,14 @@ void *drm_client_buffer_vmap(struct drm_client_buffer *buffer) * fd_install step out of the driver backend hooks, to make that * final step optional for internal users. */ - ret = drm_gem_vmap(buffer->gem, &map); + ret = drm_gem_vmap(buffer->gem, map); if (ret) - return ERR_PTR(ret); + return ret; - buffer->vaddr = map.vaddr; +out: + *map_copy = *map; - return map.vaddr; + return 0; } EXPORT_SYMBOL(drm_client_buffer_vmap); @@ -339,10 +346,9 @@ EXPORT_SYMBOL(drm_client_buffer_vmap); */ void drm_client_buffer_vunmap(struct drm_client_buffer *buffer) { - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(buffer->vaddr); + struct dma_buf_map *map = &buffer->map; - drm_gem_vunmap(buffer->gem, &map); - buffer->vaddr = NULL; + drm_gem_vunmap(buffer->gem, map); } EXPORT_SYMBOL(drm_client_buffer_vunmap); diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c index c2f72bb6afb1..6ce0b9119ef2 100644 --- a/drivers/gpu/drm/drm_fb_helper.c +++ b/drivers/gpu/drm/drm_fb_helper.c @@ -378,7 +378,7 @@ static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper, unsigned int cpp = fb->format->cpp[0]; size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp; void *src = fb_helper->fbdev->screen_buffer + offset; - void *dst = fb_helper->buffer->vaddr + offset; + void *dst = fb_helper->buffer->map.vaddr + offset; size_t len = (clip->x2 - clip->x1) * cpp; unsigned int y; @@ -400,7 +400,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work) struct drm_clip_rect *clip = &helper->dirty_clip; struct drm_clip_rect clip_copy; unsigned long flags; - void *vaddr; + struct dma_buf_map map; + int ret; spin_lock_irqsave(&helper->dirty_lock, flags); clip_copy = *clip; @@ -413,8 +414,8 @@ static void drm_fb_helper_dirty_work(struct work_struct *work) /* Generic fbdev uses a shadow buffer */ if (helper->buffer) { - vaddr = drm_client_buffer_vmap(helper->buffer); - if (IS_ERR(vaddr)) + ret = drm_client_buffer_vmap(helper->buffer, &map); + if (ret) return; drm_fb_helper_dirty_blit_real(helper, &clip_copy); } @@ -2060,7 +2061,8 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper, struct drm_framebuffer *fb; struct fb_info *fbi; u32 format; - void *vaddr; + struct dma_buf_map map; + int ret; drm_dbg_kms(dev, "surface width(%d), height(%d) and bpp(%d)\n", sizes->surface_width, sizes->surface_height, @@ -2096,14 +2098,22 @@ static int drm_fb_helper_generic_probe(struct drm_fb_helper *fb_helper, fb_deferred_io_init(fbi); } else { /* buffer is mapped for HW framebuffer */ - vaddr = drm_client_buffer_vmap(fb_helper->buffer); - if (IS_ERR(vaddr)) - return PTR_ERR(vaddr); + ret = drm_client_buffer_vmap(fb_helper->buffer, &map); + if (ret) + return ret; + if (map.is_iomem) + fbi->screen_base = map.vaddr_iomem; + else + fbi->screen_buffer = map.vaddr; - fbi->screen_buffer = vaddr; - /* Shamelessly leak the physical address to user-space */ + /* + * Shamelessly leak the physical address to user-space. As + * page_to_phys() is undefined for I/O memory, warn in this + * case. 
+ */ #if IS_ENABLED(CONFIG_DRM_FBDEV_LEAK_PHYS_SMEM) - if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0) + if (drm_leak_fbdev_smem && fbi->fix.smem_start == 0 && + !drm_WARN_ON_ONCE(dev, map.is_iomem)) fbi->fix.smem_start = page_to_phys(virt_to_page(fbi->screen_buffer)); #endif diff --git a/include/drm/drm_client.h b/include/drm/drm_client.h index 7aaea665bfc2..f07f2fb02e75 100644 --- a/include/drm/drm_client.h +++ b/include/drm/drm_client.h @@ -3,6 +3,7 @@ #ifndef _DRM_CLIENT_H_ #define _DRM_CLIENT_H_ +#include #include #include #include @@ -141,9 +142,9 @@ struct drm_client_buffer { struct drm_gem_object *gem; /** - * @vaddr: Virtual address for the buffer + * @map: Virtual address for the buffer */ - void *vaddr; + struct dma_buf_map map; /** * @fb: DRM framebuffer @@ -155,7 +156,7 @@ struct drm_client_buffer * drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format); void drm_client_framebuffer_delete(struct drm_client_buffer *buffer); int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect); -void *drm_client_buffer_vmap(struct drm_client_buffer *buffer); +int drm_client_buffer_vmap(struct drm_client_buffer *buffer, struct dma_buf_map *map); void drm_client_buffer_vunmap(struct drm_client_buffer *buffer); int drm_client_modeset_create(struct drm_client_dev *client); -- 2.29.0 From tzimmermann at suse.de Wed Oct 28 19:35:21 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Wed, 28 Oct 2020 20:35:21 +0100 Subject: [Spice-devel] [PATCH v6 10/10] drm/fb_helper: Support framebuffers in I/O memory In-Reply-To: <20201028193521.2489-1-tzimmermann@suse.de> References: <20201028193521.2489-1-tzimmermann@suse.de> Message-ID: <20201028193521.2489-11-tzimmermann@suse.de> At least sparc64 requires I/O-specific access to framebuffers. This patch updates the fbdev console accordingly. For drivers with direct access to the framebuffer memory, the callback functions in struct fb_ops test for the type of memory and call the rsp fb_sys_ of fb_cfb_ functions. Read and write operations are implemented internally by DRM's fbdev helper. For drivers that employ a shadow buffer, fbdev's blit function retrieves the framebuffer address as struct dma_buf_map, and uses dma_buf_map interfaces to access the buffer. The bochs driver on sparc64 uses a workaround to flag the framebuffer as I/O memory and avoid a HW exception. With the introduction of struct dma_buf_map, this is not required any longer. The patch removes the rsp code from both, bochs and fbdev. v5: * implement fb_read/fb_write internally (Daniel, Sam) v4: * move dma_buf_map changes into separate patch (Daniel) * TODO list: comment on fbdev updates (Daniel) Signed-off-by: Thomas Zimmermann Reviewed-by: Daniel Vetter Reviewed-by: Sam Ravnborg Tested-by: Sam Ravnborg --- Documentation/gpu/todo.rst | 19 ++- drivers/gpu/drm/bochs/bochs_kms.c | 1 - drivers/gpu/drm/drm_fb_helper.c | 227 ++++++++++++++++++++++++++++-- include/drm/drm_mode_config.h | 12 -- 4 files changed, 230 insertions(+), 29 deletions(-) diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst index 7e6fc3c04add..638b7f704339 100644 --- a/Documentation/gpu/todo.rst +++ b/Documentation/gpu/todo.rst @@ -197,13 +197,28 @@ Convert drivers to use drm_fbdev_generic_setup() ------------------------------------------------ Most drivers can use drm_fbdev_generic_setup(). Driver have to implement -atomic modesetting and GEM vmap support. 
Current generic fbdev emulation -expects the framebuffer in system memory (or system-like memory). +atomic modesetting and GEM vmap support. Historically, generic fbdev emulation +expected the framebuffer in system memory or system-like memory. By employing +struct dma_buf_map, drivers with frambuffers in I/O memory can be supported +as well. Contact: Maintainer of the driver you plan to convert Level: Intermediate +Reimplement functions in drm_fbdev_fb_ops without fbdev +------------------------------------------------------- + +A number of callback functions in drm_fbdev_fb_ops could benefit from +being rewritten without dependencies on the fbdev module. Some of the +helpers could further benefit from using struct dma_buf_map instead of +raw pointers. + +Contact: Thomas Zimmermann , Daniel Vetter + +Level: Advanced + + drm_framebuffer_funcs and drm_mode_config_funcs.fb_create cleanup ----------------------------------------------------------------- diff --git a/drivers/gpu/drm/bochs/bochs_kms.c b/drivers/gpu/drm/bochs/bochs_kms.c index 13d0d04c4457..853081d186d5 100644 --- a/drivers/gpu/drm/bochs/bochs_kms.c +++ b/drivers/gpu/drm/bochs/bochs_kms.c @@ -151,7 +151,6 @@ int bochs_kms_init(struct bochs_device *bochs) bochs->dev->mode_config.preferred_depth = 24; bochs->dev->mode_config.prefer_shadow = 0; bochs->dev->mode_config.prefer_shadow_fbdev = 1; - bochs->dev->mode_config.fbdev_use_iomem = true; bochs->dev->mode_config.quirk_addfb_prefer_host_byte_order = true; bochs->dev->mode_config.funcs = &bochs_mode_funcs; diff --git a/drivers/gpu/drm/drm_fb_helper.c b/drivers/gpu/drm/drm_fb_helper.c index 6ce0b9119ef2..714ce3bd6221 100644 --- a/drivers/gpu/drm/drm_fb_helper.c +++ b/drivers/gpu/drm/drm_fb_helper.c @@ -372,24 +372,22 @@ static void drm_fb_helper_resume_worker(struct work_struct *work) } static void drm_fb_helper_dirty_blit_real(struct drm_fb_helper *fb_helper, - struct drm_clip_rect *clip) + struct drm_clip_rect *clip, + struct dma_buf_map *dst) { struct drm_framebuffer *fb = fb_helper->fb; unsigned int cpp = fb->format->cpp[0]; size_t offset = clip->y1 * fb->pitches[0] + clip->x1 * cpp; void *src = fb_helper->fbdev->screen_buffer + offset; - void *dst = fb_helper->buffer->map.vaddr + offset; size_t len = (clip->x2 - clip->x1) * cpp; unsigned int y; - for (y = clip->y1; y < clip->y2; y++) { - if (!fb_helper->dev->mode_config.fbdev_use_iomem) - memcpy(dst, src, len); - else - memcpy_toio((void __iomem *)dst, src, len); + dma_buf_map_incr(dst, offset); /* go to first pixel within clip rect */ + for (y = clip->y1; y < clip->y2; y++) { + dma_buf_map_memcpy_to(dst, src, len); + dma_buf_map_incr(dst, fb->pitches[0]); src += fb->pitches[0]; - dst += fb->pitches[0]; } } @@ -417,8 +415,9 @@ static void drm_fb_helper_dirty_work(struct work_struct *work) ret = drm_client_buffer_vmap(helper->buffer, &map); if (ret) return; - drm_fb_helper_dirty_blit_real(helper, &clip_copy); + drm_fb_helper_dirty_blit_real(helper, &clip_copy, &map); } + if (helper->fb->funcs->dirty) helper->fb->funcs->dirty(helper->fb, NULL, 0, 0, &clip_copy, 1); @@ -2027,6 +2026,206 @@ static int drm_fbdev_fb_mmap(struct fb_info *info, struct vm_area_struct *vma) return -ENODEV; } +static bool drm_fbdev_use_iomem(struct fb_info *info) +{ + struct drm_fb_helper *fb_helper = info->par; + struct drm_client_buffer *buffer = fb_helper->buffer; + + return !drm_fbdev_use_shadow_fb(fb_helper) && buffer->map.is_iomem; +} + +static ssize_t fb_read_screen_base(struct fb_info *info, char __user *buf, size_t count, + loff_t pos) +{ + 
const char __iomem *src = info->screen_base + pos; + size_t alloc_size = min(count, PAGE_SIZE); + ssize_t ret = 0; + char *tmp; + + tmp = kmalloc(alloc_size, GFP_KERNEL); + if (!tmp) + return -ENOMEM; + + while (count) { + size_t c = min(count, alloc_size); + + memcpy_fromio(tmp, src, c); + if (copy_to_user(buf, tmp, c)) { + ret = -EFAULT; + break; + } + + src += c; + buf += c; + ret += c; + count -= c; + } + + kfree(tmp); + + return ret; +} + +static ssize_t fb_read_screen_buffer(struct fb_info *info, char __user *buf, size_t count, + loff_t pos) +{ + const char *src = info->screen_buffer + pos; + + if (copy_to_user(buf, src, count)) + return -EFAULT; + + return count; +} + +static ssize_t drm_fbdev_fb_read(struct fb_info *info, char __user *buf, + size_t count, loff_t *ppos) +{ + loff_t pos = *ppos; + size_t total_size; + ssize_t ret; + + if (info->state != FBINFO_STATE_RUNNING) + return -EPERM; + + if (info->screen_size) + total_size = info->screen_size; + else + total_size = info->fix.smem_len; + + if (pos >= total_size) + return 0; + if (count >= total_size) + count = total_size; + if (total_size - count < pos) + count = total_size - pos; + + if (drm_fbdev_use_iomem(info)) + ret = fb_read_screen_base(info, buf, count, pos); + else + ret = fb_read_screen_buffer(info, buf, count, pos); + + if (ret > 0) + *ppos = ret; + + return ret; +} + +static ssize_t fb_write_screen_base(struct fb_info *info, const char __user *buf, size_t count, + loff_t pos) +{ + char __iomem *dst = info->screen_base + pos; + size_t alloc_size = min(count, PAGE_SIZE); + ssize_t ret = 0; + u8 *tmp; + + tmp = kmalloc(alloc_size, GFP_KERNEL); + if (!tmp) + return -ENOMEM; + + while (count) { + size_t c = min(count, alloc_size); + + if (copy_from_user(tmp, buf, c)) { + ret = -EFAULT; + break; + } + memcpy_toio(dst, tmp, c); + + dst += c; + buf += c; + ret += c; + count -= c; + } + + kfree(tmp); + + return ret; +} + +static ssize_t fb_write_screen_buffer(struct fb_info *info, const char __user *buf, size_t count, + loff_t pos) +{ + char *dst = info->screen_buffer + pos; + + if (copy_from_user(dst, buf, count)) + return -EFAULT; + + return count; +} + +static ssize_t drm_fbdev_fb_write(struct fb_info *info, const char __user *buf, + size_t count, loff_t *ppos) +{ + loff_t pos = *ppos; + size_t total_size; + ssize_t ret; + int err; + + if (info->state != FBINFO_STATE_RUNNING) + return -EPERM; + + if (info->screen_size) + total_size = info->screen_size; + else + total_size = info->fix.smem_len; + + if (pos > total_size) + return -EFBIG; + if (count > total_size) { + err = -EFBIG; + count = total_size; + } + if (total_size - count < pos) { + if (!err) + err = -ENOSPC; + count = total_size - pos; + } + + /* + * Copy to framebuffer even if we already logged an error. Emulates + * the behavior of the original fbdev implementation. 
+ */ + if (drm_fbdev_use_iomem(info)) + ret = fb_write_screen_base(info, buf, count, pos); + else + ret = fb_write_screen_buffer(info, buf, count, pos); + + if (ret > 0) + *ppos = ret; + + if (err) + return err; + + return ret; +} + +static void drm_fbdev_fb_fillrect(struct fb_info *info, + const struct fb_fillrect *rect) +{ + if (drm_fbdev_use_iomem(info)) + drm_fb_helper_cfb_fillrect(info, rect); + else + drm_fb_helper_sys_fillrect(info, rect); +} + +static void drm_fbdev_fb_copyarea(struct fb_info *info, + const struct fb_copyarea *area) +{ + if (drm_fbdev_use_iomem(info)) + drm_fb_helper_cfb_copyarea(info, area); + else + drm_fb_helper_sys_copyarea(info, area); +} + +static void drm_fbdev_fb_imageblit(struct fb_info *info, + const struct fb_image *image) +{ + if (drm_fbdev_use_iomem(info)) + drm_fb_helper_cfb_imageblit(info, image); + else + drm_fb_helper_sys_imageblit(info, image); +} + static const struct fb_ops drm_fbdev_fb_ops = { .owner = THIS_MODULE, DRM_FB_HELPER_DEFAULT_OPS, @@ -2034,11 +2233,11 @@ static const struct fb_ops drm_fbdev_fb_ops = { .fb_release = drm_fbdev_fb_release, .fb_destroy = drm_fbdev_fb_destroy, .fb_mmap = drm_fbdev_fb_mmap, - .fb_read = drm_fb_helper_sys_read, - .fb_write = drm_fb_helper_sys_write, - .fb_fillrect = drm_fb_helper_sys_fillrect, - .fb_copyarea = drm_fb_helper_sys_copyarea, - .fb_imageblit = drm_fb_helper_sys_imageblit, + .fb_read = drm_fbdev_fb_read, + .fb_write = drm_fbdev_fb_write, + .fb_fillrect = drm_fbdev_fb_fillrect, + .fb_copyarea = drm_fbdev_fb_copyarea, + .fb_imageblit = drm_fbdev_fb_imageblit, }; static struct fb_deferred_io drm_fbdev_defio = { diff --git a/include/drm/drm_mode_config.h b/include/drm/drm_mode_config.h index 5ffbb4ed5b35..ab424ddd7665 100644 --- a/include/drm/drm_mode_config.h +++ b/include/drm/drm_mode_config.h @@ -877,18 +877,6 @@ struct drm_mode_config { */ bool prefer_shadow_fbdev; - /** - * @fbdev_use_iomem: - * - * Set to true if framebuffer reside in iomem. - * When set to true memcpy_toio() is used when copying the framebuffer in - * drm_fb_helper.drm_fb_helper_dirty_blit_real(). - * - * FIXME: This should be replaced with a per-mapping is_iomem - * flag (like ttm does), and then used everywhere in fbdev code. - */ - bool fbdev_use_iomem; - /** * @quirk_addfb_prefer_xbgr_30bpp: * -- 2.29.0 From tzimmermann at suse.de Wed Oct 28 19:35:14 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Wed, 28 Oct 2020 20:35:14 +0100 Subject: [Spice-devel] [PATCH v6 03/10] drm/etnaviv: Remove empty etnaviv_gem_prime_vunmap() In-Reply-To: <20201028193521.2489-1-tzimmermann@suse.de> References: <20201028193521.2489-1-tzimmermann@suse.de> Message-ID: <20201028193521.2489-4-tzimmermann@suse.de> The function etnaviv_gem_prime_vunmap() is empty. Remove it before changing the interface to use struct drm_buf_map. 
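Removing the empty callback is safe because the GEM core treats &drm_gem_object_funcs.vunmap as optional. After the vmap/vunmap rework in patch #7 of this series, the core-side helper reads, abridged:

void drm_gem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map)
{
	if (dma_buf_map_is_null(map))
		return;
	if (obj->funcs->vunmap)	/* a missing .vunmap is simply skipped */
		obj->funcs->vunmap(obj, map);
	dma_buf_map_clear(map);
}

So an empty implementation adds nothing over leaving the callback unset.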
Signed-off-by: Thomas Zimmermann Acked-by: Christian K?nig Tested-by: Sam Ravnborg --- drivers/gpu/drm/etnaviv/etnaviv_drv.h | 1 - drivers/gpu/drm/etnaviv/etnaviv_gem.c | 1 - drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 5 ----- 3 files changed, 7 deletions(-) diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h index 914f0867ff71..9682c26d89bb 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h @@ -52,7 +52,6 @@ int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma); int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset); struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj); void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj); -void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); int etnaviv_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev, diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem.c b/drivers/gpu/drm/etnaviv/etnaviv_gem.c index 67d9a2b9ea6a..bbd235473645 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem.c @@ -571,7 +571,6 @@ static const struct drm_gem_object_funcs etnaviv_gem_object_funcs = { .unpin = etnaviv_gem_prime_unpin, .get_sg_table = etnaviv_gem_prime_get_sg_table, .vmap = etnaviv_gem_prime_vmap, - .vunmap = etnaviv_gem_prime_vunmap, .vm_ops = &vm_ops, }; diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c index 135fbff6fecf..a6d9932a32ae 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c @@ -27,11 +27,6 @@ void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj) return etnaviv_gem_vmap(obj); } -void etnaviv_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) -{ - /* TODO msm_gem_vunmap() */ -} - int etnaviv_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma) { -- 2.29.0 From tzimmermann at suse.de Wed Oct 28 19:35:11 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Wed, 28 Oct 2020 20:35:11 +0100 Subject: [Spice-devel] [PATCH v6 00/10] Support GEM object mappings from I/O memory Message-ID: <20201028193521.2489-1-tzimmermann@suse.de> DRM's fbdev console uses regular load and store operations to update framebuffer memory. The bochs driver on sparc64 requires the use of I/O-specific load and store operations. We have a workaround, but need a long-term solution to the problem. This patchset changes GEM's vmap/vunmap interfaces to forward pointers of type struct dma_buf_map and updates the generic fbdev emulation to use them correctly. This enables I/O-memory operations on all framebuffers that require and support them. Patches #1 to #4 prepare VRAM helpers and drivers. Next is the update of the GEM vmap functions. Patch #5 adds vmap and vunmap that is usable with TTM-based GEM drivers, and patch #6 updates GEM's vmap/vunmap callback to forward instances of type struct dma_buf_map. While the patch touches many files throughout the DRM modules, the applied changes are mostly trivial interface fixes. Several TTM-based GEM drivers now use the new vmap code. Patch #7 updates GEM's internal vmap/vunmap functions to forward struct dma_buf_map. With struct dma_buf_map propagated through the layers, patches #8 to #10 convert DRM clients and generic fbdev emulation to use it. 
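As a rough illustration of what that buys a caller, the following hypothetical helper (example_copy_to_gem() is not part of the series) copies into a GEM object without knowing where the object's memory lives; drm_gem_vmap()/drm_gem_vunmap() are the DRM-internal helpers reworked in patch #7, and the memcpy helper comes from the dma-buf-map patch:

#include <linux/dma-buf-map.h>
#include <drm/drm_gem.h>
#include "drm_internal.h"	/* drm_gem_vmap(), drm_gem_vunmap() */

/* Hypothetical example, assuming it lives inside drivers/gpu/drm/. */
static int example_copy_to_gem(struct drm_gem_object *obj,
			       const void *src, size_t len)
{
	struct dma_buf_map map;
	int ret;

	ret = drm_gem_vmap(obj, &map);	/* fills map.vaddr or map.vaddr_iomem */
	if (ret)
		return ret;

	dma_buf_map_memcpy_to(&map, src, len);	/* memcpy() or memcpy_toio() */

	drm_gem_vunmap(obj, &map);	/* unmaps and clears the mapping */
	return 0;
}

The branch between system-memory and I/O access stays hidden inside the dma-buf-map helpers.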
Updating the fbdev framebuffer will select the correct functions, either for system or I/O memory. v6: * don't call page_to_phys() on fbdev framebuffers in I/O memory; warn instead (Daniel) v5: * rebase onto latest TTM changes (Christian) * support TTM premapped memory correctly (Christian) * implement fb_read/fb_write internally (Sam, Daniel) * cleanups v4: * provide TTM vmap/vunmap plus GEM helpers and convert drivers over (Christian, Daniel) * remove several empty functions * more TODOs and documentation (Daniel) v3: * recreate the whole patchset on top of struct dma_buf_map v2: * RFC patchset Thomas Zimmermann (10): drm/vram-helper: Remove invariant parameters from internal kmap function drm/cma-helper: Remove empty drm_gem_cma_prime_vunmap() drm/etnaviv: Remove empty etnaviv_gem_prime_vunmap() drm/exynos: Remove empty exynos_drm_gem_prime_{vmap,vunmap}() drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends drm/gem: Update internal GEM vmap/vunmap interfaces to use struct dma_buf_map drm/gem: Store client buffer mappings as struct dma_buf_map dma-buf-map: Add memcpy and pointer-increment interfaces drm/fb_helper: Support framebuffers in I/O memory Documentation/gpu/todo.rst | 37 ++- drivers/gpu/drm/Kconfig | 2 + drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 36 --- drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h | 2 - drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 5 +- drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 1 - drivers/gpu/drm/ast/ast_cursor.c | 27 +- drivers/gpu/drm/ast/ast_drv.h | 7 +- drivers/gpu/drm/bochs/bochs_kms.c | 1 - drivers/gpu/drm/drm_client.c | 38 +-- drivers/gpu/drm/drm_fb_helper.c | 257 ++++++++++++++++++-- drivers/gpu/drm/drm_gem.c | 29 ++- drivers/gpu/drm/drm_gem_cma_helper.c | 27 +- drivers/gpu/drm/drm_gem_shmem_helper.c | 48 ++-- drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++ drivers/gpu/drm/drm_gem_vram_helper.c | 117 ++++----- drivers/gpu/drm/drm_internal.h | 5 +- drivers/gpu/drm/drm_prime.c | 14 +- drivers/gpu/drm/etnaviv/etnaviv_drv.h | 3 +- drivers/gpu/drm/etnaviv/etnaviv_gem.c | 1 - drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 12 +- drivers/gpu/drm/exynos/exynos_drm_gem.c | 12 - drivers/gpu/drm/exynos/exynos_drm_gem.h | 2 - drivers/gpu/drm/lima/lima_gem.c | 6 +- drivers/gpu/drm/lima/lima_sched.c | 11 +- drivers/gpu/drm/mgag200/mgag200_mode.c | 10 +- drivers/gpu/drm/nouveau/Kconfig | 1 + drivers/gpu/drm/nouveau/nouveau_bo.h | 2 - drivers/gpu/drm/nouveau/nouveau_gem.c | 6 +- drivers/gpu/drm/nouveau/nouveau_gem.h | 2 - drivers/gpu/drm/nouveau/nouveau_prime.c | 20 -- drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 14 +- drivers/gpu/drm/qxl/qxl_display.c | 11 +- drivers/gpu/drm/qxl/qxl_draw.c | 14 +- drivers/gpu/drm/qxl/qxl_drv.h | 11 +- drivers/gpu/drm/qxl/qxl_object.c | 31 ++- drivers/gpu/drm/qxl/qxl_object.h | 2 +- drivers/gpu/drm/qxl/qxl_prime.c | 12 +- drivers/gpu/drm/radeon/radeon.h | 1 - drivers/gpu/drm/radeon/radeon_gem.c | 7 +- drivers/gpu/drm/radeon/radeon_prime.c | 20 -- drivers/gpu/drm/rockchip/rockchip_drm_gem.c | 22 +- drivers/gpu/drm/rockchip/rockchip_drm_gem.h | 4 +- drivers/gpu/drm/tiny/cirrus.c | 10 +- drivers/gpu/drm/tiny/gm12u320.c | 10 +- drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++ drivers/gpu/drm/udl/udl_modeset.c | 8 +- drivers/gpu/drm/vboxvideo/vbox_mode.c | 11 +- drivers/gpu/drm/vc4/vc4_bo.c | 7 +- drivers/gpu/drm/vc4/vc4_drv.h | 2 +- drivers/gpu/drm/vgem/vgem_drv.c | 16 +- drivers/gpu/drm/vkms/vkms_plane.c | 15 +- drivers/gpu/drm/vkms/vkms_writeback.c | 22 +- 
drivers/gpu/drm/xen/xen_drm_front_gem.c | 18 +- drivers/gpu/drm/xen/xen_drm_front_gem.h | 6 +- include/drm/drm_client.h | 7 +- include/drm/drm_gem.h | 5 +- include/drm/drm_gem_cma_helper.h | 3 +- include/drm/drm_gem_shmem_helper.h | 4 +- include/drm/drm_gem_ttm_helper.h | 6 + include/drm/drm_gem_vram_helper.h | 14 +- include/drm/drm_mode_config.h | 12 - include/drm/ttm/ttm_bo_api.h | 28 +++ include/linux/dma-buf-map.h | 93 ++++++- 64 files changed, 859 insertions(+), 438 deletions(-) -- 2.29.0 From tzimmermann at suse.de Wed Oct 28 19:35:15 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Wed, 28 Oct 2020 20:35:15 +0100 Subject: [Spice-devel] [PATCH v6 04/10] drm/exynos: Remove empty exynos_drm_gem_prime_{vmap, vunmap}() In-Reply-To: <20201028193521.2489-1-tzimmermann@suse.de> References: <20201028193521.2489-1-tzimmermann@suse.de> Message-ID: <20201028193521.2489-5-tzimmermann@suse.de> The functions exynos_drm_gem_prime_{vmap,vunmap}() are empty. Remove them before changing the interface to use struct drm_buf_map. As a side effect of removing drm_gem_prime_vmap(), the error code changes from ENOMEM to EOPNOTSUPP. Signed-off-by: Thomas Zimmermann Acked-by: Christian K?nig Tested-by: Sam Ravnborg --- drivers/gpu/drm/exynos/exynos_drm_gem.c | 12 ------------ drivers/gpu/drm/exynos/exynos_drm_gem.h | 2 -- 2 files changed, 14 deletions(-) diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.c b/drivers/gpu/drm/exynos/exynos_drm_gem.c index e7a6eb96f692..13a35623ac04 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_gem.c +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.c @@ -137,8 +137,6 @@ static const struct vm_operations_struct exynos_drm_gem_vm_ops = { static const struct drm_gem_object_funcs exynos_drm_gem_object_funcs = { .free = exynos_drm_gem_free_object, .get_sg_table = exynos_drm_gem_prime_get_sg_table, - .vmap = exynos_drm_gem_prime_vmap, - .vunmap = exynos_drm_gem_prime_vunmap, .vm_ops = &exynos_drm_gem_vm_ops, }; @@ -471,16 +469,6 @@ exynos_drm_gem_prime_import_sg_table(struct drm_device *dev, return &exynos_gem->base; } -void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj) -{ - return NULL; -} - -void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) -{ - /* Nothing to do */ -} - int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma) { diff --git a/drivers/gpu/drm/exynos/exynos_drm_gem.h b/drivers/gpu/drm/exynos/exynos_drm_gem.h index 74e926abeff0..a23272fb96fb 100644 --- a/drivers/gpu/drm/exynos/exynos_drm_gem.h +++ b/drivers/gpu/drm/exynos/exynos_drm_gem.h @@ -107,8 +107,6 @@ struct drm_gem_object * exynos_drm_gem_prime_import_sg_table(struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sgt); -void *exynos_drm_gem_prime_vmap(struct drm_gem_object *obj); -void exynos_drm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); int exynos_drm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); -- 2.29.0 From tzimmermann at suse.de Wed Oct 28 19:35:16 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Wed, 28 Oct 2020 20:35:16 +0100 Subject: [Spice-devel] [PATCH v6 05/10] drm/ttm: Add vmap/vunmap to TTM and TTM GEM helpers In-Reply-To: <20201028193521.2489-1-tzimmermann@suse.de> References: <20201028193521.2489-1-tzimmermann@suse.de> Message-ID: <20201028193521.2489-6-tzimmermann@suse.de> The new functions ttm_bo_{vmap,vunmap}() map and unmap a TTM BO in kernel address space. The mapping's address is returned as struct dma_buf_map. 
Each function is a simplified version of TTM's existing kmap code. Both functions respect the memory's location ani/or writecombine flags. On top TTM's functions, GEM TTM helpers got drm_gem_ttm_{vmap,vunmap}(), two helpers that convert a GEM object into the TTM BO and forward the call to TTM's vmap/vunmap. These helpers can be dropped into the rsp GEM object callbacks. v5: * use size_t for storing mapping size (Christian) * ignore premapped memory areas correctly in ttm_bo_vunmap() * rebase onto latest TTM interfaces (Christian) * remove BUG() from ttm_bo_vmap() (Christian) v4: * drop ttm_kmap_obj_to_dma_buf() in favor of vmap helpers (Daniel, Christian) Signed-off-by: Thomas Zimmermann Reviewed-by: Christian K?nig Acked-by: Daniel Vetter Tested-by: Sam Ravnborg --- drivers/gpu/drm/drm_gem_ttm_helper.c | 38 +++++++++++++++ drivers/gpu/drm/ttm/ttm_bo_util.c | 72 ++++++++++++++++++++++++++++ include/drm/drm_gem_ttm_helper.h | 6 +++ include/drm/ttm/ttm_bo_api.h | 28 +++++++++++ include/linux/dma-buf-map.h | 20 ++++++++ 5 files changed, 164 insertions(+) diff --git a/drivers/gpu/drm/drm_gem_ttm_helper.c b/drivers/gpu/drm/drm_gem_ttm_helper.c index 0e4fb9ba43ad..db4c14d78a30 100644 --- a/drivers/gpu/drm/drm_gem_ttm_helper.c +++ b/drivers/gpu/drm/drm_gem_ttm_helper.c @@ -49,6 +49,44 @@ void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent, } EXPORT_SYMBOL(drm_gem_ttm_print_info); +/** + * drm_gem_ttm_vmap() - vmap &ttm_buffer_object + * @gem: GEM object. + * @map: [out] returns the dma-buf mapping. + * + * Maps a GEM object with ttm_bo_vmap(). This function can be used as + * &drm_gem_object_funcs.vmap callback. + * + * Returns: + * 0 on success, or a negative errno code otherwise. + */ +int drm_gem_ttm_vmap(struct drm_gem_object *gem, + struct dma_buf_map *map) +{ + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); + + return ttm_bo_vmap(bo, map); + +} +EXPORT_SYMBOL(drm_gem_ttm_vmap); + +/** + * drm_gem_ttm_vunmap() - vunmap &ttm_buffer_object + * @gem: GEM object. + * @map: dma-buf mapping. + * + * Unmaps a GEM object with ttm_bo_vunmap(). This function can be used as + * &drm_gem_object_funcs.vmap callback. + */ +void drm_gem_ttm_vunmap(struct drm_gem_object *gem, + struct dma_buf_map *map) +{ + struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gem); + + ttm_bo_vunmap(bo, map); +} +EXPORT_SYMBOL(drm_gem_ttm_vunmap); + /** * drm_gem_ttm_mmap() - mmap &ttm_buffer_object * @gem: GEM object. 
diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c index ecb54415d1ca..7ccb2295cac1 100644 --- a/drivers/gpu/drm/ttm/ttm_bo_util.c +++ b/drivers/gpu/drm/ttm/ttm_bo_util.c @@ -32,6 +32,7 @@ #include #include #include +#include #include #include #include @@ -471,6 +472,77 @@ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map) } EXPORT_SYMBOL(ttm_bo_kunmap); +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map) +{ + struct ttm_resource *mem = &bo->mem; + int ret; + + ret = ttm_mem_io_reserve(bo->bdev, mem); + if (ret) + return ret; + + if (mem->bus.is_iomem) { + void __iomem *vaddr_iomem; + size_t size = bo->num_pages << PAGE_SHIFT; + + if (mem->bus.addr) + vaddr_iomem = (void __iomem *)mem->bus.addr; + else if (mem->bus.caching == ttm_write_combined) + vaddr_iomem = ioremap_wc(mem->bus.offset, size); + else + vaddr_iomem = ioremap(mem->bus.offset, size); + + if (!vaddr_iomem) + return -ENOMEM; + + dma_buf_map_set_vaddr_iomem(map, vaddr_iomem); + + } else { + struct ttm_operation_ctx ctx = { + .interruptible = false, + .no_wait_gpu = false + }; + struct ttm_tt *ttm = bo->ttm; + pgprot_t prot; + void *vaddr; + + ret = ttm_tt_populate(bo->bdev, ttm, &ctx); + if (ret) + return ret; + + /* + * We need to use vmap to get the desired page protection + * or to make the buffer object look contiguous. + */ + prot = ttm_io_prot(bo, mem, PAGE_KERNEL); + vaddr = vmap(ttm->pages, bo->num_pages, 0, prot); + if (!vaddr) + return -ENOMEM; + + dma_buf_map_set_vaddr(map, vaddr); + } + + return 0; +} +EXPORT_SYMBOL(ttm_bo_vmap); + +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map) +{ + struct ttm_resource *mem = &bo->mem; + + if (dma_buf_map_is_null(map)) + return; + + if (!map->is_iomem) + vunmap(map->vaddr); + else if (!mem->bus.addr) + iounmap(map->vaddr_iomem); + dma_buf_map_clear(map); + + ttm_mem_io_free(bo->bdev, &bo->mem); +} +EXPORT_SYMBOL(ttm_bo_vunmap); + static int ttm_bo_wait_free_node(struct ttm_buffer_object *bo, bool dst_use_tt) { diff --git a/include/drm/drm_gem_ttm_helper.h b/include/drm/drm_gem_ttm_helper.h index 118cef76f84f..7c6d874910b8 100644 --- a/include/drm/drm_gem_ttm_helper.h +++ b/include/drm/drm_gem_ttm_helper.h @@ -10,11 +10,17 @@ #include #include +struct dma_buf_map; + #define drm_gem_ttm_of_gem(gem_obj) \ container_of(gem_obj, struct ttm_buffer_object, base) void drm_gem_ttm_print_info(struct drm_printer *p, unsigned int indent, const struct drm_gem_object *gem); +int drm_gem_ttm_vmap(struct drm_gem_object *gem, + struct dma_buf_map *map); +void drm_gem_ttm_vunmap(struct drm_gem_object *gem, + struct dma_buf_map *map); int drm_gem_ttm_mmap(struct drm_gem_object *gem, struct vm_area_struct *vma); diff --git a/include/drm/ttm/ttm_bo_api.h b/include/drm/ttm/ttm_bo_api.h index 37102e45e496..2c59a785374c 100644 --- a/include/drm/ttm/ttm_bo_api.h +++ b/include/drm/ttm/ttm_bo_api.h @@ -48,6 +48,8 @@ struct ttm_bo_global; struct ttm_bo_device; +struct dma_buf_map; + struct drm_mm_node; struct ttm_placement; @@ -494,6 +496,32 @@ int ttm_bo_kmap(struct ttm_buffer_object *bo, unsigned long start_page, */ void ttm_bo_kunmap(struct ttm_bo_kmap_obj *map); +/** + * ttm_bo_vmap + * + * @bo: The buffer object. + * @map: pointer to a struct dma_buf_map representing the map. + * + * Sets up a kernel virtual mapping, using ioremap or vmap to the + * data in the buffer object. The parameter @map returns the virtual + * address as struct dma_buf_map. Unmap the buffer with ttm_bo_vunmap(). 
+ * + * Returns + * -ENOMEM: Out of memory. + * -EINVAL: Invalid range. + */ +int ttm_bo_vmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); + +/** + * ttm_bo_vunmap + * + * @bo: The buffer object. + * @map: Object describing the map to unmap. + * + * Unmaps a kernel map set up by ttm_bo_vmap(). + */ +void ttm_bo_vunmap(struct ttm_buffer_object *bo, struct dma_buf_map *map); + /** * ttm_bo_mmap_obj - mmap memory backed by a ttm buffer object. * diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h index fd1aba545fdf..2e8bbecb5091 100644 --- a/include/linux/dma-buf-map.h +++ b/include/linux/dma-buf-map.h @@ -45,6 +45,12 @@ * * dma_buf_map_set_vaddr(&map. 0xdeadbeaf); * + * To set an address in I/O memory, use dma_buf_map_set_vaddr_iomem(). + * + * .. code-block:: c + * + * dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf); + * * Test if a mapping is valid with either dma_buf_map_is_set() or * dma_buf_map_is_null(). * @@ -118,6 +124,20 @@ static inline void dma_buf_map_set_vaddr(struct dma_buf_map *map, void *vaddr) map->is_iomem = false; } +/** + * dma_buf_map_set_vaddr_iomem - Sets a dma-buf mapping structure to an address in I/O memory + * @map: The dma-buf mapping structure + * @vaddr_iomem: An I/O-memory address + * + * Sets the address and the I/O-memory flag. + */ +static inline void dma_buf_map_set_vaddr_iomem(struct dma_buf_map *map, + void __iomem *vaddr_iomem) +{ + map->vaddr_iomem = vaddr_iomem; + map->is_iomem = true; +} + /** * dma_buf_map_is_equal - Compares two dma-buf mapping structures for equality * @lhs: The dma-buf mapping structure -- 2.29.0 From tzimmermann at suse.de Wed Oct 28 19:35:17 2020 From: tzimmermann at suse.de (Thomas Zimmermann) Date: Wed, 28 Oct 2020 20:35:17 +0100 Subject: [Spice-devel] [PATCH v6 06/10] drm/gem: Use struct dma_buf_map in GEM vmap ops and convert GEM backends In-Reply-To: <20201028193521.2489-1-tzimmermann@suse.de> References: <20201028193521.2489-1-tzimmermann@suse.de> Message-ID: <20201028193521.2489-7-tzimmermann@suse.de> This patch replaces the vmap/vunmap's use of raw pointers in GEM object functions with instances of struct dma_buf_map. GEM backends are converted as well. For most of them, this simply changes the returned type. TTM-based drivers now return information about the location of the memory, either system or I/O memory. GEM VRAM helpers and qxl now use ttm_bo_vmap() et al. Amdgpu, nouveau and radeon use drm_gem_ttm_vmap() et al instead of implementing their own vmap callbacks. v5: * update vkms after switch to shmem v4: * use ttm_bo_vmap(), drm_gem_ttm_vmap(), et al. 
(Daniel, Christian) * fix a trailing { in drm_gem_vmap() * remove several empty functions instead of converting them (Daniel) * comment uses of raw pointers with a TODO (Daniel) * TODO list: convert more helpers to use struct dma_buf_map Signed-off-by: Thomas Zimmermann Acked-by: Christian K?nig Tested-by: Sam Ravnborg --- Documentation/gpu/todo.rst | 18 ++++ drivers/gpu/drm/Kconfig | 2 + drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c | 36 ------- drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h | 2 - drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c | 5 +- drivers/gpu/drm/amd/amdgpu/amdgpu_object.h | 1 - drivers/gpu/drm/ast/ast_cursor.c | 27 +++-- drivers/gpu/drm/ast/ast_drv.h | 7 +- drivers/gpu/drm/drm_gem.c | 23 +++-- drivers/gpu/drm/drm_gem_cma_helper.c | 10 +- drivers/gpu/drm/drm_gem_shmem_helper.c | 48 +++++---- drivers/gpu/drm/drm_gem_vram_helper.c | 107 ++++++++++---------- drivers/gpu/drm/etnaviv/etnaviv_drv.h | 2 +- drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c | 9 +- drivers/gpu/drm/lima/lima_gem.c | 6 +- drivers/gpu/drm/lima/lima_sched.c | 11 +- drivers/gpu/drm/mgag200/mgag200_mode.c | 10 +- drivers/gpu/drm/nouveau/Kconfig | 1 + drivers/gpu/drm/nouveau/nouveau_bo.h | 2 - drivers/gpu/drm/nouveau/nouveau_gem.c | 6 +- drivers/gpu/drm/nouveau/nouveau_gem.h | 2 - drivers/gpu/drm/nouveau/nouveau_prime.c | 20 ---- drivers/gpu/drm/panfrost/panfrost_perfcnt.c | 14 +-- drivers/gpu/drm/qxl/qxl_display.c | 11 +- drivers/gpu/drm/qxl/qxl_draw.c | 14 ++- drivers/gpu/drm/qxl/qxl_drv.h | 11 +- drivers/gpu/drm/qxl/qxl_object.c | 31 +++--- drivers/gpu/drm/qxl/qxl_object.h | 2 +- drivers/gpu/drm/qxl/qxl_prime.c | 12 +-- drivers/gpu/drm/radeon/radeon.h | 1 - drivers/gpu/drm/radeon/radeon_gem.c | 7 +- drivers/gpu/drm/radeon/radeon_prime.c | 20 ---- drivers/gpu/drm/rockchip/rockchip_drm_gem.c | 22 ++-- drivers/gpu/drm/rockchip/rockchip_drm_gem.h | 4 +- drivers/gpu/drm/tiny/cirrus.c | 10 +- drivers/gpu/drm/tiny/gm12u320.c | 10 +- drivers/gpu/drm/udl/udl_modeset.c | 8 +- drivers/gpu/drm/vboxvideo/vbox_mode.c | 11 +- drivers/gpu/drm/vc4/vc4_bo.c | 6 +- drivers/gpu/drm/vc4/vc4_drv.h | 2 +- drivers/gpu/drm/vgem/vgem_drv.c | 16 ++- drivers/gpu/drm/vkms/vkms_plane.c | 15 ++- drivers/gpu/drm/vkms/vkms_writeback.c | 22 ++-- drivers/gpu/drm/xen/xen_drm_front_gem.c | 18 ++-- drivers/gpu/drm/xen/xen_drm_front_gem.h | 6 +- include/drm/drm_gem.h | 5 +- include/drm/drm_gem_cma_helper.h | 2 +- include/drm/drm_gem_shmem_helper.h | 4 +- include/drm/drm_gem_vram_helper.h | 14 +-- 49 files changed, 345 insertions(+), 308 deletions(-) diff --git a/Documentation/gpu/todo.rst b/Documentation/gpu/todo.rst index 700637e25ecd..7e6fc3c04add 100644 --- a/Documentation/gpu/todo.rst +++ b/Documentation/gpu/todo.rst @@ -446,6 +446,24 @@ Contact: Ville Syrj?l?, Daniel Vetter Level: Intermediate +Use struct dma_buf_map throughout codebase +------------------------------------------ + +Pointers to shared device memory are stored in struct dma_buf_map. Each +instance knows whether it refers to system or I/O memory. Most of the DRM-wide +interface have been converted to use struct dma_buf_map, but implementations +often still use raw pointers. + +The task is to use struct dma_buf_map where it makes sense. + +* Memory managers should use struct dma_buf_map for dma-buf-imported buffers. +* TTM might benefit from using struct dma_buf_map internally. +* Framebuffer copying and blitting helpers should operate on struct dma_buf_map. 
+ +Contact: Thomas Zimmermann , Christian K?nig, Daniel Vetter + +Level: Intermediate + Core refactorings ================= diff --git a/drivers/gpu/drm/Kconfig b/drivers/gpu/drm/Kconfig index 32257189e09b..e479b04e955e 100644 --- a/drivers/gpu/drm/Kconfig +++ b/drivers/gpu/drm/Kconfig @@ -239,6 +239,7 @@ config DRM_RADEON select FW_LOADER select DRM_KMS_HELPER select DRM_TTM + select DRM_TTM_HELPER select POWER_SUPPLY select HWMON select BACKLIGHT_CLASS_DEVICE @@ -259,6 +260,7 @@ config DRM_AMDGPU select DRM_KMS_HELPER select DRM_SCHED select DRM_TTM + select DRM_TTM_HELPER select POWER_SUPPLY select HWMON select BACKLIGHT_CLASS_DEVICE diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c index 5b465ab774d1..e5919efca870 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.c @@ -41,42 +41,6 @@ #include #include -/** - * amdgpu_gem_prime_vmap - &dma_buf_ops.vmap implementation - * @obj: GEM BO - * - * Sets up an in-kernel virtual mapping of the BO's memory. - * - * Returns: - * The virtual address of the mapping or an error pointer. - */ -void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj) -{ - struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj); - int ret; - - ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, - &bo->dma_buf_vmap); - if (ret) - return ERR_PTR(ret); - - return bo->dma_buf_vmap.virtual; -} - -/** - * amdgpu_gem_prime_vunmap - &dma_buf_ops.vunmap implementation - * @obj: GEM BO - * @vaddr: Virtual address (unused) - * - * Tears down the in-kernel virtual mapping of the BO's memory. - */ -void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) -{ - struct amdgpu_bo *bo = gem_to_amdgpu_bo(obj); - - ttm_bo_kunmap(&bo->dma_buf_vmap); -} - /** * amdgpu_gem_prime_mmap - &drm_driver.gem_prime_mmap implementation * @obj: GEM BO diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h index 2c5c84a06bb9..39b5b9616fd8 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_dma_buf.h @@ -31,8 +31,6 @@ struct drm_gem_object *amdgpu_gem_prime_import(struct drm_device *dev, struct dma_buf *dma_buf); bool amdgpu_dmabuf_is_xgmi_accessible(struct amdgpu_device *adev, struct amdgpu_bo *bo); -void *amdgpu_gem_prime_vmap(struct drm_gem_object *obj); -void amdgpu_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); int amdgpu_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c index be08a63ef58c..576659827e74 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_gem.c @@ -33,6 +33,7 @@ #include #include +#include #include "amdgpu.h" #include "amdgpu_display.h" @@ -220,8 +221,8 @@ static const struct drm_gem_object_funcs amdgpu_gem_object_funcs = { .open = amdgpu_gem_object_open, .close = amdgpu_gem_object_close, .export = amdgpu_gem_prime_export, - .vmap = amdgpu_gem_prime_vmap, - .vunmap = amdgpu_gem_prime_vunmap, + .vmap = drm_gem_ttm_vmap, + .vunmap = drm_gem_ttm_vunmap, }; /* diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h index 132e5f955180..01296ef0d673 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_object.h @@ -100,7 +100,6 @@ struct amdgpu_bo { struct amdgpu_bo *parent; struct amdgpu_bo *shadow; - struct ttm_bo_kmap_obj dma_buf_vmap; 
struct amdgpu_mn *mn; diff --git a/drivers/gpu/drm/ast/ast_cursor.c b/drivers/gpu/drm/ast/ast_cursor.c index e0f4613918ad..742d43a7edf4 100644 --- a/drivers/gpu/drm/ast/ast_cursor.c +++ b/drivers/gpu/drm/ast/ast_cursor.c @@ -39,7 +39,7 @@ static void ast_cursor_fini(struct ast_private *ast) for (i = 0; i < ARRAY_SIZE(ast->cursor.gbo); ++i) { gbo = ast->cursor.gbo[i]; - drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]); + drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]); drm_gem_vram_unpin(gbo); drm_gem_vram_put(gbo); } @@ -60,7 +60,7 @@ int ast_cursor_init(struct ast_private *ast) struct drm_device *dev = &ast->base; size_t size, i; struct drm_gem_vram_object *gbo; - void __iomem *vaddr; + struct dma_buf_map map; int ret; size = roundup(AST_HWC_SIZE + AST_HWC_SIGNATURE_SIZE, PAGE_SIZE); @@ -77,16 +77,15 @@ int ast_cursor_init(struct ast_private *ast) drm_gem_vram_put(gbo); goto err_drm_gem_vram_put; } - vaddr = drm_gem_vram_vmap(gbo); - if (IS_ERR(vaddr)) { - ret = PTR_ERR(vaddr); + ret = drm_gem_vram_vmap(gbo, &map); + if (ret) { drm_gem_vram_unpin(gbo); drm_gem_vram_put(gbo); goto err_drm_gem_vram_put; } ast->cursor.gbo[i] = gbo; - ast->cursor.vaddr[i] = vaddr; + ast->cursor.map[i] = map; } return drmm_add_action_or_reset(dev, ast_cursor_release, NULL); @@ -95,7 +94,7 @@ int ast_cursor_init(struct ast_private *ast) while (i) { --i; gbo = ast->cursor.gbo[i]; - drm_gem_vram_vunmap(gbo, ast->cursor.vaddr[i]); + drm_gem_vram_vunmap(gbo, &ast->cursor.map[i]); drm_gem_vram_unpin(gbo); drm_gem_vram_put(gbo); } @@ -170,6 +169,7 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb) { struct drm_device *dev = &ast->base; struct drm_gem_vram_object *gbo; + struct dma_buf_map map; int ret; void *src; void __iomem *dst; @@ -183,18 +183,17 @@ int ast_cursor_blit(struct ast_private *ast, struct drm_framebuffer *fb) ret = drm_gem_vram_pin(gbo, 0); if (ret) return ret; - src = drm_gem_vram_vmap(gbo); - if (IS_ERR(src)) { - ret = PTR_ERR(src); + ret = drm_gem_vram_vmap(gbo, &map); + if (ret) goto err_drm_gem_vram_unpin; - } + src = map.vaddr; /* TODO: Use mapping abstraction properly */ - dst = ast->cursor.vaddr[ast->cursor.next_index]; + dst = ast->cursor.map[ast->cursor.next_index].vaddr_iomem; /* do data transfer to cursor BO */ update_cursor_image(dst, src, fb->width, fb->height); - drm_gem_vram_vunmap(gbo, src); + drm_gem_vram_vunmap(gbo, &map); drm_gem_vram_unpin(gbo); return 0; @@ -257,7 +256,7 @@ void ast_cursor_show(struct ast_private *ast, int x, int y, u8 __iomem *sig; u8 jreg; - dst = ast->cursor.vaddr[ast->cursor.next_index]; + dst = ast->cursor.map[ast->cursor.next_index].vaddr; sig = dst + AST_HWC_SIZE; writel(x, sig + AST_HWC_SIGNATURE_X); diff --git a/drivers/gpu/drm/ast/ast_drv.h b/drivers/gpu/drm/ast/ast_drv.h index 467049ca8430..f963141dd851 100644 --- a/drivers/gpu/drm/ast/ast_drv.h +++ b/drivers/gpu/drm/ast/ast_drv.h @@ -28,10 +28,11 @@ #ifndef __AST_DRV_H__ #define __AST_DRV_H__ -#include -#include +#include #include #include +#include +#include #include #include @@ -131,7 +132,7 @@ struct ast_private { struct { struct drm_gem_vram_object *gbo[AST_DEFAULT_HWC_NUM]; - void __iomem *vaddr[AST_DEFAULT_HWC_NUM]; + struct dma_buf_map map[AST_DEFAULT_HWC_NUM]; unsigned int next_index; } cursor; diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c index 1da67d34e55d..a89ad4570e3c 100644 --- a/drivers/gpu/drm/drm_gem.c +++ b/drivers/gpu/drm/drm_gem.c @@ -36,6 +36,7 @@ #include #include #include +#include #include #include @@ -1207,26 +1208,30 @@ void 
drm_gem_unpin(struct drm_gem_object *obj) void *drm_gem_vmap(struct drm_gem_object *obj) { - void *vaddr; + struct dma_buf_map map; + int ret; - if (obj->funcs->vmap) - vaddr = obj->funcs->vmap(obj); - else - vaddr = ERR_PTR(-EOPNOTSUPP); + if (!obj->funcs->vmap) + return ERR_PTR(-EOPNOTSUPP); - if (!vaddr) - vaddr = ERR_PTR(-ENOMEM); + ret = obj->funcs->vmap(obj, &map); + if (ret) + return ERR_PTR(ret); + else if (dma_buf_map_is_null(&map)) + return ERR_PTR(-ENOMEM); - return vaddr; + return map.vaddr; } void drm_gem_vunmap(struct drm_gem_object *obj, void *vaddr) { + struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr); + if (!vaddr) return; if (obj->funcs->vunmap) - obj->funcs->vunmap(obj, vaddr); + obj->funcs->vunmap(obj, &map); } /** diff --git a/drivers/gpu/drm/drm_gem_cma_helper.c b/drivers/gpu/drm/drm_gem_cma_helper.c index d527485ea0b7..b57e3e9222f0 100644 --- a/drivers/gpu/drm/drm_gem_cma_helper.c +++ b/drivers/gpu/drm/drm_gem_cma_helper.c @@ -519,6 +519,8 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap); * drm_gem_cma_prime_vmap - map a CMA GEM object into the kernel's virtual * address space * @obj: GEM object + * @map: Returns the kernel virtual address of the CMA GEM object's backing + * store. * * This function maps a buffer exported via DRM PRIME into the kernel's * virtual address space. Since the CMA buffers are already mapped into the @@ -527,13 +529,15 @@ EXPORT_SYMBOL_GPL(drm_gem_cma_prime_mmap); * driver's &drm_gem_object_funcs.vmap callback. * * Returns: - * The kernel virtual address of the CMA GEM object's backing store. + * 0 on success, or a negative error code otherwise. */ -void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj) +int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct drm_gem_cma_object *cma_obj = to_drm_gem_cma_obj(obj); - return cma_obj->vaddr; + dma_buf_map_set_vaddr(map, cma_obj->vaddr); + + return 0; } EXPORT_SYMBOL_GPL(drm_gem_cma_prime_vmap); diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c b/drivers/gpu/drm/drm_gem_shmem_helper.c index fb11df7aced5..5553f58f68f3 100644 --- a/drivers/gpu/drm/drm_gem_shmem_helper.c +++ b/drivers/gpu/drm/drm_gem_shmem_helper.c @@ -258,19 +258,25 @@ void drm_gem_shmem_unpin(struct drm_gem_object *obj) } EXPORT_SYMBOL(drm_gem_shmem_unpin); -static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem) +static int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem, struct dma_buf_map *map) { struct drm_gem_object *obj = &shmem->base; - struct dma_buf_map map; int ret = 0; - if (shmem->vmap_use_count++ > 0) - return shmem->vaddr; + if (shmem->vmap_use_count++ > 0) { + dma_buf_map_set_vaddr(map, shmem->vaddr); + return 0; + } if (obj->import_attach) { - ret = dma_buf_vmap(obj->import_attach->dmabuf, &map); - if (!ret) - shmem->vaddr = map.vaddr; + ret = dma_buf_vmap(obj->import_attach->dmabuf, map); + if (!ret) { + if (WARN_ON(map->is_iomem)) { + ret = -EIO; + goto err_put_pages; + } + shmem->vaddr = map->vaddr; + } } else { pgprot_t prot = PAGE_KERNEL; @@ -284,6 +290,8 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem) VM_MAP, prot); if (!shmem->vaddr) ret = -ENOMEM; + else + dma_buf_map_set_vaddr(map, shmem->vaddr); } if (ret) { @@ -291,7 +299,7 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem) goto err_put_pages; } - return shmem->vaddr; + return 0; err_put_pages: if (!obj->import_attach) @@ -299,12 +307,14 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem) err_zero_use: 
shmem->vmap_use_count = 0; - return ERR_PTR(ret); + return ret; } /* * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object * @shmem: shmem GEM object + * @map: Returns the kernel virtual address of the SHMEM GEM object's backing + * store. * * This function makes sure that a contiguous kernel virtual address mapping * exists for the buffer backing the shmem GEM object. @@ -318,26 +328,25 @@ static void *drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem) * Returns: * 0 on success or a negative error code on failure. */ -void *drm_gem_shmem_vmap(struct drm_gem_object *obj) +int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); - void *vaddr; int ret; ret = mutex_lock_interruptible(&shmem->vmap_lock); if (ret) - return ERR_PTR(ret); - vaddr = drm_gem_shmem_vmap_locked(shmem); + return ret; + ret = drm_gem_shmem_vmap_locked(shmem, map); mutex_unlock(&shmem->vmap_lock); - return vaddr; + return ret; } EXPORT_SYMBOL(drm_gem_shmem_vmap); -static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem) +static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem, + struct dma_buf_map *map) { struct drm_gem_object *obj = &shmem->base; - struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(shmem->vaddr); if (WARN_ON_ONCE(!shmem->vmap_use_count)) return; @@ -346,7 +355,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem) return; if (obj->import_attach) - dma_buf_vunmap(obj->import_attach->dmabuf, &map); + dma_buf_vunmap(obj->import_attach->dmabuf, map); else vunmap(shmem->vaddr); @@ -357,6 +366,7 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem) /* * drm_gem_shmem_vunmap - Unmap a virtual mapping fo a shmem GEM object * @shmem: shmem GEM object + * @map: Kernel virtual address where the SHMEM GEM object was mapped * * This function cleans up a kernel virtual address mapping acquired by * drm_gem_shmem_vmap(). The mapping is only removed when the use count drops to @@ -366,12 +376,12 @@ static void drm_gem_shmem_vunmap_locked(struct drm_gem_shmem_object *shmem) * also be called by drivers directly, in which case it will hide the * differences between dma-buf imported and natively allocated objects. */ -void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr) +void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); mutex_lock(&shmem->vmap_lock); - drm_gem_shmem_vunmap_locked(shmem); + drm_gem_shmem_vunmap_locked(shmem, map); mutex_unlock(&shmem->vmap_lock); } EXPORT_SYMBOL(drm_gem_shmem_vunmap); diff --git a/drivers/gpu/drm/drm_gem_vram_helper.c b/drivers/gpu/drm/drm_gem_vram_helper.c index f445b84c43c4..4d99dd50e763 100644 --- a/drivers/gpu/drm/drm_gem_vram_helper.c +++ b/drivers/gpu/drm/drm_gem_vram_helper.c @@ -1,5 +1,6 @@ // SPDX-License-Identifier: GPL-2.0-or-later +#include #include #include @@ -113,8 +114,8 @@ static void drm_gem_vram_cleanup(struct drm_gem_vram_object *gbo) * up; only release the GEM object. 
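/*
 * Illustration only, not part of the patch: the calling convention for
 * drm_gem_shmem_vmap()/drm_gem_shmem_vunmap() after this change; the
 * mgag200, cirrus and udl hunks further below follow the same pattern.
 * example_clear_shmem_bo() is a made-up helper name.
 */
#include <linux/dma-buf-map.h>
#include <linux/string.h>
#include <drm/drm_gem.h>
#include <drm/drm_gem_shmem_helper.h>

static int example_clear_shmem_bo(struct drm_gem_object *obj)
{
	struct dma_buf_map map;
	int ret;

	ret = drm_gem_shmem_vmap(obj, &map);
	if (ret)
		return ret;

	/* shmem objects map to system memory, so map.vaddr is valid here */
	memset(map.vaddr, 0, obj->size);

	drm_gem_shmem_vunmap(obj, &map);

	return 0;
}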
*/ - WARN_ON(gbo->kmap_use_count); - WARN_ON(gbo->kmap.virtual); + WARN_ON(gbo->vmap_use_count); + WARN_ON(dma_buf_map_is_set(&gbo->map)); drm_gem_object_release(&gbo->bo.base); } @@ -379,29 +380,37 @@ int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo) } EXPORT_SYMBOL(drm_gem_vram_unpin); -static void *drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo) +static int drm_gem_vram_kmap_locked(struct drm_gem_vram_object *gbo, + struct dma_buf_map *map) { int ret; - struct ttm_bo_kmap_obj *kmap = &gbo->kmap; - bool is_iomem; - if (gbo->kmap_use_count > 0) + if (gbo->vmap_use_count > 0) goto out; - ret = ttm_bo_kmap(&gbo->bo, 0, gbo->bo.num_pages, kmap); + ret = ttm_bo_vmap(&gbo->bo, &gbo->map); if (ret) - return ERR_PTR(ret); + return ret; out: - ++gbo->kmap_use_count; - return ttm_kmap_obj_virtual(kmap, &is_iomem); + ++gbo->vmap_use_count; + *map = gbo->map; + + return 0; } -static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo) +static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo, + struct dma_buf_map *map) { - if (WARN_ON_ONCE(!gbo->kmap_use_count)) + struct drm_device *dev = gbo->bo.base.dev; + + if (drm_WARN_ON_ONCE(dev, !gbo->vmap_use_count)) return; - if (--gbo->kmap_use_count > 0) + + if (drm_WARN_ON_ONCE(dev, !dma_buf_map_is_equal(&gbo->map, map))) + return; /* BUG: map not mapped from this BO */ + + if (--gbo->vmap_use_count > 0) return; /* @@ -415,7 +424,9 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo) /** * drm_gem_vram_vmap() - Pins and maps a GEM VRAM object into kernel address * space - * @gbo: The GEM VRAM object to map + * @gbo: The GEM VRAM object to map + * @map: Returns the kernel virtual address of the VRAM GEM object's backing + * store. * * The vmap function pins a GEM VRAM object to its current location, either * system or video memory, and maps its buffer into kernel address space. @@ -424,48 +435,44 @@ static void drm_gem_vram_kunmap_locked(struct drm_gem_vram_object *gbo) * unmap and unpin the GEM VRAM object. * * Returns: - * The buffer's virtual address on success, or - * an ERR_PTR()-encoded error code otherwise. + * 0 on success, or a negative error code otherwise. */ -void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo) +int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map) { int ret; - void *base; ret = ttm_bo_reserve(&gbo->bo, true, false, NULL); if (ret) - return ERR_PTR(ret); + return ret; ret = drm_gem_vram_pin_locked(gbo, 0); if (ret) goto err_ttm_bo_unreserve; - base = drm_gem_vram_kmap_locked(gbo); - if (IS_ERR(base)) { - ret = PTR_ERR(base); + ret = drm_gem_vram_kmap_locked(gbo, map); + if (ret) goto err_drm_gem_vram_unpin_locked; - } ttm_bo_unreserve(&gbo->bo); - return base; + return 0; err_drm_gem_vram_unpin_locked: drm_gem_vram_unpin_locked(gbo); err_ttm_bo_unreserve: ttm_bo_unreserve(&gbo->bo); - return ERR_PTR(ret); + return ret; } EXPORT_SYMBOL(drm_gem_vram_vmap); /** * drm_gem_vram_vunmap() - Unmaps and unpins a GEM VRAM object - * @gbo: The GEM VRAM object to unmap - * @vaddr: The mapping's base address as returned by drm_gem_vram_vmap() + * @gbo: The GEM VRAM object to unmap + * @map: Kernel virtual address where the VRAM GEM object was mapped * * A call to drm_gem_vram_vunmap() unmaps and unpins a GEM VRAM buffer. See * the documentation for drm_gem_vram_vmap() for more information. 
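/*
 * Illustration only, not part of the patch: a sketch of the
 * drm_gem_vram_vmap()/drm_gem_vram_vunmap() pattern described above,
 * mirroring the ast cursor conversion earlier in this patch.
 * example_copy_from_vram() is a made-up helper name.
 */
#include <linux/dma-buf-map.h>
#include <linux/io.h>
#include <linux/string.h>
#include <drm/drm_gem_vram_helper.h>

static int example_copy_from_vram(struct drm_gem_vram_object *gbo,
				  void *dst, size_t len)
{
	struct dma_buf_map map;
	int ret;

	ret = drm_gem_vram_vmap(gbo, &map); /* pins and maps the BO */
	if (ret)
		return ret;

	if (map.is_iomem) /* VRAM is typically exposed as I/O memory */
		memcpy_fromio(dst, map.vaddr_iomem, len);
	else
		memcpy(dst, map.vaddr, len);

	drm_gem_vram_vunmap(gbo, &map); /* unmaps and unpins */

	return 0;
}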
*/ -void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr) +void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map) { int ret; @@ -473,7 +480,7 @@ void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr) if (WARN_ONCE(ret, "ttm_bo_reserve_failed(): ret=%d\n", ret)) return; - drm_gem_vram_kunmap_locked(gbo); + drm_gem_vram_kunmap_locked(gbo, map); drm_gem_vram_unpin_locked(gbo); ttm_bo_unreserve(&gbo->bo); @@ -564,15 +571,13 @@ static void drm_gem_vram_bo_driver_move_notify(struct drm_gem_vram_object *gbo, bool evict, struct ttm_resource *new_mem) { - struct ttm_bo_kmap_obj *kmap = &gbo->kmap; + struct ttm_buffer_object *bo = &gbo->bo; + struct drm_device *dev = bo->base.dev; - if (WARN_ON_ONCE(gbo->kmap_use_count)) + if (drm_WARN_ON_ONCE(dev, gbo->vmap_use_count)) return; - if (!kmap->virtual) - return; - ttm_bo_kunmap(kmap); - kmap->virtual = NULL; + ttm_bo_vunmap(bo, &gbo->map); } static int drm_gem_vram_bo_driver_move(struct drm_gem_vram_object *gbo, @@ -838,37 +843,33 @@ static void drm_gem_vram_object_unpin(struct drm_gem_object *gem) } /** - * drm_gem_vram_object_vmap() - \ - Implements &struct drm_gem_object_funcs.vmap - * @gem: The GEM object to map + * drm_gem_vram_object_vmap() - + * Implements &struct drm_gem_object_funcs.vmap + * @gem: The GEM object to map + * @map: Returns the kernel virtual address of the VRAM GEM object's backing + * store. * * Returns: - * The buffers virtual address on success, or - * NULL otherwise. + * 0 on success, or a negative error code otherwise. */ -static void *drm_gem_vram_object_vmap(struct drm_gem_object *gem) +static int drm_gem_vram_object_vmap(struct drm_gem_object *gem, struct dma_buf_map *map) { struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem); - void *base; - base = drm_gem_vram_vmap(gbo); - if (IS_ERR(base)) - return NULL; - return base; + return drm_gem_vram_vmap(gbo, map); } /** - * drm_gem_vram_object_vunmap() - \ - Implements &struct drm_gem_object_funcs.vunmap - * @gem: The GEM object to unmap - * @vaddr: The mapping's base address + * drm_gem_vram_object_vunmap() - + * Implements &struct drm_gem_object_funcs.vunmap + * @gem: The GEM object to unmap + * @map: Kernel virtual address where the VRAM GEM object was mapped */ -static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, - void *vaddr) +static void drm_gem_vram_object_vunmap(struct drm_gem_object *gem, struct dma_buf_map *map) { struct drm_gem_vram_object *gbo = drm_gem_vram_of_gem(gem); - drm_gem_vram_vunmap(gbo, vaddr); + drm_gem_vram_vunmap(gbo, map); } /* diff --git a/drivers/gpu/drm/etnaviv/etnaviv_drv.h b/drivers/gpu/drm/etnaviv/etnaviv_drv.h index 9682c26d89bb..f5be627e1de0 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_drv.h +++ b/drivers/gpu/drm/etnaviv/etnaviv_drv.h @@ -51,7 +51,7 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data, int etnaviv_gem_mmap(struct file *filp, struct vm_area_struct *vma); int etnaviv_gem_mmap_offset(struct drm_gem_object *obj, u64 *offset); struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj); -void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj); +int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); int etnaviv_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); struct drm_gem_object *etnaviv_gem_prime_import_sg_table(struct drm_device *dev, diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c index 
a6d9932a32ae..bc2543dd987d 100644 --- a/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c +++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_prime.c @@ -22,9 +22,14 @@ struct sg_table *etnaviv_gem_prime_get_sg_table(struct drm_gem_object *obj) return drm_prime_pages_to_sg(obj->dev, etnaviv_obj->pages, npages); } -void *etnaviv_gem_prime_vmap(struct drm_gem_object *obj) +int etnaviv_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) { - return etnaviv_gem_vmap(obj); + void *vaddr = etnaviv_gem_vmap(obj); + if (!vaddr) + return -ENOMEM; + dma_buf_map_set_vaddr(map, vaddr); + + return 0; } int etnaviv_gem_prime_mmap(struct drm_gem_object *obj, diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c index 11223fe348df..832e5280a6ed 100644 --- a/drivers/gpu/drm/lima/lima_gem.c +++ b/drivers/gpu/drm/lima/lima_gem.c @@ -182,14 +182,14 @@ static int lima_gem_pin(struct drm_gem_object *obj) return drm_gem_shmem_pin(obj); } -static void *lima_gem_vmap(struct drm_gem_object *obj) +static int lima_gem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct lima_bo *bo = to_lima_bo(obj); if (bo->heap_size) - return ERR_PTR(-EINVAL); + return -EINVAL; - return drm_gem_shmem_vmap(obj); + return drm_gem_shmem_vmap(obj, map); } static int lima_gem_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma) diff --git a/drivers/gpu/drm/lima/lima_sched.c b/drivers/gpu/drm/lima/lima_sched.c index dc6df9e9a40d..a070a85f8f36 100644 --- a/drivers/gpu/drm/lima/lima_sched.c +++ b/drivers/gpu/drm/lima/lima_sched.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0 OR MIT /* Copyright 2017-2019 Qiang Yu */ +#include #include #include #include @@ -303,6 +304,8 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task) struct lima_dump_chunk_buffer *buffer_chunk; u32 size, task_size, mem_size; int i; + struct dma_buf_map map; + int ret; mutex_lock(&dev->error_task_list_lock); @@ -388,15 +391,15 @@ static void lima_sched_build_error_task_list(struct lima_sched_task *task) } else { buffer_chunk->size = lima_bo_size(bo); - data = drm_gem_shmem_vmap(&bo->base.base); - if (IS_ERR_OR_NULL(data)) { + ret = drm_gem_shmem_vmap(&bo->base.base, &map); + if (ret) { kvfree(et); goto out; } - memcpy(buffer_chunk + 1, data, buffer_chunk->size); + memcpy(buffer_chunk + 1, map.vaddr, buffer_chunk->size); - drm_gem_shmem_vunmap(&bo->base.base, data); + drm_gem_shmem_vunmap(&bo->base.base, &map); } buffer_chunk = (void *)(buffer_chunk + 1) + buffer_chunk->size; diff --git a/drivers/gpu/drm/mgag200/mgag200_mode.c b/drivers/gpu/drm/mgag200/mgag200_mode.c index 38672f9e5c4f..8ef76769b97f 100644 --- a/drivers/gpu/drm/mgag200/mgag200_mode.c +++ b/drivers/gpu/drm/mgag200/mgag200_mode.c @@ -9,6 +9,7 @@ */ #include +#include #include #include @@ -1556,15 +1557,18 @@ mgag200_handle_damage(struct mga_device *mdev, struct drm_framebuffer *fb, struct drm_rect *clip) { struct drm_device *dev = &mdev->base; + struct dma_buf_map map; void *vmap; + int ret; - vmap = drm_gem_shmem_vmap(fb->obj[0]); - if (drm_WARN_ON(dev, !vmap)) + ret = drm_gem_shmem_vmap(fb->obj[0], &map); + if (drm_WARN_ON(dev, ret)) return; /* BUG: SHMEM BO should always be vmapped */ + vmap = map.vaddr; /* TODO: Use mapping abstraction properly */ drm_fb_memcpy_dstclip(mdev->vram, vmap, fb, clip); - drm_gem_shmem_vunmap(fb->obj[0], vmap); + drm_gem_shmem_vunmap(fb->obj[0], &map); /* Always scanout image at VRAM offset 0 */ mgag200_set_startadd(mdev, (u32)0); diff --git a/drivers/gpu/drm/nouveau/Kconfig 
b/drivers/gpu/drm/nouveau/Kconfig index 5dec1e5694b7..9436310d0854 100644 --- a/drivers/gpu/drm/nouveau/Kconfig +++ b/drivers/gpu/drm/nouveau/Kconfig @@ -6,6 +6,7 @@ config DRM_NOUVEAU select FW_LOADER select DRM_KMS_HELPER select DRM_TTM + select DRM_TTM_HELPER select BACKLIGHT_CLASS_DEVICE if DRM_NOUVEAU_BACKLIGHT select ACPI_VIDEO if ACPI && X86 && BACKLIGHT_CLASS_DEVICE && INPUT select X86_PLATFORM_DEVICES if ACPI && X86 diff --git a/drivers/gpu/drm/nouveau/nouveau_bo.h b/drivers/gpu/drm/nouveau/nouveau_bo.h index 641ef6298a0e..6045b85a762a 100644 --- a/drivers/gpu/drm/nouveau/nouveau_bo.h +++ b/drivers/gpu/drm/nouveau/nouveau_bo.h @@ -39,8 +39,6 @@ struct nouveau_bo { unsigned mode; struct nouveau_drm_tile *tile; - - struct ttm_bo_kmap_obj dma_buf_vmap; }; static inline struct nouveau_bo * diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.c b/drivers/gpu/drm/nouveau/nouveau_gem.c index 9a421c3949de..f942b526b0a5 100644 --- a/drivers/gpu/drm/nouveau/nouveau_gem.c +++ b/drivers/gpu/drm/nouveau/nouveau_gem.c @@ -24,6 +24,8 @@ * */ +#include + #include "nouveau_drv.h" #include "nouveau_dma.h" #include "nouveau_fence.h" @@ -176,8 +178,8 @@ const struct drm_gem_object_funcs nouveau_gem_object_funcs = { .pin = nouveau_gem_prime_pin, .unpin = nouveau_gem_prime_unpin, .get_sg_table = nouveau_gem_prime_get_sg_table, - .vmap = nouveau_gem_prime_vmap, - .vunmap = nouveau_gem_prime_vunmap, + .vmap = drm_gem_ttm_vmap, + .vunmap = drm_gem_ttm_vunmap, }; int diff --git a/drivers/gpu/drm/nouveau/nouveau_gem.h b/drivers/gpu/drm/nouveau/nouveau_gem.h index b35c180322e2..3b919c7c931c 100644 --- a/drivers/gpu/drm/nouveau/nouveau_gem.h +++ b/drivers/gpu/drm/nouveau/nouveau_gem.h @@ -37,7 +37,5 @@ extern void nouveau_gem_prime_unpin(struct drm_gem_object *); extern struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *); extern struct drm_gem_object *nouveau_gem_prime_import_sg_table( struct drm_device *, struct dma_buf_attachment *, struct sg_table *); -extern void *nouveau_gem_prime_vmap(struct drm_gem_object *); -extern void nouveau_gem_prime_vunmap(struct drm_gem_object *, void *); #endif diff --git a/drivers/gpu/drm/nouveau/nouveau_prime.c b/drivers/gpu/drm/nouveau/nouveau_prime.c index a8264aebf3d4..2f16b5249283 100644 --- a/drivers/gpu/drm/nouveau/nouveau_prime.c +++ b/drivers/gpu/drm/nouveau/nouveau_prime.c @@ -35,26 +35,6 @@ struct sg_table *nouveau_gem_prime_get_sg_table(struct drm_gem_object *obj) return drm_prime_pages_to_sg(obj->dev, nvbo->bo.ttm->pages, npages); } -void *nouveau_gem_prime_vmap(struct drm_gem_object *obj) -{ - struct nouveau_bo *nvbo = nouveau_gem_object(obj); - int ret; - - ret = ttm_bo_kmap(&nvbo->bo, 0, nvbo->bo.num_pages, - &nvbo->dma_buf_vmap); - if (ret) - return ERR_PTR(ret); - - return nvbo->dma_buf_vmap.virtual; -} - -void nouveau_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) -{ - struct nouveau_bo *nvbo = nouveau_gem_object(obj); - - ttm_bo_kunmap(&nvbo->dma_buf_vmap); -} - struct drm_gem_object *nouveau_gem_prime_import_sg_table(struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sg) diff --git a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c index fdbc8d949135..5ab03d605f57 100644 --- a/drivers/gpu/drm/panfrost/panfrost_perfcnt.c +++ b/drivers/gpu/drm/panfrost/panfrost_perfcnt.c @@ -5,6 +5,7 @@ #include #include #include +#include #include #include #include @@ -72,6 +73,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev, { struct 
panfrost_file_priv *user = file_priv->driver_priv; struct panfrost_perfcnt *perfcnt = pfdev->perfcnt; + struct dma_buf_map map; struct drm_gem_shmem_object *bo; u32 cfg, as; int ret; @@ -103,11 +105,10 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev, goto err_close_bo; } - perfcnt->buf = drm_gem_shmem_vmap(&bo->base); - if (IS_ERR(perfcnt->buf)) { - ret = PTR_ERR(perfcnt->buf); + ret = drm_gem_shmem_vmap(&bo->base, &map); + if (ret) goto err_put_mapping; - } + perfcnt->buf = map.vaddr; /* * Invalidate the cache and clear the counters to start from a fresh @@ -163,7 +164,7 @@ static int panfrost_perfcnt_enable_locked(struct panfrost_device *pfdev, return 0; err_vunmap: - drm_gem_shmem_vunmap(&bo->base, perfcnt->buf); + drm_gem_shmem_vunmap(&bo->base, &map); err_put_mapping: panfrost_gem_mapping_put(perfcnt->mapping); err_close_bo: @@ -180,6 +181,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev, { struct panfrost_file_priv *user = file_priv->driver_priv; struct panfrost_perfcnt *perfcnt = pfdev->perfcnt; + struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(perfcnt->buf); if (user != perfcnt->user) return -EINVAL; @@ -192,7 +194,7 @@ static int panfrost_perfcnt_disable_locked(struct panfrost_device *pfdev, GPU_PERFCNT_CFG_MODE(GPU_PERFCNT_CFG_MODE_OFF)); perfcnt->user = NULL; - drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, perfcnt->buf); + drm_gem_shmem_vunmap(&perfcnt->mapping->obj->base.base, &map); perfcnt->buf = NULL; panfrost_gem_close(&perfcnt->mapping->obj->base.base, file_priv); panfrost_mmu_as_put(pfdev, perfcnt->mapping->mmu); diff --git a/drivers/gpu/drm/qxl/qxl_display.c b/drivers/gpu/drm/qxl/qxl_display.c index 45fd76e04bdc..e165fa9b2089 100644 --- a/drivers/gpu/drm/qxl/qxl_display.c +++ b/drivers/gpu/drm/qxl/qxl_display.c @@ -25,6 +25,7 @@ #include #include +#include #include #include @@ -581,6 +582,8 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane, struct drm_gem_object *obj; struct qxl_bo *cursor_bo = NULL, *user_bo = NULL, *old_cursor_bo = NULL; int ret; + struct dma_buf_map user_map; + struct dma_buf_map cursor_map; void *user_ptr; int size = 64*64*4; @@ -595,9 +598,10 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane, user_bo = gem_to_qxl_bo(obj); /* pinning is done in the prepare/cleanup framevbuffer */ - ret = qxl_bo_kmap(user_bo, &user_ptr); + ret = qxl_bo_kmap(user_bo, &user_map); if (ret) goto out_free_release; + user_ptr = user_map.vaddr; /* TODO: Use mapping abstraction properly */ ret = qxl_alloc_bo_reserved(qdev, release, sizeof(struct qxl_cursor) + size, @@ -613,7 +617,7 @@ static void qxl_cursor_atomic_update(struct drm_plane *plane, if (ret) goto out_unpin; - ret = qxl_bo_kmap(cursor_bo, (void **)&cursor); + ret = qxl_bo_kmap(cursor_bo, &cursor_map); if (ret) goto out_backoff; @@ -1133,6 +1137,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev) { int ret; struct drm_gem_object *gobj; + struct dma_buf_map map; int monitors_config_size = sizeof(struct qxl_monitors_config) + qxl_num_crtc * sizeof(struct qxl_head); @@ -1149,7 +1154,7 @@ int qxl_create_monitors_object(struct qxl_device *qdev) if (ret) return ret; - qxl_bo_kmap(qdev->monitors_config_bo, NULL); + qxl_bo_kmap(qdev->monitors_config_bo, &map); qdev->monitors_config = qdev->monitors_config_bo->kptr; qdev->ram_header->monitors_config = diff --git a/drivers/gpu/drm/qxl/qxl_draw.c b/drivers/gpu/drm/qxl/qxl_draw.c index 3599db096973..7b7acb910780 100644 --- a/drivers/gpu/drm/qxl/qxl_draw.c +++ 
b/drivers/gpu/drm/qxl/qxl_draw.c @@ -20,6 +20,8 @@ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. */ +#include + #include #include "qxl_drv.h" @@ -42,13 +44,15 @@ static struct qxl_rect *drawable_set_clipping(struct qxl_device *qdev, unsigned int num_clips, struct qxl_bo *clips_bo) { + struct dma_buf_map map; struct qxl_clip_rects *dev_clips; int ret; - ret = qxl_bo_kmap(clips_bo, (void **)&dev_clips); - if (ret) { + ret = qxl_bo_kmap(clips_bo, &map); + if (ret) return NULL; - } + dev_clips = map.vaddr; /* TODO: Use mapping abstraction properly */ + dev_clips->num_rects = num_clips; dev_clips->chunk.next_chunk = 0; dev_clips->chunk.prev_chunk = 0; @@ -142,6 +146,7 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev, int stride = fb->pitches[0]; /* depth is not actually interesting, we don't mask with it */ int depth = fb->format->cpp[0] * 8; + struct dma_buf_map surface_map; uint8_t *surface_base; struct qxl_release *release; struct qxl_bo *clips_bo; @@ -197,9 +202,10 @@ void qxl_draw_dirty_fb(struct qxl_device *qdev, if (ret) goto out_release_backoff; - ret = qxl_bo_kmap(bo, (void **)&surface_base); + ret = qxl_bo_kmap(bo, &surface_map); if (ret) goto out_release_backoff; + surface_base = surface_map.vaddr; /* TODO: Use mapping abstraction properly */ ret = qxl_image_init(qdev, release, dimage, surface_base, left - dumb_shadow_offset, diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h index 3602e8b34189..eb437fea5d9e 100644 --- a/drivers/gpu/drm/qxl/qxl_drv.h +++ b/drivers/gpu/drm/qxl/qxl_drv.h @@ -30,6 +30,7 @@ * Definitions taken from spice-protocol, plus kernel driver specific bits. */ +#include #include #include #include @@ -50,6 +51,8 @@ #include "qxl_dev.h" +struct dma_buf_map; + #define DRIVER_AUTHOR "Dave Airlie" #define DRIVER_NAME "qxl" @@ -79,7 +82,7 @@ struct qxl_bo { /* Protected by tbo.reserved */ struct ttm_place placements[3]; struct ttm_placement placement; - struct ttm_bo_kmap_obj kmap; + struct dma_buf_map map; void *kptr; unsigned int map_count; int type; @@ -335,7 +338,6 @@ int qxl_gem_object_open(struct drm_gem_object *obj, struct drm_file *file_priv); void qxl_gem_object_close(struct drm_gem_object *obj, struct drm_file *file_priv); void qxl_bo_force_delete(struct qxl_device *qdev); -int qxl_bo_kmap(struct qxl_bo *bo, void **ptr); /* qxl_dumb.c */ int qxl_mode_dumb_create(struct drm_file *file_priv, @@ -445,8 +447,9 @@ struct sg_table *qxl_gem_prime_get_sg_table(struct drm_gem_object *obj); struct drm_gem_object *qxl_gem_prime_import_sg_table( struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sgt); -void *qxl_gem_prime_vmap(struct drm_gem_object *obj); -void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); +int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); +void qxl_gem_prime_vunmap(struct drm_gem_object *obj, + struct dma_buf_map *map); int qxl_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); diff --git a/drivers/gpu/drm/qxl/qxl_object.c b/drivers/gpu/drm/qxl/qxl_object.c index 547d46c14d56..ceebc5881f68 100644 --- a/drivers/gpu/drm/qxl/qxl_object.c +++ b/drivers/gpu/drm/qxl/qxl_object.c @@ -23,10 +23,12 @@ * Alon Levy */ +#include +#include + #include "qxl_drv.h" #include "qxl_object.h" -#include static void qxl_ttm_bo_destroy(struct ttm_buffer_object *tbo) { struct qxl_bo *bo; @@ -152,24 +154,27 @@ int qxl_bo_create(struct qxl_device *qdev, return 0; } -int qxl_bo_kmap(struct qxl_bo *bo, void **ptr) +int qxl_bo_kmap(struct 
qxl_bo *bo, struct dma_buf_map *map) { - bool is_iomem; int r; if (bo->kptr) { - if (ptr) - *ptr = bo->kptr; bo->map_count++; - return 0; + goto out; } - r = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, &bo->kmap); + r = ttm_bo_vmap(&bo->tbo, &bo->map); if (r) return r; - bo->kptr = ttm_kmap_obj_virtual(&bo->kmap, &is_iomem); - if (ptr) - *ptr = bo->kptr; bo->map_count = 1; + + /* TODO: Remove kptr in favor of map everywhere. */ + if (bo->map.is_iomem) + bo->kptr = (void *)bo->map.vaddr_iomem; + else + bo->kptr = bo->map.vaddr; + +out: + *map = bo->map; return 0; } @@ -180,6 +185,7 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, void *rptr; int ret; struct io_mapping *map; + struct dma_buf_map bo_map; if (bo->tbo.mem.mem_type == TTM_PL_VRAM) map = qdev->vram_mapping; @@ -196,9 +202,10 @@ void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, return rptr; } - ret = qxl_bo_kmap(bo, &rptr); + ret = qxl_bo_kmap(bo, &bo_map); if (ret) return NULL; + rptr = bo_map.vaddr; /* TODO: Use mapping abstraction properly */ rptr += page_offset * PAGE_SIZE; return rptr; @@ -212,7 +219,7 @@ void qxl_bo_kunmap(struct qxl_bo *bo) if (bo->map_count > 0) return; bo->kptr = NULL; - ttm_bo_kunmap(&bo->kmap); + ttm_bo_vunmap(&bo->tbo, &bo->map); } void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, diff --git a/drivers/gpu/drm/qxl/qxl_object.h b/drivers/gpu/drm/qxl/qxl_object.h index 09a5c818324d..ebf24c9d2bf2 100644 --- a/drivers/gpu/drm/qxl/qxl_object.h +++ b/drivers/gpu/drm/qxl/qxl_object.h @@ -63,7 +63,7 @@ extern int qxl_bo_create(struct qxl_device *qdev, bool kernel, bool pinned, u32 domain, struct qxl_surface *surf, struct qxl_bo **bo_ptr); -extern int qxl_bo_kmap(struct qxl_bo *bo, void **ptr); +extern int qxl_bo_kmap(struct qxl_bo *bo, struct dma_buf_map *map); extern void qxl_bo_kunmap(struct qxl_bo *bo); void *qxl_bo_kmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, int page_offset); void qxl_bo_kunmap_atomic_page(struct qxl_device *qdev, struct qxl_bo *bo, void *map); diff --git a/drivers/gpu/drm/qxl/qxl_prime.c b/drivers/gpu/drm/qxl/qxl_prime.c index 7d3816fca5a8..4aa949799446 100644 --- a/drivers/gpu/drm/qxl/qxl_prime.c +++ b/drivers/gpu/drm/qxl/qxl_prime.c @@ -54,20 +54,20 @@ struct drm_gem_object *qxl_gem_prime_import_sg_table( return ERR_PTR(-ENOSYS); } -void *qxl_gem_prime_vmap(struct drm_gem_object *obj) +int qxl_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct qxl_bo *bo = gem_to_qxl_bo(obj); - void *ptr; int ret; - ret = qxl_bo_kmap(bo, &ptr); + ret = qxl_bo_kmap(bo, map); if (ret < 0) - return ERR_PTR(ret); + return ret; - return ptr; + return 0; } -void qxl_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) +void qxl_gem_prime_vunmap(struct drm_gem_object *obj, + struct dma_buf_map *map) { struct qxl_bo *bo = gem_to_qxl_bo(obj); diff --git a/drivers/gpu/drm/radeon/radeon.h b/drivers/gpu/drm/radeon/radeon.h index 5d54bccebd4d..44cb5ee6fc20 100644 --- a/drivers/gpu/drm/radeon/radeon.h +++ b/drivers/gpu/drm/radeon/radeon.h @@ -509,7 +509,6 @@ struct radeon_bo { /* Constant after initialization */ struct radeon_device *rdev; - struct ttm_bo_kmap_obj dma_buf_vmap; pid_t pid; #ifdef CONFIG_MMU_NOTIFIER diff --git a/drivers/gpu/drm/radeon/radeon_gem.c b/drivers/gpu/drm/radeon/radeon_gem.c index 0ccd7213e41f..d2876ce3bc9e 100644 --- a/drivers/gpu/drm/radeon/radeon_gem.c +++ b/drivers/gpu/drm/radeon/radeon_gem.c @@ -31,6 +31,7 @@ #include #include #include +#include #include #include "radeon.h" @@ -40,8 +41,6 @@ struct dma_buf 
*radeon_gem_prime_export(struct drm_gem_object *gobj, struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj); int radeon_gem_prime_pin(struct drm_gem_object *obj); void radeon_gem_prime_unpin(struct drm_gem_object *obj); -void *radeon_gem_prime_vmap(struct drm_gem_object *obj); -void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); static const struct drm_gem_object_funcs radeon_gem_object_funcs; @@ -235,8 +234,8 @@ static const struct drm_gem_object_funcs radeon_gem_object_funcs = { .pin = radeon_gem_prime_pin, .unpin = radeon_gem_prime_unpin, .get_sg_table = radeon_gem_prime_get_sg_table, - .vmap = radeon_gem_prime_vmap, - .vunmap = radeon_gem_prime_vunmap, + .vmap = drm_gem_ttm_vmap, + .vunmap = drm_gem_ttm_vunmap, }; /* diff --git a/drivers/gpu/drm/radeon/radeon_prime.c b/drivers/gpu/drm/radeon/radeon_prime.c index b9de0e51c0be..088d39a51c0d 100644 --- a/drivers/gpu/drm/radeon/radeon_prime.c +++ b/drivers/gpu/drm/radeon/radeon_prime.c @@ -39,26 +39,6 @@ struct sg_table *radeon_gem_prime_get_sg_table(struct drm_gem_object *obj) return drm_prime_pages_to_sg(obj->dev, bo->tbo.ttm->pages, npages); } -void *radeon_gem_prime_vmap(struct drm_gem_object *obj) -{ - struct radeon_bo *bo = gem_to_radeon_bo(obj); - int ret; - - ret = ttm_bo_kmap(&bo->tbo, 0, bo->tbo.num_pages, - &bo->dma_buf_vmap); - if (ret) - return ERR_PTR(ret); - - return bo->dma_buf_vmap.virtual; -} - -void radeon_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) -{ - struct radeon_bo *bo = gem_to_radeon_bo(obj); - - ttm_bo_kunmap(&bo->dma_buf_vmap); -} - struct drm_gem_object *radeon_gem_prime_import_sg_table(struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sg) diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c index 7d5ebb10323b..7971f57436dd 100644 --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.c +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.c @@ -532,26 +532,32 @@ rockchip_gem_prime_import_sg_table(struct drm_device *drm, return ERR_PTR(ret); } -void *rockchip_gem_prime_vmap(struct drm_gem_object *obj) +int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj); - if (rk_obj->pages) - return vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP, - pgprot_writecombine(PAGE_KERNEL)); + if (rk_obj->pages) { + void *vaddr = vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP, + pgprot_writecombine(PAGE_KERNEL)); + if (!vaddr) + return -ENOMEM; + dma_buf_map_set_vaddr(map, vaddr); + return 0; + } if (rk_obj->dma_attrs & DMA_ATTR_NO_KERNEL_MAPPING) - return NULL; + return -ENOMEM; + dma_buf_map_set_vaddr(map, rk_obj->kvaddr); - return rk_obj->kvaddr; + return 0; } -void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) +void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj); if (rk_obj->pages) { - vunmap(vaddr); + vunmap(map->vaddr); return; } diff --git a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h index 7ffc541bea07..5a70a56cd406 100644 --- a/drivers/gpu/drm/rockchip/rockchip_drm_gem.h +++ b/drivers/gpu/drm/rockchip/rockchip_drm_gem.h @@ -31,8 +31,8 @@ struct drm_gem_object * rockchip_gem_prime_import_sg_table(struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sg); -void *rockchip_gem_prime_vmap(struct drm_gem_object *obj); -void 
rockchip_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr); +int rockchip_gem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); +void rockchip_gem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map); /* drm driver mmap file operations */ int rockchip_gem_mmap(struct file *filp, struct vm_area_struct *vma); diff --git a/drivers/gpu/drm/tiny/cirrus.c b/drivers/gpu/drm/tiny/cirrus.c index 744a8e337e41..c02e35ed6e76 100644 --- a/drivers/gpu/drm/tiny/cirrus.c +++ b/drivers/gpu/drm/tiny/cirrus.c @@ -17,6 +17,7 @@ */ #include +#include #include #include @@ -314,6 +315,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb, struct drm_rect *rect) { struct cirrus_device *cirrus = to_cirrus(fb->dev); + struct dma_buf_map map; void *vmap; int idx, ret; @@ -321,10 +323,10 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb, if (!drm_dev_enter(&cirrus->dev, &idx)) goto out; - ret = -ENOMEM; - vmap = drm_gem_shmem_vmap(fb->obj[0]); - if (!vmap) + ret = drm_gem_shmem_vmap(fb->obj[0], &map); + if (ret) goto out_dev_exit; + vmap = map.vaddr; /* TODO: Use mapping abstraction properly */ if (cirrus->cpp == fb->format->cpp[0]) drm_fb_memcpy_dstclip(cirrus->vram, @@ -343,7 +345,7 @@ static int cirrus_fb_blit_rect(struct drm_framebuffer *fb, else WARN_ON_ONCE("cpp mismatch"); - drm_gem_shmem_vunmap(fb->obj[0], vmap); + drm_gem_shmem_vunmap(fb->obj[0], &map); ret = 0; out_dev_exit: diff --git a/drivers/gpu/drm/tiny/gm12u320.c b/drivers/gpu/drm/tiny/gm12u320.c index cc397671f689..12a890cea6e9 100644 --- a/drivers/gpu/drm/tiny/gm12u320.c +++ b/drivers/gpu/drm/tiny/gm12u320.c @@ -248,6 +248,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320) { int block, dst_offset, len, remain, ret, x1, x2, y1, y2; struct drm_framebuffer *fb; + struct dma_buf_map map; void *vaddr; u8 *src; @@ -262,11 +263,12 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320) y1 = gm12u320->fb_update.rect.y1; y2 = gm12u320->fb_update.rect.y2; - vaddr = drm_gem_shmem_vmap(fb->obj[0]); - if (IS_ERR(vaddr)) { - GM12U320_ERR("failed to vmap fb: %ld\n", PTR_ERR(vaddr)); + ret = drm_gem_shmem_vmap(fb->obj[0], &map); + if (ret) { + GM12U320_ERR("failed to vmap fb: %d\n", ret); goto put_fb; } + vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */ if (fb->obj[0]->import_attach) { ret = dma_buf_begin_cpu_access( @@ -318,7 +320,7 @@ static void gm12u320_copy_fb_to_blocks(struct gm12u320_device *gm12u320) GM12U320_ERR("dma_buf_end_cpu_access err: %d\n", ret); } vunmap: - drm_gem_shmem_vunmap(fb->obj[0], vaddr); + drm_gem_shmem_vunmap(fb->obj[0], &map); put_fb: drm_framebuffer_put(fb); gm12u320->fb_update.fb = NULL; diff --git a/drivers/gpu/drm/udl/udl_modeset.c b/drivers/gpu/drm/udl/udl_modeset.c index fef43f4e3bac..42eeba1dfdbf 100644 --- a/drivers/gpu/drm/udl/udl_modeset.c +++ b/drivers/gpu/drm/udl/udl_modeset.c @@ -276,6 +276,7 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y, struct urb *urb; struct drm_rect clip; int log_bpp; + struct dma_buf_map map; void *vaddr; ret = udl_log_cpp(fb->format->cpp[0]); @@ -296,11 +297,12 @@ static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y, return ret; } - vaddr = drm_gem_shmem_vmap(fb->obj[0]); - if (IS_ERR(vaddr)) { + ret = drm_gem_shmem_vmap(fb->obj[0], &map); + if (ret) { DRM_ERROR("failed to vmap fb\n"); goto out_dma_buf_end_cpu_access; } + vaddr = map.vaddr; /* TODO: Use mapping abstraction properly */ urb = udl_get_urb(dev); if (!urb) @@ -333,7 +335,7 @@ 
static int udl_handle_damage(struct drm_framebuffer *fb, int x, int y, ret = 0; out_drm_gem_shmem_vunmap: - drm_gem_shmem_vunmap(fb->obj[0], vaddr); + drm_gem_shmem_vunmap(fb->obj[0], &map); out_dma_buf_end_cpu_access: if (import_attach) { tmp_ret = dma_buf_end_cpu_access(import_attach->dmabuf, diff --git a/drivers/gpu/drm/vboxvideo/vbox_mode.c b/drivers/gpu/drm/vboxvideo/vbox_mode.c index 931c55126148..f268fb258c83 100644 --- a/drivers/gpu/drm/vboxvideo/vbox_mode.c +++ b/drivers/gpu/drm/vboxvideo/vbox_mode.c @@ -9,6 +9,8 @@ * Michael Thayer */ + +#include #include #include @@ -384,6 +386,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane, u32 height = plane->state->crtc_h; size_t data_size, mask_size; u32 flags; + struct dma_buf_map map; + int ret; u8 *src; /* @@ -397,8 +401,8 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane, vbox_crtc->cursor_enabled = true; - src = drm_gem_vram_vmap(gbo); - if (IS_ERR(src)) { + ret = drm_gem_vram_vmap(gbo, &map); + if (ret) { /* * BUG: we should have pinned the BO in prepare_fb(). */ @@ -406,6 +410,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane, DRM_WARN("Could not map cursor bo, skipping update\n"); return; } + src = map.vaddr; /* TODO: Use mapping abstraction properly */ /* * The mask must be calculated based on the alpha @@ -416,7 +421,7 @@ static void vbox_cursor_atomic_update(struct drm_plane *plane, data_size = width * height * 4 + mask_size; copy_cursor_image(src, vbox->cursor_data, width, height, mask_size); - drm_gem_vram_vunmap(gbo, src); + drm_gem_vram_vunmap(gbo, &map); flags = VBOX_MOUSE_POINTER_VISIBLE | VBOX_MOUSE_POINTER_SHAPE | VBOX_MOUSE_POINTER_ALPHA; diff --git a/drivers/gpu/drm/vc4/vc4_bo.c b/drivers/gpu/drm/vc4/vc4_bo.c index 557f0d1e6437..f290a9a942dc 100644 --- a/drivers/gpu/drm/vc4/vc4_bo.c +++ b/drivers/gpu/drm/vc4/vc4_bo.c @@ -785,16 +785,16 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma) return drm_gem_cma_prime_mmap(obj, vma); } -void *vc4_prime_vmap(struct drm_gem_object *obj) +int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct vc4_bo *bo = to_vc4_bo(obj); if (bo->validated_shader) { DRM_DEBUG("mmaping of shader BOs not allowed.\n"); - return ERR_PTR(-EINVAL); + return -EINVAL; } - return drm_gem_cma_prime_vmap(obj); + return drm_gem_cma_prime_vmap(obj, map); } struct drm_gem_object * diff --git a/drivers/gpu/drm/vc4/vc4_drv.h b/drivers/gpu/drm/vc4/vc4_drv.h index cc79b1aaa878..904f2c36c963 100644 --- a/drivers/gpu/drm/vc4/vc4_drv.h +++ b/drivers/gpu/drm/vc4/vc4_drv.h @@ -806,7 +806,7 @@ int vc4_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma); struct drm_gem_object *vc4_prime_import_sg_table(struct drm_device *dev, struct dma_buf_attachment *attach, struct sg_table *sgt); -void *vc4_prime_vmap(struct drm_gem_object *obj); +int vc4_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map); int vc4_bo_cache_init(struct drm_device *dev); void vc4_bo_cache_destroy(struct drm_device *dev); int vc4_bo_inc_usecnt(struct vc4_bo *bo); diff --git a/drivers/gpu/drm/vgem/vgem_drv.c b/drivers/gpu/drm/vgem/vgem_drv.c index fa54a6d1403d..b2aa26e1e4a2 100644 --- a/drivers/gpu/drm/vgem/vgem_drv.c +++ b/drivers/gpu/drm/vgem/vgem_drv.c @@ -361,24 +361,30 @@ static struct drm_gem_object *vgem_prime_import_sg_table(struct drm_device *dev, return &obj->base; } -static void *vgem_prime_vmap(struct drm_gem_object *obj) +static int vgem_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct 
drm_vgem_gem_object *bo = to_vgem_bo(obj); long n_pages = obj->size >> PAGE_SHIFT; struct page **pages; + void *vaddr; pages = vgem_pin_pages(bo); if (IS_ERR(pages)) - return NULL; + return PTR_ERR(pages); + + vaddr = vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL)); + if (!vaddr) + return -ENOMEM; + dma_buf_map_set_vaddr(map, vaddr); - return vmap(pages, n_pages, 0, pgprot_writecombine(PAGE_KERNEL)); + return 0; } -static void vgem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) +static void vgem_prime_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map) { struct drm_vgem_gem_object *bo = to_vgem_bo(obj); - vunmap(vaddr); + vunmap(map->vaddr); vgem_unpin_pages(bo); } diff --git a/drivers/gpu/drm/vkms/vkms_plane.c b/drivers/gpu/drm/vkms/vkms_plane.c index 9890137bcb8d..0824327cc860 100644 --- a/drivers/gpu/drm/vkms/vkms_plane.c +++ b/drivers/gpu/drm/vkms/vkms_plane.c @@ -1,5 +1,7 @@ // SPDX-License-Identifier: GPL-2.0+ +#include + #include #include #include @@ -146,15 +148,16 @@ static int vkms_prepare_fb(struct drm_plane *plane, struct drm_plane_state *state) { struct drm_gem_object *gem_obj; - void *vaddr; + struct dma_buf_map map; + int ret; if (!state->fb) return 0; gem_obj = drm_gem_fb_get_obj(state->fb, 0); - vaddr = drm_gem_shmem_vmap(gem_obj); - if (IS_ERR(vaddr)) - DRM_ERROR("vmap failed: %li\n", PTR_ERR(vaddr)); + ret = drm_gem_shmem_vmap(gem_obj, &map); + if (ret) + DRM_ERROR("vmap failed: %d\n", ret); return drm_gem_fb_prepare_fb(plane, state); } @@ -164,13 +167,15 @@ static void vkms_cleanup_fb(struct drm_plane *plane, { struct drm_gem_object *gem_obj; struct drm_gem_shmem_object *shmem_obj; + struct dma_buf_map map; if (!old_state->fb) return; gem_obj = drm_gem_fb_get_obj(old_state->fb, 0); shmem_obj = to_drm_gem_shmem_obj(drm_gem_fb_get_obj(old_state->fb, 0)); - drm_gem_shmem_vunmap(gem_obj, shmem_obj->vaddr); + dma_buf_map_set_vaddr(&map, shmem_obj->vaddr); + drm_gem_shmem_vunmap(gem_obj, &map); } static const struct drm_plane_helper_funcs vkms_primary_helper_funcs = { diff --git a/drivers/gpu/drm/vkms/vkms_writeback.c b/drivers/gpu/drm/vkms/vkms_writeback.c index 26b903926872..67f80ab1e85f 100644 --- a/drivers/gpu/drm/vkms/vkms_writeback.c +++ b/drivers/gpu/drm/vkms/vkms_writeback.c @@ -1,6 +1,7 @@ // SPDX-License-Identifier: GPL-2.0+ -#include "vkms_drv.h" +#include + #include #include #include @@ -8,6 +9,8 @@ #include #include +#include "vkms_drv.h" + static const u32 vkms_wb_formats[] = { DRM_FORMAT_XRGB8888, }; @@ -65,19 +68,20 @@ static int vkms_wb_prepare_job(struct drm_writeback_connector *wb_connector, struct drm_writeback_job *job) { struct drm_gem_object *gem_obj; - void *vaddr; + struct dma_buf_map map; + int ret; if (!job->fb) return 0; gem_obj = drm_gem_fb_get_obj(job->fb, 0); - vaddr = drm_gem_shmem_vmap(gem_obj); - if (IS_ERR(vaddr)) { - DRM_ERROR("vmap failed: %li\n", PTR_ERR(vaddr)); - return PTR_ERR(vaddr); + ret = drm_gem_shmem_vmap(gem_obj, &map); + if (ret) { + DRM_ERROR("vmap failed: %d\n", ret); + return ret; } - job->priv = vaddr; + job->priv = map.vaddr; return 0; } @@ -87,12 +91,14 @@ static void vkms_wb_cleanup_job(struct drm_writeback_connector *connector, { struct drm_gem_object *gem_obj; struct vkms_device *vkmsdev; + struct dma_buf_map map; if (!job->fb) return; gem_obj = drm_gem_fb_get_obj(job->fb, 0); - drm_gem_shmem_vunmap(gem_obj, job->priv); + dma_buf_map_set_vaddr(&map, job->priv); + drm_gem_shmem_vunmap(gem_obj, &map); vkmsdev = drm_device_to_vkms_device(gem_obj->dev); vkms_set_composer(&vkmsdev->output, false); 
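/*
 * Illustration only, not part of the patch: where a driver still keeps a raw
 * pointer around (see the TODO comments above), it can rebuild the struct
 * dma_buf_map for the vunmap call, as the vkms and panfrost hunks do.
 * example_cleanup() is a made-up helper name.
 */
#include <linux/dma-buf-map.h>
#include <drm/drm_gem_shmem_helper.h>

static void example_cleanup(struct drm_gem_object *obj, void *vaddr)
{
	struct dma_buf_map map = DMA_BUF_MAP_INIT_VADDR(vaddr);

	drm_gem_shmem_vunmap(obj, &map);
}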
diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c index 4f34ef34ba60..74db5a840bed 100644 --- a/drivers/gpu/drm/xen/xen_drm_front_gem.c +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c @@ -290,22 +290,28 @@ int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma) return gem_mmap_obj(xen_obj, vma); } -void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj) +int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, struct dma_buf_map *map) { struct xen_gem_object *xen_obj = to_xen_gem_obj(gem_obj); + void *vaddr; if (!xen_obj->pages) - return NULL; + return -ENOMEM; /* Please see comment in gem_mmap_obj on mapping and attributes. */ - return vmap(xen_obj->pages, xen_obj->num_pages, - VM_MAP, PAGE_KERNEL); + vaddr = vmap(xen_obj->pages, xen_obj->num_pages, + VM_MAP, PAGE_KERNEL); + if (!vaddr) + return -ENOMEM; + dma_buf_map_set_vaddr(map, vaddr); + + return 0; } void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj, - void *vaddr) + struct dma_buf_map *map) { - vunmap(vaddr); + vunmap(map->vaddr); } int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj, diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.h b/drivers/gpu/drm/xen/xen_drm_front_gem.h index a39675fa31b2..a4e67d0a149c 100644 --- a/drivers/gpu/drm/xen/xen_drm_front_gem.h +++ b/drivers/gpu/drm/xen/xen_drm_front_gem.h @@ -12,6 +12,7 @@ #define __XEN_DRM_FRONT_GEM_H struct dma_buf_attachment; +struct dma_buf_map; struct drm_device; struct drm_gem_object; struct file; @@ -34,10 +35,11 @@ void xen_drm_front_gem_free_object_unlocked(struct drm_gem_object *gem_obj); int xen_drm_front_gem_mmap(struct file *filp, struct vm_area_struct *vma); -void *xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj); +int xen_drm_front_gem_prime_vmap(struct drm_gem_object *gem_obj, + struct dma_buf_map *map); void xen_drm_front_gem_prime_vunmap(struct drm_gem_object *gem_obj, - void *vaddr); + struct dma_buf_map *map); int xen_drm_front_gem_prime_mmap(struct drm_gem_object *gem_obj, struct vm_area_struct *vma); diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h index c38dd35da00b..5e6daa1c982f 100644 --- a/include/drm/drm_gem.h +++ b/include/drm/drm_gem.h @@ -39,6 +39,7 @@ #include +struct dma_buf_map; struct drm_gem_object; /** @@ -138,7 +139,7 @@ struct drm_gem_object_funcs { * * This callback is optional. */ - void *(*vmap)(struct drm_gem_object *obj); + int (*vmap)(struct drm_gem_object *obj, struct dma_buf_map *map); /** * @vunmap: @@ -148,7 +149,7 @@ struct drm_gem_object_funcs { * * This callback is optional. 
	 */
-	void (*vunmap)(struct drm_gem_object *obj, void *vaddr);
+	void (*vunmap)(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 	/**
 	 * @mmap:
diff --git a/include/drm/drm_gem_cma_helper.h b/include/drm/drm_gem_cma_helper.h
index a064b0d1c480..caf98b9cf4b4 100644
--- a/include/drm/drm_gem_cma_helper.h
+++ b/include/drm/drm_gem_cma_helper.h
@@ -103,7 +103,7 @@ drm_gem_cma_prime_import_sg_table(struct drm_device *dev,
 				  struct sg_table *sgt);
 int drm_gem_cma_prime_mmap(struct drm_gem_object *obj,
 			   struct vm_area_struct *vma);
-void *drm_gem_cma_prime_vmap(struct drm_gem_object *obj);
+int drm_gem_cma_prime_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 struct drm_gem_object *
 drm_gem_cma_create_object_default_funcs(struct drm_device *dev, size_t size);
diff --git a/include/drm/drm_gem_shmem_helper.h b/include/drm/drm_gem_shmem_helper.h
index 5381f0c8cf6f..3449a0353fe0 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -113,8 +113,8 @@ int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem);
 void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem);
 int drm_gem_shmem_pin(struct drm_gem_object *obj);
 void drm_gem_shmem_unpin(struct drm_gem_object *obj);
-void *drm_gem_shmem_vmap(struct drm_gem_object *obj);
-void drm_gem_shmem_vunmap(struct drm_gem_object *obj, void *vaddr);
+int drm_gem_shmem_vmap(struct drm_gem_object *obj, struct dma_buf_map *map);
+void drm_gem_shmem_vunmap(struct drm_gem_object *obj, struct dma_buf_map *map);
 
 int drm_gem_shmem_madvise(struct drm_gem_object *obj, int madv);
 
diff --git a/include/drm/drm_gem_vram_helper.h b/include/drm/drm_gem_vram_helper.h
index 128f88174d32..c0d28ba0f5c9 100644
--- a/include/drm/drm_gem_vram_helper.h
+++ b/include/drm/drm_gem_vram_helper.h
@@ -10,6 +10,7 @@
 
 #include
 #include
+#include
 #include /* for container_of() */
 
 struct drm_mode_create_dumb;
@@ -29,9 +30,8 @@ struct vm_area_struct;
 
 /**
  * struct drm_gem_vram_object - GEM object backed by VRAM
- * @gem:	GEM object
 * @bo:		TTM buffer object
- * @kmap:	Mapping information for @bo
+ * @map:	Mapping information for @bo
 * @placement:	TTM placement information. Supported placements are \
 		%TTM_PL_VRAM and %TTM_PL_SYSTEM
 * @placements:	TTM placement information.
@@ -50,15 +50,15 @@ struct vm_area_struct;
 */
 struct drm_gem_vram_object {
 	struct ttm_buffer_object bo;
-	struct ttm_bo_kmap_obj kmap;
+	struct dma_buf_map map;
 
 	/**
-	 * @kmap_use_count:
+	 * @vmap_use_count:
 	 *
 	 * Reference count on the virtual address.
 	 * The address are un-mapped when the count reaches zero.
	 */
-	unsigned int kmap_use_count;
+	unsigned int vmap_use_count;
 
 	/* Supported placements are %TTM_PL_VRAM and %TTM_PL_SYSTEM */
 	struct ttm_placement placement;
@@ -97,8 +97,8 @@ u64 drm_gem_vram_mmap_offset(struct drm_gem_vram_object *gbo);
 s64 drm_gem_vram_offset(struct drm_gem_vram_object *gbo);
 int drm_gem_vram_pin(struct drm_gem_vram_object *gbo, unsigned long pl_flag);
 int drm_gem_vram_unpin(struct drm_gem_vram_object *gbo);
-void *drm_gem_vram_vmap(struct drm_gem_vram_object *gbo);
-void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, void *vaddr);
+int drm_gem_vram_vmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
+void drm_gem_vram_vunmap(struct drm_gem_vram_object *gbo, struct dma_buf_map *map);
 
 int drm_gem_vram_fill_create_dumb(struct drm_file *file,
 				  struct drm_device *dev,
-- 
2.29.0

From tzimmermann at suse.de  Wed Oct 28 19:35:20 2020
From: tzimmermann at suse.de (Thomas Zimmermann)
Date: Wed, 28 Oct 2020 20:35:20 +0100
Subject: [Spice-devel] [PATCH v6 09/10] dma-buf-map: Add memcpy and
	pointer-increment interfaces
In-Reply-To: <20201028193521.2489-1-tzimmermann@suse.de>
References: <20201028193521.2489-1-tzimmermann@suse.de>
Message-ID: <20201028193521.2489-10-tzimmermann@suse.de>

To do framebuffer updates, one needs memcpy from system memory and a
pointer-increment function. Add both interfaces with documentation.

v5:
	* include to build on sparc64 (Sam)

Signed-off-by: Thomas Zimmermann
Reviewed-by: Sam Ravnborg
Tested-by: Sam Ravnborg
---
 include/linux/dma-buf-map.h | 73 ++++++++++++++++++++++++++++++++-----
 1 file changed, 63 insertions(+), 10 deletions(-)

diff --git a/include/linux/dma-buf-map.h b/include/linux/dma-buf-map.h
index 2e8bbecb5091..583a3a1f9447 100644
--- a/include/linux/dma-buf-map.h
+++ b/include/linux/dma-buf-map.h
@@ -7,6 +7,7 @@
 #define __DMA_BUF_MAP_H__
 
 #include
+#include
 
 /**
  * DOC: overview
@@ -32,6 +33,14 @@
  * accessing the buffer. Use the returned instance and the helper functions
  * to access the buffer's memory in the correct way.
  *
+ * The type :c:type:`struct dma_buf_map ` and its helpers are
+ * actually independent from the dma-buf infrastructure. When sharing buffers
+ * among devices, drivers have to know the location of the memory to access
+ * the buffers in a safe way. :c:type:`struct dma_buf_map `
+ * solves this problem for dma-buf and its users. If other drivers or
+ * sub-systems require similar functionality, the type could be generalized
+ * and moved to a more prominent header file.
+ *
  * Open-coding access to :c:type:`struct dma_buf_map ` is
  * considered bad style. Rather then accessing its fields directly, use one
  * of the provided helper functions, or implement your own. For example,
@@ -51,6 +60,14 @@
  *
  *	dma_buf_map_set_vaddr_iomem(&map. 0xdeadbeaf);
  *
+ * Instances of struct dma_buf_map do not have to be cleaned up, but
+ * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
+ * always refer to system memory.
+ *
+ * .. code-block:: c
+ *
+ *	dma_buf_map_clear(&map);
+ *
  * Test if a mapping is valid with either dma_buf_map_is_set() or
  * dma_buf_map_is_null().
  *
@@ -73,17 +90,19 @@
  *	if (dma_buf_map_is_equal(&sys_map, &io_map))
  *		// always false
  *
- * Instances of struct dma_buf_map do not have to be cleaned up, but
- * can be cleared to NULL with dma_buf_map_clear(). Cleared mappings
- * always refer to system memory.
+ * A set up instance of struct dma_buf_map can be used to access or manipulate
+ * the buffer memory.
+ * Depending on the location of the memory, the provided
+ * helpers will pick the correct operations. Data can be copied into the memory
+ * with dma_buf_map_memcpy_to(). The address can be manipulated with
+ * dma_buf_map_incr().
  *
- * The type :c:type:`struct dma_buf_map ` and its helpers are
- * actually independent from the dma-buf infrastructure. When sharing buffers
- * among devices, drivers have to know the location of the memory to access
- * the buffers in a safe way. :c:type:`struct dma_buf_map `
- * solves this problem for dma-buf and its users. If other drivers or
- * sub-systems require similar functionality, the type could be generalized
- * and moved to a more prominent header file.
+ * .. code-block:: c
+ *
+ *	const void *src = ...; // source buffer
+ *	size_t len = ...; // length of src
+ *
+ *	dma_buf_map_memcpy_to(&map, src, len);
+ *	dma_buf_map_incr(&map, len); // go to first byte after the memcpy
  */
 
 /**
@@ -210,4 +229,38 @@ static inline void dma_buf_map_clear(struct dma_buf_map *map)
 	}
 }
 
+/**
+ * dma_buf_map_memcpy_to - Memcpy into dma-buf mapping
+ * @dst:	The dma-buf mapping structure
+ * @src:	The source buffer
+ * @len:	The number of byte in src
+ *
+ * Copies data into a dma-buf mapping. The source buffer is in system
+ * memory. Depending on the buffer's location, the helper picks the correct
+ * method of accessing the memory.
+ */
+static inline void dma_buf_map_memcpy_to(struct dma_buf_map *dst, const void *src, size_t len)
+{
+	if (dst->is_iomem)
+		memcpy_toio(dst->vaddr_iomem, src, len);
+	else
+		memcpy(dst->vaddr, src, len);
+}
+
+/**
+ * dma_buf_map_incr - Increments the address stored in a dma-buf mapping
+ * @map:	The dma-buf mapping structure
+ * @incr:	The number of bytes to increment
+ *
+ * Increments the address stored in a dma-buf mapping. Depending on the
+ * buffer's location, the correct value will be updated.
+ */
+static inline void dma_buf_map_incr(struct dma_buf_map *map, size_t incr)
+{
+	if (map->is_iomem)
+		map->vaddr_iomem += incr;
+	else
+		map->vaddr += incr;
+}
+
 #endif /* __DMA_BUF_MAP_H__ */
-- 
2.29.0

From arequipeno at gmail.com  Wed Oct 28 22:32:30 2020
From: arequipeno at gmail.com (Ian Pilcher)
Date: Wed, 28 Oct 2020 17:32:30 -0500
Subject: [Spice-devel] How to build Windows vdagent?
Message-ID:

Following up on my earlier thread re the agent's incompatibility with
"mixed" (QXL + passthrough) configurations.  I'd like to take a crack
at this issue, but I've been unable to find any instructions for
building the Windows vdagent.

Can anyone point me to the instructions?

Thanks!

-- 
========================================================================
                 In Soviet Russia, Google searches you!
========================================================================

From fziglio at redhat.com  Thu Oct 29 08:10:08 2020
From: fziglio at redhat.com (Frediano Ziglio)
Date: Thu, 29 Oct 2020 04:10:08 -0400 (EDT)
Subject: [Spice-devel] How to build Windows vdagent?
In-Reply-To:
References:
Message-ID: <1165128467.5786858.1603959008695.JavaMail.zimbra@redhat.com>

> 
> Following up on my earlier thread re the agent's incompatibility with
> "mixed" (QXL + passthrough) configurations.  I'd like to take a crack
> at this issue, but I've been unable to find any instructions for
> building the Windows vdagent.
> 
> Can anyone point me to the instructions?
> 
> Thanks!
> 

It depends on the environment you want to use.
Either you can use MingW + classic Autoconf (configure) and compile as many Unix
projects (better to have a Linux machine) or you can use CMake, better suited
for use with Visual Studio (for instance).
There's currently a minor issue using Visual Studio (just define G_DEPRECATED
preprocessor macro as empty as a workaround).

Regards,
  Frediano

From daniel.vetter at ffwll.ch  Thu Oct 29 10:14:28 2020
From: daniel.vetter at ffwll.ch (Daniel Vetter)
Date: Thu, 29 Oct 2020 11:14:28 +0100
Subject: [Spice-devel] [PATCH 3/3] drm/qxl: Remove fbcon acceleration
	leftovers
In-Reply-To: <20201029101428.4058311-1-daniel.vetter@ffwll.ch>
References: <20201029101428.4058311-1-daniel.vetter@ffwll.ch>
Message-ID: <20201029101428.4058311-3-daniel.vetter@ffwll.ch>

These are leftovers from 13aff184ed9f ("drm/qxl: remove dead qxl fbdev
emulation code").

Signed-off-by: Daniel Vetter
Cc: Dave Airlie
Cc: Gerd Hoffmann
Cc: virtualization at lists.linux-foundation.org
Cc: spice-devel at lists.freedesktop.org
---
 drivers/gpu/drm/qxl/qxl_drv.h | 14 --------------
 1 file changed, 14 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index 3602e8b34189..86eee66ecbad 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -166,20 +166,6 @@ struct qxl_drm_image {
 	struct list_head chunk_list;
 };
 
-struct qxl_fb_image {
-	struct qxl_device *qdev;
-	uint32_t pseudo_palette[16];
-	struct fb_image fb_image;
-	uint32_t visual;
-};
-
-struct qxl_draw_fill {
-	struct qxl_device *qdev;
-	struct qxl_rect rect;
-	uint32_t color;
-	uint16_t rop;
-};
-
 /*
  * Debugfs
  */
-- 
2.28.0

From kraxel at redhat.com  Thu Oct 29 11:13:00 2020
From: kraxel at redhat.com (Gerd Hoffmann)
Date: Thu, 29 Oct 2020 12:13:00 +0100
Subject: [Spice-devel] [PATCH 3/3] drm/qxl: Remove fbcon acceleration
	leftovers
In-Reply-To: <20201029101428.4058311-3-daniel.vetter@ffwll.ch>
References: <20201029101428.4058311-1-daniel.vetter@ffwll.ch>
	<20201029101428.4058311-3-daniel.vetter@ffwll.ch>
Message-ID: <20201029111300.p2vld6qc4e2q53xy@sirius.home.kraxel.org>

On Thu, Oct 29, 2020 at 11:14:28AM +0100, Daniel Vetter wrote:
> These are leftovers from 13aff184ed9f ("drm/qxl: remove dead qxl fbdev
> emulation code").

Acked-by: Gerd Hoffmann

From daniel.vetter at ffwll.ch  Thu Oct 29 13:33:47 2020
From: daniel.vetter at ffwll.ch (Daniel Vetter)
Date: Thu, 29 Oct 2020 14:33:47 +0100
Subject: [Spice-devel] [PATCH] drm/qxl: Remove fbcon acceleration leftovers
In-Reply-To: <20201029101428.4058311-3-daniel.vetter@ffwll.ch>
References: <20201029101428.4058311-3-daniel.vetter@ffwll.ch>
Message-ID: <20201029133347.4088884-1-daniel.vetter@ffwll.ch>

These are leftovers from 13aff184ed9f ("drm/qxl: remove dead qxl fbdev
emulation code").

v2: Somehow these structs provided the struct qxl_device pre-decl,
reorder the header to not anger compilers.
Acked-by: Gerd Hoffmann
Signed-off-by: Daniel Vetter
Cc: Dave Airlie
Cc: Gerd Hoffmann
Cc: virtualization at lists.linux-foundation.org
Cc: spice-devel at lists.freedesktop.org
---
 drivers/gpu/drm/qxl/qxl_drv.h | 18 ++----------------
 1 file changed, 2 insertions(+), 16 deletions(-)

diff --git a/drivers/gpu/drm/qxl/qxl_drv.h b/drivers/gpu/drm/qxl/qxl_drv.h
index 3602e8b34189..6239626503ef 100644
--- a/drivers/gpu/drm/qxl/qxl_drv.h
+++ b/drivers/gpu/drm/qxl/qxl_drv.h
@@ -166,20 +166,6 @@ struct qxl_drm_image {
 	struct list_head chunk_list;
 };
 
-struct qxl_fb_image {
-	struct qxl_device *qdev;
-	uint32_t pseudo_palette[16];
-	struct fb_image fb_image;
-	uint32_t visual;
-};
-
-struct qxl_draw_fill {
-	struct qxl_device *qdev;
-	struct qxl_rect rect;
-	uint32_t color;
-	uint16_t rop;
-};
-
 /*
  * Debugfs
  */
@@ -188,8 +174,6 @@ struct qxl_debugfs {
 	unsigned int num_files;
 };
 
-int qxl_debugfs_fence_init(struct qxl_device *rdev);
-
 struct qxl_device {
 	struct drm_device ddev;
 
@@ -271,6 +255,8 @@ struct qxl_device {
 
 #define to_qxl(dev) container_of(dev, struct qxl_device, ddev)
 
+int qxl_debugfs_fence_init(struct qxl_device *rdev);
+
 extern const struct drm_ioctl_desc qxl_ioctls[];
 extern int qxl_max_ioctl;
 
-- 
2.28.0

From lkp at intel.com  Thu Oct 29 13:56:01 2020
From: lkp at intel.com (kernel test robot)
Date: Thu, 29 Oct 2020 21:56:01 +0800
Subject: [Spice-devel] [PATCH 3/3] drm/qxl: Remove fbcon acceleration
	leftovers
In-Reply-To: <20201029101428.4058311-3-daniel.vetter@ffwll.ch>
References: <20201029101428.4058311-3-daniel.vetter@ffwll.ch>
Message-ID: <202010292120.7GihU8E4-lkp@intel.com>

Hi Daniel,

I love your patch! Perhaps something to improve:

[auto build test WARNING on drm-intel/for-linux-next]
[also build test WARNING on drm-exynos/exynos-drm-next tegra-drm/drm/tegra/for-next
 linus/master drm/drm-next v5.10-rc1 next-20201028]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Daniel-Vetter/fbcon-Disable-accelerated-scrolling/20201029-181618
base:   git://anongit.freedesktop.org/drm-intel for-linux-next
config: alpha-randconfig-r003-20201029 (attached as .config)
compiler: alpha-linux-gcc (GCC) 9.3.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/0day-ci/linux/commit/188b22d2b66860695df5d07bf2b7115976790b2c
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Daniel-Vetter/fbcon-Disable-accelerated-scrolling/20201029-181618
        git checkout 188b22d2b66860695df5d07bf2b7115976790b2c
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-9.3.0 make.cross ARCH=alpha

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot

All warnings (new ones prefixed by >>):

   In file included from drivers/gpu/drm/qxl/qxl_drv.c:31:
>> drivers/gpu/drm/qxl/qxl_drv.h:178:35: warning: 'struct qxl_device' declared inside parameter list will not be visible outside of this definition or declaration
     178 | int qxl_debugfs_fence_init(struct qxl_device *rdev);
         |                                   ^~~~~~~~~~

vim +178 drivers/gpu/drm/qxl/qxl_drv.h

f64122c1f6ade30 Dave Airlie 2013-02-25  177  
f64122c1f6ade30 Dave Airlie 2013-02-25 @178  int qxl_debugfs_fence_init(struct qxl_device *rdev);
f64122c1f6ade30 Dave Airlie 2013-02-25  179  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all at lists.01.org
-------------- next part --------------
A non-text attachment was scrubbed...
Name: .config.gz
Type: application/gzip
Size: 27013 bytes
Desc: not available
URL:

From arequipeno at gmail.com  Thu Oct 29 15:10:23 2020
From: arequipeno at gmail.com (Ian Pilcher)
Date: Thu, 29 Oct 2020 10:10:23 -0500
Subject: [Spice-devel] How to build Windows vdagent?
In-Reply-To: <1165128467.5786858.1603959008695.JavaMail.zimbra@redhat.com>
References: <1165128467.5786858.1603959008695.JavaMail.zimbra@redhat.com>
Message-ID: <3cea45a2-9c3b-5d96-84ed-6298b91d68c2@gmail.com>

On 10/29/20 3:10 AM, Frediano Ziglio wrote:
> It depends on the environment you want to use.
> Either you can use MingW + classic Autoconf (configure) and compile as many Unix
> projects (better to have a Linux machine) or you can use CMake, better suited
> for use with Visual Studio (for instance).

The MingW/Linux (Fedora) option would be easiest for me, but I can't get
it to work; I can't figure how to get around this error:

  configure: error: static libpng not found

I have installed every single *libpng* package in Fedora:

$ sudo dnf list '*libpng*'
Last metadata expiration check: 2:32:12 ago on Thu 29 Oct 2020 07:37:22
AM CDT.
Installed Packages
libpng.i686                      2:1.6.37-3.fc32        @fedora
libpng.x86_64                    2:1.6.37-3.fc32        @fedora
libpng-devel.i686                2:1.6.37-3.fc32        @fedora
libpng-devel.x86_64              2:1.6.37-3.fc32        @fedora
libpng-static.i686               2:1.6.37-3.fc32        @fedora
libpng-static.x86_64             2:1.6.37-3.fc32        @fedora
libpng-tools.x86_64              2:1.6.37-3.fc32        @fedora
libpng12.i686                    1.2.57-11.fc32         @fedora
libpng12.x86_64                  1.2.57-11.fc32         @fedora
libpng12-devel.i686              1.2.57-11.fc32         @fedora
libpng12-devel.x86_64            1.2.57-11.fc32         @fedora
libpng15.i686                    1.5.30-9.fc32          @fedora
libpng15.x86_64                  1.5.30-9.fc32          @fedora
mingw32-libpng.noarch            1.6.37-3.fc32          @fedora
mingw32-libpng-static.noarch     1.6.37-3.fc32          @fedora
mingw64-libpng.noarch            1.6.37-3.fc32          @fedora
mingw64-libpng-static.noarch     1.6.37-3.fc32          @fedora

-- 
========================================================================
                 In Soviet Russia, Google searches you!
========================================================================

From jjanku at redhat.com  Thu Oct 29 15:33:02 2020
From: jjanku at redhat.com (Jakub Janku)
Date: Thu, 29 Oct 2020 16:33:02 +0100
Subject: [Spice-devel] How to build Windows vdagent?
In-Reply-To: <3cea45a2-9c3b-5d96-84ed-6298b91d68c2@gmail.com>
References: <1165128467.5786858.1603959008695.JavaMail.zimbra@redhat.com>
	<3cea45a2-9c3b-5d96-84ed-6298b91d68c2@gmail.com>
Message-ID:

Hi,

are you using mingw64-configure?

Sometimes, when you don't know how to build something, you can get
some hints from the pipeline configuration file, in this case
.gitlab-ci.yml.

Regards,
Jakub

On Thu, Oct 29, 2020 at 4:10 PM Ian Pilcher wrote:
>
> On 10/29/20 3:10 AM, Frediano Ziglio wrote:
> > It depends on the environment you want to use.
> > Either you can use MingW + classic Autoconf (configure) and compile as many Unix
> > projects (better to have a Linux machine) or you can use CMake, better suited
> > for use with Visual Studio (for instance).
>
> The MingW/Linux (Fedora) option would be easiest for me, but I can't get
> it to work; I can't figure how to get around this error:
>
>    configure: error: static libpng not found
>
> I have installed every single *libpng* package in Fedora:
>
> $ sudo dnf list '*libpng*'
> Last metadata expiration check: 2:32:12 ago on Thu 29 Oct 2020 07:37:22
> AM CDT.
> Installed Packages
> libpng.i686                      2:1.6.37-3.fc32        @fedora
> libpng.x86_64                    2:1.6.37-3.fc32        @fedora
> libpng-devel.i686                2:1.6.37-3.fc32        @fedora
> libpng-devel.x86_64              2:1.6.37-3.fc32        @fedora
> libpng-static.i686               2:1.6.37-3.fc32        @fedora
> libpng-static.x86_64             2:1.6.37-3.fc32        @fedora
> libpng-tools.x86_64              2:1.6.37-3.fc32        @fedora
> libpng12.i686                    1.2.57-11.fc32         @fedora
> libpng12.x86_64                  1.2.57-11.fc32         @fedora
> libpng12-devel.i686              1.2.57-11.fc32         @fedora
> libpng12-devel.x86_64            1.2.57-11.fc32         @fedora
> libpng15.i686                    1.5.30-9.fc32          @fedora
> libpng15.x86_64                  1.5.30-9.fc32          @fedora
> mingw32-libpng.noarch            1.6.37-3.fc32          @fedora
> mingw32-libpng-static.noarch     1.6.37-3.fc32          @fedora
> mingw64-libpng.noarch            1.6.37-3.fc32          @fedora
> mingw64-libpng-static.noarch     1.6.37-3.fc32          @fedora
>
> --
> ========================================================================
>                  In Soviet Russia, Google searches you!
> ========================================================================
> _______________________________________________
> Spice-devel mailing list
> Spice-devel at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/spice-devel
>
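For readers following the thread: when the mingw64-* packages are installed but configure
still reports "static libpng not found", the likely culprit is running the native ./configure
instead of the Fedora cross-build wrappers that Jakub's question points at. A rough sketch of
that cross build is below; the repository URL, the autogen step and the exact package list are
assumptions about the vdagent tree, not verified instructions.

  # Minimal sketch, assuming Fedora's MinGW toolchain and an autotools-based vdagent checkout.
  sudo dnf install mingw64-gcc-c++ mingw64-libpng-static
  git clone https://gitlab.freedesktop.org/spice/win32/vd_agent.git   # assumed repo location
  cd vd_agent
  ./autogen.sh        # only needed when building from a git checkout (assumed)
  mingw64-configure   # wraps ./configure with the x86_64-w64-mingw32 cross toolchain
  mingw64-make

If configure still cannot find a static libpng after that, checking the project's
.gitlab-ci.yml for the package list used by CI, as suggested above, is a good next step.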