[PATCH 2/2 v3] drm/exynos: added userptr feature.

Inki Dae inki.dae at samsung.com
Wed May 9 23:57:06 PDT 2012



> -----Original Message-----
> From: Minchan Kim [mailto:minchan at kernel.org]
> Sent: Thursday, May 10, 2012 1:58 PM
> To: Inki Dae
> Cc: 'Jerome Glisse'; airlied at linux.ie; dri-devel at lists.freedesktop.org;
> kyungmin.park at samsung.com; sw0312.kim at samsung.com; linux-mm at kvack.org
> Subject: Re: [PATCH 2/2 v3] drm/exynos: added userptr feature.
> 
> On 05/10/2012 10:39 AM, Inki Dae wrote:
> 
> > Hi Jerome,
> >
> >> -----Original Message-----
> >> From: Jerome Glisse [mailto:j.glisse at gmail.com]
> >> Sent: Wednesday, May 09, 2012 11:46 PM
> >> To: Inki Dae
> >> Cc: airlied at linux.ie; dri-devel at lists.freedesktop.org;
> >> kyungmin.park at samsung.com; sw0312.kim at samsung.com; linux-mm at kvack.org
> >> Subject: Re: [PATCH 2/2 v3] drm/exynos: added userptr feature.
> >>
> >> On Wed, May 9, 2012 at 2:17 AM, Inki Dae <inki.dae at samsung.com> wrote:
> >>> This feature imports a user-space region allocated by malloc() or
> >>> mmap() into a GEM object. To guarantee that the pages backing the
> >>> region are not swapped out, the corresponding VMAs in the user
> >>> address space are locked, and unlocked again when the pages are
> >>> released.
> >>>
> >>> However, this locking could significantly degrade system
> >>> performance, because the locked pages can no longer be swapped
> >>> out, so we limit the user-requested userptr size to a predefined
> >>> maximum.
> >>>
> >>> Signed-off-by: Inki Dae <inki.dae at samsung.com>
> >>> Signed-off-by: Kyungmin Park <kyungmin.park at samsung.com>
> >>
> >>
> >> Again i would like feedback from mm people (adding cc). I am not sure
> >
> > Thank you, I missed adding mm as cc.
> >
> >> locking the vma is the right answer; as I said in my previous mail,
> >> userspace can munlock it behind your back. Maybe VM_RESERVED is better.
> >
> > I know that with the VM_RESERVED flag we can also keep the pages from
> > being swapped out, but these pages should be unlockable at any time,
> > because a process could allocate all the pages on the system and lock
> > them, which in turn could significantly degrade system performance
> > (other processes requesting free memory might be blocked). So I used
> > the VM_LOCKED flag instead, but I'm not sure this way is best either.
> >
> >> Anyway, that is even without considering that you don't check at all
> >> whether the process goes over its locked-page limit; see mm/mlock.c
> >> and RLIMIT_MEMLOCK
> >
> > Thank you for your advices.
> >
> >> for how it's done. Also, you mlock the complete VMA, but the userptr
> >> you get might be inside, say, a 16M VMA while you only care about 1M
> >> of it. If you mark the whole VMA as locked, then any time a new page
> >> is faulted in elsewhere in that VMA, outside the buffer you are
> >> interested in, it stays allocated forever until the GEM buffer is
> >> destroyed. I am also not sure what happens to the VMA on the next
> >> malloc(), whether it grows or not (I would think it won't grow, as
> >> it would have different flags than new anonymous memory).
> 
> 
> I don't know the history in detail, because you haven't sent the full
> patches to linux-mm, and I haven't read the code below, either.
> I have just read your description and Jerome's reply, so apparently
> there is something I missed.
> 
> Your goal is to avoid swapping out user pages which are used in the
> kernel at the same time. Right?
> Let's use get_user_pages. Is there any issue that keeps you from using
> it? It increases the page count, so the reclaimer can't swap the page
> out. Isn't that enough?
> Marking the whole VMA as MLOCKED is overkill.
> 

As I mentioned, we are already using get_user_pages. As you said, this
function increases the page count, but only for pages in the user address
space that the CPU has already accessed; the others are allocated by the
page-fault handler during the get_user_pages call. If so, after that call
the refcount (page->_count) of the pages the user has already accessed
would be 2, and just 1 for all the other pages. So we may only have to
consider locking the pages never accessed by the CPU, to keep them from
being swapped out.
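For reference, a minimal sketch (not taken from the patch under review;
function and variable names are illustrative) of how a driver typically
pins a userptr range with the get_user_pages() signature of that era, so
that the reclaimer cannot swap the pages out:

```c
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/slab.h>

/* Hypothetical helper: pin the pages backing [userptr, userptr + size). */
static int userptr_pin_pages(unsigned long userptr, size_t size,
			     struct page ***pages_out, int *npages_out)
{
	int npages = size >> PAGE_SHIFT;
	struct page **pages;
	int pinned;

	pages = kcalloc(npages, sizeof(*pages), GFP_KERNEL);
	if (!pages)
		return -ENOMEM;

	down_read(&current->mm->mmap_sem);
	/* Faults in any not-yet-present pages and takes a reference on
	 * each returned page, so reclaim cannot swap them out. */
	pinned = get_user_pages(current, current->mm, userptr, npages,
				1 /* write */, 0 /* force */, pages, NULL);
	up_read(&current->mm->mmap_sem);

	if (pinned != npages) {
		/* Release whatever we did manage to pin. */
		while (pinned > 0)
			put_page(pages[--pinned]);
		kfree(pages);
		return -EFAULT;
	}

	*pages_out = pages;
	*npages_out = npages;
	return 0;
}
```

Because get_user_pages faults in every page in the range before taking
the reference, all npages pages end up pinned, whether or not the CPU had
touched them earlier; the references are dropped with put_page() when the
GEM object is destroyed.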

Thanks,
Inki Dae

> --
> Kind regards,
> Minchan Kim
