DRM pull for v5.3-rc1
Jason Gunthorpe
jgg at mellanox.com
Mon Jul 15 12:29:28 UTC 2019
[urk, html email.. forgive the mess]
On Mon, Jul 15, 2019 at 04:59:39PM +1000, Dave Airlie wrote:
> VMware had some mm helpers go in via my tree (looking back I'm
> not sure Thomas really secured enough acks on these, but I'm
I saw those patches; honestly, I couldn't entirely understand what
problem they were trying to address.
> going with it for now until I get push back). They conflicted
> with one of the mm cleanups in the hmm tree, I've pushed a
> patch to the top of my next to fix most of the fallout in my
> tree, and the resulting fixup is to pick the closure->ptefn
> hunk and apply something like in mm/memory.c
Did I miss a notification from StephenR in linux-next? I was unaware
of this conflict.
The 'hmm' tree is something I ran to try and help with workflow issues
like this, as it could be merged into DRM as a topic branch - maybe
consider this flow in the future?
Linus, do you have any advice on how best to handle sharing mm
patches? The hmm.git was mildly painful as it sits between quilt on
the -mm side and what seems like 'a world of interesting git things'
on the DRM side (but maybe I just don't know enough about DRM).
> @@ -2201,7 +2162,7 @@ static int apply_to_page_range_wrapper(pte_t *pte,
> 	struct page_range_apply *pra = container_of(pter, typeof(*pra), pter);
> -	return pra->fn(pte, NULL, addr, pra->data);
> +	return pra->fn(pte, addr, pra->data);
> }
I looked through this and it looks OK to me, thanks
Jason