DRM pull for v5.3-rc1

Daniel Vetter daniel.vetter at ffwll.ch
Mon Jul 15 14:19:26 UTC 2019


On Mon, Jul 15, 2019 at 2:29 PM Jason Gunthorpe <jgg at mellanox.com> wrote:
>
> [urk, html email.. forgive the mess]
>
> On Mon, Jul 15, 2019 at 04:59:39PM +1000, Dave Airlie wrote:
>
> >      VMware had some mm helpers go in via my tree (looking back I'm
> >      not sure Thomas really secured enough acks on these, but I'm
>
> I saw those patches; honestly, I couldn't entirely understand what
> problem they were trying to address.
>
> >      going with it for now until I get pushback). They conflicted
> >      with one of the mm cleanups in the hmm tree; I've pushed a
> >      patch to the top of my next to fix most of the fallout in my
> >      tree, and the resulting fixup is to pick the closure->ptefn
> >      hunk and apply something like the hunk below in mm/memory.c
>
> Did I miss a notification from StephenR in linux-next? I was unaware
> of this conflict.
>
> The 'hmm' tree is something I ran to try to help with workflow issues
> like this, as it could be merged into DRM as a topic branch - maybe
> consider this flow in the future?
>
> Linus, do you have any advice on how best to handle sharing mm
> patches? The hmm.git was mildly painful as it sits between quilt on
> the -mm side and what seems like 'a world of interesting git things'
> on the DRM side (but maybe I just don't know enough about DRM).

I think the approach in this merge window worked fairly well:
- refactor/rework core mm stuff in (h)mm.git
- handle all the gpu stuff in drm.git
- make the clashes workable through some clever prep patches, like
we've done this time around (a sketch of the idea follows below)
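
For the record, one shape such a prep patch can take is a local shim
that keeps the driver tree building on both sides of a cross-tree
signature change, so the eventual merge conflict collapses to one line.
The sketch below is purely illustrative: HAVE_NEW_PTE_FN_T and the
my_drv_* names are made up, and the kernel types are stubbed so the
fragment compiles on its own.

typedef struct { unsigned long val; } pte_t;   /* stub kernel type */
typedef void *pgtable_t;                       /* stub kernel type */

/* All driver logic lives in one signature-agnostic helper... */
static int my_drv_handle_pte(pte_t *pte, unsigned long addr, void *data)
{
	(void)pte; (void)addr; (void)data;
	return 0;
}

/* ...so only this thin adapter tracks the cross-tree signature. */
#ifdef HAVE_NEW_PTE_FN_T
/* Post-rework form: the unused pgtable_t token argument is gone. */
static int my_drv_pte_cb(pte_t *pte, unsigned long addr, void *data)
{
	return my_drv_handle_pte(pte, addr, data);
}
#else
/* Pre-rework form: callers still pass the (ignored) token. */
static int my_drv_pte_cb(pte_t *pte, pgtable_t token, unsigned long addr,
			 void *data)
{
	(void)token;
	return my_drv_handle_pte(pte, addr, data);
}
#endif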

I think Linus wants to be able to look through core mm stuff quite
closely, so it's not a good idea to deeply intertwine it with one of
the biggest subsystems there is. And I don't think there will be a
real conflict like this every merge window; this should be the
exception. Worst case we have to stage some work one release cycle
apart, i.e. merge the mm stuff first, then the drm side 3 months
later. Usually that's not going to slow things down noticeably, given
the average merge latency for core mm features :-)
-Daniel

> > @@ -2201,7 +2162,7 @@ static int apply_to_page_range_wrapper(pte_t *pte,
> >  	struct page_range_apply *pra =
> >  		container_of(pter, typeof(*pra), pter);
> > -	return pra->fn(pte, NULL, addr, pra->data);
> > +	return pra->fn(pte, addr, pra->data);
> >  }
>
> I looked through this and it looks OK to me, thanks
>
> Jason
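
To make the quoted fixup concrete: the mm cleanup dropped what used to
be a pgtable_t token argument from the per-PTE callback, and the
wrapper above recovers its enclosing closure with container_of().
Below is a minimal, self-contained sketch of that adapter pattern;
struct pte_walker, print_pte and the exact field layout are
illustrative stand-ins, with the kernel bits stubbed so it compiles as
plain C outside the kernel.

#include <stddef.h>
#include <stdio.h>

typedef struct { unsigned long val; } pte_t;   /* stub kernel type */

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* New-style per-PTE callback: no pgtable_t token argument anymore. */
typedef int (*pte_fn_t)(pte_t *pte, unsigned long addr, void *data);

/* Generic hook a page-table walker would invoke. */
struct pte_walker {
	int (*ptefn)(pte_t *pte, unsigned long addr,
		     struct pte_walker *walker);
};

/* Closure embedding the hook, so container_of() can recover it. */
struct page_range_apply {
	struct pte_walker pter;   /* embedded member, as in the hunk */
	pte_fn_t fn;              /* caller-supplied callback        */
	void *data;               /* opaque cookie handed back to fn */
};

/* The wrapper from the hunk: hop from the hook back to the closure. */
static int apply_to_page_range_wrapper(pte_t *pte, unsigned long addr,
				       struct pte_walker *pter)
{
	struct page_range_apply *pra =
		container_of(pter, struct page_range_apply, pter);

	/* Pre-rework, this call site passed an extra NULL token. */
	return pra->fn(pte, addr, pra->data);
}

static int print_pte(pte_t *pte, unsigned long addr, void *data)
{
	(void)data;
	printf("pte %#lx at %#lx\n", pte->val, addr);
	return 0;
}

int main(void)
{
	pte_t pte = { 0x1000 };
	struct page_range_apply pra = { .fn = print_pte };

	/* A real walker would reach this through pra.pter.ptefn. */
	return apply_to_page_range_wrapper(&pte, 0xcafe0000, &pra.pter);
}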



-- 
Daniel Vetter
Software Engineer, Intel Corporation
+41 (0) 79 365 57 48 - http://blog.ffwll.ch

