[Intel-gfx] linux-next: manual merge of the drm-intel tree with Linus' tree
Stephen Rothwell
sfr at canb.auug.org.au
Fri Mar 23 00:50:18 UTC 2018
Hi all,
On Thu, 22 Mar 2018 13:21:29 +1100 Stephen Rothwell <sfr at canb.auug.org.au> wrote:
>
> Today's linux-next merge of the drm-intel tree got a conflict in:
>
> drivers/gpu/drm/i915/gvt/scheduler.c
>
> between commit:
>
> fa3dd623e559 ("drm/i915/gvt: keep oa config in shadow ctx")
>
> from Linus' tree and commit:
>
> b20c0d5ce104 ("drm/i915/gvt: Update PDPs after a vGPU mm object is pinned.")
>
> from the drm-intel tree.
>
> I fixed it up (see below) and can carry the fix as necessary. This
> is now fixed as far as linux-next is concerned, but any non-trivial
> conflicts should be mentioned to your upstream maintainer when your tree
> is submitted for merging. You may also want to consider cooperating
> with the maintainer of the conflicting tree to minimise any particularly
> complex conflicts.
>
> --
> Cheers,
> Stephen Rothwell
>
> diff --cc drivers/gpu/drm/i915/gvt/scheduler.c
> index 068126404151,a55b4975c154..000000000000
> --- a/drivers/gpu/drm/i915/gvt/scheduler.c
> +++ b/drivers/gpu/drm/i915/gvt/scheduler.c
> @@@ -52,54 -52,29 +52,77 @@@ static void set_context_pdp_root_pointe
>                 pdp_pair[i].val = pdp[7 - i];
> }
>
> +/*
> + * When populating the shadow ctx from the guest, we should not override
> + * OA-related registers, so that they will not be clobbered by guest OA
> + * configs. This makes it possible to capture OA data from the host for
> + * both the host and guests.
> + */
> +static void sr_oa_regs(struct intel_vgpu_workload *workload,
> +                u32 *reg_state, bool save)
> +{
> +        struct drm_i915_private *dev_priv = workload->vgpu->gvt->dev_priv;
> +        u32 ctx_oactxctrl = dev_priv->perf.oa.ctx_oactxctrl_offset;
> +        u32 ctx_flexeu0 = dev_priv->perf.oa.ctx_flexeu0_offset;
> +        int i = 0;
> +        u32 flex_mmio[] = {
> +                i915_mmio_reg_offset(EU_PERF_CNTL0),
> +                i915_mmio_reg_offset(EU_PERF_CNTL1),
> +                i915_mmio_reg_offset(EU_PERF_CNTL2),
> +                i915_mmio_reg_offset(EU_PERF_CNTL3),
> +                i915_mmio_reg_offset(EU_PERF_CNTL4),
> +                i915_mmio_reg_offset(EU_PERF_CNTL5),
> +                i915_mmio_reg_offset(EU_PERF_CNTL6),
> +        };
> +
> +        if (!workload || !reg_state || workload->ring_id != RCS)
> +                return;
> +
> +        if (save) {
> +                workload->oactxctrl = reg_state[ctx_oactxctrl + 1];
> +
> +                for (i = 0; i < ARRAY_SIZE(workload->flex_mmio); i++) {
> +                        u32 state_offset = ctx_flexeu0 + i * 2;
> +
> +                        workload->flex_mmio[i] = reg_state[state_offset + 1];
> +                }
> +        } else {
> +                reg_state[ctx_oactxctrl] =
> +                        i915_mmio_reg_offset(GEN8_OACTXCONTROL);
> +                reg_state[ctx_oactxctrl + 1] = workload->oactxctrl;
> +
> +                for (i = 0; i < ARRAY_SIZE(workload->flex_mmio); i++) {
> +                        u32 state_offset = ctx_flexeu0 + i * 2;
> +                        u32 mmio = flex_mmio[i];
> +
> +                        reg_state[state_offset] = mmio;
> +                        reg_state[state_offset + 1] = workload->flex_mmio[i];
> +                }
> +        }
> +}
> +
> + static void update_shadow_pdps(struct intel_vgpu_workload *workload)
> + {
> +         struct intel_vgpu *vgpu = workload->vgpu;
> +         int ring_id = workload->ring_id;
> +         struct i915_gem_context *shadow_ctx = vgpu->submission.shadow_ctx;
> +         struct drm_i915_gem_object *ctx_obj =
> +                 shadow_ctx->engine[ring_id].state->obj;
> +         struct execlist_ring_context *shadow_ring_context;
> +         struct page *page;
> +
> +         if (WARN_ON(!workload->shadow_mm))
> +                 return;
> +
> +         if (WARN_ON(!atomic_read(&workload->shadow_mm->pincount)))
> +                 return;
> +
> +         page = i915_gem_object_get_page(ctx_obj, LRC_STATE_PN);
> +         shadow_ring_context = kmap(page);
> +         set_context_pdp_root_pointer(shadow_ring_context,
> +                         (void *)workload->shadow_mm->ppgtt_mm.shadow_pdps);
> +         kunmap(page);
> + }
> +
> static int populate_shadow_context(struct intel_vgpu_workload *workload)
> {
>         struct intel_vgpu *vgpu = workload->vgpu;
This is now a conflict between the drm tree and Linus' tree.
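
For anyone trying to follow the resolution: the sr_oa_regs() hunk carried over
from Linus' tree is a save/restore bracket, so that copying the guest's ring
context into the shadow context does not clobber the host's OA (Observation
Architecture) configuration.  A rough standalone sketch of that
save/copy/restore pattern follows; it is plain userspace C with a made-up
register layout (NUM_REGS, OACTXCTRL_IDX and the two arrays are illustrative
only, not the kernel's structures).

#include <stdio.h>
#include <string.h>

#define NUM_REGS        16      /* illustrative size of the context image   */
#define OACTXCTRL_IDX   3       /* illustrative slot holding the OA control */

/* Save (save != 0) or restore (save == 0) the OA slot around a bulk copy. */
static void sr_oa_slot(unsigned int *reg_state, unsigned int *saved, int save)
{
        if (save)
                *saved = reg_state[OACTXCTRL_IDX];
        else
                reg_state[OACTXCTRL_IDX] = *saved;
}

int main(void)
{
        unsigned int shadow[NUM_REGS];  /* host-side shadow context image    */
        unsigned int guest[NUM_REGS];   /* guest context about to be copied  */
        unsigned int saved_oa;
        int i;

        for (i = 0; i < NUM_REGS; i++) {
                shadow[i] = 0x1000 + i;
                guest[i] = 0x2000 + i;
        }

        sr_oa_slot(shadow, &saved_oa, 1);               /* save host value    */
        memcpy(shadow, guest, sizeof(shadow));          /* bulk copy          */
        sr_oa_slot(shadow, &saved_oa, 0);               /* restore host value */

        printf("OA slot after copy: 0x%x (host value preserved)\n",
               shadow[OACTXCTRL_IDX]);
        return 0;
}

In the kernel the helper is presumably used the same way, bracketing the
guest-to-shadow context copy in populate_shadow_context(), and only for the
render ring, per the workload->ring_id != RCS check in the hunk above.
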
--
Cheers,
Stephen Rothwell