[PATCH 2/2] drm/scheduler: Remove obsolete spinlock.

Lucas Stach l.stach at pengutronix.de
Wed May 16 13:00:00 UTC 2018


On Wednesday, 2018-05-16 at 14:32 +0200, Christian König wrote:
> On 16.05.2018 at 14:28, Lucas Stach wrote:
> > On Wednesday, 2018-05-16 at 14:08 +0200, Christian König wrote:
> > > Yes, exactly.
> > > 
> > > For normal user space command submission we should have tons of
> > > locks guaranteeing that (e.g. just the VM lock should do).
> > > 
> > > For kernel moves we have the mutex for the GTT windows which
> > > protects it.
> > > 
> > > There could be problems with the UVD/VCE queues when cleaning up
> > > the handles after an application crashes.
> > 
> > FWIW, etnaviv is currently completely unlocked in this path, but I
> > believe that this isn't an issue, as the sched fence seqnos are
> > per-entity. So to actually end up with reversed seqnos, one context
> > would have to preempt itself to do another submit while the current
> > one hasn't returned from kernel space, which I believe is a fairly
> > theoretical issue. Is my understanding correct?
> 
> Yes. The problem is that with the right timing this can be used to
> access freed-up memory.
> 
> If you then manage to place a page table in that freed-up memory,
> taking over the system is just a typing exercise.

Thanks. I believe we don't have this problem in etnaviv, as memory
referencing is tied to the job and will only be unreferenced on
free_job, but I'll re-check this when I've got some time.

Regards,
Lucas


More information about the amd-gfx mailing list