[PATCH v2 00/25] AMDKFD kernel driver

Bridgman, John John.Bridgman at amd.com
Wed Jul 23 08:06:36 PDT 2014



>-----Original Message-----
>From: Daniel Vetter [mailto:daniel.vetter at ffwll.ch] On Behalf Of Daniel
>Vetter
>Sent: Wednesday, July 23, 2014 10:42 AM
>To: Bridgman, John
>Cc: Daniel Vetter; Gabbay, Oded; Jerome Glisse; Christian König; David Airlie;
>Alex Deucher; Andrew Morton; Joerg Roedel; Lewycky, Andrew; Daenzer,
>Michel; Goz, Ben; Skidanov, Alexey; linux-kernel at vger.kernel.org;
>dri-devel at lists.freedesktop.org; linux-mm; Sellek, Tom
>Subject: Re: [PATCH v2 00/25] AMDKFD kernel driver
>
>On Wed, Jul 23, 2014 at 01:33:24PM +0000, Bridgman, John wrote:
>>
>>
>> >-----Original Message-----
>> >From: Daniel Vetter [mailto:daniel.vetter at ffwll.ch]
>> >Sent: Wednesday, July 23, 2014 3:06 AM
>> >To: Gabbay, Oded
>> >Cc: Jerome Glisse; Christian König; David Airlie; Alex Deucher;
>> >Andrew Morton; Bridgman, John; Joerg Roedel; Lewycky, Andrew;
>> >Daenzer, Michel; Goz, Ben; Skidanov, Alexey;
>> >linux-kernel at vger.kernel.org; dri-devel at lists.freedesktop.org;
>> >linux-mm; Sellek, Tom
>> >Subject: Re: [PATCH v2 00/25] AMDKFD kernel driver
>> >
>> >On Wed, Jul 23, 2014 at 8:50 AM, Oded Gabbay <oded.gabbay at amd.com>
>> >wrote:
>> >> On 22/07/14 14:15, Daniel Vetter wrote:
>> >>>
>> >>> On Tue, Jul 22, 2014 at 12:52:43PM +0300, Oded Gabbay wrote:
>> >>>>
>> >>>> On 22/07/14 12:21, Daniel Vetter wrote:
>> >>>>>
>> >>>>> On Tue, Jul 22, 2014 at 10:19 AM, Oded Gabbay
>> >>>>> <oded.gabbay at amd.com> wrote:
>> >>>>>>>
>> >>>>>>> Exactly, just prevent userspace from submitting more. And if
>> >>>>>>> you have misbehaving userspace that submits too much, reset
>> >>>>>>> the gpu and tell it that you're sorry but won't schedule any
>> >>>>>>> more work.
>> >>>>>>
>> >>>>>>
>> >>>>>> I'm not sure how you intend to know if a userspace misbehaves
>> >>>>>> or not.
>> >>>>>> Can
>> >>>>>> you elaborate ?
>> >>>>>
>> >>>>>
>> >>>>> Well that's mostly policy, currently in i915 we only have a
>> >>>>> check for hangs, and if userspace hangs a bit too often then we
>> >>>>> stop it.
>> >>>>> I guess you can do that with the queue unmapping you've described
>> >>>>> in reply to Jerome's mail.
>> >>>>> -Daniel
>> >>>>>
>> >>>> What do you mean by a hang? Like the TDR mechanism in Windows
>> >>>> (checks if a gpu job takes more than 2 seconds, I think, and if
>> >>>> so, terminates the job).
>> >>>
>> >>>
>> >>> Essentially yes. But we also have some hw features to kill jobs
>> >>> quicker, e.g. for media workloads.
>> >>> -Daniel
>> >>>
>> >>
>> >> Yeah, so this is what I'm talking about when I say that you and
>> >> Jerome come from a graphics POV and amdkfd comes from a compute
>> >> POV, no offense intended.
>> >>
>> >> For compute jobs, we simply can't use this logic to terminate jobs.
>> >> Graphics are mostly Real-Time while compute jobs can take from a
>> >> few ms to a few hours!!! And I'm not talking about an entire
>> >> application runtime but on a single submission of jobs by the
>> >> userspace app. We have tests with jobs that take between 20-30
>> >> minutes to complete. In theory, we can even imagine a compute job
>> >> which takes 1 or 2 days (on larger APUs).
>> >>
>> >> Now, I understand the question of how do we prevent the compute job
>> >> from monopolizing the GPU, and internally here we have some ideas
>> >> that we will probably share in the next few days, but my point is
>> >> that I don't think we can terminate a compute job because it is
>> >> running for more than x seconds. It would be like terminating a
>> >> CPU process because it runs for more than x seconds.
>> >>
>> >> I think this is a *very* important discussion (detecting a
>> >> misbehaved compute process) and I would like to continue it, but I
>> >> don't think moving the job submission from userspace control to
>> >> kernel control will solve this core problem.
>> >
>> >Well graphics gets away with cooperative scheduling since usually
>> >people want to see stuff within a few frames, so we can legitimately
>> >kill jobs after a fairly short timeout. Imo if you want to allow
>> >userspace to submit compute jobs that are atomic and take a few
>> >minutes to hours with no break-up in between and no hw means to
>> >preempt then that design is screwed up. We really can't tell the core
>> >vm that "sorry we will hold onto these gobloads of memory you really
>> >need now for another few hours". Pinning memory like that essentially
>> >without a time limit is restricted to root.
>>
>> Hi Daniel;
>>
>> I don't really understand the reference to "gobloads of memory".
>> Unlike radeon graphics, the userspace data for HSA applications is
>> maintained in pageable system memory and accessed via the IOMMUv2
>> (ATC/PRI). The IOMMUv2 driver and mm subsystem take care of faulting
>> in memory pages as needed; nothing is long-term pinned.
>
>Yeah I've lost that part of the equation a bit since I've always thought that
>proper faulting support without preemption is not really possible. I guess
>those platforms completely stall on a fault until the ptes are all set up?

Correct. The GPU thread accessing the faulted page definitely stalls but processing can continue on other GPU threads. 

I don't remember offhand how much of the GPU=>ATC=>IOMMUv2=>system RAM path gets stalled (i.e. whether other HSA apps get blocked), but AFAIK graphics processing (assuming it is not using the ATC path to system memory) is not affected. I will double-check that, though; I haven't asked internally for a couple of years, but I do remember concluding something along the lines of "OK, that'll do" ;)
 
>-Daniel
>--
>Daniel Vetter
>Software Engineer, Intel Corporation
>+41 (0) 79 365 57 48 - http://blog.ffwll.ch

