[Intel-gfx] i915 "GPU HANG", bisected to a2daa27c0c61 "swiotlb: simplify swiotlb_max_segment"

Juergen Gross jgross at suse.com
Tue Oct 18 14:53:50 UTC 2022


On 18.10.22 16:33, Christoph Hellwig wrote:
> On Tue, Oct 18, 2022 at 04:21:43PM +0200, Jan Beulich wrote:
>> Leaving the "i915 abuses" part aside (because I can't tell what exactly the
>> abuse is), and assuming that "can't cope with bounce buffering" means they
>> don't actually use the allocated buffers, I'd suggest this:
> 
> Except for one odd place, i915 never uses dma_alloc_* but always allocates
> memory itself and then maps it, yet treats the result as if it were a
> dma_alloc_coherent allocation, that is, it never does ownership changes
> (sketched below).
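
To make the distinction concrete, a minimal sketch (the device, sizes and
function name are made up, this is not i915 code): a coherent allocation
needs no ownership changes, while a streaming mapping of self-allocated
memory does.

    #include <linux/dma-mapping.h>
    #include <linux/gfp.h>

    static int sketch(struct device *dev)
    {
            void *cpu_addr;
            dma_addr_t dma_handle;
            struct page *page;

            /* Coherent DMA: CPU and device may both touch the buffer
             * at any time, no dma_sync_* calls needed. */
            cpu_addr = dma_alloc_coherent(dev, PAGE_SIZE, &dma_handle,
                                          GFP_KERNEL);
            if (!cpu_addr)
                    return -ENOMEM;

            /* Streaming DMA: driver-allocated memory that is merely
             * mapped.  Ownership must be passed back and forth with
             * dma_sync_* around each device access; skipping that only
             * happens to work when no bounce buffer sits in between. */
            page = alloc_page(GFP_KERNEL);
            if (!page)
                    return -ENOMEM;     /* sketch only, leaks cpu_addr */
            dma_handle = dma_map_page(dev, page, 0, PAGE_SIZE,
                                      DMA_BIDIRECTIONAL);
            if (dma_mapping_error(dev, dma_handle))
                    return -EIO;

            return 0;
    }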
> 
>> I've dropped the TDX related remark because I don't think it's meaningful
>> for PV guests.
> 
> This remark applies to TDX in general, it is not Xen related.  With TDX and
> other confidential computing schemes, all DMA must be bounce buffered, and
> all drivers skipping dma_sync* calls are broken (see the sketch below).
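
For reference, this is the ownership dance such drivers are skipping,
sketched with a hypothetical buffer and helper; under swiotlb these sync
calls are exactly where data is copied to and from the bounce buffer:

    #include <linux/dma-mapping.h>
    #include <linux/string.h>

    static int send_twice(struct device *dev, void *buf, void *a,
                          void *b, size_t len)
    {
            dma_addr_t dma_addr;

            memcpy(buf, a, len);        /* CPU owns the buffer */

            /* Mapping hands ownership to the device (and copies into
             * the bounce buffer when swiotlb is in use). */
            dma_addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
            if (dma_mapping_error(dev, dma_addr))
                    return -EIO;

            /* ... device reads from dma_addr ... */

            /* The CPU may only touch the buffer again after taking
             * ownership back, and must hand it over again afterwards. */
            dma_sync_single_for_cpu(dev, dma_addr, len, DMA_TO_DEVICE);
            memcpy(buf, b, len);
            dma_sync_single_for_device(dev, dma_addr, len, DMA_TO_DEVICE);

            /* ... device reads again ... */

            dma_unmap_single(dev, dma_addr, len, DMA_TO_DEVICE);
            return 0;
    }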
> 
>> Otoh I've left the "abuses ignores" word sequence as is, even though
>> it reads oddly to me. Plus, as hinted at before, I'm not convinced the
>> IS_ENABLED() use is actually necessary or warranted here.
> 
> If the IS_ENABLED() is not needed I'm all for dropping it.
> But unless I misread the code, on arm/arm64 even PV guests are 1:1
> mapped, so that all Linux physically contiguous memory is also Xen
> contiguous, and we don't need the hack.

There are no PV guests on arm/arm64.
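
FWIW, since PV is x86-only, the guard being discussed could be compiled
away everywhere else anyway.  A rough, hypothetical sketch of its shape
(not the actual patch, and the fallback value is made up):

    #include <linux/dma-mapping.h>
    #include <linux/kconfig.h>
    #include <linux/limits.h>
    #include <xen/xen.h>

    static size_t max_segment_size(struct device *dev)
    {
            /* Only Xen PV guests can have Linux-contiguous memory that
             * is not machine-contiguous; on everything else this test
             * folds away at compile time. */
            if (IS_ENABLED(CONFIG_XEN_PV) && xen_pv_domain())
                    return dma_max_mapping_size(dev);
            return UINT_MAX;    /* hypothetical driver default */
    }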


Juergen