[Intel-gfx] i915 and swiotlb_max_segment

Robin Murphy robin.murphy at arm.com
Thu Jun 3 10:54:19 UTC 2021


On 2021-06-03 10:17, Tvrtko Ursulin wrote:
> 
> Hi,
> 
> On 03/06/2021 09:40, Joonas Lahtinen wrote:
>> + Tvrtko to take a look
>>
>> Quoting Konrad Rzeszutek Wilk (2021-05-20 18:12:58)
>>> On Mon, May 10, 2021 at 05:25:25PM +0200, Christoph Hellwig wrote:
>>>> Hi all,
>>>>
>>>> swiotlb_max_segment is a rather strange "API" exported by swiotlb.c,
>>>> and i915 is the only (remaining) user.
>>>>
>>>> swiotlb_max_segment returns 0 if swiotlb is not in use, 1 if
>>>> SWIOTLB_FORCE is set or swiotlb-xen is in use, and the swiotlb segment
>>>> size when swiotlb is otherwise enabled.
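
For reference, the swiotlb side of this looks roughly as follows at the
time of writing (my paraphrase of kernel/dma/swiotlb.c, so details may
differ between trees):

unsigned int swiotlb_max_segment(void)
{
        /* 0 when no bounce buffer has been set up at all */
        return io_tlb_default_mem ? max_segment : 0;
}

void swiotlb_set_max_segment(unsigned int val)
{
        if (swiotlb_force == SWIOTLB_FORCE)
                max_segment = 1;        /* callers like i915 round this up to a page */
        else
                max_segment = rounddown(val, PAGE_SIZE);
}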
>>>>
>>>> i915 then uses it to:
>>>>
>>>>   a) decide on the max order in i915_gem_object_get_pages_internal
>>>>   b) decide on a max segment size in i915_sg_segment_size
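
The consumer for b) on the i915 side is roughly the following (my
paraphrase of i915_sg_segment_size() in
drivers/gpu/drm/i915/i915_scatterlist.h, not verbatim):

static inline unsigned int i915_sg_segment_size(void)
{
        unsigned int size = swiotlb_max_segment();

        if (size == 0)
                size = UINT_MAX;        /* swiotlb not in use: no limit */

        size = rounddown(size, PAGE_SIZE);
        /* swiotlb_max_segment() can return 1 when it really means one page */
        if (size < PAGE_SIZE)
                size = PAGE_SIZE;

        return size;
}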
>>>>
>>>> for a) it really seems i915 should switch to dma_alloc_noncoherent
>>>> or dma_alloc_noncontiguous ASAP instead of using alloc_page and
>>>> streaming DMA mappings.  Any chance I could trick one of the i915
>>>> maintainers into doing just that, given that the callchain is not
>>>> exactly trivial?
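
For a), the direction being suggested would presumably look something
like the sketch below - hypothetical code, not an actual patch, with
made-up function and parameter names:

#include <linux/dma-mapping.h>
#include <linux/gfp.h>

/*
 * Hypothetical sketch: let the DMA API hand back pages that are already
 * DMA mapped, instead of alloc_page() plus a streaming dma_map_sg().
 */
static struct sg_table *alloc_internal_pages(struct device *dev, size_t size)
{
        return dma_alloc_noncontiguous(dev, size, DMA_BIDIRECTIONAL,
                                       GFP_KERNEL, 0);
}

static void free_internal_pages(struct device *dev, size_t size,
                                struct sg_table *sgt)
{
        dma_free_noncontiguous(dev, size, sgt, DMA_BIDIRECTIONAL);
}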
>>>>
>>>> For b) I'm not sure swiotlb and i915 really agree on the meaning
>>>> of the value.  swiotlb_set_max_segment basically sets it to the entire
>>>> size of the swiotlb buffer, while i915 seems to use it to limit the
>>>> size of each scatterlist entry.  It seems like dma_max_mapping_size
>>>> might be the best value to use here.
>>>
>>> Yes. The background behind that was that SWIOTLB would fail because,
>>> well, the size of the sg was too large. And some way to limit it to a
>>> max size was needed - the dma_max_mapping_size "should" be just fine.
> 
> Can't say I am 100% at home here, but what I remember is that the 
> limiting factor was the maximum size of an sg segment and not the total 
> size of the mapping.
> 
> Looking at the code today, if we replaced the usage of 
> swiotlb_max_segment() with dma_max_mapping_size(), I don't see how that 
> would work when we call dma_map_sg_attrs().
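
To make the comparison concrete, the swap being questioned would amount
to something like this hypothetical helper, mirroring what
i915_sg_segment_size() does today (the name is made up):

static inline unsigned int sg_segment_size_from_dma(struct device *dev)
{
        /* SIZE_MAX when the DMA layer imposes no limit */
        size_t max = dma_max_mapping_size(dev);

        max = min_t(size_t, max, UINT_MAX);
        return round_down(max, PAGE_SIZE);
}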
> 
> Because AFAICT the code can end up in dma_direct_max_mapping_size() (not 
> sure when the ops->map_sg path is active and where to trace that) where 
> we have:
> 
> size_t dma_direct_max_mapping_size(struct device *dev)
> {
>      /* If SWIOTLB is active, use its maximum mapping size */
>      if (is_swiotlb_active() &&
>          (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
>          return swiotlb_max_mapping_size(dev);
>      return SIZE_MAX;
> }
> 
> So for all swiotlb cases, including force, we get:
> 
> size_t swiotlb_max_mapping_size(struct device *dev)
> {
>      return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE;
> }
> 
> Which is fixed and doesn't align with swiotlb_max_segment(). But you 
> guys are the experts here so please feel free to correct me.
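
(For concreteness: with IO_TLB_SIZE at 2 KiB and IO_TLB_SEGSIZE at 128, 
that works out to a fixed 256 KiB, if I have the constants right.)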

But swiotlb_max_segment is also effectively fixed for a given system 
configuration, at either a page (under certain circumstances), or a 
value considerably larger than what the longest mappable SG segment 
actually is. Neither seems particularly useful, and to be honest I 
suspect the forced-bounce cases only set it to a page as a sledgehammer 
to make things work *because* the "normal" value is nonsense.

Robin.

