[PATCH v9 5/8] drm/xe/eustall: Add support to handle dropped EU stall data

Dixit, Ashutosh ashutosh.dixit at intel.com
Fri Feb 14 00:11:52 UTC 2025


On Thu, 13 Feb 2025 15:45:50 -0800, Dixit, Ashutosh wrote:
>
> On Thu, 13 Feb 2025 13:55:06 -0800, Harish Chegondi wrote:
> >
>
> Hi Harish,
>
> > On Wed, Feb 12, 2025 at 10:31:15PM -0800, Dixit, Ashutosh wrote:
> > > On Mon, 10 Feb 2025 05:46:46 -0800, Harish Chegondi wrote:
> > > >
> > >
> > Hi Ashutosh,
> > > Hi Harish,
> > >
> > > > If user space doesn't read the EU stall data fast enough, the
> > > > EU stall data buffer can fill up, and when the hardware wants
> > > > to write more data, it simply drops it for lack of buffer
> > > > space. In that case, the hardware sets a bit in a register.
> > > > If the driver detects a data drop, read() returns -EIO to let
> > > > user space know that the HW has dropped data. The -EIO error
> > > > is returned even if there is EU stall data in the buffer; a
> > > > subsequent read by user space returns the remaining data.
> > > >
> > > > v9:  Move all data drop handling code to this patch
> > >
> > > Good, separating out makes this easier to review. I would actually make
> > > this the last patch, but anyway it's ok as is too.
> > >
> > > >      Clear all drop data bits before returning -EIO.
> > > >
> > > > Signed-off-by: Harish Chegondi <harish.chegondi at intel.com>
> > > > ---
> > > >  drivers/gpu/drm/xe/xe_eu_stall.c | 39 ++++++++++++++++++++++++++++++++
> > > >  1 file changed, 39 insertions(+)
> > > >
> > > > diff --git a/drivers/gpu/drm/xe/xe_eu_stall.c b/drivers/gpu/drm/xe/xe_eu_stall.c
> > > > index 53f17aac7d3b..428267010805 100644
> > > > --- a/drivers/gpu/drm/xe/xe_eu_stall.c
> > > > +++ b/drivers/gpu/drm/xe/xe_eu_stall.c
> > > > @@ -53,6 +53,10 @@ struct xe_eu_stall_data_stream {
> > > >	struct xe_gt *gt;
> > > >	struct xe_bo *bo;
> > > >	struct per_xecore_buf *xecore_buf;
> > > > +	struct {
> > > > +		bool reported_to_user;
> > > > +		xe_dss_mask_t mask;
> > > > +	} data_drop;
> > > >	struct delayed_work buf_poll_work;
> > > >	struct workqueue_struct *buf_poll_wq;
> > > >  };
> > > > @@ -331,12 +335,24 @@ static bool eu_stall_data_buf_poll(struct xe_eu_stall_data_stream *stream)
> > > >			if (num_data_rows(total_data) >= stream->wait_num_reports)
> > > >				min_data_present = true;
> > > >		}
> > > > +		if (write_ptr_reg & XEHPC_EUSTALL_REPORT_OVERFLOW_DROP)
> > > > +			set_bit(xecore, stream->data_drop.mask);
> > > >		xecore_buf->write = write_ptr;
> > > >		mutex_unlock(&xecore_buf->ptr_lock);
> > > >	}
> > > >	return min_data_present;
> > > >  }
> > > >
> > > > +static void clear_dropped_eviction_line_bit(struct xe_gt *gt, u16 group, u16 instance)
> > > > +{
> > > > +	u32 write_ptr_reg;
> > > > +
> > > > +	/* On PVC, the overflow bit has to be cleared by writing 1 to it. */
> > > > +	write_ptr_reg = _MASKED_BIT_ENABLE(XEHPC_EUSTALL_REPORT_OVERFLOW_DROP);
> > > > +
> > > > +	xe_gt_mcr_unicast_write(gt, XEHPC_EUSTALL_REPORT, write_ptr_reg, group, instance);
> > > > +}
> > > > +
> > > >  static int xe_eu_stall_data_buf_read(struct xe_eu_stall_data_stream *stream,
> > > >				     char __user *buf, size_t count,
> > > >				     size_t *total_data_size, struct xe_gt *gt,
> > > > @@ -436,6 +452,22 @@ static ssize_t xe_eu_stall_stream_read_locked(struct xe_eu_stall_data_stream *st
> > > >	unsigned int xecore;
> > > >	int ret = 0;
> > > >
> > > > +	if (bitmap_weight(stream->data_drop.mask, XE_MAX_DSS_FUSE_BITS)) {
> > > > +		if (!stream->data_drop.reported_to_user) {
> > > > +			for_each_dss_steering(xecore, gt, group, instance) {
> > > > +				if (test_bit(xecore, stream->data_drop.mask)) {
> > > > +					clear_dropped_eviction_line_bit(gt, group, instance);
> > > > +					clear_bit(xecore, stream->data_drop.mask);
> > > > +				}
> > > > +			}
> > >
> > > This is not making any sense. How can we call clear_dropped_eviction_line_bit()
> > > before reading the data? The HW will just set it right back.
> >
> > This bit is being cleared here because the driver has acknowledged to
> > the user space all the data that has been dropped so far. Any new data
> > drop after we clear the bit *should* result in another -EIO return.
> > HW will set the bit again *only* if it drops more data. User space is
> > expected to do another read() to clear the buffer ASAP after getting
> > an -EIO from the first read().
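> >
> > For illustration, the expected pattern would be roughly the sketch
> > below (hypothetical consumer code; the fd setup and buffer size are
> > made up and not part of the uapi):
> >
> > #include <errno.h>
> > #include <unistd.h>
> >
> > /* -EIO only signals that the HW dropped data; the buffered data is
> >  * intact, so the right response is to issue another read() ASAP.
> >  */
> > static void drain_stream(int fd, char *buf, size_t len)
> > {
> > 	for (;;) {
> > 		ssize_t n = read(fd, buf, len);
> >
> > 		if (n > 0)
> > 			continue;	/* consume n bytes, keep reading */
> > 		if (n < 0 && errno == EIO)
> > 			continue;	/* drop reported; re-read ASAP */
> > 		break;			/* no more data or a real error */
> > 	}
> > }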
> >
> > >
> > > At least the code in the previous version made some sense, so we should
> > > go back to that and review it. Though it also had issues with how many
> > > times -EIO is returned, etc.
> > Can you please explain what the issue is? Every new data drop after the
> > drop bit is cleared *should* result in another -EIO return. The driver
> > ensures that two successive reads will not return -EIO.
> > >
> > > The other issue is the locking. As I suggested in my comments on Patch 4/8,
> > > I don't think we need a per xecore_buf lock, just one ptr_lock for reading
> > > all the DSS's. And then we can use that here too.
> >
> > I used a per xecore_buf lock so that the polling function can update,
> > for example, subslice 1's write pointer while read() reads the data from
> > subslice 2. That allows more parallelism. With a single coarser-grained
> > lock, polling for new data blocks during a read and vice versa.

Why do we need this extra parallelism? I am fine with reads and polling
being serialized. With per-xecore locks there are cases, such as checking
the data drop mask, where we would either have to take all the DSS locks
or take none (which is what you are advocating). A single lock covering
all DSS's solves this, assuming we don't need that extra parallelism.
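
To sketch what I mean (a rough sketch only; the struct and function
names are illustrative, not the actual driver fields):

#include <linux/mutex.h>

/* One stream-wide lock instead of a ptr_lock per xecore buffer: the
 * poll worker and read() both walk every DSS anyway, and stream-wide
 * state such as the data drop mask can then be checked under one lock.
 */
struct stream_sketch {
	struct mutex buf_lock;	/* protects read/write ptrs of all DSS bufs */
};

static bool poll_all_dss(struct stream_sketch *s)
{
	bool min_data_present = false;

	mutex_lock(&s->buf_lock);
	/* update every DSS's write pointer and drop-bit mask here */
	mutex_unlock(&s->buf_lock);
	return min_data_present;
}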

> > >
> > > But at least let's get something sane for review first.
> > Before I spin another version of the patch series, I would like to
> > list the possible approaches here and the cons of each. Feel free to
> > add any other approaches you would suggest. Feedback from one of the
> > EU stall consumers, that they read data even after an -EIO return,
> > means that any approach that discards data in the buffer is
> > unacceptable.
> >
> > 1. Approach used in version 8:
> >    First read() returns -EIO without clearing any drop bits.
> >    Second read() reads the data and clears the bit for each subslice it
> >    reads.
> >    Cons: If the user space buffer is small and not all subslice data is
> >    read, the subslices whose drop bit was not cleared can cause -EIO in
> >    the next read(). This can be mitigated by a user space buffer large
> >    enough to read data from all the subslices.
> >
> > 2. Approach used in version 9:
> >    First read() returns -EIO after clearing all drop bits.
> >    Second read() reads all the EU stall data that fits into the user
> >    buffer.
> >    Pros: Unlike the version 8 approach, all drop bits get cleared in the
> >    first read() even if the user buffer is small.
> >    Cons: Because the buffers are full, if user space doesn't call another
> >    read() soon to drain the data, there is a higher chance of the HW
> >    dropping data again and setting the bit again.
> >
> > 3. New approach, a hybrid of versions 8 and 9 (see the sketch after
> >    this list):
> >    First read() returns -EIO without clearing any drop bits.
> >    Second read() reads all the EU stall data that fits into the user
> >    buffer, but clears all the drop bits, even for the subslices whose
> >    data it can't read because the user buffer can't accommodate it.
> >    Cons: For the subslices whose data buffers are still full when their
> >    drop bits are cleared, the HW can set the bits again if it drops more
> >    data, unless user space calls another read() fast enough. This too
> >    can be mitigated with a user space buffer large enough to read data
> >    from all the subslices.
> >
> > Please feel free to add any new approaches here that don't discard data
> > in the buffers, and any more cons to the above approaches. Let's agree
> > on an approach for the next patch version.
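> >
> > A rough sketch of approach 3, with made-up helper names
> > (copy_what_fits() stands in for the existing per-DSS read loop):
> >
> > /* Second read(): copy what fits into the user buffer, then clear
> >  * the drop bit of every flagged subslice, including those whose
> >  * data did not fit this time.
> >  */
> > static ssize_t second_read(struct xe_eu_stall_data_stream *stream,
> > 			   struct xe_gt *gt, char __user *buf, size_t count)
> > {
> > 	ssize_t copied = copy_what_fits(stream, buf, count); /* moves read ptrs */
> > 	unsigned int xecore;
> > 	u16 group, instance;
> >
> > 	for_each_dss_steering(xecore, gt, group, instance) {
> > 		if (test_bit(xecore, stream->data_drop.mask)) {
> > 			clear_dropped_eviction_line_bit(gt, group, instance);
> > 			clear_bit(xecore, stream->data_drop.mask);
> > 		}
> > 	}
> > 	return copied;
> > }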
>
> Let's have some ground rules and not do random stuff. The basic ground rule
> for me is that there is no point calling clear_dropped_eviction_line_bit()
> until we have read the data (and moved the read pointer) to make some space
> in the per-DSS buffer. Otherwise the HW will just set that bit right back:
> it writes data faster than userspace reads can keep up.
>
> So the only viable approach I see is the one in your v8. The other
> approaches you are making up are unworkable imo. I am ok if you don't want
> to drop data in the driver; I am not pushing the approach I had suggested.
> But please post something that follows the ground rule mentioned above.
> The only version I see which does that is v8.
>
> Finally, yes, it comes down to this: userspace will either have to use a
> user buffer large enough that the -EIO returns disappear, or deal with the
> -EIO returns (ignore them, etc.), or read data fast enough using a small
> buffer, which currently doesn't work because of the small buffer bug in
> these patches that we are saying we will fix after merging. If they use a
> small buffer and see multiple -EIO returns, that's fine; they just need to
> deal with it. The uapi is not promising anything about the number of -EIO
> returns they will see. Maybe the -EIO returns will never clear up (I don't
> think they will till we fix the small buffer bug in the patches).

My approach would be to start with the code in v8 and then see if we can
do something there so that userspace can read all the data even with a
small buffer, without seeing multiple -EIO returns till it has finished
reading all DSS's, assuming the small buffer bug is not there (or has been
fixed). Something like that. So at the minimum we start with the code
in v8.
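
Concretely, the per-DSS ordering I want to keep from v8 is roughly the
sketch below (copy_dss_data() is a made-up stand-in for the existing
buf read helper):

/* Copy data and advance the read pointer first, and only then clear
 * the overflow bit, so the HW cannot immediately re-set a bit for
 * space we have not yet freed.
 */
static ssize_t read_one_dss(struct xe_eu_stall_data_stream *stream,
			    struct xe_gt *gt, char __user *buf, size_t count,
			    u16 group, u16 instance, unsigned int xecore)
{
	ssize_t n = copy_dss_data(stream, buf, count, xecore); /* moves read ptr */

	if (n > 0 && test_bit(xecore, stream->data_drop.mask)) {
		clear_dropped_eviction_line_bit(gt, group, instance);
		clear_bit(xecore, stream->data_drop.mask);
	}
	return n;
}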

>
> Thanks.
> --
> Ashutosh
>
> >
> > Thank you
> > Harish.
> > > > +			stream->data_drop.reported_to_user = true;
> > > > +			xe_gt_dbg(gt, "EU stall data dropped in XeCores: %*pb\n",
> > > > +				  XE_MAX_DSS_FUSE_BITS, stream->data_drop.mask);
> > > > +			return -EIO;
> > > > +		}
> > > > +		stream->data_drop.reported_to_user = false;
> > > > +	}
> > > > +
> > > >	for_each_dss_steering(xecore, gt, group, instance) {
> > > >		ret = xe_eu_stall_data_buf_read(stream, buf, count, &total_size,
> > > >						gt, group, instance, xecore);
> > > > @@ -457,6 +489,7 @@ static ssize_t xe_eu_stall_stream_read_locked(struct xe_eu_stall_data_stream *st
> > > >   * before calling read().
> > > >   *
> > > >   * Returns: The number of bytes copied or a negative error code on failure.
> > > > + *	    -EIO if HW drops any EU stall data when the buffer is full.
> > > >   */
> > > >  static ssize_t xe_eu_stall_stream_read(struct file *file, char __user *buf,
> > > >				       size_t count, loff_t *ppos)
> > > > @@ -543,6 +576,9 @@ static int xe_eu_stall_stream_enable(struct xe_eu_stall_data_stream *stream)
> > > >
> > > >	for_each_dss_steering(xecore, gt, group, instance) {
> > > >		write_ptr_reg = xe_gt_mcr_unicast_read(gt, XEHPC_EUSTALL_REPORT, group, instance);
> > > > +		/* Clear any drop bits set and not cleared in the previous session. */
> > > > +		if (write_ptr_reg & XEHPC_EUSTALL_REPORT_OVERFLOW_DROP)
> > > > +			clear_dropped_eviction_line_bit(gt, group, instance);
> > > >		write_ptr = REG_FIELD_GET(XEHPC_EUSTALL_REPORT_WRITE_PTR_MASK, write_ptr_reg);
> > > >		read_ptr_reg = REG_FIELD_PREP(XEHPC_EUSTALL_REPORT1_READ_PTR_MASK, write_ptr);
> > > >		read_ptr_reg = _MASKED_FIELD(XEHPC_EUSTALL_REPORT1_READ_PTR_MASK, read_ptr_reg);
> > > > @@ -554,6 +590,9 @@ static int xe_eu_stall_stream_enable(struct xe_eu_stall_data_stream *stream)
> > > >		xecore_buf->write = write_ptr;
> > > >		xecore_buf->read = write_ptr;
> > > >	}
> > > > +	stream->data_drop.reported_to_user = false;
> > > > +	bitmap_zero(stream->data_drop.mask, XE_MAX_DSS_FUSE_BITS);
> > > > +
> > > >	reg_value = _MASKED_FIELD(EUSTALL_MOCS | EUSTALL_SAMPLE_RATE,
> > > >				  REG_FIELD_PREP(EUSTALL_MOCS, gt->mocs.uc_index << 1) |
> > > >				  REG_FIELD_PREP(EUSTALL_SAMPLE_RATE,
> > > > --
> > > > 2.48.1
> > > >

