[PATCH v2 0/2] Chunk splitting of spi transfers

Meghana Madhyastha meghana.madhyastha at gmail.com
Fri Mar 2 11:11:53 UTC 2018


On Sun, Feb 25, 2018 at 02:19:10PM +0100, Lukas Wunner wrote:
> [cc += linux-rpi-kernel at lists.infradead.org]
> 
> On Sat, Feb 24, 2018 at 06:15:59PM +0000, Meghana Madhyastha wrote:
> > I've added bcm2835_spi_transfer_one_message in spi-bcm2835. This calls
> > spi_split_transfers_maxsize to split large chunks for spi dma transfers. 
> > I then removed chunk splitting in the tinydrm spi helper (as now the core
> > is handling the chunk splitting). However, although the SPI HW should be
> > able to accommodate up to 65535 bytes for dma transfers, the splitting of
> > chunks to 65535 bytes results in a dma transfer timeout error. However,
> > when the chunks are split to < 64 bytes it seems to work fine.
> 
> Hm, that is really odd, how did you test this exactly, what did you
> use as SPI slave?  It contradicts our own experience, we're using
> Micrel KSZ8851 Ethernet chips as SPI slave on spi0 of a BCM2837
> and can send/receive messages via DMA to the tune of several hundred
> bytes without any issues.  In fact, for messages < 96 bytes, DMA is
> not used at all, so you've probably been using interrupt mode,
> see the BCM2835_SPI_DMA_MIN_LENGTH macro in spi-bcm2835.c.

Hi Lukas,

I think you are right. I checked, and it's not using DMA mode, which is
why it works with 64-byte chunks.

Noralf, that brings us back to the initial timeout problem. I've tried
doing the message splitting in spi_sync as well as in spi_pump_messages.
Martin had explained that DMA will wait for the SPI HW to assert the
send_more_data line, but the SPI HW itself stops triggering it once
SPI_LEN reaches 0, causing DMA to wait forever. I thought that if we
split the transfer beforehand, SPI_LEN would never reach 0, which should
prevent this problem; however, it didn't work and the transfer started
hanging. So I'm a little uncertain how to proceed, and how to debug what
exactly caused the timeout, given the asynchronous code paths involved.

Thanks and regards,
Meghana

> Thanks,
> 
> Lukas
