No subject

Noralf Trønnes noralf at tronnes.org
Mon Mar 5 19:24:50 UTC 2018


On 05.03.2018 18.06, Meghana Madhyastha wrote:
> linux-spi at vger.kernel.org,Noralf Trønnes <noralf at tronnes.org>,Sean Paul <seanpaul at chromium.org>,kernel at martin.sperl.org
> Cc:
> Bcc:
> Subject: Re: [PATCH v2 0/2] Chunk splitting of spi transfers
> Reply-To:
> In-Reply-To: <f6dbf3ca-4c1b-90cc-c4af-8889f7407180 at tronnes.org>
>
> On Sun, Mar 04, 2018 at 06:38:42PM +0100, Noralf Trønnes wrote:
>> On 02.03.2018 12.11, Meghana Madhyastha wrote:
>>> On Sun, Feb 25, 2018 at 02:19:10PM +0100, Lukas Wunner wrote:
>>>> [cc += linux-rpi-kernel at lists.infradead.org]
>>>>
>>>> On Sat, Feb 24, 2018 at 06:15:59PM +0000, Meghana Madhyastha wrote:
>>>>> I've added bcm2835_spi_transfer_one_message in spi-bcm2835. This calls
>>>>> spi_split_transfers_maxsize to split large chunks for SPI DMA transfers.
>>>>> I then removed the chunk splitting in the tinydrm SPI helper (as the
>>>>> core now handles it). However, although the SPI HW should be able to
>>>>> accommodate up to 65535 bytes for DMA transfers, splitting the chunks
>>>>> at 65535 bytes results in a DMA transfer timeout error, whereas
>>>>> splitting them at < 64 bytes seems to work fine.
>>>> Hm, that is really odd. How did you test this exactly, and what did
>>>> you use as the SPI slave?  It contradicts our own experience: we're
>>>> using Micrel KSZ8851 Ethernet chips as SPI slaves on spi0 of a BCM2837
>>>> and can send/receive messages of several hundred bytes via DMA without
>>>> any issues.  In fact, for messages < 96 bytes DMA is not used at all,
>>>> so you've probably been using interrupt mode; see the
>>>> BCM2835_SPI_DMA_MIN_LENGTH macro in spi-bcm2835.c.
>>> Hi Lukas,
>>>
>>> I think you are right. I checked, and it's not using DMA mode, which
>>> is why it's working with 64 bytes.
>>> Noralf, that takes us back to the initial timeout problem. I've tried
>>> doing the message splitting in spi_sync as well as in spi_pump_messages.
>>> Martin had explained that the DMA will wait for the SPI HW to assert
>>> the send_more_data line, but the SPI HW itself stops triggering it when
>>> SPI_LEN is 0, causing the DMA to wait forever. I thought that if we
>>> split the message beforehand, SPI_LEN would not go to zero, thus
>>> preventing this problem; however, it didn't work and started hanging.
>>> So I'm a little uncertain how to proceed and how to debug what exactly
>>> caused the timeout, given that the methods involved are asynchronous.
>> I did a quick test and at least this is working:
>>
>> int tinydrm_spi_transfer(struct spi_device *spi, u32 speed_hz,
>>               struct spi_transfer *header, u8 bpw, const void *buf,
>>               size_t len)
>> {
>>      struct spi_transfer tr = {
>>          .bits_per_word = bpw,
>>          .speed_hz = speed_hz,
>>          .tx_buf = buf,
>>          .len = len,
>>      };
>>      struct spi_message m;
>>      size_t maxsize;
>>      int ret;
>>
>>      maxsize = tinydrm_spi_max_transfer_size(spi, 0);
>>
>>      if (drm_debug & DRM_UT_DRIVER)
>>          pr_debug("[drm:%s] bpw=%u, maxsize=%zu, transfers:\n",
>>               __func__, bpw, maxsize);
>>
>>      spi_message_init(&m);
>>      m.spi = spi;
>>      if (header)
>>          spi_message_add_tail(header, &m);
>>      spi_message_add_tail(&tr, &m);
>>
>>      ret = spi_split_transfers_maxsize(spi->controller, &m, maxsize,
>>                                        GFP_KERNEL);
>>      if (ret)
>>          return ret;
>>
>>      tinydrm_dbg_spi_message(spi, &m);
>>
>>      return spi_sync(spi, &m);
>> }
>> EXPORT_SYMBOL(tinydrm_spi_transfer);
>>
>>
>> Log:
>> [   39.015644] [drm:mipi_dbi_fb_dirty [mipi_dbi]] Flushing [FB:36] x1=0, x2=320, y1=0, y2=240
>>
>> [   39.018079] [drm:mipi_dbi_typec3_command [mipi_dbi]] cmd=2a, par=00 00 01 3f
>> [   39.018129] [drm:tinydrm_spi_transfer] bpw=8, maxsize=65532, transfers:
>> [   39.018152]     tr(0): speed=10MHz, bpw=8, len=1, tx_buf=[2a]
>> [   39.018231] [drm:tinydrm_spi_transfer] bpw=8, maxsize=65532, transfers:
>> [   39.018248]     tr(0): speed=10MHz, bpw=8, len=4, tx_buf=[00 00 01 3f]
>>
>> [   39.018330] [drm:mipi_dbi_typec3_command [mipi_dbi]] cmd=2b, par=00 00 00 ef
>> [   39.018347] [drm:tinydrm_spi_transfer] bpw=8, maxsize=65532, transfers:
>> [   39.018362]     tr(0): speed=10MHz, bpw=8, len=1, tx_buf=[2b]
>> [   39.018396] [drm:tinydrm_spi_transfer] bpw=8, maxsize=65532, transfers:
>> [   39.018428]     tr(0): speed=10MHz, bpw=8, len=4, tx_buf=[00 00 00 ef]
>>
>> [   39.018487] [drm:mipi_dbi_typec3_command [mipi_dbi]] cmd=2c, len=153600
>> [   39.018502] [drm:tinydrm_spi_transfer] bpw=8, maxsize=65532, transfers:
>> [   39.018517]     tr(0): speed=10MHz, bpw=8, len=1, tx_buf=[2c]
>> [   39.018565] [drm:tinydrm_spi_transfer] bpw=8, maxsize=65532, transfers:
>> [   39.018594]     tr(0): speed=48MHz, bpw=8, len=65532, tx_buf=[c6 18 c6 18 c6 18 c6 18 c6 18 c6 18 c6 18 c6 18 ...]
>> [   39.018608]     tr(1): speed=48MHz, bpw=8, len=65532, tx_buf=[06 18 06 18 06 18 06 18 06 18 06 18 06 18 06 18 ...]
>> [   39.018621]     tr(2): speed=48MHz, bpw=8, len=22536, tx_buf=[10 82 10 82 10 82 10 82 10 82 10 82 18 e3 18 e3 ...]
> Hi Noralf,
>
> Yes, this works, but doing the splitting in the SPI subsystem doesn't
> seem to work, so spi_split_transfers_maxsize itself is working fine.
> Should I just send a patch with the splitting done here in tinydrm? (I
> had thought we wanted to avoid splitting in the tinydrm helper.)

Oh, I assumed you hadn't gotten this to work at all.
Yes, I prefer that the splitting happens without the client's knowledge.

Looking at the code, the splitting has to happen before spi_map_msg() is
called. Have you tried doing it in the prepare_message callback?

static void __spi_pump_messages(struct spi_controller *ctlr, bool in_kthread)
{
<...>
     if (ctlr->prepare_message) {
         ret = ctlr->prepare_message(ctlr, ctlr->cur_msg);
<...>
     ret = spi_map_msg(ctlr, ctlr->cur_msg);
<...>
     ret = ctlr->transfer_one_message(ctlr, ctlr->cur_msg);
<...>
}
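
Untested sketch of what that could look like in spi-bcm2835 (65535 is the
DLEN limit you mentioned; whether that exact value is the right maxsize
here is an assumption):

/* Sketch only: split oversized transfers in the controller's
 * prepare_message callback, so it happens before spi_map_msg()
 * maps the buffers for DMA.
 */
static int bcm2835_spi_prepare_message(struct spi_controller *ctlr,
                                       struct spi_message *msg)
{
     /* 65535 bytes is the SPI_LEN/DLEN limit mentioned earlier */
     return spi_split_transfers_maxsize(ctlr, msg, 65535, GFP_KERNEL);
}

and then point ctlr->prepare_message at it in probe before registering
the controller.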

There was something wrong with this email; it was missing a subject and
several recipients.

Noralf.


