[Freedreno] [PATCH 2/2] drm/msm/adreno: Add A6XX device support
Lucas Stach
l.stach at pengutronix.de
Thu Feb 1 10:32:25 UTC 2018
Hi Jordan,
just some drive-by comments:
On Wednesday, 31.01.2018 at 11:25 -0700, Jordan Crouse wrote:
> Add support for the A6XX family of Adreno GPUs. The biggest addition
> is the GMU (Graphics Management Unit) which takes over most of the
> power management of the GPU itself but in an ironic twist of fate
> needs a goodly amount of management itself. Add support for the
> A6XX core code, the GMU and the HFI (hardware firmware interface)
> queue that the CPU uses to communicate with the GMU.
>
> Signed-off-by: Jordan Crouse <jcrouse at codeaurora.org>
> ---
[...]
> +static int a6xx_hfi_queue_read(struct a6xx_hfi_queue *queue, u32 *data,
> +		u32 dwords)
> +{
> +	struct a6xx_hfi_queue_header *header = queue->header;
> +	u32 i, hdr, index = header->read_index;
> +
> +	if (header->read_index == header->write_index) {
> +		header->rx_request = 1;
> +		return 0;
> +	}
> +
> +	hdr = queue->data[index];
> +
> +	/*
> +	 * If we are to assume that the GMU firmware is in fact a rational actor
> +	 * and is programmed to not send us a larger response than we expect
> +	 * then we can also assume that if the header size is unexpectedly large
> +	 * that it is due to memory corruption and/or hardware failure. In this
> +	 * case the only reasonable course of action is to BUG() to help harden
> +	 * the failure.
> +	 */
> +
> +	BUG_ON(HFI_HEADER_SIZE(hdr) > dwords);
Don't ever BUG() the kernel because a peripheral is acting up, unless
you are really certain that it has corrupted memory it doesn't own and
that memory would be written back to persistent storage. That's the
only valid justification for a BUG().
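If you want to defend against a bogus header size here, returning an
error and letting the caller decide how to recover keeps the machine
alive. A rough, untested sketch reusing the names from your patch
(resetting the read pointer is just one possible recovery policy):

	if (HFI_HEADER_SIZE(hdr) > dwords) {
		/*
		 * The GMU claims the message is bigger than the caller's
		 * buffer - treat the queue as corrupted, discard whatever
		 * is pending and report an error instead of killing the
		 * whole machine.
		 */
		header->read_index = header->write_index;
		return -EINVAL;
	}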
> +
> +	for (i = 0; i < HFI_HEADER_SIZE(hdr); i++) {
> +		data[i] = queue->data[index];
> +		index = (index + 1) % header->size;
> +	}
> +
> +	header->read_index = index;
> +	return HFI_HEADER_SIZE(hdr);
> +}
> +
> +static int a6xx_hfi_queue_write(struct a6xx_gmu *gmu,
> +	struct a6xx_hfi_queue *queue, u32 *data, u32 dwords)
> +{
> +	struct a6xx_hfi_queue_header *header = queue->header;
> +	u32 i, space, index = header->write_index;
> +
> +	spin_lock(&queue->lock);
> +
> +	space = CIRC_SPACE(header->write_index, header->read_index,
> +		header->size);
> +	if (space < dwords) {
> +		header->dropped++;
> +		spin_unlock(&queue->lock);
> +		return -ENOSPC;
> +	}
> +
> +	for (i = 0; i < dwords; i++) {
> +		queue->data[index] = data[i];
> +		index = (index + 1) % header->size;
> +	}
> +
> +	header->write_index = index;
> +	spin_unlock(&queue->lock);
> +
> +	/* Make sure the command is written to the queue */
> +	wmb();
The above memory barrier is unnecessary: gmu_write() below is a wrapper
around writel(), which already includes the write barrier, so the
explicit wmb() can simply be dropped (see the sketch after this hunk).
> +	gmu_write(gmu, REG_A6XX_GMU_HOST2GMU_INTR_SET, 0x01);
> +	return 0;
> +}
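That is, assuming gmu_write() really does boil down to writel(), the
tail of a6xx_hfi_queue_write() could be reduced to:

	header->write_index = index;
	spin_unlock(&queue->lock);

	/*
	 * The writel() inside gmu_write() already orders the queue memory
	 * stores before the doorbell MMIO write, so no explicit wmb() is
	 * needed.
	 */
	gmu_write(gmu, REG_A6XX_GMU_HOST2GMU_INTR_SET, 0x01);
	return 0;
}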
[...]
> +static int a6xx_hfi_send_msg(struct a6xx_gmu *gmu, int id,
> +	void *data, u32 size, u32 *payload, u32 payload_size)
> +{
> +	struct a6xx_hfi_queue *queue = &gmu->queues[HFI_COMMAND_QUEUE];
> +	struct a6xx_hfi_response resp = { 0 };
> +	int ret, dwords = size >> 2;
> +	u32 seqnum;
> +
> +	seqnum = atomic_inc_return(&queue->seqnum) % 0xfff;
> +
> +	/* First dword of the message is the message header - fill it in */
> +	*((u32 *) data) = (seqnum << 20) | (HFI_MSG_CMD << 16) |
> +		(dwords << 8) | id;
> +
> +	init_completion(&resp.complete);
> +	resp.id = id;
> +	resp.seqnum = seqnum;
> +
> +	spin_lock_bh(&hfi_ack_lock);
> +	list_add_tail(&resp.node, &hfi_ack_list);
> +	spin_unlock_bh(&hfi_ack_lock);
What are you protecting against here by using the _bh spinlock
variants?
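If nothing on the ack side runs in softirq context (I haven't checked
where the GMU interrupt is handled), the plain variants are enough,
e.g.:

	spin_lock(&hfi_ack_lock);
	list_add_tail(&resp.node, &hfi_ack_list);
	spin_unlock(&hfi_ack_lock);

and if the responses are retired from a hard IRQ handler you would need
the _irqsave variants rather than _bh anyway.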
> +	ret = a6xx_hfi_queue_write(gmu, queue, data, dwords);
> +	if (ret) {
> +		dev_err(gmu->dev, "Unable to send message %s id %d\n",
> +			a6xx_hfi_msg_id[id], seqnum);
> +		goto out;
> +	}
> +
> +	/* Wait up to 5 seconds for the response */
> +	ret = wait_for_completion_timeout(&resp.complete,
> +		msecs_to_jiffies(5000));
> +	if (!ret) {
> +		dev_err(gmu->dev,
> +			"Message %s id %d timed out waiting for response\n",
> +			a6xx_hfi_msg_id[id], seqnum);
> +		ret = -ETIMEDOUT;
> +	} else
> +		ret = 0;
> +
> +out:
> +	spin_lock_bh(&hfi_ack_lock);
> +	list_del(&resp.node);
> +	spin_unlock_bh(&hfi_ack_lock);
> +
> +	if (ret)
> +		return ret;
> +
> +	if (resp.error) {
> +		dev_err(gmu->dev, "Message %s id %d returned error %d\n",
> +			a6xx_hfi_msg_id[id], seqnum, resp.error);
> +		return -EINVAL;
> +	}
> +
> +	if (payload && payload_size) {
> +		int copy = min_t(u32, payload_size, sizeof(resp.payload));
> +
> +		memcpy(payload, resp.payload, copy);
> +	}
> +
> +	return 0;
> +}
Regards,
Lucas