[PATCH 00/10] drm/vkms: rework crc worker

Rodrigo Siqueira rodrigosiqueiramelo at gmail.com
Wed Jun 12 13:28:41 UTC 2019


Hi Daniel,

First of all, thank you very much for your patchset.

I made a detailed review of your series; you can find my comments on
each patch. You’ll notice that I asked many questions about the DRM
subsystem, in the hope of learning a little bit more about DRM from
your answers.

Before you go through the review, I would like to start a discussion
about the vkms race conditions. First, I have to admit that I did not
understand the race conditions you described before, because I could
not reproduce them. Now I suspect that I could not trigger the problem
because I'm using QEMU with KVM; with this in mind, I see two
scenarios for running vkms in a virtual machine:

* Scenario 1: The user has hardware virtualization support; in this
case, it is a little bit harder to experience race conditions with
vkms.

* Scenario 2: Without hardware virtualization support, it is much
easier to experience race conditions.

With these two scenarios in mind, I conducted a series of experiments
to try to shed some light on this issue. I did the following:

1. Enabled lockdep

2. Defined two test environments, both using QEMU: one with KVM and
one without. See below the QEMU hardware options:

* env_kvm: -enable-kvm -daemonize -m 1G -smp cores=2,cpus=2
* env_no_kvm: -daemonize -m 2G -smp cores=4,cpus=4
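
For reference, the full invocations could look roughly like this; only
the -enable-kvm/-daemonize/-m/-smp flags come from the setup above, and
the kernel image, root device, and disk image paths are my assumptions:

```shell
# env_kvm: hardware virtualization enabled (paths are placeholders)
qemu-system-x86_64 -enable-kvm -daemonize -m 1G -smp cores=2,cpus=2 \
    -kernel arch/x86/boot/bzImage \
    -append "root=/dev/sda1 console=ttyS0" \
    -hda testvm.img -display none

# env_no_kvm: pure emulation (TCG), same guest image
qemu-system-x86_64 -daemonize -m 2G -smp cores=4,cpus=4 \
    -kernel arch/x86/boot/bzImage \
    -append "root=/dev/sda1 console=ttyS0" \
    -hda testvm.img -display none
```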

3. My test protocol: I) turn on the VM, II) clean /proc/lock_stat,
III) execute kms_cursor_crc, IV) save the lock_stat file, and V) turn
off the VM.

4. From the lock_stat output, I highlighted only the rows related to
vkms and the columns holdtime-total and holdtime-avg.
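
The filtering in step 4 can be scripted; here is a minimal sketch,
assuming a layout where holdtime-total and holdtime-avg are the last
two columns of a class line (the sample values below are invented,
and the column set is simplified for illustration):

```shell
# Create a few lines shaped like simplified lock_stat class entries
# (most contention columns collapsed; all numbers are invented).
cat > sample_lock_stat.txt <<'EOF'
&(&vkms_out->lock)->rlock: 12 3541 0.10 50.81 21983.52 6.21
(wq_completion)vkms_crc_workq: 3 1193 1.11 9570.53 3999507.87 3352.48
&dev->mode_config.mutex: 1 7 0.43 13.12 88.90 12.70
EOF

# Keep only the vkms-related classes and print the class name plus the
# last two fields (holdtime-total and holdtime-avg in this layout).
awk '/vkms/ {print $1, $(NF-1), $NF}' sample_lock_stat.txt
```

On the sample above this prints one line per vkms class, e.g.
"&(&vkms_out->lock)->rlock: 21983.52 6.21".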

I would like to stress that the following data was not collected with
any statistical rigor; it is intended solely to assist our discussion.
See below the summary of the collected data:

Summary of the experiment results:

+----------------+----------------+----------------+
|                |     env_kvm    |   env_no_kvm   |
+                +----------------+----------------+
| Test           | Before | After | Before | After |
+----------------+--------+-------+--------+-------+
| kms_cursor_crc |   S1   |   S2  |   M1   |   M2  |
+----------------+--------+-------+--------+-------+

* Before: before applying this patchset
* After: after applying this patchset

-----------------------------------------+----------------+-------------
S1: Without this patchset and with kvm   | holdtime-total | holdtime-avg
-----------------------------------------+----------------+-------------
&(&vkms_out->lock)->rlock:               |  21983.52      |  6.21
(work_completion)(&vkms_state->crc_wo:   |  20.47         |  20.47
(wq_completion)vkms_crc_workq:           |  3999507.87    |  3352.48
&(&vkms_out->state_lock)->rlock:         |  378.47        |  0.30
(work_completion)(&vkms_state->crc_#2:   |  3999066.30    |  2848.34
-----------------------------------------+----------------+-------------
S2: With this patchset and with kvm      |                |
-----------------------------------------+----------------+-------------
&(&vkms_out->lock)->rlock:               |  23262.83      |  6.34
(work_completion)(&vkms_state->crc_wo:   |  8.98          |  8.98
&(&vkms_out->crc_lock)->rlock:           |  307.28        |  0.32
(wq_completion)vkms_crc_workq:           |  6567727.05    |  12345.35
(work_completion)(&vkms_state->crc_#2:   |  6567135.15    |  4488.81
-----------------------------------------+----------------+-------------
M1: Without this patchset and without kvm|                |
-----------------------------------------+----------------+-------------
&(&vkms_out->state_lock)->rlock:         |  4994.72       |  1.61
&(&vkms_out->lock)->rlock:               |  247190.04     |  39.39
(work_completion)(&vkms_state->crc_wo:   |  31.32         |  31.32
(wq_completion)vkms_crc_workq:           |  20991073.78   |  13525.18
(work_completion)(&vkms_state->crc_#2:   |  20988347.18   |  11904.90
-----------------------------------------+----------------+-------------
M2: With this patchset and without kvm   |                |
-----------------------------------------+----------------+-------------
(wq_completion)vkms_crc_workq:           |  42819161.68   |  36597.57
&(&vkms_out->lock)->rlock:               |  251257.06     |  35.80
(work_completion)(&vkms_state->crc_wo:   |  69.37         |  69.37
&(&vkms_out->crc_lock)->rlock:           |  3620.92       |  1.54
(work_completion)(&vkms_state->crc_#2:   |  42803419.59   |  24306.31

First, I analyzed the scenarios with KVM (S1 and S2); more
specifically, I focused on these two classes:

1. (work_completion)(&vkms_state->crc_wo
2. (work_completion)(&vkms_state->crc_#2

After taking a look at the data, it looks like this patchset greatly
reduces the average hold time for crc_wo. On the other hand, it
increases the average hold time for crc_#2. I didn’t understand the
reason for this difference well. Could you help me here?

When I looked at the second set of scenarios (M1 and M2, both without
KVM), the results diverge much more: basically, this patchset
increased the average hold times. Again, could you help me understand
this issue a little better?

Finally, after these tests, I have some questions:

1. VKMS is a software-only driver; because of this, how about
defining minimal system requirements for using it?

2. During my experiments, I noticed that test runs in a VM with KVM
had consistent results. For example, kms_flip never fails with
QEMU+KVM; however, without KVM, two or three tests failed (one of
them looks random). If we use vkms to test DRM code, should we
recommend the use of KVM?

Best Regards

On Thu, Jun 6, 2019 at 7:28 PM Daniel Vetter <daniel.vetter at ffwll.ch> wrote:
>
> Hi all,
>
> This here is the first part of a rework for the vkms crc worker. I think
> this should fix all the locking/races/use-after-free issues I spotted in
> the code. There's more work we can do in the future as a follow-up:
>
> - directly access vkms_plane_state->base in the crc worker, with this
>   approach in this series here that should be safe now. Follow-up patches
>   could switch and remove all the separate crc_data infrastructure.
>
> - I think some kerneldoc for vkms structures would be nice. Documenting
>   the various functions is probably overkill.
>
> - Implementing a more generic blending engine, as prep for adding more
>   pixel formats, more planes, and more features in general.
>
> Tested with kms_pipe_crc, kms_flip and kms_cursor_crc. Seems to not make
> things worse, but I didn't do a full run.
>
> Cheers, Daniel
>
> Daniel Vetter (10):
>   drm/vkms: Fix crc worker races
>   drm/vkms: Use spin_lock_irq in process context
>   drm/vkms: Rename vkms_output.state_lock to crc_lock
>   drm/vkms: Move format arrays to vkms_plane.c
>   drm/vkms: Add our own commit_tail
>   drm/vkms: flush crc workers earlier in commit flow
>   drm/vkms: Dont flush crc worker when we change crc status
>   drm/vkms: No _irqsave within spin_lock_irq needed
>   drm/vkms: totally reworked crc data tracking
>   drm/vkms: No need for ->pages_lock in crc work anymore
>
>  drivers/gpu/drm/vkms/vkms_crc.c   | 74 +++++++++-------------------
>  drivers/gpu/drm/vkms/vkms_crtc.c  | 80 ++++++++++++++++++++++++++-----
>  drivers/gpu/drm/vkms/vkms_drv.c   | 35 ++++++++++++++
>  drivers/gpu/drm/vkms/vkms_drv.h   | 24 +++++-----
>  drivers/gpu/drm/vkms/vkms_plane.c |  8 ++++
>  5 files changed, 146 insertions(+), 75 deletions(-)
>
> --
> 2.20.1
>
> _______________________________________________
> dri-devel mailing list
> dri-devel at lists.freedesktop.org
> https://lists.freedesktop.org/mailman/listinfo/dri-devel



-- 

Rodrigo Siqueira
https://siqueira.tech

