Subject: Re: [PATCH v1 2/2] virtio-mmio: add features for virtio-mmio specification version 3
From: Jason Wang <jasowang@redhat.com>
Date: 2019-12-26

On 2019/12/25 11:20 PM, Liu, Jiang wrote:
>
>> On Dec 25, 2019, at 6:20 PM, Jason Wang <jasowang@redhat.com> wrote:
>>
>>
>> On 2019/12/25 10:50 AM, Zha Bin wrote:
>>> From: Liu Jiang <gerry@linux.alibaba.com>
>>>
>>> Userspace VMMs (e.g. Qemu microvm, Firecracker) use virtio over MMIO
>>> devices as a lightweight machine model for the modern cloud. The
>>> standard virtio over MMIO transport layer only supports one legacy
>>> interrupt, which is much heavier than the virtio over PCI transport
>>> layer using MSI. The legacy interrupt has a long processing path and
>>> causes extra VM exits in the following cases, which considerably slow
>>> down performance:
>>>
>>> 1) read interrupt status register
>>> 2) update interrupt status register
>>> 3) write IOAPIC EOI register
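
A minimal sketch of the legacy interrupt path being described, based on
the standard register layout in include/uapi/linux/virtio_mmio.h (the
handler shape is illustrative, not the exact driver code):

/* Every legacy interrupt needs an MMIO read of the status register and
 * an MMIO write to acknowledge it; in a VMM both accesses trap, which
 * is what causes the VM exits in 1) and 2).  Step 3) is the IOAPIC EOI
 * write performed by the guest's interrupt controller code. */
static irqreturn_t vm_interrupt(int irq, void *opaque)
{
	struct virtio_mmio_device *vm_dev = opaque;
	unsigned long status;

	/* 1) read interrupt status register -> trap into the VMM */
	status = readl(vm_dev->base + VIRTIO_MMIO_INTERRUPT_STATUS);
	/* 2) update (acknowledge) interrupt status register -> trap */
	writel(status, vm_dev->base + VIRTIO_MMIO_INTERRUPT_ACK);

	/* ... dispatch to config-change / vring handlers ... */
	return IRQ_HANDLED;
}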
>>>
>>> We proposed updating virtio over MMIO to version 3 [1] to add the
>>> following new features and enhance performance:
>>>
>>> 1) Support Message Signaled Interrupts (MSI), which improve interrupt
>>> performance for virtio multi-queue devices.
>>> 2) Support per-queue doorbells, so the guest kernel may write directly
>>> to the doorbells provided by virtio devices.
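
A sketch of what the per-queue doorbell changes on the guest side: the
vanilla transport funnels every queue's kick through the single shared
VIRTIO_MMIO_QUEUE_NOTIFY register, while the proposal lets each queue
write its own doorbell address (vq->priv carrying that address is an
assumption here, mirroring what virtio-pci does with its notify area):

/* Vanilla virtio-mmio: all queues share one notify register. */
static bool vm_notify_shared(struct virtqueue *vq)
{
	struct virtio_mmio_device *vm_dev = to_virtio_mmio_device(vq->vdev);

	writel(vq->index, vm_dev->base + VIRTIO_MMIO_QUEUE_NOTIFY);
	return true;
}

/* Per-queue doorbell: each queue kicks its own address, so the device
 * (or VMM) can tell queues apart without decoding the written value,
 * and a doorbell page can even be mapped for exit-less notification. */
static bool vm_notify_doorbell(struct virtqueue *vq)
{
	writel(vq->index, (void __iomem *)vq->priv);
	return true;
}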
>>>
>>> The following is the netperf TCP_RR performance report, tested with a
>>> virtio-pci device, a vanilla virtio-mmio device, and a patched
>>> virtio-mmio device (the test was run 3 times for each case):
>>>
>>> netperf -t TCP_RR -H 192.168.1.36 -l 30 -- -r 32,1024
>>>
>>>              Virtio-PCI   Virtio-MMIO   Virtio-MMIO(MSI)
>>> trans/s      9536         6939          9500
>>> trans/s      9734         7029          9749
>>> trans/s      9894         7095          9318
>>>
>>> [1] https://lkml.org/lkml/2019/12/20/113
>>
>> Thanks for the patch. Two questions after a quick glance:
>>
>> 1) In PCI we chose to support MSI-X instead of MSI for the extra
>> flexibility, like aliasing and independent data and address (e.g. for
>> affinity). Any reason for not starting from MSI-X, e.g. having an
>> MSI-X table and PBA (both of which look pretty independent)?
> Hi Jason,
> Thanks for reviewing patches on Christmas Day:)
> PCI MSI-X has several advantages over PCI MSI, mainly:
> 1) support for 2048 vectors, far more than the 32 vectors supported by MSI;
> 2) a dedicated address/data pair for each vector;
> 3) per-vector mask and pending bits.
> The proposed MMIO MSI extension supports both 1) and 2),


Aha right, I misread the patch. But more questions come:

1) The association between vq and MSI-X vector is fixed. This means it
can't work for a device that has more than 2047 queues. We probably
need something similar to virtio-pci to allow a dynamic association
(see the sketch after this list).
2) The mask and unmask control is missing.
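
For reference, a minimal sketch of the virtio-pci style handshake being
suggested, modeled on the queue_msix_vector field of the virtio 1.1
common configuration; the VIRTIO_MMIO_QUEUE_MSI_VECTOR offset is a
hypothetical placeholder, not something from the patch:

#define VIRTIO_MSI_NO_VECTOR	0xffff

/* Ask the device to route this queue's interrupt to 'vector'.  The
 * device may decline (e.g. it has run out of vectors), so the driver
 * reads the value back to confirm the association took effect. */
static int vm_assign_queue_vector(struct virtio_mmio_device *vm_dev,
				  u16 queue, u16 vector)
{
	writel(queue, vm_dev->base + VIRTIO_MMIO_QUEUE_SEL);
	writew(vector, vm_dev->base + VIRTIO_MMIO_QUEUE_MSI_VECTOR);
	if (readw(vm_dev->base + VIRTIO_MMIO_QUEUE_MSI_VECTOR) ==
	    VIRTIO_MSI_NO_VECTOR)
		return -EBUSY;
	return 0;
}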


> but the extension doesn't support 3), because we noticed that the
> Linux virtio subsystem doesn't really make use of interrupt
> masking/unmasking.


Not used directly, but masking/unmasking is widely used in the irq
subsystem, which allows lots of optimizations (see the sketch below).
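
For example, the genirq core uses an irq_chip's mask/unmask callbacks
for lazy disabling, oneshot masking of threaded handlers, and safe
affinity changes. A sketch of the hooks a per-vector implementation
could provide, assuming hypothetical per-vector mask/unmask registers
in the MMIO MSI extension:

/* VIRTIO_MMIO_MSI_VEC_MASK/UNMASK are assumed write-1 bitmaps, one bit
 * per MSI vector; they are not part of the posted patch. */
static void vm_msi_mask(struct irq_data *d)
{
	struct virtio_mmio_device *vm_dev = irq_data_get_irq_chip_data(d);

	writel(BIT(d->hwirq % 32),
	       vm_dev->base + VIRTIO_MMIO_MSI_VEC_MASK + 4 * (d->hwirq / 32));
}

static void vm_msi_unmask(struct irq_data *d)
{
	struct virtio_mmio_device *vm_dev = irq_data_get_irq_chip_data(d);

	writel(BIT(d->hwirq % 32),
	       vm_dev->base + VIRTIO_MMIO_MSI_VEC_UNMASK + 4 * (d->hwirq / 32));
}

static struct irq_chip vm_msi_chip = {
	.name		= "virtio-mmio-msi",
	.irq_mask	= vm_msi_mask,
	.irq_unmask	= vm_msi_unmask,
};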


>
> On the other hand, we want to keep VMM implementations as simple as
> possible, and mimicking the PCI MSI-X table would add some complexity
> to them.


I agree with simplifying the VMM implementation, but it looks to me that
introducing masking/pending won't cost too much code in the VMM: just a
new type of command for VIRTIO_MMIO_MSI_COMMAND.
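
A sketch of how little that could be; the opcode values and the command
register encoding below are purely hypothetical placeholders, not taken
from the posted patch:

/* Hypothetical extra opcodes for the proposed MSI command register; the
 * VMM would handle each with a one-bit update of its per-vector state. */
#define VIRTIO_MMIO_MSI_CMD_MASK	0x4
#define VIRTIO_MMIO_MSI_CMD_UNMASK	0x5

static void vm_msi_cmd(struct virtio_mmio_device *vm_dev,
		       u16 cmd, u16 vector)
{
	/* Assumed encoding: opcode in the high half, vector index in the
	 * low half, written to the MSI command register. */
	writel((u32)cmd << 16 | vector,
	       vm_dev->base + VIRTIO_MMIO_MSI_COMMAND);
}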

Thanks


>
>> 2) It's better to split notify_multiplexer out of MSI support to ease
>> review (this applies to the spec patch as well).
> Great suggestion, we will try to split the patch.
>
> Thanks,
> Gerry
>
>> Thanks
