Subject: Re: [PATCH] nvme-pci: assign separate irq vectors for adminq and ioq0
On Tue, Feb 27, 2018 at 04:46:17PM +0800, Jianchao Wang wrote:
> Currently, adminq and ioq0 share the same irq vector. This is
> unfair for both adminq and ioq0.
> - For adminq, its completion irq has to be bound on cpu0.
> - For ioq0, when the irq fires for io completion, the adminq irq
> action has to be checked also.

This changelog could use some improvement. Why is it bad if the admin
queue's interrupt affinity is with cpu0?

Are you able to measure _any_ performance difference on IO queue 1 vs IO
queue 2 that you can attribute to IO queue 1's sharing vector 0?

> @@ -1945,11 +1947,11 @@ static int nvme_setup_io_queues(struct nvme_dev *dev)
>  	 * setting up the full range we need.
>  	 */
>  	pci_free_irq_vectors(pdev);
> -	nr_io_queues = pci_alloc_irq_vectors(pdev, 1, nr_io_queues,
> -			PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY);
> -	if (nr_io_queues <= 0)
> +	ret = pci_alloc_irq_vectors_affinity(pdev, 1, (nr_io_queues + 1),
> +			PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
> +	if (ret <= 0)
>  		return -EIO;
> -	dev->max_qid = nr_io_queues;
> +	dev->max_qid = ret - 1;

So controllers that have only legacy or single-message MSI don't get any
IO queues?
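To make the concern concrete, here is a rough kernel-style sketch of the
accounting the hunk implies. The irq_affinity initializer is assumed (it is
not in the quoted context), and the helper name is made up for illustration;
this is not the actual patch:

#include <linux/pci.h>
#include <linux/interrupt.h>

/*
 * Rough sketch of the allocation the quoted hunk performs.  The
 * irq_affinity initializer is an assumption: .pre_vectors = 1 keeps
 * vector 0 for the admin queue and out of the managed affinity
 * spreading.
 */
static int alloc_io_queue_vectors(struct pci_dev *pdev,
				  unsigned int nr_io_queues)
{
	struct irq_affinity affd = { .pre_vectors = 1 };
	int ret;

	ret = pci_alloc_irq_vectors_affinity(pdev, 1, nr_io_queues + 1,
			PCI_IRQ_ALL_TYPES | PCI_IRQ_AFFINITY, &affd);
	if (ret <= 0)
		return -EIO;

	/*
	 * Legacy INTx or single-message MSI yields ret == 1, so
	 * ret - 1 == 0 IO queues.  The old code (min_vecs == 1, no
	 * pre_vectors) would instead share that single vector between
	 * the admin queue and IO queue 1, which is the case the
	 * question above is asking about.
	 */
	return ret - 1;		/* becomes dev->max_qid */
}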
