Subject: Re: [PATCH v4 12/15] iommu/vt-d: Cleanup get_valid_domain_for_dev()
From: Lu Baolu <baolu.lu@linux.intel.com>

Hi Alex,

On 8/3/19 12:54 AM, Alex Williamson wrote:
> On Fri, 2 Aug 2019 15:17:45 +0800
> Lu Baolu <baolu.lu@linux.intel.com> wrote:
>
>> Hi Alex,
>>
>> Thanks for reporting this. I will try to find a machine with a
>> PCIe-to-PCI bridge and get this issue fixed. I will update you
>> later.
>
> Further debug below...
>
>> On 8/2/19 9:30 AM, Alex Williamson wrote:
>>> DMAR: No ATSR found
>>> DMAR: dmar0: Using Queued invalidation
>>> DMAR: dmar1: Using Queued invalidation
>>> pci 0000:00:00.0: DMAR: Software identity mapping
>>> pci 0000:00:01.0: DMAR: Software identity mapping
>>> pci 0000:00:02.0: DMAR: Software identity mapping
>>> pci 0000:00:16.0: DMAR: Software identity mapping
>>> pci 0000:00:1a.0: DMAR: Software identity mapping
>>> pci 0000:00:1b.0: DMAR: Software identity mapping
>>> pci 0000:00:1c.0: DMAR: Software identity mapping
>>> pci 0000:00:1c.5: DMAR: Software identity mapping
>>> pci 0000:00:1c.6: DMAR: Software identity mapping
>>> pci 0000:00:1c.7: DMAR: Software identity mapping
>>> pci 0000:00:1d.0: DMAR: Software identity mapping
>>> pci 0000:00:1f.0: DMAR: Software identity mapping
>>> pci 0000:00:1f.2: DMAR: Software identity mapping
>>> pci 0000:00:1f.3: DMAR: Software identity mapping
>>> pci 0000:01:00.0: DMAR: Software identity mapping
>>> pci 0000:01:00.1: DMAR: Software identity mapping
>>> pci 0000:03:00.0: DMAR: Software identity mapping
>>> pci 0000:04:00.0: DMAR: Software identity mapping
>>> DMAR: Setting RMRR:
>>> pci 0000:00:02.0: DMAR: Setting identity map [0xbf800000 - 0xcf9fffff]
>>> pci 0000:00:1a.0: DMAR: Setting identity map [0xbe8d1000 - 0xbe8dffff]
>>> pci 0000:00:1d.0: DMAR: Setting identity map [0xbe8d1000 - 0xbe8dffff]
>>> DMAR: Prepare 0-16MiB unity mapping for LPC
>>> pci 0000:00:1f.0: DMAR: Setting identity map [0x0 - 0xffffff]
>>> pci 0000:00:00.0: Adding to iommu group 0
>>> pci 0000:00:00.0: Using iommu direct mapping
>>> pci 0000:00:01.0: Adding to iommu group 1
>>> pci 0000:00:01.0: Using iommu direct mapping
>>> pci 0000:00:02.0: Adding to iommu group 2
>>> pci 0000:00:02.0: Using iommu direct mapping
>>> pci 0000:00:16.0: Adding to iommu group 3
>>> pci 0000:00:16.0: Using iommu direct mapping
>>> pci 0000:00:1a.0: Adding to iommu group 4
>>> pci 0000:00:1a.0: Using iommu direct mapping
>>> pci 0000:00:1b.0: Adding to iommu group 5
>>> pci 0000:00:1b.0: Using iommu direct mapping
>>> pci 0000:00:1c.0: Adding to iommu group 6
>>> pci 0000:00:1c.0: Using iommu direct mapping
>>> pci 0000:00:1c.5: Adding to iommu group 7
>>> pci 0000:00:1c.5: Using iommu direct mapping
>>> pci 0000:00:1c.6: Adding to iommu group 8
>>> pci 0000:00:1c.6: Using iommu direct mapping
>>> pci 0000:00:1c.7: Adding to iommu group 9
>
> Note that group 9 contains device 00:1c.7
>
>>> pci 0000:00:1c.7: Using iommu direct mapping
>
> I'm booted with iommu=pt, so the default domain type is IOMMU_DOMAIN_IDENTITY
>
>>> pci 0000:00:1d.0: Adding to iommu group 10
>>> pci 0000:00:1d.0: Using iommu direct mapping
>>> pci 0000:00:1f.0: Adding to iommu group 11
>>> pci 0000:00:1f.0: Using iommu direct mapping
>>> pci 0000:00:1f.2: Adding to iommu group 11
>>> pci 0000:00:1f.3: Adding to iommu group 11
>>> pci 0000:01:00.0: Adding to iommu group 1
>>> pci 0000:01:00.1: Adding to iommu group 1
>>> pci 0000:03:00.0: Adding to iommu group 12
>>> pci 0000:03:00.0: Using iommu direct mapping
>>> pci 0000:04:00.0: Adding to iommu group 13
>>> pci 0000:04:00.0: Using iommu direct mapping
>>> pci 0000:05:00.0: Adding to iommu group 9
>
> 05:00.0 is downstream of 00:1c.7 and in the same group. As above, the
> domain is type IOMMU_DOMAIN_IDENTITY, so we take the following branch:
>
>         } else {
>                 if (device_def_domain_type(dev) == IOMMU_DOMAIN_DMA) {
>
> Default domain type is IOMMU_DOMAIN_DMA because of the code block in
> device_def_domain_type() handling bridges to conventional PCI and
> conventional PCI endpoints.
>
>                         ret = iommu_request_dma_domain_for_dev(dev);
>
> This fails in request_default_domain_for_dev() because there's more
> than one device in the group.
>
>                         if (ret) {
>                                 dmar_domain->flags |= DOMAIN_FLAG_LOSE_CHILDREN;
>                                 if (!get_private_domain_for_dev(dev)) {
>
> With this commit, get_private_domain_for_dev() now returns NULL because
> find_domain() does find a domain, the same one we found before this
> code block.
>
>                                         dev_warn(dev,
>                                                  "Failed to get a private domain.\n");
>                                         return -ENOMEM;
>                                 }
>
> So the key factors are that I'm booting with iommu=pt and I have a
> PCIe-to-PCI bridge grouped with its parent root port. The bridge
> wants an IOMMU_DOMAIN_DMA, but the group domain is already of type
> IOMMU_DOMAIN_IDENTITY. A temporary workaround is to not use
> passthrough mode, but this is a regression versus previous kernels.
> Thanks,
>
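
For context: the conventional-PCI special case mentioned above is this
branch of device_def_domain_type() in drivers/iommu/intel-iommu.c. A
paraphrased sketch of a v5.3-rc era tree, not the verbatim code:

        /*
         * Sketch, paraphrased from device_def_domain_type(); not
         * verbatim. Conventional PCI endpoints behind a bridge, and
         * PCIe-to-PCI bridges themselves, are forced into a DMA
         * domain because requester IDs on conventional PCI are not
         * reliable enough for identity mapping.
         */
        if (!pci_is_pcie(pdev)) {
                if (!pci_is_root_bus(pdev->bus))
                        return IOMMU_DOMAIN_DMA;
                if (pdev->class >> 8 == PCI_CLASS_BRIDGE_ISA)
                        return IOMMU_DOMAIN_DMA;
        } else if (pci_pcie_type(pdev) == PCI_EXP_TYPE_PCI_BRIDGE) {
                return IOMMU_DOMAIN_DMA;
        }

So 05:00.0 (per the analysis above, a PCIe-to-PCI bridge) hits the
PCI_EXP_TYPE_PCI_BRIDGE case and is told it must use a DMA domain,
even though the domain of group 9 is already of type
IOMMU_DOMAIN_IDENTITY.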

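The request then fails because request_default_domain_for_dev() in
drivers/iommu/iommu.c refuses to replace a group's default domain once
the group holds more than one device. Again a sketch of the v5.3-rc
era check, not the exact tree:

        /*
         * Sketch of the relevant check in
         * request_default_domain_for_dev(); not verbatim.
         */
        mutex_lock(&group->mutex);

        /* Don't change mappings of existing devices */
        ret = -EBUSY;
        if (iommu_group_device_count(group) != 1)
                goto out;

Since 00:1c.7 is already in group 9 when 05:00.0 is added, the count
is two, so the bridge can neither keep the identity domain nor get a
DMA default domain, and we fall into the get_private_domain_for_dev()
path that now returns NULL.
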
I can reproduce this issue on a local setup. I will submit a fix and
cc you on it. Please let me know if it doesn't solve the problem.

Best regards,
Baolu
