Subject: Re: [PATCH v3 4/6] iommu: Move IOMMU pagesize check to attach_device
On 10/20/21 10:22 PM, Marc Zyngier wrote:
> On Wed, 20 Oct 2021 06:21:44 +0100,
> Lu Baolu <baolu.lu@linux.intel.com> wrote:
>>
>> On 2021/10/20 0:37, Sven Peter via iommu wrote:
>>> The iova allocator is capable of handling any granularity which is a power
>>> of two. Remove the much stronger condition that the granularity must be
>>> smaller or equal to the CPU page size from a BUG_ON there.
>>> Instead, check this condition during __iommu_attach_device and fail
>>> gracefully.
>>>
>>> Signed-off-by: Sven Peter <sven@svenpeter.dev>
>>> ---
>>> drivers/iommu/iommu.c | 35 ++++++++++++++++++++++++++++++++---
>>> drivers/iommu/iova.c  |  7 ++++---
>>> include/linux/iommu.h |  5 +++++
>>> 3 files changed, 41 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
>>> index dd7863e453a5..28896739964b 100644
>>> --- a/drivers/iommu/iommu.c
>>> +++ b/drivers/iommu/iommu.c
>>> @@ -80,6 +80,8 @@ static struct iommu_domain *__iommu_domain_alloc(struct bus_type *bus,
>>>  						 unsigned type);
>>>  static int __iommu_attach_device(struct iommu_domain *domain,
>>>  				 struct device *dev);
>>> +static void __iommu_detach_device(struct iommu_domain *domain,
>>> +				  struct device *dev);
>>>  static int __iommu_attach_group(struct iommu_domain *domain,
>>>  				struct iommu_group *group);
>>>  static void __iommu_detach_group(struct iommu_domain *domain,
>>> @@ -1974,6 +1976,19 @@ void iommu_domain_free(struct iommu_domain *domain)
>>>  }
>>>  EXPORT_SYMBOL_GPL(iommu_domain_free);
>>>
>>> +static int iommu_check_page_size(struct iommu_domain *domain)
>>> +{
>>> +	if (!iommu_is_paging_domain(domain))
>>> +		return 0;
>>> +
>>> +	if (!(domain->pgsize_bitmap & (PAGE_SIZE | (PAGE_SIZE - 1)))) {
>>> +		pr_warn("IOMMU pages cannot exactly represent CPU pages.\n");
>>> +		return -EFAULT;
>>> +	}
>>> +
>>> +	return 0;
>>> +}
>>> +
>>>  static int __iommu_attach_device(struct iommu_domain *domain,
>>>  				 struct device *dev)
>>>  {
>>> @@ -1983,9 +1998,23 @@ static int __iommu_attach_device(struct iommu_domain *domain,
>>>  		return -ENODEV;
>>>
>>>  	ret = domain->ops->attach_dev(domain, dev);
>>> -	if (!ret)
>>> -		trace_attach_device_to_domain(dev);
>>> -	return ret;
>>> +	if (ret)
>>> +		return ret;
>>> +
>>> +	/*
>>> +	 * Check that CPU pages can be represented by the IOVA granularity.
>>> +	 * This has to be done after ops->attach_dev since many IOMMU drivers
>>> +	 * only limit domain->pgsize_bitmap after having attached the first
>>> +	 * device.
>>> +	 */
>>> +	ret = iommu_check_page_size(domain);
>>> +	if (ret) {
>>> +		__iommu_detach_device(domain, dev);
>>> +		return ret;
>>> +	}
>>
>> It looks odd. __iommu_attach_device() attaches an I/O page table for a
>> device. How does it relate to CPU pages? Why is it a failure case if CPU
>> page size is not covered?
>
> If you allocate a CPU PAGE_SIZE'd region and point it at a device
> that can now DMA to more than what you have allocated because the
> IOMMU's own page size is larger, the device now has access to data it
> shouldn't see. In my book, that's a pretty bad thing.

But even if you enforce the CPU page size check here, this problem
still exists unless all DMA buffers are PAGE_SIZE aligned and sized,
right?
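
Concretely (hypothetical addresses, 4 KiB pages on both sides): two
sub-page buffers allocated back to back land in the same page, so an
IOMMU mapping created for the first necessarily exposes the second,
with or without the new check:

	#include <stdio.h>

	int main(void)
	{
		unsigned long page  = 4096;         /* CPU and IOMMU page size */
		unsigned long buf_a = 0x1000;       /* hypothetical 512-byte buffer */
		unsigned long buf_b = 0x1000 + 512; /* its neighbour */

		/* both fall in the same mapped page, so a device given
		 * access to buf_a can also reach buf_b */
		printf("same mapped page: %s\n",
		       buf_a / page == buf_b / page ? "yes" : "no");
		return 0;
	}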

Best regards,
baolu
