Subject: [PATCH RFC 02/10] mm: Handle pmd entries in follow_pfn()
When follow_pfn() hits a pmd_huge() entry it won't return a valid PFN
given its use of follow_pte(). Fix that up by calling follow_pte_pmd()
with a @pmdpp, thus allowing follow_pfn() to get the PMD pointer. If we
encounter such a huge page, calculate the PFN offset into the PMD
accordingly.

This allows KVM to handle 2M hugepage pfns on VM_PFNMAP vmas.
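
As a side note (not part of the patch): the offset arithmetic can be
illustrated in isolation. The snippet below is a hypothetical user-space
rendering of the same computation; the PAGE_SHIFT/PMD_SHIFT values assume
x86-64 with 4K base pages and 2M PMDs.

/*
 * Hypothetical user-space illustration of the PFN offset math; these
 * PAGE_SHIFT/PMD_SHIFT values assume x86-64 with 4K pages and 2M PMDs.
 */
#include <stdio.h>

#define PAGE_SHIFT	12
#define PMD_SHIFT	21
#define PMD_MASK	(~((1UL << PMD_SHIFT) - 1))

static unsigned long pfn_in_pmd(unsigned long head_pfn, unsigned long address)
{
	/* head PFN of the 2M page, plus the 4K-page offset within it */
	return head_pfn + ((address & ~PMD_MASK) >> PAGE_SHIFT);
}

int main(void)
{
	/* e.g. a 2M mapping whose head page is PFN 0x100000 */
	printf("pfn = 0x%lx\n", pfn_in_pmd(0x100000UL, 0x7f0000035000UL));
	return 0;
}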

Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
---
mm/memory.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index cfc3668bddeb..db99684d2cb3 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4366,6 +4366,7 @@ EXPORT_SYMBOL(follow_pte_pmd);
 int follow_pfn(struct vm_area_struct *vma, unsigned long address,
 	unsigned long *pfn)
 {
+	pmd_t *pmdpp = NULL;
 	int ret = -EINVAL;
 	spinlock_t *ptl;
 	pte_t *ptep;
@@ -4373,10 +4374,14 @@ int follow_pfn(struct vm_area_struct *vma, unsigned long address,
 	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
 		return ret;
 
-	ret = follow_pte(vma->vm_mm, address, &ptep, &ptl);
+	ret = follow_pte_pmd(vma->vm_mm, address, NULL,
+			     &ptep, &pmdpp, &ptl);
 	if (ret)
 		return ret;
-	*pfn = pte_pfn(*ptep);
+	if (pmdpp)
+		*pfn = pmd_pfn(*pmdpp) + ((address & ~PMD_MASK) >> PAGE_SHIFT);
+	else
+		*pfn = pte_pfn(*ptep);
 	pte_unmap_unlock(ptep, ptl);
 	return 0;
 }
--
2.17.1
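
For illustration only, a hypothetical caller sketch (not part of this
series): with the change above, a PFN lookup on a VM_PFNMAP vma resolves
to the correct 4K PFN even when the mapping is backed by a 2M PMD. The
helper name below is made up, locking uses the mmap_sem API as it exists
in this kernel version, and error handling is kept minimal.

#include <linux/mm.h>
#include <linux/rwsem.h>

static int lookup_remapped_pfn(struct mm_struct *mm, unsigned long addr,
			       unsigned long *pfn)
{
	struct vm_area_struct *vma;
	int ret = -EFAULT;

	down_read(&mm->mmap_sem);
	vma = find_vma(mm, addr);
	/* find_vma() only guarantees vm_end > addr; check vm_start too */
	if (vma && vma->vm_start <= addr && (vma->vm_flags & VM_PFNMAP))
		ret = follow_pfn(vma, addr, pfn);
	up_read(&mm->mmap_sem);

	return ret;
}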