From:    Joao Martins <>
Subject: [PATCH RFC 02/10] mm: Handle pmd entries in follow_pfn()
Date:    Fri, 10 Jan 2020 19:03:05 +0000
When follow_pfn() hits a pmd_huge() entry it won't return a valid PFN, given its use of follow_pte(). Fix that up by calling follow_pte_pmd() and passing a @pmdpp, thus allowing callers to get the pmd pointer. If we encounter such a huge page, the pfn is calculated as the PMD's base pfn plus the offset of @address within the PMD.
This allows KVM to handle 2M hugepage pfns on VM_PFNMAP vmas.
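For illustration only, a minimal sketch of how a caller could look up the PFN backing a VM_PFNMAP mapping once this is in place; lookup_pfnmap_pfn() is a hypothetical helper, not part of this patch, and assumes a kernel where the mmap lock is still the mmap_sem rwsem:

	/* Hypothetical caller sketch, not part of this patch. */
	static int lookup_pfnmap_pfn(struct mm_struct *mm, unsigned long addr,
				     unsigned long *pfn)
	{
		struct vm_area_struct *vma;
		int ret = -EFAULT;

		down_read(&mm->mmap_sem);
		vma = find_vma(mm, addr);
		/* follow_pfn() only accepts VM_IO/VM_PFNMAP vmas. */
		if (vma && vma->vm_start <= addr &&
		    (vma->vm_flags & (VM_IO | VM_PFNMAP)))
			/* With this patch, works for pte- and pmd-mapped addresses. */
			ret = follow_pfn(vma, addr, pfn);
		up_read(&mm->mmap_sem);

		return ret;
	}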
Signed-off-by: Joao Martins <joao.m.martins@oracle.com>
---
 mm/memory.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index cfc3668bddeb..db99684d2cb3 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4366,6 +4366,7 @@ EXPORT_SYMBOL(follow_pte_pmd);
 int follow_pfn(struct vm_area_struct *vma, unsigned long address,
 	unsigned long *pfn)
 {
+	pmd_t *pmdpp = NULL;
 	int ret = -EINVAL;
 	spinlock_t *ptl;
 	pte_t *ptep;
@@ -4373,10 +4374,14 @@ int follow_pfn(struct vm_area_struct *vma, unsigned long address,
 	if (!(vma->vm_flags & (VM_IO | VM_PFNMAP)))
 		return ret;
 
-	ret = follow_pte(vma->vm_mm, address, &ptep, &ptl);
+	ret = follow_pte_pmd(vma->vm_mm, address, NULL,
+			     &ptep, &pmdpp, &ptl);
 	if (ret)
 		return ret;
-	*pfn = pte_pfn(*ptep);
+	if (pmdpp)
+		*pfn = pmd_pfn(*pmdpp) + ((address & ~PMD_MASK) >> PAGE_SHIFT);
+	else
+		*pfn = pte_pfn(*ptep);
 	pte_unmap_unlock(ptep, ptl);
 	return 0;
 }
-- 
2.17.1