From: Zi Yan <ziy@nvidia.com>
Subject: Re: [PATCH] /proc/PID/smaps: Add PMD migration entry parsing
On 31 Mar 2020, at 4:56, Huang, Ying wrote:
>
> From: Huang Ying <ying.huang@intel.com>
>
> Currently, when /proc/PID/smaps is read, a PMD migration entry in the page
> table is simply ignored. To improve the accuracy of /proc/PID/smaps, add
> parsing and processing for it.
>
> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
> Cc: Andrea Arcangeli <aarcange@redhat.com>
> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Cc: Zi Yan <ziy@nvidia.com>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Alexey Dobriyan <adobriyan@gmail.com>
> Cc: Michal Hocko <mhocko@suse.com>
> Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
> Cc: "Jérôme Glisse" <jglisse@redhat.com>
> Cc: Yang Shi <yang.shi@linux.alibaba.com>
> ---
> fs/proc/task_mmu.c | 16 ++++++++++++----
> 1 file changed, 12 insertions(+), 4 deletions(-)
>
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index 8d382d4ec067..b5b3aef8cb3b 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -548,8 +548,17 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
> bool locked = !!(vma->vm_flags & VM_LOCKED);
> struct page *page;

As Konstantin pointed out in another email, you could initialize page to NULL here.
If you do that, you also do not need the "else { return; }" branch below (see the
sketch after the quoted patch).

>
> - /* FOLL_DUMP will return -EFAULT on huge zero page */
> - page = follow_trans_huge_pmd(vma, addr, pmd, FOLL_DUMP);
> + if (pmd_present(*pmd)) {
> + /* FOLL_DUMP will return -EFAULT on huge zero page */
> + page = follow_trans_huge_pmd(vma, addr, pmd, FOLL_DUMP);
> + } else if (unlikely(is_swap_pmd(*pmd))) {

This should be:

} else if (unlikely(thp_migration_supported() && is_swap_pmd(*pmd))) {

Otherwise, when THP migration is disabled and the PMD is under splitting, the
VM_BUG_ON below will be triggered.

> + swp_entry_t entry = pmd_to_swp_entry(*pmd);
> +
> + VM_BUG_ON(!is_migration_entry(entry));
> + page = migration_entry_to_page(entry);
> + } else {
> + return;
> + }
> if (IS_ERR_OR_NULL(page))
> return;
> if (PageAnon(page))
> @@ -578,8 +587,7 @@ static int smaps_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>
> ptl = pmd_trans_huge_lock(pmd, vma);
> if (ptl) {
> - if (pmd_present(*pmd))
> - smaps_pmd_entry(pmd, addr, walk);
> + smaps_pmd_entry(pmd, addr, walk);
> spin_unlock(ptl);
> goto out;
> }
> --
> 2.25.0
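
Putting both comments together, the start of smaps_pmd_entry() would look roughly
like this (just a sketch of what I mean, untested):

	struct page *page = NULL;

	if (pmd_present(*pmd)) {
		/* FOLL_DUMP will return -EFAULT on huge zero page */
		page = follow_trans_huge_pmd(vma, addr, pmd, FOLL_DUMP);
	} else if (unlikely(thp_migration_supported() && is_swap_pmd(*pmd))) {
		/* Non-present huge PMD: should be a PMD migration entry */
		swp_entry_t entry = pmd_to_swp_entry(*pmd);

		VM_BUG_ON(!is_migration_entry(entry));
		page = migration_entry_to_page(entry);
	}
	if (IS_ERR_OR_NULL(page))
		return;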

Everything else looks good to me. Thanks.

With the fixes mentioned above, you can add
Reviewed-by: Zi Yan <ziy@nvidia.com>



Best Regards,
Yan Zi