Subject: Re: [PATCH] /proc/PID/smaps: Add PMD migration entry parsing
From: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Date: 2020-04-01

On 01/04/2020 05.31, Huang, Ying wrote:
> Konstantin Khlebnikov <khlebnikov@yandex-team.ru> writes:
>
>> On 31/03/2020 11.56, Huang, Ying wrote:
>>> From: Huang Ying <ying.huang@intel.com>
>>>
>>> Currently, when reading /proc/PID/smaps, PMD migration entries in the
>>> page table are simply ignored. To improve the accuracy of
>>> /proc/PID/smaps, add parsing and processing for them.
>>>
>>> Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
>>> Cc: Andrea Arcangeli <aarcange@redhat.com>
>>> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
>>> Cc: Zi Yan <ziy@nvidia.com>
>>> Cc: Vlastimil Babka <vbabka@suse.cz>
>>> Cc: Alexey Dobriyan <adobriyan@gmail.com>
>>> Cc: Michal Hocko <mhocko@suse.com>
>>> Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
>>> Cc: "Jérôme Glisse" <jglisse@redhat.com>
>>> Cc: Yang Shi <yang.shi@linux.alibaba.com>
>>> ---
>>> fs/proc/task_mmu.c | 16 ++++++++++++----
>>> 1 file changed, 12 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
>>> index 8d382d4ec067..b5b3aef8cb3b 100644
>>> --- a/fs/proc/task_mmu.c
>>> +++ b/fs/proc/task_mmu.c
>>> @@ -548,8 +548,17 @@ static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr,
>>>  	bool locked = !!(vma->vm_flags & VM_LOCKED);
>>>  	struct page *page;
>>
>> 	struct page *page = NULL;
>
> Looks good. Will do this in the next version.
>
>>> -	/* FOLL_DUMP will return -EFAULT on huge zero page */
>>> -	page = follow_trans_huge_pmd(vma, addr, pmd, FOLL_DUMP);
>>> +	if (pmd_present(*pmd)) {
>>> +		/* FOLL_DUMP will return -EFAULT on huge zero page */
>>> +		page = follow_trans_huge_pmd(vma, addr, pmd, FOLL_DUMP);
>>> +	} else if (unlikely(is_swap_pmd(*pmd))) {
>>> +		swp_entry_t entry = pmd_to_swp_entry(*pmd);
>>> +
>>> +		VM_BUG_ON(!is_migration_entry(entry));
>>> +		page = migration_entry_to_page(entry);
>>
>> 	if (is_migration_entry(entry))
>> 		page = migration_entry_to_page(entry);
>>
>> Seems safer and doesn't add much code.
>
> With this, we lose an opportunity to capture some bugs during debugging.
> Right?

You can keep the VM_BUG_ON, or use VM_WARN_ON_ONCE instead.

Being off by one page in the statistics isn't a big deal, and it's not a good reason to crash even a debug kernel.
But a normal build should use the safe behaviour if that isn't hard to do.
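
Roughly, combining the two (an untested sketch, using the same helpers as in the patch plus VM_WARN_ON_ONCE):

	} else if (unlikely(is_swap_pmd(*pmd))) {
		swp_entry_t entry = pmd_to_swp_entry(*pmd);

		/* Warn once in debug builds instead of crashing. */
		VM_WARN_ON_ONCE(!is_migration_entry(entry));
		/* Safe fallback: skip anything that isn't a migration entry. */
		if (is_migration_entry(entry))
			page = migration_entry_to_page(entry);
	}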

>
> Best Regards,
> Huang, Ying
>
>>> +	} else {
>>> +		return;
>>> +	}
>>>  	if (IS_ERR_OR_NULL(page))
>>>  		return;
>>>  	if (PageAnon(page))
>>> @@ -578,8 +587,7 @@ static int smaps_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>>>  	ptl = pmd_trans_huge_lock(pmd, vma);
>>>  	if (ptl) {
>>> -		if (pmd_present(*pmd))
>>> -			smaps_pmd_entry(pmd, addr, walk);
>>> +		smaps_pmd_entry(pmd, addr, walk);
>>>  		spin_unlock(ptl);
>>>  		goto out;
>>>  	}
>>>
