From: Lucian Adrian Grijincu <lucian@fb.com>
Subject: Re: [PATCH] mm: memory: fix /proc/meminfo reporting for MLOCK_ONFAULT
Date: Fri, 13 Sep 2019
On 9/13/19, 04:18, "Souptick Joarder" <jrdr.linux@gmail.com> wrote:

> On Fri, Sep 13, 2019 at 4:49 AM Lucian Adrian Grijincu <lucian@fb.com> wrote:
> >
> > As pages are faulted in, MLOCK_ONFAULT correctly updates
> > /proc/self/smaps, but doesn't update /proc/meminfo's Mlocked field.
> >
> > - Before this, /proc/meminfo fields didn't change as pages were faulted in:
> > diff --git a/mm/memory.c b/mm/memory.c
> > index e0c232fe81d9..7e8dc3ed4e89 100644
> > --- a/mm/memory.c
> > +++ b/mm/memory.c
> > @@ -3311,6 +3311,9 @@ vm_fault_t alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
> >  	} else {
> >  		inc_mm_counter_fast(vma->vm_mm, mm_counter_file(page));
> >  		page_add_file_rmap(page, false);
> > +		if ((vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) == VM_LOCKED &&
> > +		    !PageTransCompound(page))
>
> Do we need to check against VM_SPECIAL?

I think you're right. mlock/mlock2 already filter out VM_SPECIAL vmas and never set VM_LOCKED on them: https://github.com/torvalds/linux/blob/v5.2/mm/mlock.c#L519-L533

/*
 * mlock_fixup - handle mlock[all]/munlock[all] requests.
 *
 * Filters out "special" vmas -- VM_LOCKED never gets set for these, and
 * munlock is a no-op. However, for some special vmas, we go ahead and
 * populate the ptes.
 *
 * For vmas that pass the filters, merge/split as appropriate.
 */
static int mlock_fixup(struct vm_area_struct *vma, struct vm_area_struct **prev,
	unsigned long start, unsigned long end, vm_flags_t newflags)
{
	struct mm_struct *mm = vma->vm_mm;
	pgoff_t pgoff;
	int nr_pages;
	int ret = 0;
	int lock = !!(newflags & VM_LOCKED);
	vm_flags_t old_flags = vma->vm_flags;

	if (newflags == vma->vm_flags || (vma->vm_flags & VM_SPECIAL) ||
	    is_vm_hugetlb_page(vma) || vma == get_gate_vma(current->mm) ||
	    vma_is_dax(vma))
		/* don't set VM_LOCKED or VM_LOCKONFAULT and don't count */
		goto out;
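
For context, VM_SPECIAL is the mask of vma types that can never be mlocked; in v5.2 include/linux/mm.h defines it as:

#define VM_SPECIAL (VM_IO | VM_DONTEXPAND | VM_PFNMAP | VM_MIXEDMAP)

so IO and PFN/mixed-mapped vmas are filtered out here before VM_LOCKED is ever set on them.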

I got thrown off by this check in lru_cache_add_active_or_unevictable: https://github.com/torvalds/linux/blob/v5.2/mm/swap.c#L454-L469

void lru_cache_add_active_or_unevictable(struct page *page,
					 struct vm_area_struct *vma)
{
	VM_BUG_ON_PAGE(PageLRU(page), page);

	if (likely((vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) != VM_LOCKED))
		SetPageActive(page);
	else if (!TestSetPageMlocked(page)) {
		/*
		 * We use the irq-unsafe __mod_zone_page_stat because this
		 * counter is not modified from interrupt context, and the pte
		 * lock is held(spinlock), which implies preemption disabled.
		 */
		__mod_zone_page_state(page_zone(page), NR_MLOCK,
				      hpage_nr_pages(page));
		count_vm_event(UNEVICTABLE_PGMLOCKED);
	}
	lru_cache_add(page);
}
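
The (vm_flags & (VM_LOCKED | VM_SPECIAL)) == VM_LOCKED test is true only when VM_LOCKED is set and every VM_SPECIAL bit is clear. A minimal standalone sketch of the bit logic (the flag values below are made up for illustration, not the kernel's real bit assignments):

#include <assert.h>

#define FAKE_VM_LOCKED  0x1UL	/* illustrative value only */
#define FAKE_VM_SPECIAL 0x2UL	/* illustrative value only */

/* returns 1 only when VM_LOCKED is set and no VM_SPECIAL bit is set */
static int locked_and_not_special(unsigned long vm_flags)
{
	return (vm_flags & (FAKE_VM_LOCKED | FAKE_VM_SPECIAL)) == FAKE_VM_LOCKED;
}

int main(void)
{
	assert(locked_and_not_special(FAKE_VM_LOCKED) == 1);
	assert(locked_and_not_special(FAKE_VM_LOCKED | FAKE_VM_SPECIAL) == 0);
	assert(locked_and_not_special(0) == 0);
	return 0;
}

Given mlock_fixup's filter above, VM_LOCKED and VM_SPECIAL shouldn't both end up set on the same vma, so the VM_SPECIAL half of the test in swap.c looks defensive rather than required.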
I'll remove VM_SPECIAL and re-submit.
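
For anyone who wants to reproduce the original accounting issue, here's a minimal userspace sketch. It uses a shared anonymous (shmem-backed) mapping so that faults take the file path the patch touches in alloc_set_pte(); it assumes glibc >= 2.27 for the mlock2() wrapper (on older glibc use syscall(SYS_mlock2, ...)) and skips error handling:

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 4UL << 20;	/* 4 MiB */
	/* Shared anonymous => shmem-backed, so faults go through the
	 * file branch of alloc_set_pte(). */
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_SHARED | MAP_ANONYMOUS, -1, 0);

	/* Lock the range, but let pages fault in lazily. */
	mlock2(p, len, MLOCK_ONFAULT);

	/* Nothing is resident yet; compare Locked in /proc/self/smaps
	 * with Mlocked in /proc/meminfo. */
	getchar();

	/* Fault every page in. smaps' Locked grows either way; without
	 * the fix, Mlocked in /proc/meminfo lags behind. */
	memset(p, 1, len);
	getchar();

	munlock(p, len);
	munmap(p, len);
	return 0;
}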

--
Lucian

