Subject: [PATCH v6 7/7] mm: fix KSM data corruption
From: Minchan Kim <>

Nadav reported that KSM can corrupt user data via the TLB batching race[1].
That means data the user has written can be lost.

Quote from Nadav Amit:
For this race we need 4 CPUs:

CPU0: Caches a writable and dirty PTE entry, and uses the stale value for
a write later.

CPU1: Runs madvise_free on the range that includes the PTE. It would clear
the dirty-bit. It batches TLB flushes.

CPU2: Writes 4 to /proc/PID/clear_refs, clearing the PTEs' soft-dirty bits.
We care about the fact that it clears the PTE write-bit, and of course,
batches TLB flushes.

CPU3: Runs KSM. Our purpose is to pass the following test in
write_protect_page():

	if (pte_write(*pvmw.pte) || pte_dirty(*pvmw.pte) ||
	    (pte_protnone(*pvmw.pte) && pte_savedwrite(*pvmw.pte)))

since that avoids the TLB flush. And we want to do it while the PTE is
stale. Later, and before replacing the page, we would be able to change
the page.

Note that all the operations CPU1-3 perform can happen in parallel since
they only acquire mmap_sem for read.

We start with two identical pages. Everything below regards the same
page/PTE.

CPU0            CPU1            CPU2            CPU3
----            ----            ----            ----
Write the same
value on page

[cache PTE as
 dirty in TLB]

                MADV_FREE
                pte_mkclean()

                                4 > clear_refs
                                pte_wrprotect()

                                                write_protect_page()
                                                [ success, no flush ]

                                                pages_identical()
                                                [ ok ]

Write to page
different value

[Ok, using stale
 PTE]

                                                replace_page()

Later, CPU1, CPU2 and CPU3 would flush the TLB, but that is too late. CPU0
already wrote on the page, but KSM ignored this write, and it got lost.
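
For reference, CPU2's step can be driven from plain userspace through the
documented /proc/PID/clear_refs interface. A minimal, hypothetical test
program (illustration only, not part of this patch) that clears the
soft-dirty bits, and hence the PTE write-bit, for its own address space:

	#include <stdio.h>

	int main(void)
	{
		FILE *f = fopen("/proc/self/clear_refs", "w");

		if (!f) {
			perror("fopen");
			return 1;
		}
		fputs("4", f);	/* "4" selects soft-dirty clearing */
		fclose(f);
		return 0;
	}

A real reproducer would additionally need the racing writer, MADV_FREE
and KSM threads described above.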

In the above scenario, the MADV_FREE side is fixed by changing the TLB
batching API, including [set|clear]_tlb_flush_pending. What remains is the
soft-dirty part.
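
The pending-flush check relies on a per-mm counter that batchers raise
before deferring a flush and drop once the flush has been performed. A
simplified, kernel-style sketch of the idea; the _sketch names are
hypothetical and the real helpers in include/linux/mm_types.h differ in
detail:

	#include <linux/atomic.h>

	struct mm_sketch {
		atomic_t tlb_flush_pending;	/* batched flushes in flight */
	};

	static inline void set_tlb_flush_pending_sketch(struct mm_sketch *mm)
	{
		atomic_inc(&mm->tlb_flush_pending);
		/*
		 * Make the raised counter visible before any PTE bits are
		 * cleared, so a reader cannot observe the downgraded PTE
		 * while missing the pending flush.
		 */
		smp_mb__after_atomic();
	}

	static inline void clear_tlb_flush_pending_sketch(struct mm_sketch *mm)
	{
		/* Called only after the deferred TLB flush was performed. */
		atomic_dec(&mm->tlb_flush_pending);
	}

	static inline bool mm_tlb_flush_pending_sketch(struct mm_sketch *mm)
	{
		/*
		 * True while some thread has downgraded PTEs but not yet
		 * flushed the TLB; KSM must then treat the PTE as if it
		 * were still writable/dirty.
		 */
		return atomic_read(&mm->tlb_flush_pending) > 0;
	}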

This patch makes soft-dirty use the TLB batching API instead of
flush_tlb_mm, and makes KSM check for a pending TLB flush with
mm_tlb_flush_pending, so that KSM flushes the TLB and avoids losing data
when other parallel threads have a TLB flush pending.
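
Pulled out of the diff below for readability, the soft-dirty side now
follows the usual gather/finish batching pattern (page walk and error
handling elided):

	struct mmu_gather tlb;

	tlb_gather_mmu(&tlb, mm, 0, -1);   /* start batching: a flush is now pending */
	/* ... walk page tables, clearing soft-dirty and write bits ... */
	tlb_finish_mmu(&tlb, 0, -1);       /* perform the deferred TLB flush */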


Cc: Mel Gorman <>
Cc: Hugh Dickins <>
Signed-off-by: Minchan Kim <>
Signed-off-by: Nadav Amit <>
Reported-by: Nadav Amit <>
Reviewed-by: Andrea Arcangeli <>
Tested-by: Nadav Amit <>
---
 fs/proc/task_mmu.c | 7 +++++--
 mm/ksm.c           | 3 ++-
 2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 520802da059c..aa20da220973 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -16,9 +16,10 @@
 #include <linux/mmu_notifier.h>
 #include <linux/page_idle.h>
 #include <linux/shmem_fs.h>
+#include <linux/uaccess.h>
 
 #include <asm/elf.h>
-#include <linux/uaccess.h>
+#include <asm/tlb.h>
 #include <asm/tlbflush.h>
 #include "internal.h"

@@ -1009,6 +1010,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 	struct mm_struct *mm;
 	struct vm_area_struct *vma;
 	enum clear_refs_types type;
+	struct mmu_gather tlb;
 	int itype;
 	int rv;

@@ -1055,6 +1057,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 
 	down_read(&mm->mmap_sem);
+	tlb_gather_mmu(&tlb, mm, 0, -1);
 	if (type == CLEAR_REFS_SOFT_DIRTY) {
 		for (vma = mm->mmap; vma; vma = vma->vm_next) {
 			if (!(vma->vm_flags & VM_SOFTDIRTY))
@@ -1076,7 +1079,7 @@ static ssize_t clear_refs_write(struct file *file, const char __user *buf,
 	walk_page_range(0, mm->highest_vm_end, &clear_refs_walk);
 	if (type == CLEAR_REFS_SOFT_DIRTY)
 		mmu_notifier_invalidate_range_end(mm, 0, -1);
-	flush_tlb_mm(mm);
+	tlb_finish_mmu(&tlb, 0, -1);
diff --git a/mm/ksm.c b/mm/ksm.c
index 216184af0e19..e5bf02e39752 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -883,7 +883,8 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
 		goto out_unlock;
 
 	if (pte_write(*pvmw.pte) || pte_dirty(*pvmw.pte) ||
-	    (pte_protnone(*pvmw.pte) && pte_savedwrite(*pvmw.pte))) {
+	    (pte_protnone(*pvmw.pte) && pte_savedwrite(*pvmw.pte)) ||
+	    mm_tlb_flush_pending(mm)) {
 		pte_t entry;
 
 		swapped = PageSwapCache(page);