Subject: [PATCH 5.3 282/344] arm64: tlb: Ensure we execute an ISB following walk cache invalidation
From: Will Deacon <>

commit 51696d346c49c6cf4f29e9b20d6e15832a2e3408 upstream.

05f2d2f83b5a ("arm64: tlbflush: Introduce __flush_tlb_kernel_pgtable")
added a new TLB invalidation helper which is used when freeing
intermediate levels of page table used for kernel mappings, but is
missing the required ISB instruction after completion of the TLBI
instruction.

Add the missing barrier.

Cc: <>
Fixes: 05f2d2f83b5a ("arm64: tlbflush: Introduce __flush_tlb_kernel_pgtable")
Reviewed-by: Mark Rutland <>
Signed-off-by: Will Deacon <>
Signed-off-by: Greg Kroah-Hartman <>

arch/arm64/include/asm/tlbflush.h | 1 +
1 file changed, 1 insertion(+)

--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -251,6 +251,7 @@ static inline void __flush_tlb_kernel_pg
 	dsb(ishst);
 	__tlbi(vaae1is, addr);
 	dsb(ish);
+	isb();
 }
 
 #endif

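For context, a rough sketch (not part of the patch) of what the helper
looks like with the barrier in place, based on the 5.3-era layout of
arch/arm64/include/asm/tlbflush.h; the exact surrounding code may differ:

	/*
	 * Invalidate walk-cache (intermediate page table) entries for a
	 * kernel mapping.  The trailing ISB added by this patch ensures
	 * the completed invalidation is visible to subsequent
	 * instructions on this CPU.
	 */
	static inline void __flush_tlb_kernel_pgtable(unsigned long kaddr)
	{
		unsigned long addr = __TLBI_VADDR(kaddr, 0);

		dsb(ishst);		/* publish prior page-table updates */
		__tlbi(vaae1is, addr);	/* invalidate entries for this VA, inner shareable */
		dsb(ish);		/* wait for the TLBI to complete */
		isb();			/* synchronize the instruction stream */
	}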