Subject: Re: [PATCH] mm/slub: Detach node lock from counting free objects
On Sat,  1 Feb 2020 11:15:02 +0800 Wen Yang <wenyang@linux.alibaba.com> wrote:

> The lock protecting the node partial list is taken when counting the free
> objects resident in that list. This introduces lock contention when pages
> are moved between the CPU and node partial lists on the allocation path
> of another CPU. Reading "/proc/slabinfo" can therefore block slab
> allocation on another CPU for a while, up to 200ms in extreme cases. If
> the slab objects carry network packets targeting a far-end disk array,
> this causes block I/O jitter.
>
> This fixes the block I/O jitter by caching the total number of in-use
> objects in the node in advance. The cached value is then returned without
> taking the node partial list lock when "/proc/slabinfo" is read.
>
> ...
>
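The idea, then, is to trade a list walk under n->list_lock for a per-node
counter that is updated on the alloc/free paths and read locklessly. As a
minimal sketch of what the read side could look like ("total_inuse" is my
guess at the name of the counter this patch adds to struct kmem_cache_node,
alongside the existing nr_slabs/total_objects):

	static unsigned long count_free_cached(struct kmem_cache_node *n)
	{
		/* total objects minus cached in-use objects; no list_lock taken */
		return atomic_long_read(&n->total_objects) -
		       atomic_long_read(&n->total_inuse);
	}
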
> @@ -1768,7 +1774,9 @@ static void free_slab(struct kmem_cache *s, struct page *page)
>
> static void discard_slab(struct kmem_cache *s, struct page *page)
> {
> - dec_slabs_node(s, page_to_nid(page), page->objects);
> + int inuse = page->objects;
> +
> + dec_slabs_node(s, page_to_nid(page), page->objects, inuse);

Is this right? dec_slabs_node(..., page->objects, page->objects)?

If not, we could simply pass the page* to inc_slabs_node/dec_slabs_node
and save a function argument.

If yes, then why?

> free_slab(s, page);
> }
>
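For reference, the page* variant might look something like this (a minimal
sketch against the existing mm/slub.c helpers; "total_inuse" is again my
guess at the new counter's name, and whether page->inuse is the right value
to subtract here is exactly the question above):

	static inline void dec_slabs_node(struct kmem_cache *s, int node,
					  struct page *page)
	{
		struct kmem_cache_node *n = get_node(s, node);

		atomic_long_dec(&n->nr_slabs);
		atomic_long_sub(page->objects, &n->total_objects);
		atomic_long_sub(page->inuse, &n->total_inuse);
	}

	static void discard_slab(struct kmem_cache *s, struct page *page)
	{
		/* no local variable and no extra argument needed */
		dec_slabs_node(s, page_to_nid(page), page);
		free_slab(s, page);
	}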
