Subject: Re: [PATCH RFC 00/10] Introduce lockless shrink_slab()
Hi Kirill,

On Tue, 07 Aug 2018 18:37:19 +0300 Kirill Tkhai <ktkhai@virtuozzo.com> wrote:
>
> After bitmaps of not-empty memcg shrinkers were implemented
> (see the "[PATCH v9 00/17] Improve shrink_slab() scalability..."
> series, which is already in the mm tree), all the evil in the perf
> trace has moved from shrink_slab() to down_read_trylock().
> As reported by Shakeel Butt:
>
> > I created 255 memcgs and 255 ext4 mounts, and made each memcg create
> > a file of a few KiB on its corresponding mount. Then, in a separate
> > memcg with a 200 MiB limit, I ran a fork-bomb.
> >
> > I ran the "perf record -ag -- sleep 60" and below are the results:
> > + 47.49% fb.sh [kernel.kallsyms] [k] down_read_trylock
> > + 30.72% fb.sh [kernel.kallsyms] [k] up_read
> > + 9.51% fb.sh [kernel.kallsyms] [k] mem_cgroup_iter
> > + 1.69% fb.sh [kernel.kallsyms] [k] shrink_node_memcg
> > + 1.35% fb.sh [kernel.kallsyms] [k] mem_cgroup_protected
> > + 1.05% fb.sh [kernel.kallsyms] [k] queued_spin_lock_slowpath
> > + 0.85% fb.sh [kernel.kallsyms] [k] _raw_spin_lock
> > + 0.78% fb.sh [kernel.kallsyms] [k] lruvec_lru_size
> > + 0.57% fb.sh [kernel.kallsyms] [k] shrink_node
> > + 0.54% fb.sh [kernel.kallsyms] [k] queue_work_on
> > + 0.46% fb.sh [kernel.kallsyms] [k] shrink_slab_memcg
>
> The patchset continues to improve shrink_slab() scalability and makes
> it completely lockless. Here are the steps for that:
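
For anyone who hasn't looked at mm/vmscan.c recently: the
down_read_trylock()/up_read() pair at the top of that profile is the
global shrinker_rwsem, which every shrink_slab() call read-trylocks
before walking the shrinkers.  A minimal userspace analogue of that
pattern (plain pthreads, not the actual kernel code; the thread count,
iteration count and helper names below are made up for illustration)
looks roughly like this, and it shows why a single global lock word
gets hammered from every CPU even when all takers are readers:

/*
 * Userspace sketch (not the mm/vmscan.c code) of the pattern the
 * profile above points at: every "reclaimer" read-trylocks one
 * global rwlock before doing its (here trivial) work, so nearly
 * all of the cost is the shared lock word bouncing between CPUs.
 * Build with: gcc -O2 -pthread contention.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NTHREADS		64	/* stand-in for concurrently reclaiming cgroups */
#define CALLS_PER_THREAD	100000

static pthread_rwlock_t shrinker_rwsem = PTHREAD_RWLOCK_INITIALIZER;
static atomic_long freed;		/* stand-in for objects reclaimed */

/* Mimics one shrink_slab() call: read-trylock the global lock,
 * "walk the shrinkers", unlock.  Skips the walk when the lock is
 * busy, just as shrink_slab() does. */
static void *reclaimer(void *arg)
{
	(void)arg;
	for (int i = 0; i < CALLS_PER_THREAD; i++) {
		if (pthread_rwlock_tryrdlock(&shrinker_rwsem))
			continue;
		atomic_fetch_add(&freed, 1);	/* placeholder for do_shrink_slab() */
		pthread_rwlock_unlock(&shrinker_rwsem);
	}
	return NULL;
}

int main(void)
{
	pthread_t t[NTHREADS];

	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&t[i], NULL, reclaimer, NULL);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(t[i], NULL);

	printf("calls that got the lock: %ld\n", atomic_load(&freed));
	return 0;
}

The memcg shrinker bitmaps already cut how much work is done under the
lock; what is left, and what this series goes after, is the lock itself.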

So do you have any numbers for after these changes?

--
Cheers,
Stephen Rothwell