Date: 2019-10-04
From: Michal Hocko
Subject: Re: [PATCH] mm/swap: piggyback lru_add_drain_all() calls

On Fri 04-10-19 05:10:17, Matthew Wilcox wrote:
> On Fri, Oct 04, 2019 at 01:11:06PM +0300, Konstantin Khlebnikov wrote:
> > This is a very slow operation. There is no reason to do it again if somebody
> > else has already drained all per-cpu vectors after we waited for the lock.
> > +	seq = raw_read_seqcount_latch(&seqcount);
> > +
> >  	mutex_lock(&lock);
> > +
> > +	/* Piggyback on drain done by somebody else. */
> > +	if (__read_seqcount_retry(&seqcount, seq))
> > +		goto done;
> > +
> > +	raw_write_seqcount_latch(&seqcount);
> > +
>
> Do we really need the seqcount to do this? Wouldn't a mutex_trylock()
> have the same effect?
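
(Concretely, the trylock variant amounts to something like the sketch below.
This is only an illustration, not the actual patch: the names mirror
mm/swap.c, and the per-cpu drain itself is elided.)

void lru_add_drain_all(void)
{
	static DEFINE_MUTEX(lock);

	/*
	 * If the mutex is already held, somebody else is draining (or has
	 * just drained) all per-cpu pagevecs; piggyback on their work
	 * instead of queueing another full drain.
	 */
	if (!mutex_trylock(&lock))
		return;

	/* ... queue lru_add_drain work on each CPU and flush it ... */

	mutex_unlock(&lock);
}

One visible difference from the seqcount version is that a caller losing the
trylock race returns immediately, without waiting for the in-flight drain to
finish.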

Yeah, this makes sense. From a correctness point of view it should be ok
because no caller can expect that per-cpu pvecs are empty on return.
This might have some runtime effects in that some paths might retry more -
e.g. the offlining path drains per-cpu pvecs before migrating the range away;
if there are pages still waiting for a worker to drain them then the
migration would fail and we would retry. But this is not a correctness
issue.
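
For reference, the piggyback scheme in the patch itself boils down to the
pattern below. The sketch is a user-space analogue with made-up names - a
pthread mutex plus a plain generation counter standing in for the kernel
mutex and seqcount - not the kernel code:

#include <pthread.h>

static pthread_mutex_t drain_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long drain_gen;	/* bumped under drain_lock before each drain */

static void do_expensive_drain(void)
{
	/* stand-in for flushing all per-cpu pagevecs */
}

void drain_all(void)
{
	/* Snapshot the generation before contending on the lock. */
	unsigned long gen = __atomic_load_n(&drain_gen, __ATOMIC_ACQUIRE);

	pthread_mutex_lock(&drain_lock);

	if (__atomic_load_n(&drain_gen, __ATOMIC_RELAXED) != gen) {
		/*
		 * Somebody bumped the generation after our snapshot and has
		 * since released the lock we now hold, so a full drain that
		 * started after we arrived has completed: piggyback on it.
		 */
		pthread_mutex_unlock(&drain_lock);
		return;
	}

	/* Announce this drain before doing it, as the patch does. */
	__atomic_store_n(&drain_gen, gen + 1, __ATOMIC_RELEASE);

	do_expensive_drain();

	pthread_mutex_unlock(&drain_lock);
}

Skipping the redundant drain is safe for exactly the reason above: callers
cannot rely on the per-cpu vectors being empty when drain_all() returns.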

--
Michal Hocko
SUSE Labs
