Subject: Re: [PATCH 8/9] psi: pressure stall information for CPU, memory, and IO
Hi,

a quick update on that feedback before I send out v4:

On Fri, Aug 03, 2018 at 06:56:41PM +0200, Peter Zijlstra wrote:
> On Wed, Aug 01, 2018 at 11:19:57AM -0400, Johannes Weiner wrote:
> > +static bool test_state(unsigned int *tasks, int cpu, enum psi_states state)
> > +{
> > +	switch (state) {
> > +	case PSI_IO_SOME:
> > +		return tasks[NR_IOWAIT];
> > +	case PSI_IO_FULL:
> > +		return tasks[NR_IOWAIT] && !tasks[NR_RUNNING];
> > +	case PSI_MEM_SOME:
> > +		return tasks[NR_MEMSTALL];
> > +	case PSI_MEM_FULL:
> > +		/*
> > +		 * Since we care about lost potential, things are
> > +		 * fully blocked on memory when there are no other
> > +		 * working tasks, but also when the CPU is actively
> > +		 * being used by a reclaimer and nothing productive
> > +		 * could run even if it were runnable.
> > +		 */
> > +		return tasks[NR_MEMSTALL] &&
> > +			(!tasks[NR_RUNNING] ||
> > +			 cpu_curr(cpu)->flags & PF_MEMSTALL);
>
> I don't think you can do this, there is nothing that guarantees
> cpu_curr() still exists.

As discussed later in this thread, I've replaced this with time
sampling from inside scheduler_tick(): in the unlikely event that
rq->curr has PF_MEMSTALL set, it records TICK_NSEC worth of MEM_FULL
time.
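
For concreteness, a minimal sketch of that tick hook (illustrative
only; psi_memstall_tick() is a placeholder for the per-group
accounting helper, not necessarily what v4 will call it):

void psi_task_tick(struct rq *rq)
{
	/* Charge one tick of MEM_FULL if a reclaimer owns the CPU */
	if (unlikely(rq->curr->flags & PF_MEMSTALL))
		psi_memstall_tick(rq->curr, cpu_of(rq));
}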

However:

> > +	for (s = PSI_NONIDLE; s >= 0; s--) {
> > +		u32 time, delta;
> > +
> > +		time = READ_ONCE(groupc->times[s]);
> > +		/*
> > +		 * In addition to already concluded states, we
> > +		 * also incorporate currently active states on
> > +		 * the CPU, since states may last for many
> > +		 * sampling periods.
> > +		 *
> > +		 * This way we keep our delta sampling buckets
> > +		 * small (u32) and our reported pressure close
> > +		 * to what's actually happening.
> > +		 */
> > +		if (test_state(groupc->tasks, cpu, s)) {
> > +			/*
> > +			 * We can race with a state change and
> > +			 * need to make sure the state_start
> > +			 * update is ordered against the
> > +			 * updates to the live state and the
> > +			 * time buckets (groupc->times).
> > +			 *
> > +			 * 1. If we observe task state that
> > +			 * needs to be recorded, make sure we
> > +			 * see state_start from when that
> > +			 * state went into effect or we'll
> > +			 * count time from the previous state.
> > +			 *
> > +			 * 2. If the time delta has already
> > +			 * been added to the bucket, make sure
> > +			 * we don't see it in state_start or
> > +			 * we'll count it twice.
> > +			 *
> > +			 * If the time delta is out of
> > +			 * state_start but not in the time
> > +			 * bucket yet, we'll miss it entirely
> > +			 * and handle it in the next period.
> > +			 */
> > +			smp_rmb();
> > +			time += cpu_clock(cpu) - groupc->state_start;
> > +		}
>
> The alternative is adding an update to scheduler_tick(), that would
> ensure you're never more than nr_cpu_ids * TICK_NSEC behind.

I wasn't able to convert *all* states to tick updates like this.

The reason is that, while testing rq->curr for PF_MEMSTALL is cheap,
the other tasks associated with the rq could be from any cgroup in
the system. That means we'd have to do for_each_cgroup() on every
tick to keep groupc->times that closely up to date, and that
wouldn't scale: we tend to have hundreds of cgroups, and some setups
have thousands.

Since we don't need to be *that* current, I left the on-demand update
inside the aggregator for now. It's a bit trickier, but much cheaper.
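
To illustrate the cost, closing the sampling window from the tick
for those states would have looked roughly like this, where
for_each_psi_group() and record_times() are hypothetical stand-ins
for the cgroup walk and the bucket update:

static void psi_group_tick(int cpu)
{
	struct psi_group *group;

	/* O(ncgroups) of work on every tick, on every CPU */
	for_each_psi_group(group) {
		struct psi_group_cpu *groupc;

		groupc = per_cpu_ptr(group->pcpu, cpu);
		record_times(groupc, cpu);
	}
}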
