From: Linus Torvalds
Date: 2019-04-23
Subject: Re: [PATCH v4 14/16] locking/rwsem: Guard against making count negative
On Tue, Apr 23, 2019 at 7:17 AM Peter Zijlstra <peterz@infradead.org> wrote:
>
> I'm not aware of an architecture where disabling interrupts is faster
> than disabling preemption.

I don't think it ever is, but I'd worry a bit about the
preempt_enable(), just because it also checks whether need_resched()
is true when re-enabling preemption.

So doing preempt_enable() as part of rwsem_read_trylock() might cause
us to schedule in *exactly* the wrong place.
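
For reference, the CONFIG_PREEMPT flavour of preempt_enable() in
include/linux/preempt.h is roughly this (simplified, debug and
non-preemptible variants elided):

	#define preempt_enable() \
	do { \
		barrier(); \
		if (unlikely(preempt_count_dec_and_test())) \
			__preempt_schedule(); \
	} while (0)

preempt_count_dec_and_test() only returns true when the preempt count
drops to zero *and* a reschedule is pending, and __preempt_schedule()
is then what kicks the scheduler.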

So if we play preemption games, I wonder if we should make them more
explicit than hiding them in that helper function, because
particularly for the slow-path case I think we'd be much better off
just avoiding the busy-loop in the slow path, rather than first
scheduling due to preempt_enable() and only then starting to look at
the slow path.

IOW, I get the feeling that the preemption-off area might be better
off being potentially much bigger, and covering the whole (or a large
portion) of the semaphore operation, rather than just the
rwsem_read_trylock() fastpath.
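
Something along these lines, maybe (very rough sketch only, the
"_sketch" names are made up and this is not the actual patch):

	#include <linux/preempt.h>
	#include <linux/rwsem.h>

	/* hypothetical helpers standing in for the fast and slow paths */
	extern bool rwsem_read_trylock_sketch(struct rw_semaphore *sem);
	extern void rwsem_down_read_slowpath_sketch(struct rw_semaphore *sem);

	static inline void down_read_sketch(struct rw_semaphore *sem)
	{
		preempt_disable();
		if (rwsem_read_trylock_sketch(sem)) {
			/* fast path got the lock; nothing more to do */
			preempt_enable();
			return;
		}
		/*
		 * Fast path failed: enter the slow path while still
		 * non-preemptible, so we don't get scheduled away between
		 * the failed trylock and the slow-path handling.  The slow
		 * path itself would re-enable preemption before it
		 * actually sleeps.
		 */
		rwsem_down_read_slowpath_sketch(sem);
	}

The point being that the preempt_disable()/preempt_enable() pair is
visible at the down_read() level, instead of being buried inside the
trylock helper.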

Hmm?

Linus
