From: Waiman Long <>
Subject: [PATCH v2 2/5] locking/lockdep: Eliminate redundant irqs check in __lock_acquire()
Date: Tue, 2 Oct 2018 16:19:17 -0400
The static __lock_acquire() function has only two callers:
 1) lock_acquire()
 2) reacquire_held_locks()

In lock_acquire(), raw_local_irq_save() is called before __lock_acquire(),
so IRQs are guaranteed to be disabled at that point and the check

	DEBUG_LOCKS_WARN_ON(!irqs_disabled())

is redundant on that path. Move the check into reacquire_held_locks() to
eliminate the redundant code from the lock_acquire() path.
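For reference, the lock_acquire() path that makes the check unnecessary looks
roughly like this (a trimmed sketch of the lockdep code from around that
kernel version, with tracing and pinning details omitted, not the exact
source):

	void lock_acquire(struct lockdep_map *lock, unsigned int subclass,
			  int trylock, int read, int check,
			  struct lockdep_map *nest_lock, unsigned long ip)
	{
		unsigned long flags;

		if (unlikely(current->lockdep_recursion))
			return;

		/* IRQs are already off by the time __lock_acquire() runs */
		raw_local_irq_save(flags);
		check_flags(flags);

		current->lockdep_recursion = 1;
		__lock_acquire(lock, subclass, trylock, read, check,
			       irqs_disabled_flags(flags), nest_lock, ip, 0, 0);
		current->lockdep_recursion = 0;

		raw_local_irq_restore(flags);
	}

Since reacquire_held_locks() calls __lock_acquire() in a loop for every held
lock, moving the single warning there keeps the sanity check on that path
while taking it off the hot lock_acquire() path.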
Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/locking/lockdep.c | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)
diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
index 8f9de7cd11ab..bd59163b0550 100644
--- a/kernel/locking/lockdep.c
+++ b/kernel/locking/lockdep.c
@@ -3192,6 +3192,10 @@ static int __lock_is_held(const struct lockdep_map *lock, int read);
 /*
  * This gets called for every mutex_lock*()/spin_lock*() operation.
  * We maintain the dependency maps and validate the locking attempt:
+ *
+ * The callers must make sure that IRQs are disabled before calling it,
+ * otherwise we could get an interrupt which would want to take locks,
+ * which would end up in lockdep again.
  */
 static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 			  int trylock, int read, int check, int hardirqs_off,
@@ -3209,14 +3213,6 @@ static int __lock_acquire(struct lockdep_map *lock, unsigned int subclass,
 	if (unlikely(!debug_locks))
 		return 0;
 
-	/*
-	 * Lockdep should run with IRQs disabled, otherwise we could
-	 * get an interrupt which would want to take locks, which would
-	 * end up in lockdep and have you got a head-ache already?
-	 */
-	if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))
-		return 0;
-
 	if (!prove_locking || lock->key == &__lockdep_no_validate__)
 		check = 0;
 
@@ -3473,6 +3469,9 @@ static int reacquire_held_locks(struct task_struct *curr, unsigned int depth,
 {
 	struct held_lock *hlock;
 
+	if (DEBUG_LOCKS_WARN_ON(!irqs_disabled()))
+		return 0;
+
 	for (hlock = curr->held_locks + idx; idx < depth; idx++, hlock++) {
 		if (!__lock_acquire(hlock->instance,
 				    hlock_class(hlock)->subclass,
-- 
2.18.0