From: Ingo Molnar <mingo@elte.hu>

lock->break_lock is set when a lock is contended, but cleared only in
cond_resched_lock.  Users of need_lockbreak (journal_commit_transaction,
copy_pte_range, unmap_vmas) don't necessarily use cond_resched_lock on it.

So, if the lock has been contended at some time in the past, break_lock
remains set thereafter, and the fastpath keeps dropping the lock
unnecessarily.  This can hang the system if you make a change like I did:
a loop keeps restarting forever without making any progress.  And even
users of cond_resched_lock may well suffer an initial unnecessary
lockbreak.

There seems to be no point at which break_lock can be cleared when
unlocking, any point being either too early or too late; but that's okay,
it's only of interest while the lock is held.  So clear it whenever the
lock is acquired - and any waiting contenders will quickly set it again. 
Additional locking overhead?  Well, this only applies when CONFIG_PREEMPT is on.

Since cond_resched_lock's spin_lock clears break_lock, no need to clear it
itself; and use need_lockbreak there too, preferring optimizer to #ifdefs.

Signed-off-by: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@osdl.org>
---

 25-akpm/kernel/sched.c    |    5 +----
 25-akpm/kernel/spinlock.c |    2 ++
 2 files changed, 3 insertions(+), 4 deletions(-)

diff -puN kernel/sched.c~break_lock-fix-2 kernel/sched.c
--- 25/kernel/sched.c~break_lock-fix-2	Mon Mar 14 16:20:47 2005
+++ 25-akpm/kernel/sched.c	Mon Mar 14 16:20:47 2005
@@ -3746,14 +3746,11 @@ EXPORT_SYMBOL(cond_resched);
  */
 int cond_resched_lock(spinlock_t * lock)
 {
-#if defined(CONFIG_SMP) && defined(CONFIG_PREEMPT)
-	if (lock->break_lock) {
-		lock->break_lock = 0;
+	if (need_lockbreak(lock)) {
 		spin_unlock(lock);
 		cpu_relax();
 		spin_lock(lock);
 	}
-#endif
 	if (need_resched()) {
 		_raw_spin_unlock(lock);
 		preempt_enable_no_resched();
diff -puN kernel/spinlock.c~break_lock-fix-2 kernel/spinlock.c
--- 25/kernel/spinlock.c~break_lock-fix-2	Mon Mar 14 16:20:47 2005
+++ 25-akpm/kernel/spinlock.c	Mon Mar 14 16:20:47 2005
@@ -187,6 +187,7 @@ void __lockfunc _##op##_lock(locktype##_
 			cpu_relax();					\
 		preempt_disable();					\
 	}								\
+	(lock)->break_lock = 0;						\
 }									\
 									\
 EXPORT_SYMBOL(_##op##_lock);						\
@@ -209,6 +210,7 @@ unsigned long __lockfunc _##op##_lock_ir
 			cpu_relax();					\
 		preempt_disable();					\
 	}								\
+	(lock)->break_lock = 0;						\
 	return flags;							\
 }									\
 									\
_