From: Ingo Molnar <mingo@elte.hu>

the attached patch fixes the SMP balancing problem reported by David 
Mosberger. (the 'yield-ing threads do not get spread out properly' bug).

it turns out that we never really spread out tasks in the busy-rebalance
case - contrary to my intention. The most likely incarnation of this
balancing bug is via yield() - but in theory pipe users can be affected
too.

the patch balances more aggressively, at a slow frequency, on SMP (or
within the same node on NUMA - not between nodes on NUMA).
 kernel/sched.c |    3 +--
 1 files changed, 1 insertion(+), 2 deletions(-)

diff -puN kernel/sched.c~sched-balance-fix-2.6.0-test3-mm3-A0 kernel/sched.c
--- 25/kernel/sched.c~sched-balance-fix-2.6.0-test3-mm3-A0	2003-08-23 13:57:06.000000000 -0700
+++ 25-akpm/kernel/sched.c	2003-08-23 13:57:06.000000000 -0700
@@ -1144,7 +1144,6 @@ static void rebalance_tick(runqueue_t *t
 			load_balance(this_rq, idle, cpu_to_node_mask(this_cpu));
 			spin_unlock(&this_rq->lock);
 		}
-		return;
 	}
 #ifdef CONFIG_NUMA
 	if (!(j % BUSY_NODE_REBALANCE_TICK))
@@ -1152,7 +1151,7 @@ static void rebalance_tick(runqueue_t *t
 #endif
 	if (!(j % BUSY_REBALANCE_TICK)) {
 		spin_lock(&this_rq->lock);
-		load_balance(this_rq, idle, cpu_to_node_mask(this_cpu));
+		load_balance(this_rq, 0, cpu_to_node_mask(this_cpu));
 		spin_unlock(&this_rq->lock);
 	}
 }
