From: Nick Piggin <nickpiggin@yahoo.com.au>

The vm-shrink-zone.patch has a failure case: if the inactive list becomes
very small relative to the active list, active list scanning (and therefore
inactive list refilling) becomes correspondingly small.
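
To put rough numbers on it (the values are hypothetical, purely for
illustration): with nr_active = 100000, nr_inactive = 128 and priority 2,
the old sizing gives max_scan = 128 >> 2 = 32, so each pass over the zone
scans at most 32 inactive pages, and the active list scanning derived from
that is similarly tiny.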

The reason is that vm-shrink-zone.patch keys active list scanning off
inactive list scanning, which in turn is keyed off the size of the
inactive list.  This patch keys inactive list scanning off the combined
size of the active and inactive lists.  It has the advantage of hiding the
active/inactive balancing implementation from the higher-level scanning
code.  It will slightly change other aspects of scanning behaviour, but
probably not significantly.
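
As a minimal standalone sketch (plain userspace C, not kernel code; the
names mirror the zone fields and the numbers are made up), the change in
scan sizing looks like this:

	#include <stdio.h>

	int main(void)
	{
		/* hypothetical zone with a badly skewed list ratio */
		unsigned long nr_active = 100000, nr_inactive = 128;
		int priority = 2;	/* lower priority = heavier scanning */

		/* old: keyed off the inactive list alone */
		unsigned long old_scan = nr_inactive >> priority;

		/* new: keyed off both lists, so scan pressure tracks
		 * the total size of the zone's LRU pages */
		unsigned long new_scan = (nr_active + nr_inactive) >> priority;

		/* prints: old max_scan = 32, new max_scan = 25032 */
		printf("old max_scan = %lu, new max_scan = %lu\n",
		       old_scan, new_scan);
		return 0;
	}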


---

 25-akpm/mm/vmscan.c |    5 +++--
 1 files changed, 3 insertions(+), 2 deletions(-)

diff -puN mm/vmscan.c~vm-shrink-zone-fix mm/vmscan.c
--- 25/mm/vmscan.c~vm-shrink-zone-fix	2004-05-18 23:33:12.331323368 -0700
+++ 25-akpm/mm/vmscan.c	2004-05-18 23:33:12.335322760 -0700
@@ -818,7 +818,7 @@ shrink_caches(struct zone **zones, int p
 		if (zone->all_unreclaimable && priority != DEF_PRIORITY)
 			continue;	/* Let kswapd poll it */
 
-		max_scan = zone->nr_inactive >> priority;
+		max_scan = (zone->nr_active + zone->nr_inactive) >> priority;
 		ret += shrink_zone(zone, max_scan, gfp_mask,
 					total_scanned, ps, do_writepage);
 	}
@@ -994,7 +994,8 @@ scan:
 					all_zones_ok = 0;
 			}
 			zone->temp_priority = priority;
-			max_scan = zone->nr_inactive >> priority;
+			max_scan = (zone->nr_active + zone->nr_inactive)
+								>> priority;
 			reclaimed = shrink_zone(zone, max_scan, GFP_KERNEL,
 					&scanned, ps, do_writepage);
 			total_scanned += scanned;

_