We're currently just setting the referenced bit when modifying pagecache in
write().

Consequently, overwritten (and redirtied) pages remain on the inactive
list.  The net result is that a lot of dirty pages reach the tail of the
LRU and get written out via ->writepage() in page reclaim.

But a core design objective is to minimise the amount of IO via that path
and to maximise the amount of IO via balance_dirty_pages(), because the
latter has better IO patterns.
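
Switching to mark_page_accessed() addresses this because it does more than
set the referenced bit: a second touch promotes the page from the inactive
list to the active list, so repeatedly-overwritten pages stop drifting to
the inactive tail.  A rough sketch of that logic (a simplification of the
mm/swap.c helper of this era, not the literal kernel code; locking is
omitted):

	void mark_page_accessed(struct page *page)
	{
		if (!PageActive(page) && PageReferenced(page) && PageLRU(page)) {
			/*
			 * Second touch of an inactive page: move it to the
			 * active list so reclaim stops finding it (dirty) at
			 * the tail of the inactive LRU.
			 */
			activate_page(page);
			ClearPageReferenced(page);
		} else if (!PageReferenced(page)) {
			/* First touch: just remember that it was referenced. */
			SetPageReferenced(page);
		}
	}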

This may explain the bad IO patterns which Gerrit talked about at KS.


 mm/filemap.c |    4 +---
 1 files changed, 1 insertion(+), 3 deletions(-)

diff -puN mm/filemap.c~write-mark_page_accessed mm/filemap.c
--- 25/mm/filemap.c~write-mark_page_accessed	2003-07-26 21:51:09.000000000 -0700
+++ 25-akpm/mm/filemap.c	2003-07-26 21:53:01.000000000 -0700
@@ -1863,10 +1863,8 @@ generic_file_aio_write_nolock(struct kio
 		if (unlikely(copied != bytes))
 			if (status >= 0)
 				status = -EFAULT;
-
-		if (!PageReferenced(page))
-			SetPageReferenced(page);
 		unlock_page(page);
+		mark_page_accessed(page);
 		page_cache_release(page);
 		if (status < 0)
 			break;

_