From: Hugh Dickins <hugh@veritas.com>

Two concurrent exits (of the last two mms sharing the anonhd): the first
exit_rmap brings anonhd->count down to 2, then gets preempted (at the
spin_unlock) by the second, which brings anonhd->count down to 1, sees
it's 1 and frees the anonhd (without making any change to anonhd->count
itself).  That cpu goes on to do something new which reallocates the old
anonhd as a new struct anonmm (probably not a head, in which case count
will start at 1).  The first exit_rmap then resumes after its spin_unlock,
sees anonhd->count 1, frees "anonhd" a second time, the memory gets used
for something else, and a later exit_rmap list_del finds the list corrupt.

Fix: sample anonhd->count while still holding anonhd->lock, and test only
that sampled value after dropping the lock.
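
For illustration only, a minimal userspace sketch of the same pattern (a
pthread mutex stands in for anonhd->lock and a plain int for
anonhd->count; struct head, head_put_unsafe and head_put_fixed are made-up
names, not from rmap.c).  The unsafe helper re-reads the count after
dropping the lock, as the old code did, so two racing callers can both see
1 and both free the head; the fixed helper samples it under the lock, as
the patch does, so only the caller whose decrement left 1 frees it.

#include <pthread.h>
#include <stdlib.h>

struct head {
	pthread_mutex_t lock;		/* stands in for anonhd->lock */
	int count;			/* stands in for anonhd->count */
};

/*
 * Buggy ordering (the old exit_rmap): the count is re-read after the
 * lock is dropped.  Between the unlock and the read, the other exiting
 * thread can drop the count to 1, free the head, and the memory can be
 * reallocated - then both threads end up freeing "the head".
 */
static void head_put_unsafe(struct head *h)
{
	pthread_mutex_lock(&h->lock);
	h->count--;
	pthread_mutex_unlock(&h->lock);
	if (h->count == 1)		/* racy read of freed/reused memory */
		free(h);
}

/*
 * Fixed ordering (the patch): sample the count while still holding the
 * lock and test only that sample afterwards.  Only the thread whose
 * decrement brought the count down to 1 (apparently the head's own
 * reference to itself) will free it.
 */
static void head_put_fixed(struct head *h)
{
	int count;

	pthread_mutex_lock(&h->lock);
	h->count--;
	count = h->count;
	pthread_mutex_unlock(&h->lock);
	if (count == 1)
		free(h);
}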


---

 25-akpm/mm/rmap.c |    4 +++-
 1 files changed, 3 insertions(+), 1 deletion(-)

diff -puN mm/rmap.c~rmap-anonhd-locking-fix mm/rmap.c
--- 25/mm/rmap.c~rmap-anonhd-locking-fix	Mon May  3 13:38:47 2004
+++ 25-akpm/mm/rmap.c	Mon May  3 13:38:47 2004
@@ -103,6 +103,7 @@ void exit_rmap(struct mm_struct *mm)
 {
 	struct anonmm *anonmm = mm->anonmm;
 	struct anonmm *anonhd = anonmm->head;
+	int anonhd_count;
 
 	mm->anonmm = NULL;
 	spin_lock(&anonhd->lock);
@@ -114,8 +115,9 @@ void exit_rmap(struct mm_struct *mm)
 		if (atomic_dec_and_test(&anonhd->count))
 			BUG();
 	}
+	anonhd_count = atomic_read(&anonhd->count);
 	spin_unlock(&anonhd->lock);
-	if (atomic_read(&anonhd->count) == 1) {
+	if (anonhd_count == 1) {
 		BUG_ON(anonhd->mm);
 		BUG_ON(!list_empty(&anonhd->list));
 		kmem_cache_free(anonmm_cachep, anonhd);

_