mm: do not reclaim pages marked lazyfree by MADV_FREE
MADV_FREE clears the pte dirty bit and then marks the page lazyfree by
clearing PageSwapBacked. Per-process reclaim (PPR) isolates the page and
increments NR_ISOLATED_FILE (a lazyfree page counts as file cache), then
invokes reclaim. If userspace touches the lazyfreed page in between, it
becomes dirty again. shrink_page_list(), via try_to_unmap(), finds the page
dirty, marks it back as PageSwapBacked and skips reclaiming it. With
PageSwapBacked set again, the page is counted as anon on putback and
NR_ISOLATED_ANON is decremented instead, creating an isolated count
mismatch.
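For context, the redirty handling that flips PageSwapBacked back lives in
try_to_unmap_one(); a simplified sketch of that logic (paraphrased from the
mainline v4.12-era code, not the verbatim source):

	/*
	 * Simplified sketch of the lazyfree redirty handling in
	 * try_to_unmap_one() (paraphrased, not verbatim kernel source).
	 */
	if (PageAnon(page) && !PageSwapBacked(page)) {
		if (!PageDirty(page)) {
			/* Still clean: the MADV_FREE'd page can be discarded. */
			dec_mm_counter(mm, MM_ANONPAGES);
			goto discard;
		}
		/*
		 * Redirtied after MADV_FREE: cannot be discarded. Restore
		 * the pte, make the page swap backed again, and fail the
		 * unmap so shrink_page_list() skips reclaiming it.
		 */
		set_pte_at(mm, address, pvmw.pte, pteval);
		SetPageSwapBacked(page);
		ret = false;
	}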
The stale NR_ISOLATED_FILE count then makes the too_many_isolated() check
trip spuriously and delays reclaim. Fix this by skipping lazyfreed pages in
the PPR path.
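For reference, too_many_isolated() throttles reclaimers by comparing the
isolated counts against the inactive LRU sizes, so a leaked
NR_ISOLATED_FILE count keeps them waiting; simplified from
too_many_isolated() in mm/vmscan.c (the gfp-based adjustment of the
inactive count is omitted here):

	/* Simplified from too_many_isolated() in mm/vmscan.c. */
	if (file) {
		inactive = node_page_state(pgdat, NR_INACTIVE_FILE);
		isolated = node_page_state(pgdat, NR_ISOLATED_FILE);
	} else {
		inactive = node_page_state(pgdat, NR_INACTIVE_ANON);
		isolated = node_page_state(pgdat, NR_ISOLATED_ANON);
	}
	return isolated > inactive;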
Change-Id: I87223c4fa492c5e373ac48f116384b5de03da9fa
Signed-off-by: Prakash Gupta <guptap@codeaurora.org>
Signed-off-by: Vinayak Menon <vinmenon@codeaurora.org>
diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
index 48a5934..6d41c70 100644
--- a/fs/proc/task_mmu.c
+++ b/fs/proc/task_mmu.c
@@ -1665,6 +1665,19 @@ static int reclaim_pte_range(pmd_t *pmd, unsigned long addr,
if (isolate_lru_page(page))
continue;
+		/*
+		 * MADV_FREE clears the pte dirty bit and then marks the page
+		 * lazyfree (clears PageSwapBacked). If userspace touches the
+		 * page in between, it becomes dirty again. shrink_page_list()
+		 * then finds the page dirty in try_to_unmap(), marks it back
+		 * as PageSwapBacked and skips reclaim, which would leave the
+		 * isolated counts mismatched. Skip such pages here.
+		 */
+ if (PageAnon(page) && !PageSwapBacked(page)) {
+ putback_lru_page(page);
+ continue;
+ }
+
list_add(&page->lru, &page_list);
inc_node_page_state(page, NR_ISOLATED_ANON +
page_is_file_cache(page));
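For completeness, the userspace pattern that produces such a redirtied
lazyfree page is just MADV_FREE followed by a store; a minimal sketch
(assumes Linux 4.5+ and a libc that defines MADV_FREE):

	#include <string.h>
	#include <sys/mman.h>

	int main(void)
	{
		size_t len = 4096;
		char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

		if (buf == MAP_FAILED)
			return 1;
		memset(buf, 0xaa, len);		/* fault in and dirty the page */
		madvise(buf, len, MADV_FREE);	/* pte dirty bit cleared, page lazyfree */
		buf[0] = 1;			/* redirty: page can no longer be discarded */
		return 0;
	}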