
Commit 5db4f15

yang-shi authored and torvalds committed
mm: memory: add orig_pmd to struct vm_fault
Patch series "mm: thp: use generic THP migration for NUMA hinting fault", v3.

When the THP NUMA fault support was added, THP migration was not supported yet, so ad hoc THP migration was implemented in the NUMA fault handling. Since v4.14 THP migration has been supported, so it doesn't make much sense to keep another THP migration implementation rather than using the generic migration code. Keeping two THP migration implementations for different code paths is a maintenance burden and more error prone. Using the generic THP migration implementation allows us to remove the duplicate code and some hacks needed by the old ad hoc implementation.

A quick grep shows that x86_64, PowerPC (book3s), ARM64 and S390 support both THP and NUMA balancing. Most of them support THP migration except for S390. Zi Yan tried to add THP migration support for S390 before, but it was not accepted due to the design of the S390 PMD. For the discussion, please see: https://lkml.org/lkml/2018/4/27/953. Per the discussion with Gerald Schaefer in v1, it is acceptable to skip huge PMDs for S390 for now.

I saw there were some hacks about gup in the git history, but I didn't figure out whether they have been removed or not, since I just found the FOLL_NUMA code in the current gup implementation and it seems useful.

Patch #1 ~ #2 are preparation patches. Patch #3 is the real meat. Patch #4 ~ #6 keep counters and behavior consistent with before. Patch #7 skips changing huge PMDs to prot_none if THP migration is not supported.

Test
----
Did some tests to measure the latency of do_huge_pmd_numa_page. The test VM has 80 vcpus and 64G memory. The test creates 2 processes that together consume 128G of memory, which incurs memory pressure and causes THP splits. It also creates 80 processes to hog CPU, and the memory consumer processes are bound to different nodes periodically in order to increase NUMA faults. The below test script is used:

echo 3 > /proc/sys/vm/drop_caches

# Run stress-ng for 24 hours
./stress-ng/stress-ng --vm 2 --vm-bytes 64G --timeout 24h &
PID=$!

./stress-ng/stress-ng --cpu $NR_CPUS --timeout 24h &

# Wait for vm stressors forked
sleep 5

PID_1=`pgrep -P $PID | awk 'NR == 1'`
PID_2=`pgrep -P $PID | awk 'NR == 2'`

JOB1=`pgrep -P $PID_1`
JOB2=`pgrep -P $PID_2`

# Bind load jobs to different nodes periodically to force generate
# cross node memory access
while [ -d "/proc/$PID" ]
do
        taskset -apc 8 $JOB1
        taskset -apc 8 $JOB2
        sleep 300
        taskset -apc 58 $JOB1
        taskset -apc 58 $JOB2
        sleep 300
done

With the above test, the histogram of latency of do_huge_pmd_numa_page is as shown below. Since the number of do_huge_pmd_numa_page calls varies drastically for each run (likely due to the scheduler), I converted the raw numbers to percentages.

                   patched      base
@us[stress-ng]:
[0]                  3.57%     0.16%
[1]                 55.68%    18.36%
[2, 4)              10.46%    40.44%
[4, 8)               7.26%    17.82%
[8, 16)             21.12%    13.41%
[16, 32)             1.06%     4.27%
[32, 64)             0.56%     4.07%
[64, 128)            0.16%     0.35%
[128, 256)          < 0.1%    < 0.1%
[256, 512)          < 0.1%    < 0.1%
[512, 1K)           < 0.1%    < 0.1%
[1K, 2K)            < 0.1%    < 0.1%
[2K, 4K)            < 0.1%    < 0.1%
[4K, 8K)            < 0.1%    < 0.1%
[8K, 16K)           < 0.1%    < 0.1%
[16K, 32K)          < 0.1%    < 0.1%
[32K, 64K)          < 0.1%    < 0.1%

Per the result, the patched kernel is even slightly better than the base kernel. I think this is because the lock contention against THP split is lower than in the base kernel due to the refactor.

To exclude the effect of THP splits, I also tested without memory pressure. No obvious regression was spotted. The below is the test result *without* memory pressure:

                   patched      base
@us[stress-ng]:
[0]                  7.97%    18.4%
[1]                 69.63%    58.24%
[2, 4)               4.18%     2.63%
[4, 8)               0.22%     0.17%
[8, 16)              1.03%     0.92%
[16, 32)             0.14%    < 0.1%
[32, 64)            < 0.1%    < 0.1%
[64, 128)           < 0.1%    < 0.1%
[128, 256)          < 0.1%    < 0.1%
[256, 512)           0.45%     1.19%
[512, 1K)           15.45%    17.27%
[1K, 2K)            < 0.1%    < 0.1%
[2K, 4K)            < 0.1%    < 0.1%
[4K, 8K)            < 0.1%    < 0.1%
[8K, 16K)            0.86%     0.88%
[16K, 32K)          < 0.1%     0.15%
[32K, 64K)          < 0.1%    < 0.1%
[64K, 128K)         < 0.1%    < 0.1%
[128K, 256K)        < 0.1%    < 0.1%

The series also survived a set of tests by Mel that exercise NUMA balancing migrations.

This patch (of 7):

Add orig_pmd to struct vm_fault so that the "orig_pmd" parameter used by the huge page fault handlers can be removed, just like its PTE counterpart.

Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Yang Shi <[email protected]>
Acked-by: Mel Gorman <[email protected]>
Cc: Kirill A. Shutemov <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Huang Ying <[email protected]>
Cc: Michal Hocko <[email protected]>
Cc: Hugh Dickins <[email protected]>
Cc: Gerald Schaefer <[email protected]>
Cc: Heiko Carstens <[email protected]>
Cc: Vasily Gorbik <[email protected]>
Cc: Christian Borntraeger <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
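For orientation, the calling-convention change made by the hunks below can be sketched outside the kernel as follows. This is a minimal, self-contained illustration with simplified, hypothetical types and function names (handle_huge_pmd_old/new are not kernel functions): the PMD value is snapshotted once into the fault structure instead of being threaded through every huge-PMD handler as an extra argument.

#include <stdio.h>

/* Hypothetical stand-in for the kernel's pmd_t; for illustration only. */
typedef struct { unsigned long val; } pmd_t;

/* Simplified stand-in for struct vm_fault with the new field. */
struct vm_fault {
        pmd_t *pmd;      /* pointer to the PMD entry for the faulting address */
        pmd_t orig_pmd;  /* snapshot of *pmd taken at fault time (the new field) */
};

/* Old style: the snapshot travels as a separate parameter. */
static void handle_huge_pmd_old(struct vm_fault *vmf, pmd_t orig_pmd)
{
        (void)vmf;
        printf("old: orig_pmd = %#lx\n", orig_pmd.val);
}

/* New style: the handler reads the snapshot from the fault structure. */
static void handle_huge_pmd_new(struct vm_fault *vmf)
{
        pmd_t orig_pmd = vmf->orig_pmd;

        printf("new: orig_pmd = %#lx\n", orig_pmd.val);
}

int main(void)
{
        pmd_t entry = { .val = 0x42 };
        struct vm_fault vmf = { .pmd = &entry };

        /* Before the patch the caller read *vmf.pmd and passed it along ... */
        handle_huge_pmd_old(&vmf, *vmf.pmd);

        /* ... after the patch it stores the snapshot once and callees pick it up. */
        vmf.orig_pmd = *vmf.pmd;
        handle_huge_pmd_new(&vmf);
        return 0;
}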
1 parent eb6ecbe commit 5db4f15

4 files changed, 29 insertions(+), 22 deletions(-)


include/linux/huge_mm.h

Lines changed: 4 additions & 5 deletions
@@ -11,7 +11,7 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf);
 int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
                  pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long addr,
                  struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma);
-void huge_pmd_set_accessed(struct vm_fault *vmf, pmd_t orig_pmd);
+void huge_pmd_set_accessed(struct vm_fault *vmf);
 int copy_huge_pud(struct mm_struct *dst_mm, struct mm_struct *src_mm,
                  pud_t *dst_pud, pud_t *src_pud, unsigned long addr,
                  struct vm_area_struct *vma);
@@ -24,7 +24,7 @@ static inline void huge_pud_set_accessed(struct vm_fault *vmf, pud_t orig_pud)
 }
 #endif

-vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd);
+vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf);
 struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
                                   unsigned long addr, pmd_t *pmd,
                                   unsigned int flags);
@@ -288,7 +288,7 @@ struct page *follow_devmap_pmd(struct vm_area_struct *vma, unsigned long addr,
 struct page *follow_devmap_pud(struct vm_area_struct *vma, unsigned long addr,
                pud_t *pud, int flags, struct dev_pagemap **pgmap);

-vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t orig_pmd);
+vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf);

 extern struct page *huge_zero_page;
 extern unsigned long huge_zero_pfn;
@@ -441,8 +441,7 @@ static inline spinlock_t *pud_trans_huge_lock(pud_t *pud,
        return NULL;
 }

-static inline vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf,
-                                               pmd_t orig_pmd)
+static inline vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 {
        return 0;
 }

include/linux/mm.h

Lines changed: 6 additions & 1 deletion
@@ -550,7 +550,12 @@ struct vm_fault {
        pud_t *pud;                     /* Pointer to pud entry matching
                                         * the 'address'
                                         */
-       pte_t orig_pte;                  /* Value of PTE at the time of fault */
+       union {
+               pte_t orig_pte;          /* Value of PTE at the time of fault */
+               pmd_t orig_pmd;          /* Value of PMD at the time of fault,
+                                         * used by PMD fault only.
+                                         */
+       };

        struct page *cow_page;          /* Page handler may use for COW fault */
        struct page *page;              /* ->fault handlers should return a
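A note on the hunk above: orig_pmd is placed in an anonymous union with the existing orig_pte, so the structure does not grow as long as pmd_t is no larger than pte_t. A minimal standalone sketch of that layout (simplified, hypothetical types; vm_fault_sketch is not a kernel definition):

#include <stdio.h>

/* Hypothetical stand-ins for pte_t/pmd_t; sizes are illustrative only. */
typedef struct { unsigned long val; } pte_t;
typedef struct { unsigned long val; } pmd_t;

/* Sketch of the relevant part of struct vm_fault after this patch. */
struct vm_fault_sketch {
        union {
                pte_t orig_pte;  /* value of the PTE at the time of fault */
                pmd_t orig_pmd;  /* value of the PMD at the time of fault,
                                  * used by PMD faults only */
        };
};

int main(void)
{
        /* The union overlays the two snapshots, so with same-sized types
         * the struct stays the size of a single entry. */
        printf("sizeof = %zu\n", sizeof(struct vm_fault_sketch));
        return 0;
}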

mm/huge_memory.c

Lines changed: 6 additions & 3 deletions
@@ -1257,11 +1257,12 @@ void huge_pud_set_accessed(struct vm_fault *vmf, pud_t orig_pud)
 }
 #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */

-void huge_pmd_set_accessed(struct vm_fault *vmf, pmd_t orig_pmd)
+void huge_pmd_set_accessed(struct vm_fault *vmf)
 {
        pmd_t entry;
        unsigned long haddr;
        bool write = vmf->flags & FAULT_FLAG_WRITE;
+       pmd_t orig_pmd = vmf->orig_pmd;

        vmf->ptl = pmd_lock(vmf->vma->vm_mm, vmf->pmd);
        if (unlikely(!pmd_same(*vmf->pmd, orig_pmd)))
@@ -1278,11 +1279,12 @@ void huge_pmd_set_accessed(struct vm_fault *vmf, pmd_t orig_pmd)
        spin_unlock(vmf->ptl);
 }

-vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf, pmd_t orig_pmd)
+vm_fault_t do_huge_pmd_wp_page(struct vm_fault *vmf)
 {
        struct vm_area_struct *vma = vmf->vma;
        struct page *page;
        unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
+       pmd_t orig_pmd = vmf->orig_pmd;

        vmf->ptl = pmd_lockptr(vma->vm_mm, vmf->pmd);
        VM_BUG_ON_VMA(!vma->anon_vma, vma);
@@ -1418,9 +1420,10 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 }

 /* NUMA hinting page fault entry point for trans huge pmds */
-vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf, pmd_t pmd)
+vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 {
        struct vm_area_struct *vma = vmf->vma;
+       pmd_t pmd = vmf->orig_pmd;
        struct anon_vma *anon_vma = NULL;
        struct page *page;
        unsigned long haddr = vmf->address & HPAGE_PMD_MASK;

mm/memory.c

Lines changed: 13 additions & 13 deletions
@@ -4298,12 +4298,12 @@ static inline vm_fault_t create_huge_pmd(struct vm_fault *vmf)
 }

 /* `inline' is required to avoid gcc 4.1.2 build error */
-static inline vm_fault_t wp_huge_pmd(struct vm_fault *vmf, pmd_t orig_pmd)
+static inline vm_fault_t wp_huge_pmd(struct vm_fault *vmf)
 {
        if (vma_is_anonymous(vmf->vma)) {
-               if (userfaultfd_huge_pmd_wp(vmf->vma, orig_pmd))
+               if (userfaultfd_huge_pmd_wp(vmf->vma, vmf->orig_pmd))
                        return handle_userfault(vmf, VM_UFFD_WP);
-               return do_huge_pmd_wp_page(vmf, orig_pmd);
+               return do_huge_pmd_wp_page(vmf);
        }
        if (vmf->vma->vm_ops->huge_fault) {
                vm_fault_t ret = vmf->vma->vm_ops->huge_fault(vmf, PE_SIZE_PMD);
@@ -4530,26 +4530,26 @@ static vm_fault_t __handle_mm_fault(struct vm_area_struct *vma,
                if (!(ret & VM_FAULT_FALLBACK))
                        return ret;
        } else {
-               pmd_t orig_pmd = *vmf.pmd;
+               vmf.orig_pmd = *vmf.pmd;

                barrier();
-               if (unlikely(is_swap_pmd(orig_pmd))) {
+               if (unlikely(is_swap_pmd(vmf.orig_pmd))) {
                        VM_BUG_ON(thp_migration_supported() &&
-                                 !is_pmd_migration_entry(orig_pmd));
-                       if (is_pmd_migration_entry(orig_pmd))
+                                 !is_pmd_migration_entry(vmf.orig_pmd));
+                       if (is_pmd_migration_entry(vmf.orig_pmd))
                                pmd_migration_entry_wait(mm, vmf.pmd);
                        return 0;
                }
-               if (pmd_trans_huge(orig_pmd) || pmd_devmap(orig_pmd)) {
-                       if (pmd_protnone(orig_pmd) && vma_is_accessible(vma))
-                               return do_huge_pmd_numa_page(&vmf, orig_pmd);
+               if (pmd_trans_huge(vmf.orig_pmd) || pmd_devmap(vmf.orig_pmd)) {
+                       if (pmd_protnone(vmf.orig_pmd) && vma_is_accessible(vma))
+                               return do_huge_pmd_numa_page(&vmf);

-                       if (dirty && !pmd_write(orig_pmd)) {
-                               ret = wp_huge_pmd(&vmf, orig_pmd);
+                       if (dirty && !pmd_write(vmf.orig_pmd)) {
+                               ret = wp_huge_pmd(&vmf);
                                if (!(ret & VM_FAULT_FALLBACK))
                                        return ret;
                        } else {
-                               huge_pmd_set_accessed(&vmf, orig_pmd);
+                               huge_pmd_set_accessed(&vmf);
                                return 0;
                        }
                }
