x86: Fix early boot crash on gcc-10, third try
commit a9a3ed1 upstream.

... or the odyssey of trying to disable the stack protector for the
function which generates the stack canary value.

The whole story started with Sergei reporting a boot crash with a kernel
built with gcc-10:

  Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: start_secondary
  CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.6.0-rc5-00235-gfffb08b37df9 #139
  Hardware name: Gigabyte Technology Co., Ltd. To be filled by O.E.M./H77M-D3H, BIOS F12 11/14/2013
  Call Trace:
    dump_stack
    panic
    ? start_secondary
    __stack_chk_fail
    start_secondary
    secondary_startup_64
  ---[ end Kernel panic - not syncing: stack-protector: Kernel stack is corrupted in: start_secondary

This happens because gcc-10 tail-call optimizes the last function call
in start_secondary() - cpu_startup_entry() - and thus emits a stack
canary check which fails because the canary value changes after the
boot_init_stack_canary() call.
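As an aside, the code-placement side of this can be reproduced outside
the kernel with a minimal sketch (hypothetical file and symbol names,
none of this is the kernel's actual code). Compiling it to assembly with
something like "gcc-10 -O2 -fstack-protector-all -S tailcall_canary.c"
should show the __stack_chk_fail check emitted before the sibling-call
jump:

  /* tailcall_canary.c - hedged sketch of the failure pattern, not kernel code */
  extern void seed_canary(void);   /* stand-in for boot_init_stack_canary() */
  extern void next_stage(void);    /* stand-in for cpu_startup_entry() */

  void start_fn(void)
  {
          seed_canary();           /* changes the canary value behind the compiler's back */

          /*
           * With tail-call optimization this becomes "jmp next_stage", and the
           * canary check is emitted before the jump; it compares the copy saved
           * at function entry against the freshly re-seeded canary, which no
           * longer matches, so the check fires even though the stack is intact.
           */
          next_stage();
  }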

To fix that, the initial attempt was to mark the one function which
generates the stack canary with:

  __attribute__((optimize("-fno-stack-protector"))) ... start_secondary(void *unused)

however, using the optimize attribute doesn't work cumulatively
as the attribute does not add to but rather replaces previously
supplied optimization options - roughly all -fxxx options.

The key one among them is -fno-omit-frame-pointer; dropping it leaves
the function without a frame pointer, which the kernel needs.
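A hedged illustration of that pitfall (an assumption-laden sketch, not
the actual kernel change, and the precise effect can vary between GCC
versions): annotate one function, build with kernel-style flags such as
"gcc-10 -O2 -fno-omit-frame-pointer -fstack-protector-all -S", and
compare the code generated for the annotated and unannotated functions:

  /* attr_optimize.c - sketch of the first, rejected approach */
  extern void callee(void);

  /*
   * The annotation removes the stack protector as intended, but because
   * the optimize attribute replaces rather than extends the option set,
   * command-line options such as -fno-omit-frame-pointer may silently
   * stop applying to this one function.
   */
  __attribute__((optimize("-fno-stack-protector")))
  void annotated(void)
  {
          callee();
  }

  void plain(void)                 /* baseline for comparison */
  {
          callee();
  }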

The next attempt to prevent compilers from tail-call optimizing the last
function call, cpu_startup_entry(), short of carving out
start_secondary() into a separate compilation unit and building it with
-fno-stack-protector, was to add an empty asm("").
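In code, that second attempt boils down to the following (again a
sketch reusing the stand-in names from above, not the actual patch):

  void start_fn(void)
  {
          seed_canary();
          next_stage();

          /*
           * An empty asm statement after the call keeps the call out of tail
           * position, so it stays a plain call; in the kernel the callee,
           * cpu_startup_entry(), never returns, so the epilogue's canary
           * check is never reached.
           */
          asm("");
  }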

That solution was short and sweet and, reportedly, supported by both
compilers, but we didn't get very far this time: future (LTO?)
optimization passes could potentially eliminate it, which leads us to
the third attempt: an actual memory barrier there which the compiler
cannot ignore or move around etc.

That should hold for a long time, but hey we said that about the other
two solutions too so...
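As the diff below shows, that barrier ends up wrapped in a
prevent_tail_call_optimization() macro which simply expands to mb().
For a rough idea of why this works, on x86-64 mb() boils down to
something like the following (a simplification of the kernel's
barrier.h, quoted from memory rather than the tree):

  /*
   * A volatile asm statement with a serializing instruction and a "memory"
   * clobber: the compiler cannot delete it, cannot move memory accesses
   * across it, and has to keep it after the preceding call - so that call
   * can no longer be turned into a tail call.
   */
  asm volatile("mfence" ::: "memory");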

Reported-by: Sergei Trofimovich <[email protected]>
Signed-off-by: Borislav Petkov <[email protected]>
Tested-by: Kalle Valo <[email protected]>
Cc: <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Signed-off-by: Greg Kroah-Hartman <[email protected]>
suryasaimadhu authored and gregkh committed May 20, 2020
1 parent f8e370c commit 91b9ce0
Showing 5 changed files with 23 additions and 1 deletion.
7 changes: 6 additions & 1 deletion arch/x86/include/asm/stackprotector.h
@@ -55,8 +55,13 @@
 /*
  * Initialize the stackprotector canary value.
  *
- * NOTE: this must only be called from functions that never return,
+ * NOTE: this must only be called from functions that never return
  * and it must always be inlined.
+ *
+ * In addition, it should be called from a compilation unit for which
+ * stack protector is disabled. Alternatively, the caller should not end
+ * with a function call which gets tail-call optimized as that would
+ * lead to checking a modified canary value.
  */
 static __always_inline void boot_init_stack_canary(void)
 {
8 changes: 8 additions & 0 deletions arch/x86/kernel/smpboot.c
@@ -262,6 +262,14 @@ static void notrace start_secondary(void *unused)
 
         wmb();
         cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+
+        /*
+         * Prevent tail call to cpu_startup_entry() because the stack protector
+         * guard has been changed a couple of function calls up, in
+         * boot_init_stack_canary() and must not be checked before tail calling
+         * another function.
+         */
+        prevent_tail_call_optimization();
 }
 
 /**
1 change: 1 addition & 0 deletions arch/x86/xen/smp_pv.c
@@ -92,6 +92,7 @@ asmlinkage __visible void cpu_bringup_and_idle(void)
         cpu_bringup();
         boot_init_stack_canary();
         cpu_startup_entry(CPUHP_AP_ONLINE_IDLE);
+        prevent_tail_call_optimization();
 }
 
 void xen_smp_intr_free_pv(unsigned int cpu)
6 changes: 6 additions & 0 deletions include/linux/compiler.h
@@ -356,4 +356,10 @@ static inline void *offset_to_ptr(const int *off)
 /* &a[0] degrades to a pointer: a different type from an array */
 #define __must_be_array(a) BUILD_BUG_ON_ZERO(__same_type((a), &(a)[0]))
 
+/*
+ * This is needed in functions which generate the stack canary, see
+ * arch/x86/kernel/smpboot.c::start_secondary() for an example.
+ */
+#define prevent_tail_call_optimization() mb()
+
 #endif /* __LINUX_COMPILER_H */
2 changes: 2 additions & 0 deletions init/main.c
@@ -782,6 +782,8 @@ asmlinkage __visible void __init start_kernel(void)
 
         /* Do the rest non-__init'ed, we're now alive */
         arch_call_rest_init();
+
+        prevent_tail_call_optimization();
 }
 
 /* Call all constructor functions linked into the kernel. */
