path: root/arch/powerpc/kernel/entry_32.S
Age  Commit message  Author
2013-06-01  powerpc/32bit: Store temporary result in r0 instead of r8  (Priyanka Jain)
Commit a9c4e541ea9b22944da356f2a9258b4eddcc953b "powerpc/kprobe: Complete kprobe and migrate exception frame" introduced a regression: when returning from exception handling with PREEMPT enabled, the _TIF_NEED_RESCHED bit is checked in TI_FLAGS (the thread_info flags) of the current task, and only if this bit is set should we go on to call preempt_schedule_irq() to schedule the highest-priority runnable task. The current code assumes that r8 contains TI_FLAGS and checks it for _TIF_NEED_RESCHED, but r8 is modified by code that runs before this check, so it no longer holds the expected TI_FLAGS value. As a result, the comparison against _TIF_NEED_RESCHED failed even when the NEED_RESCHED bit was set in the current thread_info flags, so preempt_schedule_irq(), and in turn the scheduler, was not called even when a higher-priority task was ready to run. So, store temporary results in r0 instead of r8, to prevent r8 from being modified, since the subsequent code depends on its value. Signed-off-by: Priyanka Jain <Priyanka.Jain@freescale.com> CC: <stable@vger.kernel.org> [v3.7+] Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
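A minimal, hypothetical sketch of the pattern this fix describes (register roles and the local label are illustrative, not the exact entry_32.S code): keep TI_FLAGS in a register the later code does not rely on, and do the test through r0 so r8 stays intact.

	CURRENT_THREAD_INFO(r9, r1)	/* r9 = thread_info of the current task */
	lwz	r9,TI_FLAGS(r9)		/* load TI_FLAGS once */
	andi.	r0,r9,_TIF_NEED_RESCHED	/* test via r0; r8 is left untouched */
	beq	1f			/* nothing to reschedule */
	bl	preempt_schedule_irq	/* let the scheduler pick the higher-priority task */
1: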
2013-05-14  powerpc: Fix MAX_STACK_TRACE_ENTRIES too low warning again  (Li Zhong)
Saw this warning again, this time from the ret_from_fork path. It seems we could clear the back chain earlier, in copy_thread(), which would cover both paths and also fix potential lockdep usage in schedule_tail(), or in an exception occurring before we clear the back chain. Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2013-01-29  powerpc: Fix MAX_STACK_TRACE_ENTRIES too low warning for ppc32  (Li Zhong)
This patch fixes the MAX_STACK_TRACE_ENTRIES too low warning for ppc32, similar to commit 12660b17. Reported-by: Christian Kujau <lists@nerdbynature.de> Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com> Tested-by: Christian Kujau <lists@nerdbynature.de> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-10-14  powerpc: switch to saner kernel_execve() semantics  (Al Viro)
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2012-10-12  Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/signal  (Linus Torvalds)
Pull pile 2 of execve and kernel_thread unification work from Al Viro: "Stuff in there: kernel_thread/kernel_execve/sys_execve conversions for several more architectures plus assorted signal fixes and cleanups. There'll be more (in particular, real fixes for the alpha do_notify_resume() irq mess)..."
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/signal: (43 commits)
  alpha: don't open-code trace_report_syscall_{enter,exit}
  Uninclude linux/freezer.h
  m32r: trim masks
  avr32: trim masks
  tile: don't bother with SIGTRAP in setup_frame
  microblaze: don't bother with SIGTRAP in setup_rt_frame()
  mn10300: don't bother with SIGTRAP in setup_frame()
  frv: no need to raise SIGTRAP in setup_frame()
  x86: get rid of duplicate code in case of CONFIG_VM86
  unicore32: remove pointless test
  h8300: trim _TIF_WORK_MASK
  parisc: decide whether to go to slow path (tracesys) based on thread flags
  parisc: don't bother looping in do_signal()
  parisc: fix double restarts
  bury the rest of TIF_IRET
  sanitize tsk_is_polling()
  bury _TIF_RESTORE_SIGMASK
  unicore32: unobfuscate _TIF_WORK_MASK
  mips: NOTIFY_RESUME is not needed in TIF masks
  mips: merge the identical "return from syscall" per-ABI code
  ...
Conflicts:
  arch/arm/include/asm/thread_info.h
2012-09-30  powerpc: switch to generic sys_execve()/kernel_execve()  (Al Viro)
the only non-obvious part is that current_pt_regs() is really needed here - task_pt_regs() is NULL for kernel threads; it's OK for ptrace uses (the thing task_pt_regs() is intended for), but not for us. Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2012-09-30  powerpc: split ret_from_fork  (Al Viro)
... and get rid of in-kernel syscalls in kernel_thread() Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
2012-09-18  powerpc/kprobe: Complete kprobe and migrate exception frame  (Tiejun Chen)
We can't emulate stwu, since that may corrupt the current exception stack, so we have to do the real store operation in the exception return code. First we allocate a trampoline exception frame below the kprobed function's stack and copy the current exception frame to the trampoline. Then we can do the real store operation to implement 'stwu' and reroute r1 to the trampoline frame to complete this exception migration. Signed-off-by: Tiejun Chen <tiejun.chen@windriver.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
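A hypothetical sketch of the migration idea above, not the actual entry_32.S code (register roles, the copy loop and the INT_FRAME_SIZE usage are assumptions; the deferred store implementing the stwu itself is elided):

	/* assume r1 = current exception frame, r8 = trampoline frame
	 * allocated below the kprobed function's stack */
	li	r0,INT_FRAME_SIZE/4	/* number of words to copy */
	mtctr	r0
	mr	r9,r1			/* copy source: current exception frame */
	mr	r10,r8			/* copy destination: trampoline frame */
1:	lwz	r0,0(r9)		/* copy the frame word by word */
	stw	r0,0(r10)
	addi	r9,r9,4
	addi	r10,r10,4
	bdnz	1b
	mr	r1,r8			/* reroute r1 to the trampoline frame */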
2012-07-27  powerpc: Set stack limit properly in crit_transfer_to_handler  (Stuart Yoder)
Commit 9778b696a0188ad3b3524b383953ee73b31b7b68 incorrectly changed the code that sets the stack limit on entry to the kernel, so that it no longer marks the thread_info at the bottom of the stack as out of bounds. This fixes it. Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-07-11  powerpc: Fixup oddity in entry_32.S  (Benjamin Herrenschmidt)
When I "fixed" the CONFIG_TRACE_IRQFLAGS case on interrupt entry, I screwed up a little bit with the test for user space vs. kernel. The code is fine; there's just some dead code around it. I basically removed the test and now always create the added stack frame whether we are coming from user or kernel space, since in either case we need to save a bunch of volatile registers or bad things would happen (we can take page faults in the kernel, for example). Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-07-11  powerpc: Use CURRENT_THREAD_INFO instead of open coded assembly  (Stuart Yoder)
Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
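An illustrative before/after for the change above; the open-coded form shown is the usual 32-bit idiom (mask the stack pointer down to the thread_info at the base of the stack), and the register choices are arbitrary.

	rlwinm	r9,r1,0,0,31-THREAD_SHIFT	/* before: open-coded mask of r1 */
	CURRENT_THREAD_INFO(r9, r1)		/* after: the same thing via the macro */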
2012-04-10  powerpc: Fix page fault with lockdep regression  (Benjamin Herrenschmidt)
Commit a546498f3bf9aac311c66f965186373aee2ca0b0 introduced a regression on 32-bit when irq tracing is enabled, by exposing an old bug in our irq tracing code for exception entry. The code would save and restore some GPRs around the calls to the C lockdep code; however, it tries to be too smart for its own good and restores some of the GPRs from the exception frame (as saved there on exception entry). However, for page faults we replace those GPRs with arguments to do_page_fault before we call transfer_to_handler, so restoring from the exception frame is plain wrong in this case. This was fine as long as we didn't touch the interrupt state when taking a page fault, but when I started doing so, it triggered the lockdep calls and the bug. This fixes it by cleaning up that code a bit. It did create a small stack frame for the sake of backtraces, so let's make it a bit bigger and use it to save and restore the stuff we care about. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2012-02-22  powerpc: Fix various issues with return to userspace  (Benjamin Herrenschmidt)
We have a few problems when returning to userspace. This is a quick set of fixes for 3.3; I'll look into a more comprehensive rework for 3.4. This fixes:
- We kept interrupts soft-disabled when schedule'ing or calling do_signal when returning to userspace as a result of a hardware interrupt.
- Rename do_signal to do_notify_resume like all other archs (and do_signal_pending back to do_signal, which it was before Roland changed it).
- Add the missing call to key_replace_session_keyring() to do_notify_resume().
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2011-11-16  powerpc/trace: Add a dummy stack frame for trace_hardirqs_off  (Kevin Hao)
trace_hardirqs_off uses CALLER_ADDR0 and CALLER_ADDR1. If an exception occurs in user mode, there is only one stack frame on the stack, and accessing CALLER_ADDR1 will cause the following call trace. So we create a dummy stack frame to make trace_hardirqs_off happy.
WARNING: at kernel/smp.c:459
Modules linked in:
NIP: c0093280 LR: c00930a0 CTR: c0010780
REGS: edb87ae0 TRAP: 0700  Not tainted (3.1.0)
MSR: 00021002 <ME,CE>  CR: 28002888  XER: 00000000
TASK = edce2ac0[17658] 'mthread-lock-on' THREAD: edb86000 CPU: 5
GPR00: 00000001 edb87b90 edce2ac0 00000005 c0019594 edb87bd8 00000001 00000fe3
GPR08: 00041000 c084138c 4e20120d edb87b90 48002888 1001aa7c 00000000 00000000
GPR16: 48830000 10012a8c 00000000 10000af4 00000001 c0810000 00000000 00000000
GPR24: ee9aa920 c0816a18 00000000 00000005 c0019594 edb87bd8 ee20178c edb87b90
NIP [c0093280] smp_call_function_many+0x214/0x2b4
LR [c00930a0] smp_call_function_many+0x34/0x2b4
Call Trace:
[edb87b90] [c00930a0] smp_call_function_many+0x34/0x2b4 (unreliable)
[edb87bd0] [c00194ec] __flush_tlb_page+0xac/0x100
[edb87c00] [c001957c] flush_tlb_page+0x3c/0x54
[edb87c10] [c00180ac] ptep_set_access_flags+0x74/0x12c
[edb87c40] [c0128068] handle_pte_fault+0x2f0/0x9ac
[edb87cb0] [c0128c3c] handle_mm_fault+0x104/0x1dc
[edb87ce0] [c05f40f4] do_page_fault+0x2dc/0x630
[edb87e50] [c001078c] handle_page_fault+0xc/0x80
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
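A minimal sketch of the dummy-frame idea above; the frame size, the scratch slot used to preserve LR, and the surrounding context are illustrative assumptions, not the exact patch.

	mflr	r0
	stwu	r1,-16(r1)		/* dummy frame so CALLER_ADDR1 has valid data to read */
	stw	r0,12(r1)		/* stash our LR in a scratch slot of the frame */
	bl	trace_hardirqs_off
	lwz	r0,12(r1)
	mtlr	r0
	addi	r1,r1,16		/* pop the dummy frame */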
2011-01-21  powerpc/ppc32/tracing: Add stack frame to calls of trace_hardirqs_on/off  (Steven Rostedt)
32-bit variant of the previous patch for 64-bit: << When an interrupt occurs in userspace, we can call trace_hardirqs_on/off() with only one level of stack. But if we have irqsoff tracing enabled, it checks both CALLER_ADDR0 and CALLER_ADDR1. The second call goes two stack frames up; if this is from user space, a second stack frame may not exist.... >> Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2010-11-29  powerpc: Remove second definition of STACK_FRAME_OVERHEAD  (Stephen Rothwell)
Since STACK_FRAME_OVERHEAD is defined in asm/ptrace.h, and that header is assembler safe, we can just include it instead of going via asm-offsets.h. Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2010-05-05  powerpc/47x: Base ppc476 support  (Dave Kleikamp)
This patch adds the base support for the 476 processor. The code was primarily written by Ben Herrenschmidt and Torez Smith, but I've been maintaining it for a while. The goal is to have a single binary that will run on 44x and 47x, but we still have some details to work out. The biggest is that the L1 cache line size differs on the two platforms, but it's currently a compile-time option. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Torez Smith <lnxtorez@linux.vnet.ibm.com> Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com> Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
2009-08-20  powerpc: Use names rather than numbers for SPRGs (v2)  (Benjamin Herrenschmidt)
The kernel uses SPRG registers for various purposes, typically in low level assembly code as scratch registers or to hold per-cpu global information such as the PACA or the current thread_info pointer. We want to be able to easily shuffle the usage of those registers, as some implementations have specific constraints related to some of them, for example, some have userspace readable aliases, etc., and the current choice isn't always the best. This patch should not change any code generation; it replaces the usage of SPRN_SPRGn everywhere in the kernel with a named replacement and adds documentation next to the definition of the names as to what they are used for on each processor family. The only parts that still use the original numbers are bits of KVM or suspend/resume code that just blindly need to save/restore all the SPRGs. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
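An illustrative before/after of the renaming; the particular SPRG and alias here are picked for the example and are not a line from the actual patch.

	mtspr	SPRN_SPRG1,r11		/* before: raw SPRG number, purpose unclear */
	mtspr	SPRN_SPRG_SCRATCH1,r11	/* after: named for its role as an exception scratch register */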
2009-06-26  powerpc: Add irqtrace support for 32-bit powerpc  (Benjamin Herrenschmidt)
Based on initial work from Dale Farnsworth <dale@farnsworth.org>. Add the low level irq tracing hooks for 32-bit powerpc needed to enable full lockdep functionality. The approach taken to deal with the code in entry_32.S is that we don't trace all the transitions of MSR:EE when we just turn it off to peek at TI_FLAGS without races; we only do so when we are calling into C code or returning from exceptions with a state that has changed from what lockdep thinks. There's a little bugger though: if we take an exception that keeps interrupts enabled (such as an alignment exception) while interrupts are enabled, we will call trace_hardirqs_on() on the way back spuriously. Not a big deal, but getting rid of it would require remembering in pt_regs that the exception was one of the types that kept interrupts enabled, which we don't know at this stage. (Well, we could test all cases for regs->trap, but that sucks too much.) Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Tested-by: Kumar Gala <galak@kernel.crashing.org>
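A minimal, hypothetical sketch of the kind of hook described above (the placement and the surrounding register state are assumptions): tell lockdep that interrupts are off before transferring to C code, while leaving the short EE-off windows used only to peek at TI_FLAGS untraced.

#ifdef CONFIG_TRACE_IRQFLAGS
	/* about to call into C with interrupts logically disabled */
	bl	trace_hardirqs_off
#endif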
2009-02-23  powerpc: Unify opcode definitions and support  (Kumar Gala)
Create a new header that becomes a single location for defining PowerPC opcodes used by code that is either generating instructions at runtime (fixups, debug, etc.), emulating instructions, or just compiling instructions old assemblers don't know about. We currently don't handle the floating point emulation or alignment decode, as both are better handled by the specific decode support they already have. Added support for the new dcbzl, dcbal, msgsnd, tlbilx, & wait instructions since older assemblers don't know about them. Signed-off-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-02-23  powerpc32, ftrace: dynamic function graph tracer  (Steven Rostedt)
This patch gets function graph tracing working with the dynamic function tracer on PowerPC32. Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-02-23  powerpc32, ftrace: port function graph tracer to ppc32, static only  (Steven Rostedt)
This patch ports the function graph tracer for PowerPC, but only for static function tracing. Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-02-23  powerpc32, ftrace: save and restore mcount regs with macro  (Steven Rostedt)
Impact: clean up. Use a macro to save and restore the registers for PowerPC32, since that code is duplicated. This is similar to the work done by Cyrill Gorcunov for the mcount code in x86_64. Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2009-02-12  powerpc/book-3e: Introduce concept of Book-3e MMU  (Kumar Gala)
The Power ISA 2.06 spec introduces a standard MMU programming model that is based on the Freescale Book-E MMU programming model. The Freescale version is pretty much backwards compatible with the ISA 2.06 definition, so we are starting to refactor some of the Freescale code so it can be easily shared. Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-11-28  powerpc: ftrace, do nothing in mcount call for dyn ftrace  (Steven Rostedt)
Impact: quicken mcount calls that are not replaced by dyn ftrace. Dynamic ftrace no longer does on-the-fly recording of mcount locations; the mcount locations are now found at compile time. The mcount function no longer needs to store registers and call a stub function. It can now simply return. Since there are some functions that do not get converted to a nop (.init sections and other code that may disappear), this patch should help speed up that code. Also, the stub for mcount on PowerPC 32 cannot be a simple branch to the link register like it is on PowerPC 64. According to the ABI specification: "The _mcount routine is required to restore the link register from the stack so that the profiling code can be inserted transparently, whether or not the profiled function saves the link register itself." This means that we must restore the link register that was used to make the call to mcount. The minimal mcount function for PPC32 ends up being:

mcount:
	mflr	r0
	mtctr	r0
	lwz	r0, 4(r1)
	mtlr	r0
	bctr

Where we move the link register used to call mcount into the ctr register, and then restore the link register from the stack. Then we use the ctr register to jump back to the mcount caller. The r0 register is free for us to use. Signed-off-by: Steven Rostedt <srostedt@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-10-20  ftrace: rename FTRACE to FUNCTION_TRACER  (Steven Rostedt)
Due to confusion between the ftrace infrastructure and the gcc profiling tracer "ftrace", this patch renames the config options from FTRACE to FUNCTION_TRACER. The other two names that are offspring of FTRACE, DYNAMIC_FTRACE and FTRACE_MCOUNT_RECORD, will stay the same. This patch was generated mostly by script, and partially by hand. Signed-off-by: Steven Rostedt <srostedt@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-07-28  powerpc: Add TIF_NOTIFY_RESUME support for tracehook  (Roland McGrath)
This adds TIF_NOTIFY_RESUME support for powerpc. When set, we call tracehook_notify_resume() on the way to user mode. This overloads do_signal() to do the work, but changes its arguments so it has the TIF_* bits handy in a register and drops the useless first argument that was always zero. Signed-off-by: Roland McGrath <roland@redhat.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
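A hypothetical sketch of the calling pattern the message describes, assuming a do_signal(struct pt_regs *, unsigned long flags) style helper; the flag mask, register roles, and label are chosen only for illustration.

	andi.	r0,r9,_TIF_SIGPENDING|_TIF_NOTIFY_RESUME	/* any user-mode work pending? */
	beq	1f
	addi	r3,r1,STACK_FRAME_OVERHEAD	/* arg 1: pt_regs on the stack */
	mr	r4,r9				/* arg 2: the TIF_* bits, handy in a register */
	bl	do_signal
1: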
2008-07-28  powerpc: Make syscall tracing use tracehook.h helpers  (Roland McGrath)
This changes powerpc syscall tracing to use the new tracehook.h entry points. There is no change, only cleanup. In addition, the assembly changes allow do_syscall_trace_enter() to abort the syscall without losing the information about the original r0 value. Signed-off-by: Roland McGrath <roland@redhat.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-28  powerpc/booke: Clean up the hardware watchpoint support  (Kumar Gala)
* CONFIG_BOOKE is selected by CONFIG_44x so we don't need both
* Fixed a few comments
* Go back to only using DBCR0_IDM to determine if we are using debug resources.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-25  powerpc: BookE hardware watchpoint support  (Luis Machado)
This patch implements support for HW based watchpoints via the DBSR_DAC (Data Address Compare) facility of the BookE processors. It does so by interfacing with the existing DABR breakpoint code and adding the necessary bits and pieces for the new bits to be properly set or cleared. Signed-off-by: Luis Machado <luisgpm@br.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
2008-07-15  Merge commit '85082fd7cbe3173198aac0eb5e85ab1edcc6352c' into test-build  (Benjamin Herrenschmidt)
Manual fixup of: arch/powerpc/Kconfig
2008-06-26  powerpc/85xx: add DOZE/NAP support for e500 core  (Kumar Gala)
The e500 core enters the DOZE/NAP power-saving modes when it goes into the cpu_idle routine. The default power management running mode is DOZE; if the user does echo 1 > /proc/sys/kernel/powersave-nap, the system will change to NAP running mode. Signed-off-by: Dave Liu <daveliu@freescale.com> Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2008-06-23  ftrace: store mcount address in rec->ip  (Abhishek Sagar)
Record the address of the mcount call-site. Currently all archs except sparc64 record the address of the instruction following the mcount call-site. Some general cleanups are entailed. Storing mcount addresses in rec->ip enables looking them up in the kprobe hash table later on to check if they're kprobe'd. Signed-off-by: Abhishek Sagar <sagar.abhishek@gmail.com> Cc: davem@davemloft.net Cc: Steven Rostedt <rostedt@goodmis.org> Signed-off-by: Ingo Molnar <mingo@elte.hu>
2008-06-02  [POWERPC] 40x/Book-E: Save/restore volatile exception registers  (Kumar Gala)
On machines with more than one exception level, any system register that might be modified by the "normal" exception level needs to be saved and restored on taking a higher level exception. We are already saving and restoring ESR and DEAR. For critical level, add SRR0/1. For debug level, add CSRR0/1 and SRR0/1. For machine check level, add DSRR0/1, CSRR0/1, and SRR0/1. On FSL Book-E parts we always save/restore the MAS registers for critical, debug, and machine check level exceptions. On 44x we always save/restore the MMUCR. Additionally, we save and restore the ksp_limit since we have to adjust it for each exception level. Signed-off-by: Kumar Gala <galak@kernel.crashing.org> Acked-by: Paul Mackerras <paulus@samba.org>
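A minimal sketch of the save half of what is described above, for the critical level; the frame register and the _SRR0/_SRR1 offsets are assumptions used for illustration (the restore path would read them back before returning).

	mfspr	r10,SPRN_SRR0		/* lower-level return state that a nested */
	mfspr	r11,SPRN_SRR1		/* exception would otherwise clobber */
	stw	r10,_SRR0(r8)		/* save into the critical exception frame */
	stw	r11,_SRR1(r8)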
2008-06-02  [POWERPC] Rework EXC_LEVEL_EXCEPTION_PROLOG code  (Kumar Gala)
* Clean up the code a bit by allocating an INT_FRAME on our exception stack, thereby making references go from GPR11-INT_FRAME_SIZE(r8) to just GPR11(r8) (illustrative example below)
* Simplify the {lvl}_transfer_to_handler code by moving the copying of the temp registers we use if we come from user space into the PROLOG
* If the exception came from kernel mode, copy the thread_info flags, preempt count, and task pointer from the process thread_info.
Signed-off-by: Kumar Gala <galak@kernel.crashing.org> Acked-by: Paul Mackerras <paulus@samba.org>
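An illustrative before/after of the offset change in the first point above; the register and GPR slot are arbitrary examples.

	stw	r11,GPR11-INT_FRAME_SIZE(r8)	/* before: frame not yet allocated, adjust offsets by hand */
	stw	r11,GPR11(r8)			/* after: INT_FRAME already allocated on the exception stack */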
2008-05-26  ftrace: powerpc clean ups  (Steven Rostedt)
This patch cleans up the ftrace code in PowerPC based on the comments from Michael Ellerman. Signed-off-by: Steven Rostedt <srostedt@redhat.com> Cc: Michael Ellerman <michael@ellerman.id.au> Cc: proski@gnu.org Cc: a.p.zijlstra@chello.nl Cc: Pekka Paalanen <pq@iki.fi> Cc: Steven Rostedt <srostedt@redhat.com> Cc: linuxppc-dev@ozlabs.org Cc: Soeren Sandmann Pedersen <sandmann@redhat.com> Cc: paulus@samba.org Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-05-23  ftrace: support for PowerPC  (Steven Rostedt)
This patch adds full support for ftrace for PowerPC (both 64 and 32 bit). This includes dynamic tracing and function filtering. Signed-off-by: Steven Rostedt <srostedt@redhat.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-05-16  [POWERPC] Defer processing of interrupts when the CPU wakes from sleep mode  (Paul Mackerras)
This provides a way to defer processing of an interrupt that wakes the processor out of sleep mode. On 32-bit platforms that use an interrupt to wake the processor, we have to have interrupts enabled in hardware at the point where we go to sleep, otherwise the processor will never wake up. However, because interrupts are logically disabled at this point, we don't want to process the interrupt straight away. This is handled by setting the _TLF_SLEEPING flag. When we get an interrupt and _TLF_SLEEPING is set, we firstly clear the MSR_EE (external interrupt enable) bit in the saved MSR value, and secondly we then return to the address in the link register, like we do for _TLF_NAPPING, but without actually handling the interrupt. Note that this is handled somewhat differently on powerbooks, so this new code will only be used on non-Apple machines. Signed-off-by: Paul Mackerras <paulus@samba.org>
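A hypothetical sketch of the exception-entry check described above; register roles, labels, and the exact flag and frame offsets are assumptions, not the real entry_32.S code.

	lwz	r9,TI_LOCAL_FLAGS(r12)	/* r12 = thread_info (assumed) */
	andi.	r0,r9,_TLF_SLEEPING
	beq	1f			/* not sleeping: handle the interrupt normally */
	li	r0,_TLF_SLEEPING
	andc	r9,r9,r0		/* clear the flag */
	stw	r9,TI_LOCAL_FLAGS(r12)
	lwz	r10,_MSR(r11)		/* saved MSR in the exception frame (assumed) */
	rlwinm	r10,r10,0,17,15		/* clear MSR_EE so the interrupt stays pending */
	stw	r10,_MSR(r11)
	/* now return to the address in LR, as for _TLF_NAPPING,
	 * without actually handling the interrupt */
1: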
2008-05-14  [POWERPC] Define and use TLF_RESTORE_SIGMASK  (Roland McGrath)
Replace TIF_RESTORE_SIGMASK with TLF_RESTORE_SIGMASK and define our own set_restore_sigmask() function. This saves the costly SMP-safe set_bit operation, which we do not need for the sigmask flag since TIF_SIGPENDING always has to be set too. Signed-off-by: Roland McGrath <roland@redhat.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
2008-04-29  [POWERPC] Add IRQSTACKS support on ppc32  (Kumar Gala)
This makes it possible to use separate stacks for hard and soft IRQs on 32-bit powerpc as well as on 64-bit. The code for 32-bit is just the 32-bit analog of the 64-bit code.
* Added allocation and initialization of the irq stacks. We limit the stacks to be in lowmem for ppc32.
* Implemented ppc32 versions of call_do_softirq() and call_handle_irq() to switch the stack pointers (a sketch of that pattern follows below).
* Reworked how we do stack overflow detection. We now keep around the limit of the stack in the thread_struct and compare against the limit to see if we've overflowed. We can now use this on ppc64 if desired.
[ paulus@samba.org: Fixed bug on 6xx where we need to reload r9 with the thread_info pointer. ]
Signed-off-by: Kumar Gala <galak@kernel.crashing.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
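A sketch of the stack-switch thunk described above, modeled on the usual pattern for such helpers; treat the offsets and the exact body as illustrative assumptions rather than the actual misc_32.S code.

_GLOBAL(call_do_softirq)
	mflr	r0
	stw	r0,4(r1)		/* save our LR in the caller's frame */
	stwu	r1,THREAD_SIZE-STACK_FRAME_OVERHEAD(r3)	/* link old r1 at the top of the IRQ stack */
	mr	r1,r3			/* switch to the IRQ stack */
	bl	__do_softirq
	lwz	r1,0(r1)		/* back chain takes us back to the original stack */
	lwz	r0,4(r1)
	mtlr	r0
	blr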
2008-04-17  [POWERPC] Make Book-E debug handling SMP safe  (Kumar Gala)
global_dbcr0 needs to be a per-cpu set of save areas instead of a single global shared by all processors. Also, we switch to using DBCR0_IDM to determine if a user space app is being debugged, as it's a more consistent way. In the future we should support features like hardware breakpoints and watchpoints, which will have DBCR0_IDM set but not necessarily DBCR0_IC (single step). Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2007-11-13  [POWERPC] Avoid unpaired stwcx. on some processors  (Becky Bruce)
The context switch code in the kernel issues a dummy stwcx. to clear the reservation, as recommended by the architecture. However, some processors can have issues if this stwcx. to address A occurs while a reservation is already held on a different address B. To avoid this problem, the dummy stwcx. needs to be paired with a dummy lwarx to the same address. This adds the dummy lwarx, and creates a cpu feature bit to indicate which cpus are affected. Tested on mpc8641_hpcn_defconfig in arch/powerpc; build tested in arch/ppc. Signed-off-by: Becky Bruce <becky.bruce@freescale.com> Signed-off-by: Paul Mackerras <paulus@samba.org>
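A minimal sketch of the pairing described above; using r1 as the dummy address and wrapping the lwarx in a feature section are illustrative assumptions.

BEGIN_FTR_SECTION
	lwarx	r0,0,r1		/* dummy reservation on the same address... */
END_FTR_SECTION_IFSET(CPU_FTR_NEED_PAIRED_STWCX)
	stwcx.	r0,0,r1		/* ...so the reservation-clearing store is always paired */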
2007-11-01  [POWERPC] 4xx: Deal with 44x virtually tagged icache  (Benjamin Herrenschmidt)
The 44x family has an interesting "feature" which is a virtually tagged instruction cache (yuck!). So far, we haven't dealt with it properly, which means we've been mostly lucky or people didn't report the problems, unless people have been running custom patches in their distro... This is an attempt at fixing it properly. I chose to do it by setting a global flag whenever we change a PTE that was previously marked executable, and flushing the entire instruction cache upon return to user space when that happens. This is a bit heavy handed, but it's hard to do more fine grained flushes as the icbi instruction, on those processors, for some very strange reason (since the cache is virtually mapped) still requires a valid TLB entry for reading in the target address space, which isn't something I want to deal with. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Josh Boyer <jwboyer@linux.vnet.ibm.com>
2007-09-14  [POWERPC] Add cpu feature for SPE handling  (Kumar Gala)
Make it so that SPE support can be determined at runtime. This is similar to how we handle AltiVec. This allows us to have SPE support built in and work on processors with and without SPE. Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
2007-05-17  [POWERPC] Fix COMMON symbol warnings  (Kumar Gala)
We get the following warnings in various ARCH=powerpc builds:
WARNING: "ee_restarts" [arch/powerpc/kernel/built-in] is COMMON symbol
WARNING: "fee_restarts" [arch/powerpc/kernel/built-in] is COMMON symbol
WARNING: "htab_hash_searches" [arch/powerpc/mm/built-in] is COMMON symbol
WARNING: "next_slot" [arch/powerpc/mm/built-in] is COMMON symbol
WARNING: "mmu_hash_lock" [arch/powerpc/mm/built-in] is COMMON symbol
WARNING: "primary_pteg_full" [arch/powerpc/mm/built-in] is COMMON symbol
WARNING: "global_dbcr0" [arch/powerpc/kernel/built-in] is COMMON symbol
Fix this by declaring the symbols explicitly with the .space directive (a sketch follows below), keeping them local except for mmu_hash_lock, which stays global. Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
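An illustrative sketch of the kind of change described: instead of letting the assembler create a COMMON symbol, declare it explicitly with a .space directive (section, alignment and size here are assumptions).

	.section .bss
	.align	2
ee_restarts:
	.space	4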
2007-03-22  [POWERPC] Remove last_syscall  (Anton Blanchard)
Remove last_syscall from 32-bit powerpc; it's been gone from 64-bit for years. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-06-30  Remove obsolete #include <linux/config.h>  (Jörn Engel)
Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de> Signed-off-by: Adrian Bunk <bunk@stusta.de>
2006-04-18  powerpc: Use correct sequence for putting CPU into nap mode  (Paul Mackerras)
We weren't using the recommended sequence for putting the CPU into nap mode. When I changed the idle loop, for some reason 7447A cpus started hanging when we put them into nap mode. Changing to the recommended sequence fixes that. The complexity here is that the recommended sequence is a loop that keeps putting the cpu back into nap mode. Clearly we need some way to break out of the loop when an interrupt (external interrupt, decrementer, performance monitor) occurs. Here we use a bit in the thread_info struct to indicate that we need this, and the exception entry code notices this and arranges for the exception to return to the value in the link register, thus breaking out of the loop. We use a new `local_flags' field in the thread_info which we can alter without needing to use an atomic update sequence. The PPC970 has the same recommended sequence, so we do the same thing there too. This also fixes a bug in the kernel stack overflow handling code on 32-bit, since it was causing a value that we needed in a register to get trashed. Signed-off-by: Paul Mackerras <paulus@samba.org>
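An illustrative sketch of the recommended nap-entry sequence as described above: mark the thread as napping, then loop re-entering nap with interrupts enabled; the exception entry code breaks the loop by returning to the link register. Register choices and flag handling are assumptions for illustration.

	lwz	r8,TI_LOCAL_FLAGS(r9)	/* r9 = thread_info (assumed) */
	ori	r8,r8,_TLF_NAPPING	/* tell the exception entry code we are napping */
	stw	r8,TI_LOCAL_FLAGS(r9)
	mfmsr	r7
	ori	r7,r7,MSR_EE		/* interrupts must be enabled in hardware to wake us */
	oris	r7,r7,MSR_POW@h		/* request the power-saving state */
1:	sync
	mtmsr	r7			/* enter nap; keep re-entering it... */
	isync
	b	1b			/* ...until an exception redirects the return address */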
2006-03-27  powerpc: Unify the 32 and 64 bit idle loops  (Paul Mackerras)
This unifies the 32-bit (ARCH=ppc and ARCH=powerpc) and 64-bit idle loops. It brings over the concept of having a ppc_md.power_save function from 32-bit to ARCH=powerpc, which lets us get rid of native_idle(). With this we will also be able to simplify the idle handling for pSeries and cell. Signed-off-by: Paul Mackerras <paulus@samba.org>
2006-03-08  powerpc: Fix various syscall/signal/swapcontext bugs  (Paul Mackerras)
A careful reading of the recent changes to the system call entry/exit paths revealed several problems, plus some things that could be simplified and improved:
* 32-bit wasn't testing the _TIF_NOERROR bit in the syscall fast exit path, so it was only doing anything with it once it saw some other bit being set. In other words, the noerror behaviour would apply to the next system call where we had to reschedule or deliver a signal, which is not necessarily the current system call.
* 32-bit wasn't doing the call to ptrace_notify in the syscall exit path when the _TIF_SINGLESTEP bit was set.
* _TIF_RESTOREALL was in both _TIF_USER_WORK_MASK and _TIF_PERSYSCALL_MASK, which is odd since _TIF_RESTOREALL is only set by system calls. I took it out of _TIF_USER_WORK_MASK.
* On 64-bit, _TIF_RESTOREALL wasn't causing the non-volatile registers to be restored (unless perhaps a signal was delivered or the syscall was traced or single-stepped). Thus the non-volatile registers weren't restored on exit from a signal handler. We probably got away with it mostly because signal handlers written in C wouldn't alter the non-volatile registers.
* On 32-bit I simplified the code and made it more like 64-bit by making the syscall exit path jump to ret_from_except to handle preemption and signal delivery.
* 32-bit was calling do_signal unnecessarily when _TIF_RESTOREALL was set - but I think because of that 32-bit was actually restoring the non-volatile registers on exit from a signal handler.
* I changed the order of enabling interrupts and saving the non-volatile registers before calling do_syscall_trace_leave; now we enable interrupts first.
Signed-off-by: Paul Mackerras <paulus@samba.org>