Merge upstream's stable 3.0.21 branch into msm-3.0
This consists of 814 commits and some merge conflicts.
The merge conflicts are caused by local changes to
msm-3.0 as well as by differences between Google's tree and
the upstream tree.
Conflicts:
arch/arm/kernel/head.S
drivers/bluetooth/ath3k.c
drivers/bluetooth/btusb.c
drivers/mmc/core/core.c
drivers/tty/serial/serial_core.c
drivers/usb/host/ehci-hub.c
drivers/usb/serial/qcserial.c
fs/namespace.c
fs/proc/base.c
Change-Id: I62e2edbe213f84915e27f8cd6e4f6ce23db22a21
Signed-off-by: Rohit Vaswani <rvaswani@codeaurora.org>
Store the context id in the register trace buffer.
The process id can be derived from the context id.
This gives a general idea of which process was last
running when the RTB stopped.
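For reference, a rough kernel-side C sketch of the derivation, assuming the
recorded context id is the ARMv7 CONTEXTIDR value (PROCID in bits [31:8],
ASID in bits [7:0]); the helper name is illustrative:

  #include <linux/types.h>

  /* Read CONTEXTIDR (CP15 c13, c0, 1) and recover the process id
   * stored in the PROCID field; the low 8 bits hold the ASID. */
  static inline u32 rtb_pid_from_contextidr(void)
  {
      u32 contextidr;

      asm volatile("mrc p15, 0, %0, c13, c0, 1" : "=r" (contextidr));
      return contextidr >> 8;    /* PROCID field */
  }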
Change-Id: I2fb8934d008b8cf3666f1df2652846c15faca776
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Bootup with lockdep enabled has been broken on v7 since b46c0f74657d
("ARM: 7321/1: cache-v7: Disable preemption when reading CCSIDR").
This is because v7_setup (which is called very early during boot) calls
v7_flush_dcache_all, and the save_and_disable_irqs added by that patch
ends up attempting to call into lockdep C code (trace_hardirqs_off())
when we are in no position to execute it (no stack, MMU off).
Fix this by using a notrace variant of save_and_disable_irqs. The code
already uses the notrace variant of restore_irqs.
Change-Id: I9453f2f278c715a0480d4962f9cbbea65a43ac39
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Stephen Boyd <sboyd@codeaurora.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
armv7's flush_cache_all() flushes caches via set/way. To
determine the cache attributes (line size, number of sets,
etc.) the assembly first writes the CSSELR register to select a
cache level and then reads the CCSIDR register. The CSSELR register
is banked per-cpu and is used to determine which cache level CCSIDR
reads. If the task is migrated between when the CSSELR is written and
the CCSIDR is read the CCSIDR value may be for an unexpected cache
level (for example L1 instead of L2) and incorrect cache flushing
could occur.
Disable interrupts across the write and read so that the correct
cache attributes are read and used for the cache flushing
routine. We disable interrupts instead of disabling preemption
because the critical section is only 3 instructions and we want
to call v7_dcache_flush_all from __v7_setup which doesn't have a
full kernel stack with a struct thread_info.
This fixes a problem seen in scm_call(): when flush_cache_all()
is called from preemptible context, the L2 cache is sometimes
not properly flushed out.
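The actual change is in the v7_flush_dcache_all assembly; a C-level sketch
of the section being protected (helper name illustrative, CP15 encodings as
in the ARM ARM) looks like:

  #include <linux/irqflags.h>
  #include <linux/types.h>

  /* Select a data/unified cache level in CSSELR and read back its
   * CCSIDR with interrupts off, so the banked CSSELR cannot change
   * (e.g. via migration) between the write and the read. */
  static u32 read_ccsidr_for_level(unsigned int level)
  {
      unsigned long flags;
      u32 ccsidr;

      raw_local_irq_save(flags);
      /* CSSELR: bits [3:1] = level, bit 0 = 0 for data/unified */
      asm volatile("mcr p15, 2, %0, c0, c0, 0" : : "r" (level << 1));
      asm volatile("isb");
      asm volatile("mrc p15, 1, %0, c0, c0, 0" : "=r" (ccsidr));
      raw_local_irq_restore(flags);

      return ccsidr;
  }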
Change-Id: I34f131ec4e2f8518a7b9a112c459c9f2650600b6
CRs-Fixed: 298842
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
This patch introduces a new Kconfig option which, when enabled, causes
the kernel to write the PID of the current task into the PROCID field
of the CONTEXTIDR on context switch. This is useful when analysing
hardware trace, since writes to this register can be configured to emit
an event into the trace stream.
The thread notifier for writing the PID is deliberately kept separate
from the ASID code, so that we can easily support newer processors (A15
onwards) which store the ASID in TTBR0. As such, the switch_mm code is
updated to perform a read-modify-write sequence to ensure that we don't
clobber the PID on older CPUs.
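A simplified C sketch of the notifier side (names and the 8-bit ASID split
are illustrative of the pre-A15 layout):

  #include <linux/types.h>
  #include <linux/sched.h>
  #include <linux/notifier.h>
  #include <asm/thread_notify.h>

  #define ASID_BITS  8
  #define ASID_MASK  ((1 << ASID_BITS) - 1)

  /* On THREAD_NOTIFY_SWITCH, fold the incoming task's PID into
   * CONTEXTIDR[31:8] while preserving the ASID in bits [7:0]. */
  static int contextidr_pid_notifier(struct notifier_block *nb,
                                     unsigned long cmd, void *t)
  {
      struct thread_info *thread = t;
      u32 contextidr;

      if (cmd != THREAD_NOTIFY_SWITCH)
          return NOTIFY_DONE;

      asm volatile("mrc p15, 0, %0, c13, c0, 1" : "=r" (contextidr));
      contextidr &= ASID_MASK;
      contextidr |= task_pid_nr(thread->task) << ASID_BITS;
      asm volatile("mcr p15, 0, %0, c13, c0, 1" : : "r" (contextidr));
      asm volatile("isb");

      return NOTIFY_OK;
  }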
Change-Id: I7236834cf4b5e984c9d9f24ba6b872078c2b936f
Cc: Wolfgang Betz <wolfgang.betz@st.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jeff Ohlstein <johlstei@codeaurora.org>
commit 612539e81f655f6ac73c7af1da8701c1ee618aee upstream.
On v7, we use the same cache maintenance instructions for data lines
as for unified lines. This was not the case for v6, where HARVARD_CACHE
was defined to indicate the L1 cache topology.
This patch removes the erroneous compile-time check for HARVARD_CACHE in
proc-v7.S, ensuring that we perform I-side invalidation at boot.
Reported-and-Acked-by: Shawn Guo <shawn.guo@linaro.org>
Acked-by: Catalin Marinas <Catalin.Marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
The definition of __exception_irq_entry for
CONFIG_FUNCTION_GRAPH_TRACER=y needs linux/ftrace.h, but this creates a
circular dependency with its current home in asm/system.h. Create
asm/exception.h and update all current users.
v4: - rebase to rmk/for-next
v3: - remove redundant includes of linux/ftrace.h
v2: - document the usage restrictions of __exception*
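The resulting asm/exception.h boils down to roughly:

  #include <linux/ftrace.h>

  #define __exception    __attribute__((section(".exception.text")))

  #ifdef CONFIG_FUNCTION_GRAPH_TRACER
  #define __exception_irq_entry    __irq_entry
  #else
  #define __exception_irq_entry    __exception
  #endif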
Change-Id: I9efdb6a621bb915ad00b9a133ec93e98182fd483
Cc: Zoltan Devai <zdevai@gmail.com>
Signed-off-by: Jamie Iles <jamie@jamieiles.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
[tsoni@codeaurora.org: Merge fixes]
Signed-off-by: Trilok Soni <tsoni@codeaurora.org>
If a memblock has been removed with memblock_remove, it will
be skipped in 'for_each_memblock'. If a SPARSEMEM section
is completely enclosed within this removed memblock,
memory_present will never be called and the section will
never be initialized. This will cause garbage dereferences
later on.
This change loops on the memory banks, instead of the memblocks.
Memory banks always exist, regardless of memblock_remove and
ensure that all SPARSEMEM sections will be initialized, even
if they are removed later.
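A sketch of the reworked loop in arch/arm/mm/init.c, using the existing
meminfo/bank helpers:

  #include <linux/mmzone.h>   /* memory_present() */
  #include <asm/setup.h>      /* struct meminfo, bank_pfn_{start,end} */

  /* Walk the always-present memory banks rather than the memblock
   * regions, so sections inside a removed memblock are still marked
   * present for SPARSEMEM. */
  static void arm_memory_present(void)
  {
      int i;

      for (i = 0; i < meminfo.nr_banks; i++)
          memory_present(0, bank_pfn_start(&meminfo.bank[i]),
                            bank_pfn_end(&meminfo.bank[i]));
  }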
Change-Id: I1b7b0418a7e752f5bf69c3ec2ea8ea17e8ecfec5
Signed-off-by: Jack Cheung <jackc@codeaurora.org>
Doing __va(movable_reserved_start) can overflow if
movable_reserved_start is a very high address. This
will cause the comparison to evaluate incorrectly.
This change instead compares the physical addresses, which
cannot overflow/underflow.
Change-Id: I6c82df16b77a905617aa6f59c2eeaf7acb36c76d
Signed-off-by: Jack Cheung <jackc@codeaurora.org>
Poisoning __init marked memory can be useful when tracking down
obscure memory corruption bugs. Therefore, poison init memory
with 0xe7fddef0 to catch bugs earlier. The poison value is an
undefined instruction in ARM mode and branch to an undefined
instruction in Thumb mode.
Change-Id: Ibb91bee46b829b0deac9f4e592040a0054968998
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
[sboyd@codeaurora.org: Fix conflicts from total_unmovable_pages]
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
ARM: 7010/1: mm: fix invalid loop for poison_init_mem
poison_init_mem() used a loop of:
while ((count = count - 4))
which has 2 problems - an off by one error so that we do one less word
than we should, and the other is that if count == 0 then we loop forever
and poison too much. On a platform with HAVE_TCM=y but nothing in the
TCMs, this caused corruption and the platform failed to boot.
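The corrected helper ends up looking roughly like:

  #include <linux/types.h>

  /* Poison 'count' bytes, stopping exactly when count reaches zero
   * (and doing nothing at all for an empty region). */
  static inline void poison_init_mem(void *s, size_t count)
  {
      u32 *p = s;

      for (; count != 0; count -= 4)
          *p++ = 0xe7fddef0;
  }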
Acked-by: Stephen Boyd <sboyd@codeaurora.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Jamie Iles <jamie@jamieiles.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
If CONFIG_FIX_MOVABLE_ZONE is enabled but the
movable zone is not set up, vmalloc_min
will become 0 and the kernel will crash.
This change checks that movable_reserved_size
is nonzero before doing anything.
Change-Id: I3d3cb2482fde20fbd1dd0668dbbd414e42eda995
Signed-off-by: Jack Cheung <jackc@codeaurora.org>
Vmalloc will fail if the amount it needs to allocate is
greater than totalram_pages. Vmalloc cannot allocate
from the movable zone, so pages in the movable zone should
not be counted.
This change adds a new global variable: total_unmovable_pages.
It is calculated in init.c, based on totalram_pages minus
the pages in the movable zone. Vmalloc now looks at this new
global instead of totalram_pages.
total_unmovable_pages can be modified during memory hotplug.
If the zone being offlined/onlined is unmovable, it is
updated in the same way as totalram_pages. If the zone is
movable, then no change is needed.
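The size check in __vmalloc_node_range() then becomes, roughly:

  extern unsigned long total_unmovable_pages;

  size = PAGE_ALIGN(size);
  if (!size || (size >> PAGE_SHIFT) > total_unmovable_pages)
      return NULL;    /* request exceeds the unmovable page pool */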
Change-Id: Ie55c41051e9ad4b921eb04ecbb4798a8bd2344d6
Signed-off-by: Jack Cheung <jackc@codeaurora.org>
Annotate the low level hardware locks which must not be preempted.
In mainline this change documents the low level nature of
the lock - otherwise there's no functional difference. Lockdep
and Sparse checking will work as usual.
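For example, the l2x0 lock and one of its users change shape roughly as
follows (sketch based on cache-l2x0.c; cache_sync() is the existing
file-local helper):

  #include <linux/spinlock.h>

  /* A raw spinlock documents that this is a low-level hardware lock
   * which must not be preempted; on mainline the behaviour is
   * unchanged. */
  static DEFINE_RAW_SPINLOCK(l2x0_lock);

  static void l2x0_cache_sync(void)
  {
      unsigned long flags;

      raw_spin_lock_irqsave(&l2x0_lock, flags);
      cache_sync();
      raw_spin_unlock_irqrestore(&l2x0_lock, flags);
  }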
Change-Id: I1c73fd5472b9ab356173637a7819095394004ebf
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
[kumarrav@codeaurora.org: fixup gic.c and cache-l2x0.c merge conflict]
Signed-off-by: Ravi Kumar <kumarrav@codeaurora.org>
* changes:
arm: common: Makefile: Add cpaccess
ARM: cpaccess: write-enables kernel space for write
mm: add wrapper function for writing word to kernel text space
Adds a function to encapsulate the locking, removal of write-protection,
word write, cache flush and invalidate and restoration
of write protection. This is a convenience function for callers
needing to update a word in kernel text space.
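A sketch of the wrapper's shape (the lock name is illustrative, and the
write-enable/restore helper names are assumed; they stand for the pair
introduced by the STRICT_MEMORY_RWX change below):

  #include <linux/types.h>
  #include <linux/spinlock.h>
  #include <asm/cacheflush.h>

  static DEFINE_SPINLOCK(mem_text_lock);    /* illustrative */

  /* Assumed names for the write-enable/restore helpers */
  extern void mem_text_address_writeable(unsigned long addr);
  extern void mem_text_address_restore(void);

  /* Take the lock, drop write protection on the page holding the
   * word, write it, flush the caches, then restore protection. */
  void mem_text_write_kernel_word(u32 *addr, u32 word)
  {
      unsigned long flags;

      spin_lock_irqsave(&mem_text_lock, flags);
      mem_text_address_writeable((unsigned long)addr);
      *addr = word;
      flush_icache_range((unsigned long)addr,
                         (unsigned long)addr + sizeof(*addr));
      mem_text_address_restore();
      spin_unlock_irqrestore(&mem_text_lock, flags);
  }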
Change-Id: I9832f0ff659ddc62c55819af5318c94b70f5c11c
Signed-off-by: Neil Leeder <nleeder@codeaurora.org>
commit f973fab692 added a 5MB highmem zone to fix
a page fault error caused by a generic 3.0 bug.
Remove this extra 5MB offset because we now have a 256MB
highmem zone.
Change-Id: I430fed88baccd44d438c21607f87219206f1f20a
Signed-off-by: Jack Cheung <jackc@codeaurora.org>
* common/android-3.0: (570 commits)
misc: remove kernel debugger core
ARM: common: fiq_debugger: dump sysrq directly to console if enabled
ARM: common: fiq_debugger: add irq context debug functions
net: wireless: bcmdhd: Call init_ioctl() only if was started properly for WEXT
net: wireless: bcmdhd: Call init_ioctl() only if was started properly
net: wireless: bcmdhd: Fix possible memory leak in escan/iscan
cpufreq: interactive governor: default 20ms timer
cpufreq: interactive governor: go to intermediate hi speed before max
cpufreq: interactive governor: scale to max only if at min speed
cpufreq: interactive governor: apply intermediate load on current speed
ARM: idle: update idle ticks before call idle end notifier
input: gpio_input: don't print debounce message unless flag is set
net: wireless: bcm4329: Skip dhd_bus_stop() if bus is already down
net: wireless: bcmdhd: Skip dhd_bus_stop() if bus is already down
net: wireless: bcmdhd: Improve suspend/resume processing
net: wireless: bcmdhd: Check if FW is Ok for internal FW call
tcp: Don't nuke connections for the wrong protocol
ARM: common: fiq_debugger: make uart irq be no_suspend
net: wireless: Skip connect warning for CONFIG_CFG80211_ALLOW_RECONNECT
mm: avoid livelock on !__GFP_FS allocations
...
Conflicts:
arch/arm/mm/cache-l2x0.c
arch/arm/vfp/vfpmodule.c
drivers/mmc/core/host.c
kernel/power/wakelock.c
net/bluetooth/hci_event.c
Signed-off-by: Bryan Huntsman <bryanh@codeaurora.org>
Multicore implementations of the Cortex-A15 require bit 6 of the
auxiliary control register to be set in order for cache and TLB
maintenance operations to be broadcast between CPUs.
This patch adds a new proc_info structure for Cortex-A15, which enables
the SMP bit during setup and includes the new HWCAP for integer
division.
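A C-level illustration of what the new setup path does for the broadcast
bit (the real code is assembly in proc-v7.S; the function name is
illustrative):

  #include <linux/init.h>
  #include <linux/types.h>

  /* Ensure ACTLR bit 6 is set so cache/TLB maintenance operations
   * are broadcast to the other cores. */
  static void __init a15_enable_maintenance_broadcast(void)
  {
      u32 actlr;

      asm volatile("mrc p15, 0, %0, c1, c0, 1" : "=r" (actlr));
      if (!(actlr & (1 << 6))) {
          actlr |= 1 << 6;
          asm volatile("mcr p15, 0, %0, c1, c0, 1" : : "r" (actlr));
      }
  }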
Change-Id: Ib862459b238e1e69ffd27b077eb22f465e84dff5
Signed-off-by: Will Deacon <will.deacon@arm.com>
[tdas@codeaurora.org: fixup proc-v7.S; the read of the Aux CTRL register
for ALT_SMP and ALT_UP was removed in commit 3f2bc4d6]
Signed-off-by: Taniya Das <tdas@codeaurora.org>
This patch adds processor info for ARM Ltd. Cortex A5,
which has an SCU initialisation procedure identical to the A9's.
Change-Id: Ie13605a4a0071bb8a3ccf5d91191fccb044191ab
Signed-off-by: Pawel Moll <pawel.moll@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Taniya Das <tdas@codeaurora.org>
As most of the proc info content is common across all v7
processors, this patch converts existing A9 and generic v7
descriptions into a macro (allowing extra flags in future).
Change-Id: I7f943ebb9e572ab78579e909287ecf65808e104e
Signed-off-by: Pawel Moll <pawel.moll@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Taniya Das <tdas@codeaurora.org>
If SPARSEMEM is enabled and there is a large amount of
memory, the page structures for the various sections
may not be contiguous. The code to traverse all of the
page structures in show_mem() was incorrectly assuming
all of the page structures were contiguous, causing
kernel panics in this case.
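The safer traversal looks roughly like this (per-pfn lookup instead of
walking a single struct page pointer across the bank; the helper name is
illustrative):

  #include <linux/mm.h>
  #include <asm/setup.h>

  /* With SPARSEMEM the mem_map need not be contiguous across
   * sections, so look every page up by pfn. */
  static void count_bank_pages(const struct membank *bank,
                               unsigned int *free, unsigned int *reserved)
  {
      unsigned long pfn;

      for (pfn = bank_pfn_start(bank); pfn < bank_pfn_end(bank); pfn++) {
          struct page *page;

          if (!pfn_valid(pfn))
              continue;
          page = pfn_to_page(pfn);
          if (PageReserved(page))
              (*reserved)++;
          else if (!page_count(page))
              (*free)++;
      }
  }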
CRs-fixed: 315006
Change-Id: I5e9437c369d23f1513c73feb46623006561d15cf
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
STRICT_MEMORY_RWX write-protects the kernel text section. This
is a problem for tools such as kprobes which need write access
to kernel text space.
This patch introduces a function to temporarily make part of the
kernel text space writeable and another to restore the original state.
They can be called by code which is intentionally writing to
this space, while still leaving the kernel protected from
unintentional writes at other times.
Change-Id: I879009c41771198852952e5e7c3b4d1368f12d5f
Signed-off-by: Neil Leeder <nleeder@codeaurora.org>
If CONFIG_HIGHMEM is enabled, the memory from the movable
zone will come from highmem as the zone of highest
address present is converted to ZONE_MOVABLE, and only
one zone may be used for ZONE_MOVABLE.
Ensure that ZONE_HIGHMEM is large enough for the
movable zone's required size to be created from
it if CONFIG_FIX_MOVABLE_ZONE is enabled as well.
There needs to be a small ZONE_HIGHMEM even after
memory hotremove (due to a generic 3.0 bug), so some
extra space is reserved for now.
Change-Id: I285c6ad24a6f2f18fc9e6510d379b126f201b082
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
commit 002ea9eefec98dada56fd5f8e432a4e8570c2a26 upstream.
The VM subsystem assumes that there are valid memmap entries from
the bank start aligned to MAX_ORDER_NR_PAGES.
On the Ux500 we have a lot of mem=N arguments on the commandline
triggering this bug several times over and causing kernel
oops messages.
Cc: Michael Bohan <mbohan@codeaurora.org>
Cc: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Johan Palsson <johan.palsson@stericsson.com>
Signed-off-by: Rabin Vincent <rabin.vincent@stericsson.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
If SPARSEMEM is enabled and there is a large amount of
memory, the page structures for the various sections
may not be contiguous. The code to traverse all of the
page structures in page_init() was incorrectly assuming
all of the page structures were contiguous, causing
early kernel panics in this case.
Change-Id: I10548520f4d1c0c232a2df940ab0f9d57078c586
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
ARCH_POPULATES_NODE_MAP is used by most of the other
architectures and allows finer-grained control of
how and where zones are placed.
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
Conflicts:
arch/arm/mm/init.c
Some platforms have memory at the top of the
first memory bank which the kernel cannot
access. Mapping this is unnecessary and wastes
precious virtual space.
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
Add support to enable L2 cache for MSM9615. It uses the
PL310 L2 cache controller.
Signed-off-by: Rohit Vaswani <rvaswani@codeaurora.org>
Conflicts:
arch/arm/mm/Kconfig
Since the PMEM driver establishes ioremaps on the fly for
on demand devices it is possible for the virtual address space
to become quickly fragmented. For such devices, pre-reserve the
virtual address range and only set up page table mappings when
required.
CRs-Fixed: 299510
Signed-off-by: Naveen Ramaraj <nramaraj@codeaurora.org>
The various routines to change memory power state used
in physical memory hotplug and hotremove used to take
a start pfn and a number of pages and return 1 for success
and 0 for failure.
The generic API these are called from now takes a start address
and size and returns a byte count of memory powered on or
off, so the ARM and platform specific routines should as well.
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
commit f630c1bdfbf8fe423325beaf60027cfc7fd7c610 upstream.
This patch implements a workaround for erratum 764369 affecting
Cortex-A9 MPCore with two or more processors (all current revisions).
Under certain timing circumstances, a data cache line maintenance
operation by MVA targeting an Inner Shareable memory region may fail to
proceed up to either the Point of Coherency or to the Point of
Unification of the system. This workaround adds a DSB instruction before
the relevant cache maintenance functions and sets a specific bit in the
diagnostic control register of the SCU.
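The SCU half of the workaround amounts to roughly the following at SCU
enable time (offset and bit as used by the upstream workaround; the DSBs
are added in the cache-v7.S maintenance routines, and the function name
here is illustrative):

  #include <linux/io.h>
  #include <linux/types.h>

  #define SCU_DIAG_CONTROL    0x30    /* A9 MPCore SCU diagnostic control */

  static void scu_apply_erratum_764369(void __iomem *scu_base)
  {
  #ifdef CONFIG_ARM_ERRATA_764369
      u32 diag = readl_relaxed(scu_base + SCU_DIAG_CONTROL);

      /* Set bit 0 of the diagnostic control register once */
      if (!(diag & 0x1))
          writel_relaxed(diag | 0x1, scu_base + SCU_DIAG_CONTROL);
  #endif
  }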
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d8e89b47e00ee80e920761145144640aac4cf71a upstream.
If the attempt to map a page for DMA fails (eg, because we're out of
mapping space) then we must not hold on to the page we allocated for
DMA - doing so will result in a memory leak.
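The general shape of the fix (the mapping helper here is a placeholder,
not the real API):

  #include <linux/gfp.h>
  #include <linux/dma-mapping.h>
  #include <asm/page.h>

  /* Placeholder for whatever establishes the DMA mapping/remap */
  void *map_for_dma(struct device *dev, struct page *page, size_t size,
                    dma_addr_t *handle);

  static void *alloc_and_map(struct device *dev, size_t size, gfp_t gfp,
                             dma_addr_t *handle)
  {
      struct page *page = alloc_pages(gfp, get_order(size));
      void *cpu_addr;

      if (!page)
          return NULL;

      cpu_addr = map_for_dma(dev, page, size, handle);
      if (!cpu_addr) {
          /* the fix: release the page rather than leaking it */
          __free_pages(page, get_order(size));
          return NULL;
      }
      return cpu_addr;
  }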
Reported-by: Bryan Phillippe <bp@darkforest.org>
Tested-by: Bryan Phillippe <bp@darkforest.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
A previous patch addressed the issue of move_freepages_block()
trampling on erroneously freed mem_map entries for the bank end
pfn. We also need to restrict the start pfn in a
complementary manner.
Also make macro usage consistent by adopting the use of
round_down and round_up.
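A sketch of the adjusted gap-freeing logic in arch/arm/mm/init.c
(free_memmap() is the existing file-local helper):

  #include <linux/init.h>
  #include <linux/kernel.h>   /* round_up/round_down */
  #include <linux/mmzone.h>   /* MAX_ORDER_NR_PAGES */
  #include <asm/setup.h>

  /* Keep the memmap for the MAX_ORDER-aligned region around every
   * bank so move_freepages_block() can never step onto a freed
   * entry at either end of a bank. */
  static void __init free_unused_memmap(struct meminfo *mi)
  {
      unsigned long bank_start, prev_bank_end = 0;
      unsigned int i;

      for_each_bank(i, mi) {
          struct membank *bank = &mi->bank[i];

          bank_start = round_down(bank_pfn_start(bank),
                                  MAX_ORDER_NR_PAGES);
          if (prev_bank_end && prev_bank_end < bank_start)
              free_memmap(prev_bank_end, bank_start);

          prev_bank_end = round_up(bank_pfn_end(bank),
                                   MAX_ORDER_NR_PAGES);
      }
  }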
Signed-off-by: Michael Bohan <mbohan@codeaurora.org>
The function arch_add_memory() should only perform logical
memory hotplug, but it was also improperly performing
physical memory hotplug. Even worse, it was adding pages
to the free list before the memory bank they were in
was powered on.
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
On certain early samples of the Krait processor, certain
thumb instructions may cause data aborts even if these
instructions fail their condition code checks. Add a
handler to ignore data aborts generated by such
instructions.
Signed-off-by: Stepan Moskovchenko <stepanm@codeaurora.org>
In older versions, there were memory tags that represented
memory which could be hotplugged, memory which could be
put into self-refresh and memory which could be reclaimed
from OSBL. The last of these was never implemented and
due to a re-architecting of the way DMM determines which
memory can be hotplugged, the other two tags are no longer necessary.
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
Conflicts:
arch/arm/mm/init.c
ARM errata 727915 for PL310 has been updated to include a new
workaround required for PL310 r2p0 for l2x0_flush_all, which also
affects l2x0_clean_all in my testing. For r2p0, clean or flush
each set/way individually. For r3p0 or greater, use the debug
register for cleaning and flushing.
Requires exporting the cache_id, sets and ways detected in the
init function for later use.
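For r3p0 and later the background operation is wrapped in debug register
writes, roughly as follows (using the file-local helpers and globals in
cache-l2x0.c):

  /* Disable write-back and cache linefill via the PL310 debug
   * register around the background clean+invalidate, then return
   * to normal operation. */
  static inline void __l2x0_flush_all(void)
  {
      debug_writel(0x03);
      writel_relaxed(l2x0_way_mask, l2x0_base + L2X0_CLEAN_INV_WAY);
      cache_wait_way(l2x0_base + L2X0_CLEAN_INV_WAY, l2x0_way_mask);
      cache_sync();
      debug_writel(0x00);
  }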
Change-Id: I215055cbe5dc7e4e8184fb2befc4aff672ef0a12
Signed-off-by: Colin Cross <ccross@android.com>
The l2x0_disable function attempts to writel with the l2x0_lock held.
This results in deadlock when the writel contains an outer_sync call
for the platform since the l2x0_lock is already held by the disable
function. A further problem is that disabling the L2 without flushing it
first can lead to the spin_lock operation becoming visible after the
spin_unlock, causing any subsequent L2 maintenance to deadlock.
This patch replaces the writel with a call to writel_relaxed in the
disabling code and adds a flush before disabling in the control
register, preventing livelock from occurring.
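The disable path then looks roughly like this (raw lock per the earlier
annotation patch; file-local helpers as in cache-l2x0.c):

  static void l2x0_disable(void)
  {
      unsigned long flags;

      raw_spin_lock_irqsave(&l2x0_lock, flags);
      /* flush first, then clear the control register without
       * issuing an outer_sync() while the lock is held */
      __l2x0_flush_all();
      writel_relaxed(0, l2x0_base + L2X0_CTRL);
      dsb();
      raw_spin_unlock_irqrestore(&l2x0_lock, flags);
  }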
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Ensure that the meminfo array is sanity checked before we pass the
memory to memblock. This helps to ensure that memblock and meminfo
agree on the dimensions of memory, especially when more memory is
passed than the kernel can deal with.
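The resulting ordering in setup_arch() is roughly:

      /* trim/validate the banks first, then hand the result to
       * memblock so both views of memory agree */
      sanity_check_meminfo();
      arm_memblock_init(&meminfo, mdesc);

      paging_init(mdesc);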
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>