Commit Graph

923 Commits

Author SHA1 Message Date
Rohit Vaswani
a76e99abc5 Merge branch 'Linux 3.0.21' into msm-3.0
Merge Upstream's stable 3.0.21 branch into msm-3.0
This consists of 814 commits and some merge conflicts.

The merge conflicts are due to some local changes to
msm-3.0 as well as some conflicts between Google's tree and
the upstream tree.

Conflicts:
	arch/arm/kernel/head.S
	drivers/bluetooth/ath3k.c
	drivers/bluetooth/btusb.c
	drivers/mmc/core/core.c
	drivers/tty/serial/serial_core.c
	drivers/usb/host/ehci-hub.c
	drivers/usb/serial/qcserial.c
	fs/namespace.c
	fs/proc/base.c

Change-Id: I62e2edbe213f84915e27f8cd6e4f6ce23db22a21
Signed-off-by: Rohit Vaswani <rvaswani@codeaurora.org>
2012-03-30 00:09:34 -07:00
Linux Build Service Account
e6af9bd6f5 Merge "ARM: 7325/1: fix v7 boot with lockdep enabled" into msm-3.0 2012-03-05 08:28:17 -08:00
Laura Abbott
445eb9a872 msm: rtb: Log the context id in the rtb
Store the context id in the register trace buffer.
The process id can be derived from the context id.
This gives a general idea of which process was last
running when the RTB stopped.

Change-Id: I2fb8934d008b8cf3666f1df2652846c15faca776
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
2012-02-21 18:23:32 -08:00
Rabin Vincent
6de5581531 ARM: 7325/1: fix v7 boot with lockdep enabled
Bootup with lockdep enabled has been broken on v7 since b46c0f74657d
("ARM: 7321/1: cache-v7: Disable preemption when reading CCSIDR").

This is because v7_setup (which is called very early during boot) calls
v7_flush_dcache_all, and the save_and_disable_irqs added by that patch
ends up attempting to call into lockdep C code (trace_hardirqs_off())
when we are in no position to execute it (no stack, MMU off).

Fix this by using a notrace variant of save_and_disable_irqs.  The code
already uses the notrace variant of restore_irqs.

Change-Id: I9453f2f278c715a0480d4962f9cbbea65a43ac39
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Acked-by: Stephen Boyd <sboyd@codeaurora.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Rabin Vincent <rabin@rab.in>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
2012-02-17 23:35:53 -08:00
Stephen Boyd
3a2f1add57 ARM: cache-v7: Disable preemption when reading CCSIDR
ARMv7's flush_cache_all() flushes caches via set/way. To
determine the cache attributes (line size, number of sets,
etc.) the assembly first writes the CSSELR register to select a
cache level and then reads the CCSIDR register. The CSSELR register
is banked per-cpu and is used to determine which cache level CCSIDR
reads. If the task is migrated between when the CSSELR is written and
the CCSIDR is read the CCSIDR value may be for an unexpected cache
level (for example L1 instead of L2) and incorrect cache flushing
could occur.

Disable interrupts across the write and read so that the correct
cache attributes are read and used for the cache flushing
routine. We disable interrupts instead of disabling preemption
because the critical section is only 3 instructions and we want
to call v7_dcache_flush_all from __v7_setup which doesn't have a
full kernel stack with a struct thread_info.

This fixes a problem we see in scm_call() when flush_cache_all()
is called from preemptible context and sometimes the L2 cache is
not properly flushed out.

Change-Id: I34f131ec4e2f8518a7b9a112c459c9f2650600b6
CRs-Fixed: 298842
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
2012-02-06 23:48:22 -08:00
Will Deacon
a7a6f92544 ARM: mm: update CONTEXTIDR register to contain PID of current process
This patch introduces a new Kconfig option which, when enabled, causes
the kernel to write the PID of the current task into the PROCID field
of the CONTEXTIDR on context switch. This is useful when analysing
hardware trace, since writes to this register can be configured to emit
an event into the trace stream.

The thread notifier for writing the PID is deliberately kept separate
from the ASID code, so that we can easily support newer processors (A15
onwards) which store the ASID in TTBR0. As such, the switch_mm code is
updated to perform a read-modify-write sequence to ensure that we don't
clobber the PID on older CPUs.

Change-Id: I7236834cf4b5e984c9d9f24ba6b872078c2b936f
Cc: Wolfgang Betz <wolfgang.betz@st.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Jeff Ohlstein <johlstei@codeaurora.org>
2012-02-03 14:13:54 -08:00
Will Deacon
95086de856 ARM: 7296/1: proc-v7.S: remove HARVARD_CACHE preprocessor guards
commit 612539e81f655f6ac73c7af1da8701c1ee618aee upstream.

On v7, we use the same cache maintenance instructions for data lines
as for unified lines. This was not the case for v6, where HARVARD_CACHE
was defined to indicate the L1 cache topology.

This patch removes the erroneous compile-time check for HARVARD_CACHE in
proc-v7.S, ensuring that we perform I-side invalidation at boot.

Reported-and-Acked-by: Shawn Guo <shawn.guo@linaro.org>

Acked-by: Catalin Marinas <Catalin.Marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-02-03 09:18:58 -08:00
Jamie Iles
0fd86293bb ARM: 7115/4: move __exception and friends to asm/exception.h
The definition of __exception_irq_entry for
CONFIG_FUNCTION_GRAPH_TRACER=y needs linux/ftrace.h, but this creates a
circular dependency with its current home in asm/system.h. Create
asm/exception.h and update all current users.

v4:	- rebase to rmk/for-next
v3:	- remove redundant includes of linux/ftrace.h
v2:	- document the usage restrictions of __exception*

Change-Id: I9efdb6a621bb915ad00b9a133ec93e98182fd483
Cc: Zoltan Devai <zdevai@gmail.com>
Signed-off-by: Jamie Iles <jamie@jamieiles.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
[tsoni@codeaurora.org: Merge fixes]
Signed-off-by: Trilok Soni <tsoni@codeaurora.org>
2011-12-29 13:52:38 +05:30
Linux Build Service Account
c56b6c4f12 Merge "arm: mm: Compare phys address instead of virt" into msm-3.0 2011-12-23 02:36:17 -08:00
Jack Cheung
7245af9e71 arm: Init SPARSEMEM section for removed memory
If a memblock has been removed with memblock_remove, it will
be skipped in 'for_each_memblock'. If a SPARSEMEM section
is completely enclosed within this removed memblock,
memory_present will never be called and the section will
never be initialized. This will cause garbage dereferences
later on.

This change loops over the memory banks instead of the memblocks.
Memory banks always exist regardless of memblock_remove,
ensuring that all SPARSEMEM sections are initialized, even
if they are removed later.

Change-Id: I1b7b0418a7e752f5bf69c3ec2ea8ea17e8ecfec5
Signed-off-by: Jack Cheung <jackc@codeaurora.org>
2011-12-21 15:29:21 -08:00
Jack Cheung
22cda04882 arm: mm: Compare phys address instead of virt
Doing __va(movable_reserved_start) can overflow if
movable_reserved_start is a very high address. This
will cause the comparison to evaluate incorrectly.

This change instead compares the physical addresses, which
cannot overflow/underflow.

Change-Id: I6c82df16b77a905617aa6f59c2eeaf7acb36c76d
Signed-off-by: Jack Cheung <jackc@codeaurora.org>
2011-12-20 15:15:49 -08:00
Linux Build Service Account
755c36e12b Merge "arm: mm: Check if movable zone has a nonzero size" into msm-3.0 2011-12-16 07:26:21 -08:00
Stephen Boyd
3ceed597e9 ARM: 6996/1: mm: Poison freed init memory
Poisoning __init marked memory can be useful when tracking down
obscure memory corruption bugs. Therefore, poison init memory
with 0xe7fddef0 to catch bugs earlier. The poison value is an
undefined instruction in ARM mode and branch to an undefined
instruction in Thumb mode.

Change-Id: Ibb91bee46b829b0deac9f4e592040a0054968998
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
[sboyd@codeaurora.org: Fix conflicts from total_unmovable_pages]
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>

ARM: 7010/1: mm: fix invalid loop for poison_init_mem

poison_init_mem() used a loop of:

	while ((count = count - 4))

which has two problems: an off-by-one error, so we poison one word
fewer than we should, and if count == 0 we loop forever
and poison too much.  On a platform with HAVE_TCM=y but nothing in the
TCMs, this caused corruption and the platform failed to boot.

Acked-by: Stephen Boyd <sboyd@codeaurora.org>
Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Jamie Iles <jamie@jamieiles.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
2011-12-14 16:03:02 -08:00
Jack Cheung
ebce0b8bf7 arm: mm: Check if movable zone has a nonzero size
If CONFIG_FIX_MOVABLE_ZONE is enabled but the
movable zone is not set up, vmalloc_min
will become 0 and the kernel will crash.

This change checks that movable_reserved_size
is nonzero before doing anything.

Change-Id: I3d3cb2482fde20fbd1dd0668dbbd414e42eda995
Signed-off-by: Jack Cheung <jackc@codeaurora.org>
2011-12-13 19:53:42 -08:00
Jack Cheung
59f9f1c9ae mm: Add total_unmovable_pages global variable
Vmalloc will exit if the amount it needs to allocate is
greater than totalram_pages. Vmalloc cannot allocate
from the movable zone, so pages in the movable zone should
not be counted.

This change adds a new global variable: total_unmovable_pages.
It is calculated in init.c, based on totalram_pages minus
the pages in the movable zone. Vmalloc now looks at this new
global instead of totalram_pages.

total_unmovable_pages can be modified during memory_hotplug.
If the zone you are offlining/onlining is unmovable, then
you modify it similar to totalram_pages.  If the zone is
movable, then no change is needed.

Change-Id: Ie55c41051e9ad4b921eb04ecbb4798a8bd2344d6
Signed-off-by: Jack Cheung <jackc@codeaurora.org>
2011-12-06 15:00:36 -08:00
Thomas Gleixner
450ea485b0 locking, ARM: Annotate low level hw locks as raw
Annotate the low level hardware locks which must not be preempted.

In mainline this change documents the low level nature of
the lock - otherwise there's no functional difference. Lockdep
and Sparse checking will work as usual.

Change-Id: I1c73fd5472b9ab356173637a7819095394004ebf
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
[kumarrav@codeaurora.org: fixup gic.c and cache-l2x0.c merge conflict]
Signed-off-by: Ravi Kumar <kumarrav@codeaurora.org>
2011-12-06 00:39:13 +05:30
Linux Build Service Account
1d2d6d20e4 Merge changes I79bc950f,Idfe44bb8,I9832f0ff into msm-3.0
* changes:
  arm: common: Makefile: Add cpaccess
  ARM: cpaccess: write-enables kernel space for write
  mm: add wrapper function for writing word to kernel text space
2011-12-03 19:45:06 -08:00
David Ng
76c5892fc1 ARM: Change CP15 regs to bump memory throughput on ScorpionMP
Change-Id: I9ace6222750954e43b4b57d049bb74645fb06424
Signed-off-by: David Ng <dave@codeaurora.org>
2011-11-30 14:52:43 -08:00
Neil Leeder
32942757bd mm: add wrapper function for writing word to kernel text space
Adds a function to encapsulate the locking, removal of write-protection,
word write, cache flush and invalidate and restoration
of write protection. This is a convenience function for callers
needing to update a word in kernel text space.

Change-Id: I9832f0ff659ddc62c55819af5318c94b70f5c11c
Signed-off-by: Neil Leeder <nleeder@codeaurora.org>
2011-11-30 10:38:39 -05:00
Jack Cheung
635e3505bf arm: mm: Remove 5MB offset for vmalloc
commit f973fab692 added a 5MB highmem zone to fix
a page fault error due to a generic 3.0 bug.
Remove this extra 5MB offset because we now have a 256MB
highmem zone.

Change-Id: I430fed88baccd44d438c21607f87219206f1f20a
Signed-off-by: Jack Cheung <jackc@codeaurora.org>
2011-11-22 18:02:01 -08:00
Bryan Huntsman
d074fa2796 Merge remote-tracking branch 'common/android-3.0' into msm-3.0
* common/android-3.0: (570 commits)
  misc: remove kernel debugger core
  ARM: common: fiq_debugger: dump sysrq directly to console if enabled
  ARM: common: fiq_debugger: add irq context debug functions
  net: wireless: bcmdhd: Call init_ioctl() only if was started properly for WEXT
  net: wireless: bcmdhd: Call init_ioctl() only if was started properly
  net: wireless: bcmdhd: Fix possible memory leak in escan/iscan
  cpufreq: interactive governor: default 20ms timer
  cpufreq: interactive governor: go to intermediate hi speed before max
  cpufreq: interactive governor: scale to max only if at min speed
  cpufreq: interactive governor: apply intermediate load on current speed
  ARM: idle: update idle ticks before call idle end notifier
  input: gpio_input: don't print debounce message unless flag is set
  net: wireless: bcm4329: Skip dhd_bus_stop() if bus is already down
  net: wireless: bcmdhd: Skip dhd_bus_stop() if bus is already down
  net: wireless: bcmdhd: Improve suspend/resume processing
  net: wireless: bcmdhd: Check if FW is Ok for internal FW call
  tcp: Don't nuke connections for the wrong protocol
  ARM: common: fiq_debugger: make uart irq be no_suspend
  net: wireless: Skip connect warning for CONFIG_CFG80211_ALLOW_RECONNECT
  mm: avoid livelock on !__GFP_FS allocations
  ...

Conflicts:
	arch/arm/mm/cache-l2x0.c
	arch/arm/vfp/vfpmodule.c
	drivers/mmc/core/host.c
	kernel/power/wakelock.c
	net/bluetooth/hci_event.c

Signed-off-by: Bryan Huntsman <bryanh@codeaurora.org>
2011-11-16 13:52:50 -08:00
Will Deacon
5b7cedf97d ARM: proc: add proc info for Cortex-A15MP using classic page tables
Multicore implementations of the Cortex-A15 require bit 6 of the
auxiliary control register to be set in order for cache and TLB
maintenance operations to be broadcast between CPUs.

This patch adds a new proc_info structure for Cortex-A15, which enables
the SMP bit during setup and includes the new HWCAP for integer
division.

Change-Id: Ib862459b238e1e69ffd27b077eb22f465e84dff5
Signed-off-by: Will Deacon <will.deacon@arm.com>
[tdas@codeaurora.org: fixup proc-v7.S, read Aux CTRL register
 for ALT_SMP and ALT_UP got removed in commit 3f2bc4d6]
Signed-off-by: Taniya Das <tdas@codeaurora.org>
2011-11-12 14:40:15 +05:30
Linux Build Service Account
99af3c9567 Merge changes Ie13605a4,I7f943ebb into msm-3.0
* changes:
  ARM: proc: add Cortex-A5 proc info
  ARM: proc: convert v7 proc infos into a common macro
2011-11-10 16:32:29 -08:00
Pawel Moll
a177d5541d ARM: proc: add Cortex-A5 proc info
This patch adds processor info for the ARM Ltd. Cortex-A5,
whose SCU initialisation procedure is identical to the A9's.

Change-Id: Ie13605a4a0071bb8a3ccf5d91191fccb044191ab
Signed-off-by: Pawel Moll <pawel.moll@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Taniya Das <tdas@codeaurora.org>
2011-11-10 14:03:39 +05:30
Pawel Moll
a2a480af0f ARM: proc: convert v7 proc infos into a common macro
As most of the proc info content is common across all v7
processors, this patch converts existing A9 and generic v7
descriptions into a macro (allowing extra flags in future).

Change-Id: I7f943ebb9e572ab78579e909287ecf65808e104e
Signed-off-by: Pawel Moll <pawel.moll@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Taniya Das <tdas@codeaurora.org>
2011-11-10 14:02:51 +05:30
Olav Haugan
ea5a90bd71 arm: handle discontig page struct between sections
If SPARSEMEM is enabled and there is a large amount of
memory, the page structures for the various sections
may not be contiguous. The code to traverse all of the
page structures in show_mem() was incorrectly assuming
all of the page structures were contiguous, causing
kernel panics in this case.

CRs-fixed: 315006
Change-Id: I5e9437c369d23f1513c73feb46623006561d15cf
Signed-off-by: Olav Haugan <ohaugan@codeaurora.org>
2011-11-07 14:30:11 -08:00
Neil Leeder
f06ab97f06 arm: mm: add functions to temporarily allow write to kernel text
STRICT_MEMORY_RWX write-protects the kernel text section. This
is a problem for tools such as kprobes which need write access
to kernel text space.

This patch introduces a function to temporarily make part of the
kernel text space writeable and another to restore the original state.
They can be called by code which is intentionally writing to
this space, while still leaving the kernel protected from
unintentional writes at other times.

Change-Id: I879009c41771198852952e5e7c3b4d1368f12d5f
Signed-off-by: Neil Leeder <nleeder@codeaurora.org>
2011-11-04 19:01:04 -06:00
Larry Bassel
f973fab692 arm: ensure enough memory is available for movable zone
If CONFIG_HIGHMEM is enabled, the memory from the movable
zone will come from highmem as the zone of highest
address present is converted to ZONE_MOVABLE, and only
one zone may be used for ZONE_MOVABLE.

Ensure that ZONE_HIGHMEM is large enough for the
movable zone's required size to be created from
it if CONFIG_FIX_MOVABLE_ZONE is enabled as well.

There needs to be a small ZONE_HIGHMEM even after
memory hotremove (due to a generic 3.0 bug), so
extra is reserved for now.

Change-Id: I285c6ad24a6f2f18fc9e6510d379b126f201b082
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
2011-10-30 09:14:50 -07:00
Colin Cross
2bb3e31015 Merge commit 'v3.0.8' into android-3.0 2011-10-27 15:01:19 -07:00
Linus Walleij
1289deb9b5 ARM: 7113/1: mm: Align bank start to MAX_ORDER_NR_PAGES
commit 002ea9eefec98dada56fd5f8e432a4e8570c2a26 upstream.

The VM subsystem assumes that there are valid memmap entries from
the bank start aligned to MAX_ORDER_NR_PAGES.

On the Ux500 we have a lot of mem=N arguments on the commandline
triggering this bug several times over and causing kernel
oops messages.

Cc: Michael Bohan <mbohan@codeaurora.org>
Cc: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Johan Palsson <johan.palsson@stericsson.com>
Signed-off-by: Rabin Vincent <rabin.vincent@stericsson.com>
Signed-off-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-10-25 07:10:13 +02:00
Larry Bassel
b2a4c825be arm: handle discontiguous page structures between sections
If SPARSEMEM is enabled and there is a large amount of
memory, the page structures for the various sections
may not be contiguous. The code to traverse all of the
page structures in page_init() was incorrectly assuming
all of the page structures were contiguous, causing
early kernel panics in this case.

Change-Id: I10548520f4d1c0c232a2df940ab0f9d57078c586
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
2011-10-12 14:48:28 -07:00
Maheshkumar Sivasubramanian
c71d8ff4b4 arm: cache-l2x0: Restore the data latency ctrl register after suspend.
Signed-off-by: Maheshkumar Sivasubramanian <msivasub@codeaurora.org>
2011-10-11 09:59:27 -07:00
Larry Bassel
d4e809ea8c arm: add support for ARCH_POPULATES_NODE_MAP
ARCH_POPULATES_NODE_MAP is used by most of the other
architectures and allows finer-grained control of
how and where zones are placed.

Signed-off-by: Larry Bassel <lbassel@codeaurora.org>

Conflicts:

	arch/arm/mm/init.c
2011-10-04 17:14:15 -07:00
Larry Bassel
150d503386 arm: add support for DONT_MAP_HOLE_AFTER_MEMBANK0
Some platforms have memory at the top of the
first memory bank which the kernel cannot
access. Mapping this is unnecessary and wastes
precious virtual space.

Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
2011-10-03 16:21:29 -07:00
Rohit Vaswani
f0ce9ae61d msm: 9615: Enable L2 cache for MSM9615
Add support to enable L2 cache for MSM9615. It uses the
PL310 L2 cache controller.

Signed-off-by: Rohit Vaswani <rvaswani@codeaurora.org>

Conflicts:

	arch/arm/mm/Kconfig
2011-10-03 16:18:36 -07:00
Naveen Ramaraj
189f188d29 PMEM: Pre reserve virtual address range for on demand devices.
Since the PMEM driver establishes ioremaps on the fly for
on demand devices it is possible for the virtual address space
to become quickly fragmented. For such devices, pre-reserve the
virtual address range and only set up page table mappings when
required.

CRs-Fixed: 299510
Signed-off-by: Naveen Ramaraj <nramaraj@codeaurora.org>
2011-10-03 16:15:12 -07:00
Larry Bassel
a4414b164e arm: make memory power routines conform to current generic API
The various routines to change memory power state used
in physical memory hotplug and hotremove used to take
a start pfn and a number of pages and return 1 for success
and 0 for failure.

The generic API these are called from now takes a start address
and size and returns a byte count of memory powered on or
off, so the ARM and platform specific routines should as well.

Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
2011-10-03 16:15:10 -07:00
Will Deacon
85fd323003 ARM: 7091/1: errata: D-cache line maintenance operation by MVA may not succeed
commit f630c1bdfbf8fe423325beaf60027cfc7fd7c610 upstream.

This patch implements a workaround for erratum 764369 affecting
Cortex-A9 MPCore with two or more processors (all current revisions).
Under certain timing circumstances, a data cache line maintenance
operation by MVA targeting an Inner Shareable memory region may fail to
proceed up to either the Point of Coherency or to the Point of
Unification of the system. This workaround adds a DSB instruction before
the relevant cache maintenance functions and sets a specific bit in the
diagnostic control register of the SCU.

Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-10-03 11:41:06 -07:00
Russell King
017a4b5497 ARM: dma-mapping: free allocated page if unable to map
commit d8e89b47e00ee80e920761145144640aac4cf71a upstream.

If the attempt to map a page for DMA fails (eg, because we're out of
mapping space) then we must not hold on to the page we allocated for
DMA - doing so will result in a memory leak.

Reported-by: Bryan Phillippe <bp@darkforest.org>
Tested-by: Bryan Phillippe <bp@darkforest.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2011-10-03 11:41:06 -07:00
Jin Hong
ada9e12f26 Revert "Revert "arm: mm: restrict kernel memory permissions if CONFIG_STRICT_MEMORY_RWX set""
This reverts commit b45243a26a1a3bee14b3ff506a61d1eb3cb80e76.

Signed-off-by: Jin Hong <jinh@codeaurora.org>
2011-10-03 10:28:19 -07:00
Michael Bohan
ccd78a45dc arm: mm: Exclude additional mem_map entries from free
A previous patch addressed the issue of move_freepages_block()
trampling on erroneously freed mem_map entries for the bank end
pfn. We also need to restrict the start pfn in a
complementary manner.

Also make macro usage consistent by adopting the use of
round_down and round_up.

Signed-off-by: Michael Bohan <mbohan@codeaurora.org>
2011-10-03 10:26:53 -07:00
Larry Bassel
61fe47257a msm: arch_add_memory should only perform logical memory hotplug
The function arch_add_memory() should only perform logical
memory hotplug, but it was also improperly performing
physical memory hotplug. Even worse, it was adding pages
to the free list before the memory bank they were in
was powered on.

Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
2011-10-03 10:26:09 -07:00
Stepan Moskovchenko
00da074482 arm: krait: Conditional execution abort handler
On certain early samples of the Krait processor, certain
thumb instructions may cause data aborts even if these
instructions fail their condition code checks. Add a
handler to ignore data aborts generated by such
instructions.

Signed-off-by: Stepan Moskovchenko <stepanm@codeaurora.org>
2011-10-03 09:59:03 -07:00
Larry Bassel
ae683e860b arm: remove support for non-standard memory tags
In older versions, there were memory tags that represented
memory which could be hotplugged, memory which could be
put into self-refresh and memory which could be reclaimed
from OSBL. The last of these was never implemented and
due to a re-architecting of the way DMM determines which
memory can be hotplugged, the other two tags are no longer necessary.

Signed-off-by: Larry Bassel <lbassel@codeaurora.org>

Conflicts:

	arch/arm/mm/init.c
2011-10-03 09:59:01 -07:00
Bryan Huntsman
3f2bc4d6eb Initial Contribution
msm-2.6.38: tag AU_LINUX_ANDROID_GINGERBREAD.02.03.04.00.142

Signed-off-by: Bryan Huntsman <bryanh@codeaurora.org>
2011-10-03 09:57:10 -07:00
Colin Cross
5ea3a7c6c6 ARM: cache-l2x0: update workaround for PL310 errata 727915
ARM errata 727915 for PL310 has been updated to include a new
workaround required for PL310 r2p0 for l2x0_flush_all, which also
affects l2x0_clean_all in my testing.  For r2p0, clean or flush
each set/way individually.  For r3p0 or greater, use the debug
register for cleaning and flushing.

Requires exporting the cache_id, sets and ways detected in the
init function for later use.

Change-Id: I215055cbe5dc7e4e8184fb2befc4aff672ef0a12
Signed-off-by: Colin Cross <ccross@android.com>
2011-09-19 23:35:45 -07:00
Colin Cross
75c56a8111 Merge commit 'v3.0-rc7' into android-3.0 2011-07-12 20:10:37 -07:00
Will Deacon
38a8914f9a ARM: 6987/1: l2x0: fix disabling function to avoid deadlock
The l2x0_disable function attempts to writel with the l2x0_lock held.
This results in deadlock when the writel contains an outer_sync call
for the platform since the l2x0_lock is already held by the disable
function. A further problem is that disabling the L2 without flushing it
first can lead to the spin_lock operation becoming visible after the
spin_unlock, causing any subsequent L2 maintenance to deadlock.

This patch replaces the writel with a call to writel_relaxed in the
disabling code and adds a flush before disabling in the control
register, preventing livelock from occurring.

Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-06 20:48:08 +01:00
Russell King
0371d3f7e8 ARM: move memory layout sanity checking before meminfo initialization
Ensure that the meminfo array is sanity checked before we pass the
memory to memblock.  This helps to ensure that memblock and meminfo
agree on the dimensions of memory, especially when more memory is
passed than the kernel can deal with.

Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2011-07-05 20:27:16 +01:00
Colin Cross
e55d4fa967 Merge commit 'v3.0-rc5' into android-3.0 2011-06-29 13:54:42 -07:00