I don't have a z10 to test this anymore, so I have no idea if the code
works at all or even crashes. I can try to emulate it, but that is just
guesswork.
Nor do we know whether the z10 special handling is still better
performance-wise than the generic handling. There have been a lot of changes to
the scheduler.
Therefore let's play safe and remove the special handling.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The z13 machine added a fourth level to the cpu topology
information. The new top level is called drawer.
A drawer contains two books, which used to be the top level.
Adding this additional scheduling domain showed performance
improvements of up to 8% for some workloads, while no workloads
seem to be impacted in a negative way.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The s390 cpu topology gained another hierarchy level. The top level is
now called drawer and contains several books. A book used to be the
top level.
In order to expose the cpu topology to user space, allow the creation
of new sysfs attributes dependent on CONFIG_SCHED_DRAWER, which an
architecture may define and select.
These additional attributes will be available:
/sys/devices/system/cpu/cpuX/topology/drawer_id
/sys/devices/system/cpu/cpuX/topology/drawer_siblings
/sys/devices/system/cpu/cpuX/topology/drawer_siblings_list
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Rename DIAG308_IPL and DIAG308_DUMP to DIAG308_LOAD_CLEAR and
DIAG308_LOAD_NORMAL_DUMP to better reflect the associated IPL
functions.
Suggested-by: Cornelia Huck <cornelia.huck@de.ibm.com>
Suggested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Avoid clearing memory for CCW-type re-ipl within a logical
partition. This can save a significant amount of time if a logical
partition contains a lot of memory.
On the other hand we still clear memory if running within a second
level hypervisor, since the hypervisor can simply free all memory that
was used for the guest.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Reviewed-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
We have some inline assemblies where the extable entry points to a
label at the end of an inline assembly which is not followed by an
instruction.
On the other hand we also have inline assemblies where the extable
entry points to the first instruction of an inline assembly.
If a first-type inline asm (extable points to an empty label at the end)
were directly followed by a second-type inline asm (extable points to
the first instruction), we would have two different extable entries
that point to the same instruction but have different target
addresses.
This can lead to quite random behaviour, depending on sorting order.
I verified that we currently do not have such collisions within the
kernel. However, to avoid such subtle bugs, add a couple of nop
instructions to those inline assemblies which contain an extable that
points to an empty label.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
We always expect that get_user and put_user return with zero. Give the
compiler a hint so it can slightly optimize the code and avoid
branches.
This is the same as what x86 got with commit a76cf66e94 ("x86/uaccess:
Tell the compiler that uaccess is unlikely to fault").
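A minimal sketch of the idea, with simplified macro and helper names
(not the exact s390 uaccess code):

#define put_user_sketch(x, ptr)						\
({									\
	/* hypothetical helper doing the actual copy to user space */	\
	int __pu_err = __put_user_fn(&(x), (ptr), sizeof(*(ptr)));	\
	/* tell gcc that a non-zero error code is the cold path */	\
	__builtin_expect(__pu_err, 0);					\
})

A caller like "if (put_user(val, uptr)) return -EFAULT;" then compiles
to a straight-line fast path with the error branch moved out of line.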
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Fix some whitespace damage that was introduced by me with a
query-replace when removing 31-bit support.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
When we use the iommu_area_alloc helper to get dma addresses
we specify the boundary_size parameter but not the offset (called
shift in this context).
As long as the offset (start_dma) is a multiple of the boundary
we're ok (on current machines start_dma always seems to be 4GB).
Don't leave this to chance and specify the offset for iommu_area_alloc.
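A hedged sketch of such a call site (structure and field names are
approximate, not the exact s390 pci_dma code):

offset = iommu_area_alloc(zdev->iommu_bitmap, zdev->iommu_pages,
			  zdev->next_bit, nr_pages,
			  zdev->start_dma >> PAGE_SHIFT,	/* shift/offset */
			  boundary_size, 0);

With the shift passed in, the boundary check inside iommu_area_alloc
operates on dma addresses instead of bitmap indexes that start at zero.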
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
We don't have an architectural guarantee on the value of
the dma offset but rely on it to be at least page aligned.
Enforce page alignment of start_dma.
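A one-line sketch of the enforcement (field name approximate):

zdev->start_dma = PAGE_ALIGN(zdev->start_dma);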
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Use dynamically allocated irq descriptors on s390 which allows
us to get rid of the s390 specific config option PCI_NR_MSI and
exploit more MSI interrupts. Also the size of the kernel image
is reduced by 131K (using performance_defconfig).
Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
When s390 traces with hex_ascii or sprintf view are
extracted and sorted, use the sort option -s (stable)
to avoid multiple lines with the same time stamp being
sorted using the rest of the line as secondary key.
Signed-off-by: Thomas Richter <tmricht@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Small cleanup patch to use the shorter __section macro everywhere.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
On s390 __ro_after_init is currently mapped to __read_mostly which
means that data marked as __ro_after_init will not be protected.
Reason for this is that the common code __ro_after_init implementation
is x86 centric: the ro_after_init data section was added to rodata,
since x86 enables write protection to kernel text and rodata very
late. On s390 we have write protection for these sections enabled with
the initial page tables. So adding the ro_after_init data section to
rodata does not work on s390.
In order to make __ro_after_init work properly on s390 move the
ro_after_init data right behind rodata. Unlike the rodata section it
will be marked read-only later, after all init calls have happened.
This s390-specific implementation adds new __start_ro_after_init and
__end_ro_after_init labels. Everything in between will be marked
read-only after the init calls have happened. In addition to the
__ro_after_init data, also move the exception table there, since from a
practical point of view it fits the __ro_after_init requirements.
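A hedged sketch of how the new labels could be used once all initcalls
have run (simplified; the actual patch also touches the linker script):

extern char __start_ro_after_init[], __end_ro_after_init[];

static void mark_ro_after_init(void)
{
	unsigned long start = (unsigned long)&__start_ro_after_init;
	unsigned long end = (unsigned long)&__end_ro_after_init;

	/* switch everything between the two labels to read-only */
	set_memory_ro(start, (end - start) >> PAGE_SHIFT);
}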
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
commit c74ba8b348 ("arch: Introduce post-init read-only memory")
introduced the __ro_after_init attribute which allows variables to be
added to the ro_after_init data section.
This new section was added to rodata, even though it contains writable
data. This in turn causes problems on architectures which mark the
page table entries that point to rodata read-only very early.
This patch allows architectures to implement their own handling of the
.data..ro_after_init section.
Usually that would be:
- mark the rodata section read-only very early
- mark the ro_after_init section read-only within mark_rodata_ro
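For illustration, a hedged usage example (variable and initcall names
are made up): the variable is written during an initcall and becomes
read-only once the ro_after_init section has been protected:

static unsigned long boot_feature_mask __ro_after_init;

static int __init detect_features(void)
{
	boot_feature_mask = 0x01;	/* last legitimate write */
	return 0;
}
early_initcall(detect_features);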
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Reviewed-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
ptep_flush_lazy and pmdp_flush_lazy use mm->context.attach_count to
decide between a lazy TLB flush vs an immediate TLB flush. The field
contains two 16-bit counters: the number of CPUs that have the mm
attached and can create TLB entries for it and the number of CPUs in
the middle of a page table update.
The __tlb_flush_asce, ptep_flush_direct and pmdp_flush_direct functions
use the attach counter and a mask check with mm_cpumask(mm) to decide
between a local flush of the current CPU and a global flush.
For all these functions the decision between lazy vs immediate and
local vs global TLB flush can be based on CPU masks. There are two
masks: the mm->context.cpu_attach_mask with the CPUs that are actively
using the mm, and the mm_cpumask(mm) with the CPUs that have used the
mm since the last full flush. The decision between lazy vs immediate
flush is based on the mm->context.cpu_attach_mask, to decide between
local vs global flush the mm_cpumask(mm) is used.
With this patch all checks will use the CPU masks; the old counter
mm->context.attach_count with its two 16-bit values is turned into a
single counter mm->context.flush_count that keeps track of the number
of CPUs with incomplete page table updates. The sole user of this
counter is finish_arch_post_lock_switch() which waits for the end of
all page table updates.
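A hedged sketch of the resulting decision logic (simplified, helper
names approximate):

static inline void flush_decision_sketch(struct mm_struct *mm)
{
	if (cpumask_equal(&mm->context.cpu_attach_mask,
			  cpumask_of(smp_processor_id())))
		mm->context.flush_mm = 1;	/* lazy: flush later */
	else if (cpumask_equal(mm_cpumask(mm),
			       cpumask_of(smp_processor_id())))
		__tlb_flush_local();		/* only this CPU used the mm */
	else
		__tlb_flush_global();		/* other CPUs may hold entries */
}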
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The bitmap_equal function has optimized code for small bitmaps with fewer
than BITS_PER_LONG bits. For larger bitmaps the out-of-line function
__bitmap_equal is called.
For a constant number of bits divisible by BITS_PER_LONG the memcmp
function can be used. For s390, gcc knows how to optimize this function:
memcmp calls with up to 256 bytes / 2048 bits are translated into a
single instruction.
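A hedged sketch of the resulting check in bitmap_equal (simplified):

static inline int bitmap_equal_sketch(const unsigned long *src1,
				      const unsigned long *src2,
				      unsigned int nbits)
{
	if (__builtin_constant_p(nbits) && (nbits % BITS_PER_LONG) == 0)
		return !memcmp(src1, src2, nbits / 8);	/* inlined by gcc on s390 */
	return __bitmap_equal(src1, src2, nbits);
}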
Reviewed-by: David Hildenbrand <dahi@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The vunmap_pte_range() function calls ptep_get_and_clear() without any
locking. ptep_get_and_clear() uses ptep_xchg_lazy()/ptep_flush_direct()
for the page table update. ptep_flush_direct requires that preemption
is disabled, but without any locking this is not the case. If the kernel
preempts the task while the attach_counter is increased, an endless loop
in finish_arch_post_lock_switch() will occur the next time the task is
scheduled.
Add explicit preempt_disable()/preempt_enable() calls to the relevant
functions in arch/s390/mm/pgtable.c.
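A hedged sketch of the pattern (function body simplified; the real
ptep_xchg_lazy also handles pgstes):

pte_t ptep_xchg_lazy(struct mm_struct *mm, unsigned long addr,
		     pte_t *ptep, pte_t new)
{
	pte_t old;

	preempt_disable();	/* callers such as vunmap_pte_range() may be preemptible */
	old = ptep_flush_lazy(mm, addr, ptep);
	*ptep = new;
	preempt_enable();
	return old;
}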
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The External-Time-Reference (ETR) clock synchronization interface has
been superseded by Server-Time-Protocol (STP). Remove the outdated
ETR interface.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The PTFF instruction can be used to retrieve information about UTC
including the current number of leap seconds. Use this value to
convert the coordinated server time value of the TOD clock to a
proper UTC timestamp to initialize the system time. Without this
correction the system time will be off by the number of leap seconds
until it has been corrected via NTP.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
It is possible to specify a user offset for the TOD clock, e.g. +2 hours.
The TOD clock will carry this offset even if the clock is synchronized
with STP. This makes the time stamps acquired with get_sync_clock()
useless as another LPAR might use a different TOD offset.
Use the PTFF instruction to get the TOD epoch difference and subtract
it from the TOD clock value to get a physical timestamp. As the epoch
difference contains the sync check delta as well, the LPAR offset value
to the physical clock needs to be refreshed after each clock
synchronization.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The PTFF instruction is not a function of ETR, rename and move the
PTFF definitions from etr.h to timex.h.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The sync clock operation of the channel subsystem call for STP delivers
the TOD clock difference as a result. Use this TOD clock difference
instead of the difference between the TOD timestamps before and after
the sync clock operation.
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Reducing the size of reserved memory for the crash kernel will result
in an immediate crash on s390. Reason for that is that we do not
create struct pages for memory that is reserved. If that memory is
freed, any access to struct pages which correspond to this memory will
result in invalid memory accesses and a kernel panic.
Fix this by properly creating struct pages when the system gets
initialized. Change the code also to make use of set_memory_ro() and
set_memory_rw() so page tables will be split if required.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Implement an s390 version of the weak crash_free_reserved_phys_range
function. This allows us to update the size of the reserved crash
kernel memory if it will be resized.
This was previously done with a call to crash_unmap_reserved_pages
from crash_shrink_memory which was removed with ("s390/kexec:
consolidate crash_map/unmap_reserved_pages() and
arch_kexec_protect(unprotect)_crashkres()")
Fixes: 7a0058ec78 ("s390/kexec: consolidate crash_map/unmap_reserved_pages() and arch_kexec_protect(unprotect)_crashkres()")
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
The segment/region table that is part of the kernel image must be
properly aligned to 16k in order to make the crdte inline assembly
work.
Otherwise it will calculate a wrong segment/region table start address
and access incorrect memory locations if the swapper_pg_dir is not
aligned to 16k.
Therefore define BSS_FIRST_SECTIONS in order to put the swapper_pg_dir
at the beginning of the bss section and also align the bss section to
16k just like other architectures did.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Let's provide the basic machine information for dump_stack on
s390. This enables the "Hardware name:" line and results in
output like
[...]
Oops: 0004 ilc:2 [#1] SMP
Modules linked in:
CPU: 1 PID: 74 Comm: sh Not tainted 4.5.0+ #205
Hardware name: IBM 2964 NC9 704 (KVM)
[...]
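A hedged sketch of how the line could be set via the generic helper
(the literal values below are placeholders; the real code reads them
from the machine/sysinfo data):

static int __init setup_arch_string(void)
{
	dump_stack_set_arch_desc("IBM %s %s %s (%s)",
				 "2964", "NC9", "704", "KVM");
	return 0;
}
arch_initcall(setup_arch_string);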
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Daniel van Gerpen <daniel@vangerpen.de>
Acked-by: Peter Oberparleiter <oberpar@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Show the dynamic and static cpu mhz of each cpu. Since these values
are per cpu this requires a fundamental extension of the format of
/proc/cpuinfo.
Historically we had only a single line per cpu and a summary at the
top of the file. This format is hardly extendible if we want to add
more per cpu information.
Therefore this patch adds per cpu blocks at the end of /proc/cpuinfo:
cpu : 0
cpu Mhz dynamic : 5504
cpu Mhz static : 5504
cpu : 1
cpu Mhz dynamic : 5504
cpu Mhz static : 5504
cpu : 2
cpu Mhz dynamic : 5504
cpu Mhz static : 5504
cpu : 3
cpu Mhz dynamic : 5504
cpu Mhz static : 5504
Right now each block contains only the dynamic and static cpu mhz,
but it can be easily extended like on every other architecture.
This extension is supposed to be compatible with the old format.
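A hedged sketch of the show routine that emits one such block (per-cpu
structure and field names are approximate):

static void show_cpu_mhz(struct seq_file *m, unsigned long n)
{
	struct cpu_info *c = &per_cpu(cpu_info, n);

	seq_printf(m, "cpu             : %ld\n", n);
	seq_printf(m, "cpu Mhz dynamic : %d\n", c->cpu_mhz_dynamic);
	seq_printf(m, "cpu Mhz static  : %d\n", c->cpu_mhz_static);
}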
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Sascha Silbe <silbe@linux.vnet.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Change the code to print all the current output during the first
iteration. This is a preparation patch for the upcoming per cpu block
extension to /proc/cpuinfo.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Ensure that we always have __stringify().
Signed-off-by: Jason Baron <jbaron@akamai.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Add statistics that show how memory is mapped within the kernel
identity mapping. This is more or less the same as git
commit ce0c0e50f9 ("x86, generic: CPA add statistics about state
of direct mapping v4") for x86.
I also intentionally copied the lower case "k" within DirectMap4k vs
the upper case "M" and "G" within the two other lines. Let's have
consistent inconsistencies across architectures.
The output of /proc/meminfo now contains these additional lines:
DirectMap4k: 2048 kB
DirectMap1M: 3991552 kB
DirectMap2G: 4194304 kB
The implementation on s390 is lockless unlike the x86 version, since I
assume changes to the kernel mapping are a very rare event. Therefore
it really doesn't matter if these statistics could potentially be
inconsistent if read while kernel page tables are being changed.
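A hedged sketch of the bookkeeping (counter names approximate): the
mapping code bumps one counter per mapping size and arch_report_meminfo()
prints them, without any locking as noted above:

enum { MAP_4K, MAP_1M, MAP_2G, MAP_MAX };
static atomic_long_t direct_pages_count[MAP_MAX];

void arch_report_meminfo(struct seq_file *m)
{
	seq_printf(m, "DirectMap4k:    %8lu kB\n",
		   atomic_long_read(&direct_pages_count[MAP_4K]) << 2);
	seq_printf(m, "DirectMap1M:    %8lu kB\n",
		   atomic_long_read(&direct_pages_count[MAP_1M]) << 10);
	seq_printf(m, "DirectMap2G:    %8lu kB\n",
		   atomic_long_read(&direct_pages_count[MAP_2G]) << 21);
}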
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
For the kernel identity mapping, map everything read-writable and
subsequently call set_memory_ro() to make the ro section read-only.
This simplifies the code a lot.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
set_memory_ro() and set_memory_rw() currently only work on 4k
mappings, which is good enough for module code aka the vmalloc area.
However we have already stumbled twice over the need to make this also
work on larger mappings:
- the ro after init patch set
- the crash kernel resize code
Therefore this patch implements automatic kernel page table splitting
if e.g. set_memory_ro() would be called on parts of a 2G mapping.
This works quite the same as the x86 code, but is much simpler.
In order to make this work and to be architecturally compliant we now
always use the csp, cspg or crdte instructions to replace valid page
table entries. This means that set_memory_ro() and set_memory_rw()
will be much more expensive than before. In order to avoid huge
latencies the code contains a couple of cond_resched() calls.
The current code only splits page tables, but does not merge them if
it would be possible. The reason for this is that currently there is
no real-life scenario where this would really happen. All current use
cases that I know of only change access rights once during the
lifetime. If that should change we can still implement kernel page table
merging at a later time.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Make pmd_wrprotect() and pmd_mkwrite() available independently from
CONFIG_TRANSPARENT_HUGEPAGE and CONFIG_HUGETLB_PAGE so these can be
used on the kernel mapping.
Also introduce a couple of pud helper functions, namely pud_pfn(),
pud_wrprotect(), pud_mkwrite(), pud_mkdirty() and pud_mkclean()
which only work on the kernel mapping.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Always use PAGE_KERNEL when re-enabling pages within the kernel
mapping due to debug pagealloc. Without using this pgprot value
pte_mkwrite() and pte_wrprotect() won't work on such mappings after an
unmap -> map cycle anymore.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Use pte_clear() instead of open-coding it.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
_REGION3_ENTRY_RO is a duplicate of _REGION_ENTRY_PROTECT.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Instead of open-coded SEGMENT_KERNEL and REGION3_KERNEL assignments use
defines. Also to make e.g. pmd_wrprotect() work on the kernel mapping
a couple more flags must be set. Therefore add the missing flags also.
In order to make everything symmetrical this patch also adds software
dirty, young, read and write bits for region 3 table entries.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Usually segment and region tables are 16k aligned due to the way the
buddy allocator works. This is not true for the vmem code which only
asks for a 4k alignment. In order to be consistent enforce a 16k
alignment here as well.
This alignment will be assumed and therefore is required by the
pageattr code.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
We have already two inline assemblies which make use of the csp
instruction. Since I need a third instance let's introduce a generic
inline assembly which can be used by everyone.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Use memdup_user_nul to duplicate a memory region from user-space
to kernel-space and terminate with a NULL, instead of open coding
using kmalloc + copy_from_user and explicitly NULL terminating.
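The before/after pattern, sketched with generic buffer names:

/* before: open-coded */
buf = kmalloc(count + 1, GFP_KERNEL);
if (!buf)
	return -ENOMEM;
if (copy_from_user(buf, ubuf, count)) {
	kfree(buf);
	return -EFAULT;
}
buf[count] = '\0';

/* after: one helper, error returned as an ERR_PTR */
buf = memdup_user_nul(ubuf, count);
if (IS_ERR(buf))
	return PTR_ERR(buf);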
Signed-off-by: Muhammad Falak R Wani <falakreyaz@gmail.com>
[heiko.carstens@de.ibm.com: remove comment]
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
commit 1e133ab296 ("s390/mm: split arch/s390/mm/pgtable.c") factored
out the page table handling code from __gmap_zap and __s390_reset_cmma
into ptep_zap_unused and added a simple flag that tells which of the two
operations (reset or not) is to be performed. This also changed the behaviour,
as it also zaps unused page table entries on reset.
Turns out that this is wrong as s390_reset_cmma uses the page walker,
which DOES NOT take the ptl lock.
The simplest fix is to not do the zapping part on reset (which uses
the walker).
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Fixes: 1e133ab296 ("s390/mm: split arch/s390/mm/pgtable.c")
Cc: stable@vger.kernel.org # 4.6+
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Pull thermal management fixes from Zhang Rui:
- fix an ordering issue in cpu cooling where the cooling device is
registered before it is ready (freq_table being populated)
(Lukasz Luba)
- fix a missing comment update (Caesar Wang)
* 'for-rc' of git://git.kernel.org/pub/scm/linux/kernel/git/rzhang/linux:
thermal: add the note for set_trip_temp
thermal: cpu_cooling: fix improper order during initialization
Pull block layer fixes from Jens Axboe:
"A small collection of fixes for the current series. This contains:
- Two fixes for xen-blkfront, from Bob Liu.
- A bug fix for NVMe, releasing only the specific resources we
requested.
- Fix for a debugfs flags entry for nbd, from Josef.
- Plug fix from Omar, fixing up a case of code being switched between
two functions.
- A missing bio_put() for the new discard callers of
submit_bio_wait(), fixing a regression causing a leak of the bio.
From Shaun.
- Improve dirty limit calculation precision in the writeback code,
fixing a case where setting a limit lower than 1% of memory would
end up being zero. From Tejun"
* 'for-linus' of git://git.kernel.dk/linux-block:
NVMe: Only release requested regions
xen-blkfront: fix resume issues after a migration
xen-blkfront: don't call talk_to_blkback when already connected to blkback
nbd: pass the nbd pointer for flags debugfs
block: missing bio_put following submit_bio_wait
blk-mq: really fix plug list flushing for nomerge queues
writeback: use higher precision calculation in domain_dirty_limits()
Merge tag 'gpio-v4.7-3' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio
Pull GPIO fixes from Linus Walleij:
"A new bunch of GPIO fixes for v4.7.
This time I am very grateful that Ricardo Ribalda Delgado went in and
fixed my stupid refcounting mistakes in the removal path for GPIO
chips. I had a feeling something was wrong here and so it was. It
exploded on OMAP and it fixes their problem. Now it should be (more)
solid.
The rest is compilation, Kconfig and driver fixes. Some are tagged for
stable.
Summary:
- Fix a NULL pointer dereference when we are searching the GPIO
device list but one of the devices has been removed (struct
gpio_chip pointer is NULL).
- Fix unaligned reference counters: we were ending on +3 after all
said and done. It should be 0. Remove an extraneous get_device(),
and call cdev_del() followed by device_del() in gpiochip_remove()
instead and the count goes to zero and calls the release() function
properly.
- Fix a compile warning due to a missing #include in the OF/device
tree portions.
- Select ANON_INODES for GPIOLIB, we're using that for our character
device. Some randconfig tests disclosed the problem.
- Make sure the Zynq driver clock runs also without CONFIG_PM enabled
- Fix an off-by-one error in the 104-DIO-48E driver
- Fix warnings in bcm_kona_gpio_reset()"
* tag 'gpio-v4.7-3' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio:
gpio: bcm-kona: fix bcm_kona_gpio_reset() warnings
gpio: select ANON_INODES
gpio: include <linux/io-mapping.h> in gpiolib-of
gpiolib: Fix unaligned used of reference counters
gpiolib: Fix NULL pointer deference
gpio: zynq: initialize clock even without CONFIG_PM
gpio: 104-dio-48e: Fix control port offset computation off-by-one error
Merge tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Pull SCSI fixes from James Bottomley:
"Two current fixes:
- one affects Qemu CD ROM emulation, which stopped working after the
updates in SCSI to require VPD pages from all conformant devices.
Fix temporarily by blacklisting Qemu (we can relax later when they
come into compliance).
- The other is a fix to the optimal transfer size. We set up a
minefield for ourselves by being confused about whether the limits
are in bytes or sectors (SCSI optimal is in blocks and the queue
parameter is in bytes).
This tries to fix the problem (wrong setting for queue limits
max_sectors) and make the problem more obvious by introducing a
wrapper function"
* tag 'scsi-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi:
sd: Fix rw_max for devices that report an optimal xfer size
scsi: Add QEMU CD-ROM to VPD Inquiry Blacklist
Pull i2c fixes from Wolfram Sang:
- a bigger fix for i801 to finally be able to be loaded on some
machines again
- smaller driver fixes
- documentation update because of a renamed file
* 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux:
i2c: mux: reg: Provide of_match_table
i2c: mux: refer to i2c-mux.txt
i2c: octeon: Avoid printk after too long SMBUS message
i2c: octeon: Missing AAK flag in case of I2C_M_RECV_LEN
i2c: i801: Allow ACPI SystemIO OpRegion to conflict with PCI BAR
Merge tag 'devicetree-fixes-for-4.7' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux
Pull DeviceTree fixes from Rob Herring:
- fix unflatten_dt_nodes when dad parameter is set.
- add vendor prefixes for TechNexion and UniWest
- documentation fix for Marvell BT
- OF IRQ kerneldoc fixes
- restrict CMA alignment adjustments to non dma-coherent
- a couple of warning fixes in reserved-memory code
- DT maintainers updates
* tag 'devicetree-fixes-for-4.7' of git://git.kernel.org/pub/scm/linux/kernel/git/robh/linux:
drivers: of: add definition of early_init_dt_alloc_reserved_memory_arch
drivers/of: Fix depth for sub-tree blob in unflatten_dt_nodes()
drivers: of: Fix of_pci.h header guard
dt-bindings: Add vendor prefix for TechNexion
of: add vendor prefix for UniWest
dt: bindings: fix documentation for MARVELL's bt-sd8xxx wireless device
of: add missing const for of_parse_phandle_with_args() in !CONFIG_OF
of: silence warnings due to max() usage
drivers: of: of_reserved_mem: fixup the CMA alignment not to affect dma-coherent
of: irq: fix of_irq_get[_byname]() kernel-doc
MAINTAINERS: DeviceTree maintainer updates