Merge branch 'master' of github.com:davem330/net

Conflicts:
	net/batman-adv/soft-interface.c
This commit is contained in:
David S. Miller 2011-10-07 13:38:43 -04:00
commit 88c5100c28
178 changed files with 1255 additions and 1000 deletions

View file

@ -35,13 +35,6 @@ the Out-Of-Spec bit. Following table summarizes the exported sysfs files:
All Sysfs entries are named with their core_id (represented here by 'X').
tempX_input - Core temperature (in millidegrees Celsius).
tempX_max - All cooling devices should be turned on (on Core2).
Initialized with IA32_THERM_INTERRUPT. When the CPU
temperature reaches this temperature, an interrupt is
generated and tempX_max_alarm is set.
tempX_max_hyst - If the CPU temperature falls below this temperature,
an interrupt is generated and tempX_max_alarm is reset.
tempX_max_alarm - Set if the temperature reaches or exceeds tempX_max.
Reset if the temperature drops to or below tempX_max_hyst.
tempX_crit - Maximum junction temperature (in millidegrees Celsius).
tempX_crit_alarm - Set when Out-of-spec bit is set, never clears.
Correct CPU operation is no longer guaranteed.
@ -49,9 +42,10 @@ tempX_label - Contains string "Core X", where X is processor
number. For Package temp, this will be "Physical id Y",
where Y is the package number.
The TjMax temperature is set to 85 degrees C if the undocumented model-specific
register (UMSR) 0xee has bit 30 set. If not, TjMax is 100 degrees C, as
(sometimes) documented in the processor datasheet.
On CPU models which support it, TjMax is read from a model-specific register.
On other models, it is set to an arbitrary value based on weak heuristics.
If these heuristics don't work for you, you can pass the correct TjMax value
as a module parameter (tjmax).
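For example, on a system where the heuristic gets it wrong, the override can
be given at load time (value illustrative; use your CPU's documented TjMax):

echo modprobe coretemp tjmax=100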
Appendix A. Known TjMax lists (TBD):
Some information comes from ark.intel.com

View file

@ -1042,7 +1042,7 @@ conf/interface/*:
The functional behaviour for certain settings is different
depending on whether local forwarding is enabled or not.
accept_ra - BOOLEAN
accept_ra - INTEGER
Accept Router Advertisements; autoconfigure using them.
It also determines whether or not to transmit Router
@ -1111,7 +1111,7 @@ dad_transmits - INTEGER
The number of Duplicate Address Detection probes to send.
Default: 1
forwarding - BOOLEAN
forwarding - INTEGER
Configure interface-specific Host/Router behaviour.
Note: It is recommended to have the same setting on all
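As a sketch of the new integer semantics (interface name illustrative;
accept_ra=2 accepts Router Advertisements even when forwarding is enabled):

echo 2 >/proc/sys/net/ipv6/conf/eth0/accept_ra
echo 1 >/proc/sys/net/ipv6/conf/eth0/forwarding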

View file

@ -27,7 +27,7 @@ applying a filter to each packet that assigns it to one of a small number
of logical flows. Packets for each flow are steered to a separate receive
queue, which in turn can be processed by separate CPUs. This mechanism is
generally known as “Receive-side Scaling” (RSS). The goal of RSS and
the other scaling techniques to increase performance uniformly.
the other scaling techniques is to increase performance uniformly.
Multi-queue distribution can also be used for traffic prioritization, but
that is not the focus of these techniques.
@ -186,10 +186,10 @@ are steered using plain RPS. Multiple table entries may point to the
same CPU. Indeed, with many flows and few CPUs, it is very likely that
a single application thread handles flows with many different flow hashes.
rps_sock_table is a global flow table that contains the *desired* CPU for
flows: the CPU that is currently processing the flow in userspace. Each
table value is a CPU index that is updated during calls to recvmsg and
sendmsg (specifically, inet_recvmsg(), inet_sendmsg(), inet_sendpage()
rps_sock_flow_table is a global flow table that contains the *desired* CPU
for flows: the CPU that is currently processing the flow in userspace.
Each table value is a CPU index that is updated during calls to recvmsg
and sendmsg (specifically, inet_recvmsg(), inet_sendmsg(), inet_sendpage()
and tcp_splice_read()).
When the scheduler moves a thread to a new CPU while it has outstanding
configured. The number of entries in the global flow table is set through:
/proc/sys/net/core/rps_sock_flow_entries
The number of entries in the per-queue flow table is set through:
/sys/class/net/<dev>/queues/tx-<n>/rps_flow_cnt
/sys/class/net/<dev>/queues/rx-<n>/rps_flow_cnt
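An illustrative configuration (device name and table sizes are examples
only, not recommendations):

echo 32768 >/proc/sys/net/core/rps_sock_flow_entries
echo 2048 >/sys/class/net/eth0/queues/rx-0/rps_flow_cnt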
== Suggested Configuration

View file

be automatically shut down if it's set to "never".
khugepaged usually runs at low frequency, so while one may not want to
invoke defrag algorithms synchronously during page faults, it
should be worthwhile to invoke defrag at least in khugepaged. However, it's
also possible to disable defrag in khugepaged:
also possible to disable defrag in khugepaged by writing 0 or enable
defrag in khugepaged by writing 1:
echo yes >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
echo no >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
echo 0 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
echo 1 >/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
You can also control how many pages khugepaged should scan at each
pass:
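For instance (value illustrative):

echo 4096 >/sys/kernel/mm/transparent_hugepage/khugepaged/pages_to_scan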

View file

@ -6382,7 +6382,6 @@ S: Supported
F: arch/arm/mach-tegra
TEHUTI ETHERNET DRIVER
M: Alexander Indenbaum <baum@tehutinetworks.net>
M: Andy Gospodarek <andy@greyhouse.net>
L: netdev@vger.kernel.org
S: Supported

View file

@ -1,7 +1,7 @@
VERSION = 3
PATCHLEVEL = 1
SUBLEVEL = 0
EXTRAVERSION = -rc7
EXTRAVERSION = -rc9
NAME = "Divemaster Edition"
# *DOCUMENTATION*

View file

@ -1283,6 +1283,20 @@ config ARM_ERRATA_364296
processor into full low interrupt latency mode. ARM11MPCore
is not affected.
config ARM_ERRATA_764369
bool "ARM errata: Data cache line maintenance operation by MVA may not succeed"
depends on CPU_V7 && SMP
help
This option enables the workaround for erratum 764369
affecting Cortex-A9 MPCore with two or more processors (all
current revisions). Under certain timing circumstances, a data
cache line maintenance operation by MVA targeting an Inner
Shareable memory region may fail to proceed up to either the
Point of Coherency or to the Point of Unification of the
system. This workaround adds a DSB instruction before the
relevant cache maintenance functions and sets a specific bit
in the diagnostic control register of the SCU.
endmenu
source "arch/arm/common/Kconfig"

View file

@ -25,17 +25,17 @@
#ifdef CONFIG_SMP
#define __futex_atomic_op(insn, ret, oldval, uaddr, oparg) \
#define __futex_atomic_op(insn, ret, oldval, tmp, uaddr, oparg) \
smp_mb(); \
__asm__ __volatile__( \
"1: ldrex %1, [%2]\n" \
"1: ldrex %1, [%3]\n" \
" " insn "\n" \
"2: strex %1, %0, [%2]\n" \
" teq %1, #0\n" \
"2: strex %2, %0, [%3]\n" \
" teq %2, #0\n" \
" bne 1b\n" \
" mov %0, #0\n" \
__futex_atomic_ex_table("%4") \
: "=&r" (ret), "=&r" (oldval) \
__futex_atomic_ex_table("%5") \
: "=&r" (ret), "=&r" (oldval), "=&r" (tmp) \
: "r" (uaddr), "r" (oparg), "Ir" (-EFAULT) \
: "cc", "memory")
@ -73,14 +73,14 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
#include <linux/preempt.h>
#include <asm/domain.h>
#define __futex_atomic_op(insn, ret, oldval, uaddr, oparg) \
#define __futex_atomic_op(insn, ret, oldval, tmp, uaddr, oparg) \
__asm__ __volatile__( \
"1: " T(ldr) " %1, [%2]\n" \
"1: " T(ldr) " %1, [%3]\n" \
" " insn "\n" \
"2: " T(str) " %0, [%2]\n" \
"2: " T(str) " %0, [%3]\n" \
" mov %0, #0\n" \
__futex_atomic_ex_table("%4") \
: "=&r" (ret), "=&r" (oldval) \
__futex_atomic_ex_table("%5") \
: "=&r" (ret), "=&r" (oldval), "=&r" (tmp) \
: "r" (uaddr), "r" (oparg), "Ir" (-EFAULT) \
: "cc", "memory")
@ -117,7 +117,7 @@ futex_atomic_op_inuser (int encoded_op, u32 __user *uaddr)
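/* encoded_op layout: op in bits 28-30 (bit 31 is the FUTEX_OP_OPARG_SHIFT
 * flag, meaning oparg is a shift count), cmp in bits 24-27, oparg in bits
 * 12-23 and cmparg in bits 0-11, both sign-extended. */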
int cmp = (encoded_op >> 24) & 15;
int oparg = (encoded_op << 8) >> 20;
int cmparg = (encoded_op << 20) >> 20;
int oldval = 0, ret;
int oldval = 0, ret, tmp;
if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28))
oparg = 1 << oparg;
@ -129,19 +129,19 @@ futex_atomic_op_inuser (int encoded_op, u32 __user *uaddr)
switch (op) {
case FUTEX_OP_SET:
__futex_atomic_op("mov %0, %3", ret, oldval, uaddr, oparg);
__futex_atomic_op("mov %0, %4", ret, oldval, tmp, uaddr, oparg);
break;
case FUTEX_OP_ADD:
__futex_atomic_op("add %0, %1, %3", ret, oldval, uaddr, oparg);
__futex_atomic_op("add %0, %1, %4", ret, oldval, tmp, uaddr, oparg);
break;
case FUTEX_OP_OR:
__futex_atomic_op("orr %0, %1, %3", ret, oldval, uaddr, oparg);
__futex_atomic_op("orr %0, %1, %4", ret, oldval, tmp, uaddr, oparg);
break;
case FUTEX_OP_ANDN:
__futex_atomic_op("and %0, %1, %3", ret, oldval, uaddr, ~oparg);
__futex_atomic_op("and %0, %1, %4", ret, oldval, tmp, uaddr, ~oparg);
break;
case FUTEX_OP_XOR:
__futex_atomic_op("eor %0, %1, %3", ret, oldval, uaddr, oparg);
__futex_atomic_op("eor %0, %1, %4", ret, oldval, tmp, uaddr, oparg);
break;
default:
ret = -ENOSYS;

View file

@ -478,8 +478,8 @@
/*
* Unimplemented (or alternatively implemented) syscalls
*/
#define __IGNORE_fadvise64_64 1
#define __IGNORE_migrate_pages 1
#define __IGNORE_fadvise64_64
#define __IGNORE_migrate_pages
#endif /* __KERNEL__ */
#endif /* __ASM_ARM_UNISTD_H */

View file

@ -13,6 +13,7 @@
#include <asm/smp_scu.h>
#include <asm/cacheflush.h>
#include <asm/cputype.h>
#define SCU_CTRL 0x00
#define SCU_CONFIG 0x04
@ -37,6 +38,15 @@ void __init scu_enable(void __iomem *scu_base)
{
u32 scu_ctrl;
#ifdef CONFIG_ARM_ERRATA_764369
/* Cortex-A9 only */
if ((read_cpuid(CPUID_ID) & 0xff0ffff0) == 0x410fc090) {
scu_ctrl = __raw_readl(scu_base + 0x30);
if (!(scu_ctrl & 1))
__raw_writel(scu_ctrl | 0x1, scu_base + 0x30);
}
#endif
scu_ctrl = __raw_readl(scu_base + SCU_CTRL);
/* already enabled? */
if (scu_ctrl & 1)

View file

@ -23,8 +23,10 @@
#if defined(CONFIG_SMP_ON_UP) && !defined(CONFIG_DEBUG_SPINLOCK)
#define ARM_EXIT_KEEP(x) x
#define ARM_EXIT_DISCARD(x)
#else
#define ARM_EXIT_KEEP(x)
#define ARM_EXIT_DISCARD(x) x
#endif
OUTPUT_ARCH(arm)
@ -39,6 +41,11 @@ jiffies = jiffies_64 + 4;
SECTIONS
{
/*
* XXX: The linker does not define how output sections are
* assigned to input sections when there are multiple statements
* matching the same input section name. There is no documented
* order of matching.
*
* unwind exit sections must be discarded before the rest of the
* unwind sections get included.
*/
@ -47,6 +54,9 @@ SECTIONS
*(.ARM.extab.exit.text)
ARM_CPU_DISCARD(*(.ARM.exidx.cpuexit.text))
ARM_CPU_DISCARD(*(.ARM.extab.cpuexit.text))
ARM_EXIT_DISCARD(EXIT_TEXT)
ARM_EXIT_DISCARD(EXIT_DATA)
EXIT_CALL
#ifndef CONFIG_HOTPLUG
*(.ARM.exidx.devexit.text)
*(.ARM.extab.devexit.text)
@ -58,6 +68,8 @@ SECTIONS
#ifndef CONFIG_SMP_ON_UP
*(.alt.smp.init)
#endif
*(.discard)
*(.discard.*)
}
#ifdef CONFIG_XIP_KERNEL
@ -279,9 +291,6 @@ SECTIONS
STABS_DEBUG
.comment 0 : { *(.comment) }
/* Default discards */
DISCARDS
}
/*

View file

@ -899,8 +899,7 @@ static struct clksrc_clk clksrcs[] = {
.reg_div = { .reg = S5P_CLKDIV_CAM, .shift = 28, .size = 4 },
}, {
.clk = {
.name = "sclk_cam",
.devname = "exynos4-fimc.0",
.name = "sclk_cam0",
.enable = exynos4_clksrc_mask_cam_ctrl,
.ctrlbit = (1 << 16),
},
@ -909,8 +908,7 @@ static struct clksrc_clk clksrcs[] = {
.reg_div = { .reg = S5P_CLKDIV_CAM, .shift = 16, .size = 4 },
}, {
.clk = {
.name = "sclk_cam",
.devname = "exynos4-fimc.1",
.name = "sclk_cam1",
.enable = exynos4_clksrc_mask_cam_ctrl,
.ctrlbit = (1 << 20),
},

View file

@ -128,7 +128,7 @@ static int s3c2443_armclk_setrate(struct clk *clk, unsigned long rate)
unsigned long clkcon0;
clkcon0 = __raw_readl(S3C2443_CLKDIV0);
clkcon0 &= S3C2443_CLKDIV0_ARMDIV_MASK;
clkcon0 &= ~S3C2443_CLKDIV0_ARMDIV_MASK;
clkcon0 |= val << S3C2443_CLKDIV0_ARMDIV_SHIFT;
__raw_writel(clkcon0, S3C2443_CLKDIV0);
}

View file

@ -815,8 +815,7 @@ static struct clksrc_clk clksrcs[] = {
.reg_div = { .reg = S5P_CLK_DIV3, .shift = 20, .size = 4 },
}, {
.clk = {
.name = "sclk_cam",
.devname = "s5pv210-fimc.0",
.name = "sclk_cam0",
.enable = s5pv210_clk_mask0_ctrl,
.ctrlbit = (1 << 3),
},
@ -825,8 +824,7 @@ static struct clksrc_clk clksrcs[] = {
.reg_div = { .reg = S5P_CLK_DIV1, .shift = 12, .size = 4 },
}, {
.clk = {
.name = "sclk_cam",
.devname = "s5pv210-fimc.1",
.name = "sclk_cam1",
.enable = s5pv210_clk_mask0_ctrl,
.ctrlbit = (1 << 4),
},

View file

@ -174,6 +174,10 @@ ENTRY(v7_coherent_user_range)
dcache_line_size r2, r3
sub r3, r2, #1
bic r12, r0, r3
#ifdef CONFIG_ARM_ERRATA_764369
ALT_SMP(W(dsb))
ALT_UP(W(nop))
#endif
1:
USER( mcr p15, 0, r12, c7, c11, 1 ) @ clean D line to the point of unification
add r12, r12, r2
@ -223,6 +227,10 @@ ENTRY(v7_flush_kern_dcache_area)
add r1, r0, r1
sub r3, r2, #1
bic r0, r0, r3
#ifdef CONFIG_ARM_ERRATA_764369
ALT_SMP(W(dsb))
ALT_UP(W(nop))
#endif
1:
mcr p15, 0, r0, c7, c14, 1 @ clean & invalidate D line / unified line
add r0, r0, r2
@ -247,6 +255,10 @@ v7_dma_inv_range:
sub r3, r2, #1
tst r0, r3
bic r0, r0, r3
#ifdef CONFIG_ARM_ERRATA_764369
ALT_SMP(W(dsb))
ALT_UP(W(nop))
#endif
mcrne p15, 0, r0, c7, c14, 1 @ clean & invalidate D / U line
tst r1, r3
@ -270,6 +282,10 @@ v7_dma_clean_range:
dcache_line_size r2, r3
sub r3, r2, #1
bic r0, r0, r3
#ifdef CONFIG_ARM_ERRATA_764369
ALT_SMP(W(dsb))
ALT_UP(W(nop))
#endif
1:
mcr p15, 0, r0, c7, c10, 1 @ clean D / U line
add r0, r0, r2
@ -288,6 +304,10 @@ ENTRY(v7_dma_flush_range)
dcache_line_size r2, r3
sub r3, r2, #1
bic r0, r0, r3
#ifdef CONFIG_ARM_ERRATA_764369
ALT_SMP(W(dsb))
ALT_UP(W(nop))
#endif
1:
mcr p15, 0, r0, c7, c14, 1 @ clean & invalidate D / U line
add r0, r0, r2

View file

@ -324,6 +324,8 @@ __dma_alloc(struct device *dev, size_t size, dma_addr_t *handle, gfp_t gfp,
if (addr)
*handle = pfn_to_dma(dev, page_to_pfn(page));
else
__dma_free_buffer(page, size);
return addr;
}

View file

@ -114,17 +114,18 @@ static __init int s5p_gpioint_add(struct s3c_gpio_chip *chip)
{
static int used_gpioint_groups = 0;
int group = chip->group;
struct s5p_gpioint_bank *bank = NULL;
struct s5p_gpioint_bank *b, *bank = NULL;
struct irq_chip_generic *gc;
struct irq_chip_type *ct;
if (used_gpioint_groups >= S5P_GPIOINT_GROUP_COUNT)
return -ENOMEM;
list_for_each_entry(bank, &banks, list) {
if (group >= bank->start &&
group < bank->start + bank->nr_groups)
list_for_each_entry(b, &banks, list) {
if (group >= b->start && group < b->start + b->nr_groups) {
bank = b;
break;
}
}
if (!bank)
return -EINVAL;

View file

@ -561,6 +561,20 @@ static struct pci_ops u4_pcie_pci_ops =
.write = u4_pcie_write_config,
};
static void __devinit pmac_pci_fixup_u4_of_node(struct pci_dev *dev)
{
/* Apple's device-tree "hides" the root complex virtual P2P bridge
* on U4. However, Linux sees it, causing the PCI <-> OF matching
* code to fail to properly match devices below it. This works around
* it by setting the node of the bridge to point to the PHB node,
* which is not entirely correct but fixes the matching code and
* doesn't break anything else. It's also the simplest possible fix.
*/
if (dev->dev.of_node == NULL)
dev->dev.of_node = pcibios_get_phb_of_node(dev->bus);
}
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_APPLE, 0x5b, pmac_pci_fixup_u4_of_node);
#endif /* CONFIG_PPC64 */
#ifdef CONFIG_PPC32

View file

@ -188,7 +188,8 @@ extern char elf_platform[];
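/* The change below preserves the personality flags above PER_MASK (such as
 * ADDR_NO_RANDOMIZE) rather than wiping them when forcing PER_LINUX. */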
#define SET_PERSONALITY(ex) \
do { \
if (personality(current->personality) != PER_LINUX32) \
set_personality(PER_LINUX); \
set_personality(PER_LINUX | \
(current->personality & ~PER_MASK)); \
if ((ex).e_ident[EI_CLASS] == ELFCLASS32) \
set_thread_flag(TIF_31BIT); \
else \

View file

@ -658,12 +658,14 @@ static inline void pgste_set_pte(pte_t *ptep, pgste_t pgste)
* struct gmap_struct - guest address space
* @mm: pointer to the parent mm_struct
* @table: pointer to the page directory
* @asce: address space control element for gmap page table
* @crst_list: list of all crst tables used in the guest address space
*/
struct gmap {
struct list_head list;
struct mm_struct *mm;
unsigned long *table;
unsigned long asce;
struct list_head crst_list;
};

View file

@ -10,6 +10,7 @@
#include <linux/sched.h>
#include <asm/vdso.h>
#include <asm/sigp.h>
#include <asm/pgtable.h>
/*
* Make sure that the compiler is new enough. We want a compiler that
@ -126,6 +127,7 @@ int main(void)
DEFINE(__LC_KERNEL_STACK, offsetof(struct _lowcore, kernel_stack));
DEFINE(__LC_ASYNC_STACK, offsetof(struct _lowcore, async_stack));
DEFINE(__LC_PANIC_STACK, offsetof(struct _lowcore, panic_stack));
DEFINE(__LC_USER_ASCE, offsetof(struct _lowcore, user_asce));
DEFINE(__LC_INT_CLOCK, offsetof(struct _lowcore, int_clock));
DEFINE(__LC_MCCK_CLOCK, offsetof(struct _lowcore, mcck_clock));
DEFINE(__LC_MACHINE_FLAGS, offsetof(struct _lowcore, machine_flags));
@ -151,6 +153,7 @@ int main(void)
DEFINE(__LC_VDSO_PER_CPU, offsetof(struct _lowcore, vdso_per_cpu_data));
DEFINE(__LC_GMAP, offsetof(struct _lowcore, gmap));
DEFINE(__LC_CMF_HPP, offsetof(struct _lowcore, cmf_hpp));
DEFINE(__GMAP_ASCE, offsetof(struct gmap, asce));
#endif /* CONFIG_32BIT */
return 0;
}

View file

@ -1076,6 +1076,11 @@ sie_loop:
lg %r14,__LC_THREAD_INFO # pointer thread_info struct
tm __TI_flags+7(%r14),_TIF_EXIT_SIE
jnz sie_exit
lg %r14,__LC_GMAP # get gmap pointer
ltgr %r14,%r14
jz sie_gmap
lctlg %c1,%c1,__GMAP_ASCE(%r14) # load primary asce
sie_gmap:
lg %r14,__SF_EMPTY(%r15) # get control block pointer
SPP __SF_EMPTY(%r15) # set guest id
sie 0(%r14)
@ -1083,6 +1088,7 @@ sie_done:
SPP __LC_CMF_HPP # set host id
lg %r14,__LC_THREAD_INFO # pointer thread_info struct
sie_exit:
lctlg %c1,%c1,__LC_USER_ASCE # load primary asce
ni __TI_flags+6(%r14),255-(_TIF_SIE>>8)
lg %r14,__SF_EMPTY+8(%r15) # load guest register save area
stmg %r0,%r13,0(%r14) # save guest gprs 0-13

View file

@ -123,6 +123,7 @@ int kvm_dev_ioctl_check_extension(long ext)
switch (ext) {
case KVM_CAP_S390_PSW:
case KVM_CAP_S390_GMAP:
r = 1;
break;
default:
@ -263,10 +264,12 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
vcpu->arch.guest_fpregs.fpc &= FPC_VALID_MASK;
restore_fp_regs(&vcpu->arch.guest_fpregs);
restore_access_regs(vcpu->arch.guest_acrs);
gmap_enable(vcpu->arch.gmap);
}
void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
{
gmap_disable(vcpu->arch.gmap);
save_fp_regs(&vcpu->arch.guest_fpregs);
save_access_regs(vcpu->arch.guest_acrs);
restore_fp_regs(&vcpu->arch.host_fpregs);
@ -461,7 +464,6 @@ static void __vcpu_run(struct kvm_vcpu *vcpu)
local_irq_disable();
kvm_guest_enter();
local_irq_enable();
gmap_enable(vcpu->arch.gmap);
VCPU_EVENT(vcpu, 6, "entering sie flags %x",
atomic_read(&vcpu->arch.sie_block->cpuflags));
if (sie64a(vcpu->arch.sie_block, vcpu->arch.guest_gprs)) {
@ -470,7 +472,6 @@ static void __vcpu_run(struct kvm_vcpu *vcpu)
}
VCPU_EVENT(vcpu, 6, "exit sie icptcode %d",
vcpu->arch.sie_block->icptcode);
gmap_disable(vcpu->arch.gmap);
local_irq_disable();
kvm_guest_exit();
local_irq_enable();

View file

@ -160,6 +160,8 @@ struct gmap *gmap_alloc(struct mm_struct *mm)
table = (unsigned long *) page_to_phys(page);
crst_table_init(table, _REGION1_ENTRY_EMPTY);
gmap->table = table;
gmap->asce = _ASCE_TYPE_REGION1 | _ASCE_TABLE_LENGTH |
_ASCE_USER_BITS | __pa(table);
list_add(&gmap->list, &mm->context.gmap_list);
return gmap;
@ -240,10 +242,6 @@ EXPORT_SYMBOL_GPL(gmap_free);
*/
void gmap_enable(struct gmap *gmap)
{
/* Load primary space page table origin. */
S390_lowcore.user_asce = _ASCE_TYPE_REGION1 | _ASCE_TABLE_LENGTH |
_ASCE_USER_BITS | __pa(gmap->table);
asm volatile("lctlg 1,1,%0\n" : : "m" (S390_lowcore.user_asce) );
S390_lowcore.gmap = (unsigned long) gmap;
}
EXPORT_SYMBOL_GPL(gmap_enable);
@ -254,10 +252,6 @@ EXPORT_SYMBOL_GPL(gmap_enable);
*/
void gmap_disable(struct gmap *gmap)
{
/* Load primary space page table origin. */
S390_lowcore.user_asce =
gmap->mm->context.asce_bits | __pa(gmap->mm->pgd);
asm volatile("lctlg 1,1,%0\n" : : "m" (S390_lowcore.user_asce) );
S390_lowcore.gmap = 0UL;
}
EXPORT_SYMBOL_GPL(gmap_disable);
@ -309,15 +303,15 @@ int gmap_unmap_segment(struct gmap *gmap, unsigned long to, unsigned long len)
/* Walk the guest addr space page table */
table = gmap->table + (((to + off) >> 53) & 0x7ff);
if (*table & _REGION_ENTRY_INV)
return 0;
goto out;
table = (unsigned long *)(*table & _REGION_ENTRY_ORIGIN);
table = table + (((to + off) >> 42) & 0x7ff);
if (*table & _REGION_ENTRY_INV)
return 0;
goto out;
table = (unsigned long *)(*table & _REGION_ENTRY_ORIGIN);
table = table + (((to + off) >> 31) & 0x7ff);
if (*table & _REGION_ENTRY_INV)
return 0;
goto out;
table = (unsigned long *)(*table & _REGION_ENTRY_ORIGIN);
table = table + (((to + off) >> 20) & 0x7ff);
@ -325,6 +319,7 @@ int gmap_unmap_segment(struct gmap *gmap, unsigned long to, unsigned long len)
flush |= gmap_unlink_segment(gmap, table);
*table = _SEGMENT_ENTRY_INV;
}
out:
up_read(&gmap->mm->mmap_sem);
if (flush)
gmap_flush_tlb(gmap);

View file

@ -43,6 +43,8 @@
#define SUN4V_CHIP_NIAGARA1 0x01
#define SUN4V_CHIP_NIAGARA2 0x02
#define SUN4V_CHIP_NIAGARA3 0x03
#define SUN4V_CHIP_NIAGARA4 0x04
#define SUN4V_CHIP_NIAGARA5 0x05
#define SUN4V_CHIP_UNKNOWN 0xff
#ifndef __ASSEMBLY__

View file

@ -66,6 +66,8 @@ static struct xor_block_template xor_block_niagara = {
((tlb_type == hypervisor && \
(sun4v_chip_type == SUN4V_CHIP_NIAGARA1 || \
sun4v_chip_type == SUN4V_CHIP_NIAGARA2 || \
sun4v_chip_type == SUN4V_CHIP_NIAGARA3)) ? \
sun4v_chip_type == SUN4V_CHIP_NIAGARA3 || \
sun4v_chip_type == SUN4V_CHIP_NIAGARA4 || \
sun4v_chip_type == SUN4V_CHIP_NIAGARA5)) ? \
&xor_block_niagara : \
&xor_block_VIS)

View file

@ -481,6 +481,18 @@ static void __init sun4v_cpu_probe(void)
sparc_pmu_type = "niagara3";
break;
case SUN4V_CHIP_NIAGARA4:
sparc_cpu_type = "UltraSparc T4 (Niagara4)";
sparc_fpu_type = "UltraSparc T4 integrated FPU";
sparc_pmu_type = "niagara4";
break;
case SUN4V_CHIP_NIAGARA5:
sparc_cpu_type = "UltraSparc T5 (Niagara5)";
sparc_fpu_type = "UltraSparc T5 integrated FPU";
sparc_pmu_type = "niagara5";
break;
default:
printk(KERN_WARNING "CPU: Unknown sun4v cpu type [%s]\n",
prom_cpu_compatible);

View file

@ -325,6 +325,8 @@ static int iterate_cpu(struct cpuinfo_tree *t, unsigned int root_index)
case SUN4V_CHIP_NIAGARA1:
case SUN4V_CHIP_NIAGARA2:
case SUN4V_CHIP_NIAGARA3:
case SUN4V_CHIP_NIAGARA4:
case SUN4V_CHIP_NIAGARA5:
rover_inc_table = niagara_iterate_method;
break;
default:

View file

@ -133,7 +133,7 @@ prom_sun4v_name:
prom_niagara_prefix:
.asciz "SUNW,UltraSPARC-T"
prom_sparc_prefix:
.asciz "SPARC-T"
.asciz "SPARC-"
.align 4
prom_root_compatible:
.skip 64
@ -396,7 +396,7 @@ sun4v_chip_type:
or %g1, %lo(prom_cpu_compatible), %g1
sethi %hi(prom_sparc_prefix), %g7
or %g7, %lo(prom_sparc_prefix), %g7
mov 7, %g3
mov 6, %g3
90: ldub [%g7], %g2
ldub [%g1], %g4
cmp %g2, %g4
@ -408,10 +408,23 @@ sun4v_chip_type:
sethi %hi(prom_cpu_compatible), %g1
or %g1, %lo(prom_cpu_compatible), %g1
ldub [%g1 + 7], %g2
ldub [%g1 + 6], %g2
cmp %g2, 'T'
be,pt %xcc, 70f
cmp %g2, 'M'
bne,pn %xcc, 4f
nop
70: ldub [%g1 + 7], %g2
cmp %g2, '3'
be,pt %xcc, 5f
mov SUN4V_CHIP_NIAGARA3, %g4
cmp %g2, '4'
be,pt %xcc, 5f
mov SUN4V_CHIP_NIAGARA4, %g4
cmp %g2, '5'
be,pt %xcc, 5f
mov SUN4V_CHIP_NIAGARA5, %g4
ba,pt %xcc, 4f
nop
@ -543,6 +556,12 @@ niagara_tlb_fixup:
be,pt %xcc, niagara2_patch
nop
cmp %g1, SUN4V_CHIP_NIAGARA3
be,pt %xcc, niagara2_patch
nop
cmp %g1, SUN4V_CHIP_NIAGARA4
be,pt %xcc, niagara2_patch
nop
cmp %g1, SUN4V_CHIP_NIAGARA5
be,pt %xcc, niagara2_patch
nop

View file

@ -380,8 +380,7 @@ void flush_thread(void)
#endif
}
/* Now, this task is no longer a kernel thread. */
current->thread.current_ds = USER_DS;
/* This task is no longer a kernel thread. */
if (current->thread.flags & SPARC_FLAG_KTHREAD) {
current->thread.flags &= ~SPARC_FLAG_KTHREAD;

View file

@ -368,9 +368,6 @@ void flush_thread(void)
/* Clear FPU register state. */
t->fpsaved[0] = 0;
if (get_thread_current_ds() != ASI_AIUS)
set_fs(USER_DS);
}
/* It's a bit more tricky when 64-bit tasks are involved... */

View file

@ -137,7 +137,7 @@ static void __init process_switch(char c)
prom_halt();
break;
case 'p':
/* Just ignore, this behavior is now the default. */
prom_early_console.flags &= ~CON_BOOT;
break;
default:
printk("Unknown boot switch (-%c)\n", c);

View file

@ -106,7 +106,7 @@ static void __init process_switch(char c)
prom_halt();
break;
case 'p':
/* Just ignore, this behavior is now the default. */
prom_early_console.flags &= ~CON_BOOT;
break;
case 'P':
/* Force UltraSPARC-III P-Cache on. */
@ -425,10 +425,14 @@ static void __init init_sparc64_elf_hwcap(void)
else if (tlb_type == hypervisor) {
if (sun4v_chip_type == SUN4V_CHIP_NIAGARA1 ||
sun4v_chip_type == SUN4V_CHIP_NIAGARA2 ||
sun4v_chip_type == SUN4V_CHIP_NIAGARA3)
sun4v_chip_type == SUN4V_CHIP_NIAGARA3 ||
sun4v_chip_type == SUN4V_CHIP_NIAGARA4 ||
sun4v_chip_type == SUN4V_CHIP_NIAGARA5)
cap |= HWCAP_SPARC_BLKINIT;
if (sun4v_chip_type == SUN4V_CHIP_NIAGARA2 ||
sun4v_chip_type == SUN4V_CHIP_NIAGARA3)
sun4v_chip_type == SUN4V_CHIP_NIAGARA3 ||
sun4v_chip_type == SUN4V_CHIP_NIAGARA4 ||
sun4v_chip_type == SUN4V_CHIP_NIAGARA5)
cap |= HWCAP_SPARC_N2;
}
@ -452,11 +456,15 @@ static void __init init_sparc64_elf_hwcap(void)
if (sun4v_chip_type == SUN4V_CHIP_NIAGARA1)
cap |= AV_SPARC_ASI_BLK_INIT;
if (sun4v_chip_type == SUN4V_CHIP_NIAGARA2 ||
sun4v_chip_type == SUN4V_CHIP_NIAGARA3)
sun4v_chip_type == SUN4V_CHIP_NIAGARA3 ||
sun4v_chip_type == SUN4V_CHIP_NIAGARA4 ||
sun4v_chip_type == SUN4V_CHIP_NIAGARA5)
cap |= (AV_SPARC_VIS | AV_SPARC_VIS2 |
AV_SPARC_ASI_BLK_INIT |
AV_SPARC_POPC);
if (sun4v_chip_type == SUN4V_CHIP_NIAGARA3)
if (sun4v_chip_type == SUN4V_CHIP_NIAGARA3 ||
sun4v_chip_type == SUN4V_CHIP_NIAGARA4 ||
sun4v_chip_type == SUN4V_CHIP_NIAGARA5)
cap |= (AV_SPARC_VIS3 | AV_SPARC_HPC |
AV_SPARC_FMAF);
}

View file

@ -511,6 +511,11 @@ static void __init read_obp_translations(void)
for (i = 0; i < prom_trans_ents; i++)
prom_trans[i].data &= ~0x0003fe0000000000UL;
}
/* Force execute bit on. */
for (i = 0; i < prom_trans_ents; i++)
prom_trans[i].data |= (tlb_type == hypervisor ?
_PAGE_EXEC_4V : _PAGE_EXEC_4U);
}
static void __init hypervisor_tlb_lock(unsigned long vaddr,

View file

@ -42,8 +42,11 @@ int mach_set_rtc_mmss(unsigned long nowtime)
{
int real_seconds, real_minutes, cmos_minutes;
unsigned char save_control, save_freq_select;
unsigned long flags;
int retval = 0;
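	/* Locking note: rtc_lock is now taken inside the wallclock helpers
	 * themselves; update_persistent_clock() and read_persistent_clock()
	 * below no longer wrap the x86_platform calls with it. */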
spin_lock_irqsave(&rtc_lock, flags);
/* tell the clock it's being set */
save_control = CMOS_READ(RTC_CONTROL);
CMOS_WRITE((save_control|RTC_SET), RTC_CONTROL);
@ -93,12 +96,17 @@ int mach_set_rtc_mmss(unsigned long nowtime)
CMOS_WRITE(save_control, RTC_CONTROL);
CMOS_WRITE(save_freq_select, RTC_FREQ_SELECT);
spin_unlock_irqrestore(&rtc_lock, flags);
return retval;
}
unsigned long mach_get_cmos_time(void)
{
unsigned int status, year, mon, day, hour, min, sec, century = 0;
unsigned long flags;
spin_lock_irqsave(&rtc_lock, flags);
/*
* If UIP is clear, then we have >= 244 microseconds before
@ -125,6 +133,8 @@ unsigned long mach_get_cmos_time(void)
status = CMOS_READ(RTC_CONTROL);
WARN_ON_ONCE(RTC_ALWAYS_BCD && (status & RTC_DM_BINARY));
spin_unlock_irqrestore(&rtc_lock, flags);
if (RTC_ALWAYS_BCD || !(status & RTC_DM_BINARY)) {
sec = bcd2bin(sec);
min = bcd2bin(min);
@ -169,24 +179,15 @@ EXPORT_SYMBOL(rtc_cmos_write);
int update_persistent_clock(struct timespec now)
{
unsigned long flags;
int retval;
spin_lock_irqsave(&rtc_lock, flags);
retval = x86_platform.set_wallclock(now.tv_sec);
spin_unlock_irqrestore(&rtc_lock, flags);
return retval;
return x86_platform.set_wallclock(now.tv_sec);
}
/* not static: needed by APM */
void read_persistent_clock(struct timespec *ts)
{
unsigned long retval, flags;
unsigned long retval;
spin_lock_irqsave(&rtc_lock, flags);
retval = x86_platform.get_wallclock();
spin_unlock_irqrestore(&rtc_lock, flags);
ts->tv_sec = retval;
ts->tv_nsec = 0;

View file

@ -3603,7 +3603,7 @@ int x86_decode_insn(struct x86_emulate_ctxt *ctxt, void *insn, int insn_len)
break;
case Src2CL:
ctxt->src2.bytes = 1;
ctxt->src2.val = ctxt->regs[VCPU_REGS_RCX] & 0x8;
ctxt->src2.val = ctxt->regs[VCPU_REGS_RCX] & 0xff;
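/* Shift and rotate counts come from CL, the low byte of RCX; masking
 * with 0x8 kept only bit 3, so e.g. a count of 5 decoded as 0. */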
break;
case Src2ImmByte:
rc = decode_imm(ctxt, &ctxt->src2, 1, true);

View file

@ -400,7 +400,8 @@ static u64 __update_clear_spte_slow(u64 *sptep, u64 spte)
/* xchg acts as a barrier before the setting of the high bits */
orig.spte_low = xchg(&ssptep->spte_low, sspte.spte_low);
orig.spte_high = ssptep->spte_high = sspte.spte_high;
orig.spte_high = ssptep->spte_high;
ssptep->spte_high = sspte.spte_high;
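/* The old compound assignment stored the *new* high bits in orig;
 * reading ssptep->spte_high before overwriting preserves the old value. */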
count_spte_clear(sptep, spte);
return orig.spte;

View file

@ -43,6 +43,17 @@ static const struct dmi_system_id pci_use_crs_table[] __initconst = {
DMI_MATCH(DMI_PRODUCT_NAME, "ALiveSATA2-GLAN"),
},
},
/* https://bugzilla.kernel.org/show_bug.cgi?id=30552 */
/* 2006 AMD HT/VIA system with two host bridges */
{
.callback = set_use_crs,
.ident = "ASUS M2V-MX SE",
.matches = {
DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK Computer INC."),
DMI_MATCH(DMI_BOARD_NAME, "M2V-MX SE"),
DMI_MATCH(DMI_BIOS_VENDOR, "American Megatrends Inc."),
},
},
{}
};

View file

@ -58,8 +58,11 @@ EXPORT_SYMBOL_GPL(vrtc_cmos_write);
unsigned long vrtc_get_time(void)
{
u8 sec, min, hour, mday, mon;
unsigned long flags;
u32 year;
spin_lock_irqsave(&rtc_lock, flags);
while ((vrtc_cmos_read(RTC_FREQ_SELECT) & RTC_UIP))
cpu_relax();
@ -70,6 +73,8 @@ unsigned long vrtc_get_time(void)
mon = vrtc_cmos_read(RTC_MONTH);
year = vrtc_cmos_read(RTC_YEAR);
spin_unlock_irqrestore(&rtc_lock, flags);
/* vRTC YEAR reg contains the offset to 1960 */
year += 1960;
@ -83,8 +88,10 @@ unsigned long vrtc_get_time(void)
int vrtc_set_mmss(unsigned long nowtime)
{
int real_sec, real_min;
unsigned long flags;
int vrtc_min;
spin_lock_irqsave(&rtc_lock, flags);
vrtc_min = vrtc_cmos_read(RTC_MINUTES);
real_sec = nowtime % 60;
@ -95,6 +102,8 @@ int vrtc_set_mmss(unsigned long nowtime)
vrtc_cmos_write(real_sec, RTC_SECONDS);
vrtc_cmos_write(real_min, RTC_MINUTES);
spin_unlock_irqrestore(&rtc_lock, flags);
return 0;
}

View file

@ -348,9 +348,10 @@ void blk_put_queue(struct request_queue *q)
EXPORT_SYMBOL(blk_put_queue);
/*
* Note: If a driver supplied the queue lock, it should not zap that lock
* unexpectedly as some queue cleanup components like elevator_exit() and
* blk_throtl_exit() need the queue lock.
* Note: If a driver supplied the queue lock, it is disconnected
* by this function. The actual state of the lock doesn't matter
* here as the request_queue isn't accessible after this point
* (QUEUE_FLAG_DEAD is set) and no other requests will be queued.
*/
void blk_cleanup_queue(struct request_queue *q)
{
@ -367,10 +368,8 @@ void blk_cleanup_queue(struct request_queue *q)
queue_flag_set_unlocked(QUEUE_FLAG_DEAD, q);
mutex_unlock(&q->sysfs_lock);
if (q->elevator)
elevator_exit(q->elevator);
blk_throtl_exit(q);
if (q->queue_lock != &q->__queue_lock)
q->queue_lock = &q->__queue_lock;
blk_put_queue(q);
}

View file

@ -479,6 +479,11 @@ static void blk_release_queue(struct kobject *kobj)
blk_sync_queue(q);
if (q->elevator)
elevator_exit(q->elevator);
blk_throtl_exit(q);
if (rl->rq_pool)
mempool_destroy(rl->rq_pool);

View file

@ -41,6 +41,22 @@ static struct pm_clk_data *__to_pcd(struct device *dev)
return dev ? dev->power.subsys_data : NULL;
}
/**
* pm_clk_acquire - Acquire a device clock.
* @dev: Device whose clock is to be acquired.
* @ce: PM clock entry corresponding to the clock.
*/
static void pm_clk_acquire(struct device *dev, struct pm_clock_entry *ce)
{
ce->clk = clk_get(dev, ce->con_id);
if (IS_ERR(ce->clk)) {
ce->status = PCE_STATUS_ERROR;
} else {
ce->status = PCE_STATUS_ACQUIRED;
dev_dbg(dev, "Clock %s managed by runtime PM.\n", ce->con_id);
}
}
/**
* pm_clk_add - Start using a device clock for power management.
* @dev: Device whose clock is going to be used for power management.
@ -73,6 +89,8 @@ int pm_clk_add(struct device *dev, const char *con_id)
}
}
pm_clk_acquire(dev, ce);
spin_lock_irq(&pcd->lock);
list_add_tail(&ce->node, &pcd->clock_list);
spin_unlock_irq(&pcd->lock);
@ -82,17 +100,12 @@ int pm_clk_add(struct device *dev, const char *con_id)
/**
* __pm_clk_remove - Destroy PM clock entry.
* @ce: PM clock entry to destroy.
*
* This routine must be called under the spinlock protecting the PM list of
* clocks corresponding to the @ce's device.
*/
static void __pm_clk_remove(struct pm_clock_entry *ce)
{
if (!ce)
return;
list_del(&ce->node);
if (ce->status < PCE_STATUS_ERROR) {
if (ce->status == PCE_STATUS_ENABLED)
clk_disable(ce->clk);
@ -126,18 +139,22 @@ void pm_clk_remove(struct device *dev, const char *con_id)
spin_lock_irq(&pcd->lock);
list_for_each_entry(ce, &pcd->clock_list, node) {
if (!con_id && !ce->con_id) {
__pm_clk_remove(ce);
break;
} else if (!con_id || !ce->con_id) {
if (!con_id && !ce->con_id)
goto remove;
else if (!con_id || !ce->con_id)
continue;
} else if (!strcmp(con_id, ce->con_id)) {
__pm_clk_remove(ce);
break;
}
else if (!strcmp(con_id, ce->con_id))
goto remove;
}
spin_unlock_irq(&pcd->lock);
return;
remove:
list_del(&ce->node);
spin_unlock_irq(&pcd->lock);
__pm_clk_remove(ce);
}
/**
@ -175,43 +192,33 @@ void pm_clk_destroy(struct device *dev)
{
struct pm_clk_data *pcd = __to_pcd(dev);
struct pm_clock_entry *ce, *c;
struct list_head list;
if (!pcd)
return;
dev->power.subsys_data = NULL;
INIT_LIST_HEAD(&list);
spin_lock_irq(&pcd->lock);
list_for_each_entry_safe_reverse(ce, c, &pcd->clock_list, node)
__pm_clk_remove(ce);
list_move(&ce->node, &list);
spin_unlock_irq(&pcd->lock);
kfree(pcd);
list_for_each_entry_safe_reverse(ce, c, &list, node) {
list_del(&ce->node);
__pm_clk_remove(ce);
}
}
#endif /* CONFIG_PM */
#ifdef CONFIG_PM_RUNTIME
/**
* pm_clk_acquire - Acquire a device clock.
* @dev: Device whose clock is to be acquired.
* @con_id: Connection ID of the clock.
*/
static void pm_clk_acquire(struct device *dev,
struct pm_clock_entry *ce)
{
ce->clk = clk_get(dev, ce->con_id);
if (IS_ERR(ce->clk)) {
ce->status = PCE_STATUS_ERROR;
} else {
ce->status = PCE_STATUS_ACQUIRED;
dev_dbg(dev, "Clock %s managed by runtime PM.\n", ce->con_id);
}
}
/**
* pm_clk_suspend - Disable clocks in a device's PM clock list.
* @dev: Device to disable the clocks for.
@ -230,9 +237,6 @@ int pm_clk_suspend(struct device *dev)
spin_lock_irqsave(&pcd->lock, flags);
list_for_each_entry_reverse(ce, &pcd->clock_list, node) {
if (ce->status == PCE_STATUS_NONE)
pm_clk_acquire(dev, ce);
if (ce->status < PCE_STATUS_ERROR) {
clk_disable(ce->clk);
ce->status = PCE_STATUS_ACQUIRED;
@ -262,9 +266,6 @@ int pm_clk_resume(struct device *dev)
spin_lock_irqsave(&pcd->lock, flags);
list_for_each_entry(ce, &pcd->clock_list, node) {
if (ce->status == PCE_STATUS_NONE)
pm_clk_acquire(dev, ce);
if (ce->status < PCE_STATUS_ERROR) {
clk_enable(ce->clk);
ce->status = PCE_STATUS_ENABLED;

View file

@ -43,6 +43,7 @@ config TCG_NSC
config TCG_ATMEL
tristate "Atmel TPM Interface"
depends on PPC64 || HAS_IOPORT
---help---
If you have a TPM security chip from Atmel say Yes and it
will be accessible from within Linux. To compile this driver

View file

@ -383,6 +383,9 @@ static ssize_t tpm_transmit(struct tpm_chip *chip, const char *buf,
u32 count, ordinal;
unsigned long stop;
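	/* Clamp the user-supplied size to the driver's buffer so the
	 * transmit path cannot run past TPM_BUFSIZE. */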
if (bufsiz > TPM_BUFSIZE)
bufsiz = TPM_BUFSIZE;
count = be32_to_cpu(*((__be32 *) (buf + 2)));
ordinal = be32_to_cpu(*((__be32 *) (buf + 6)));
if (count == 0)
@ -1102,6 +1105,7 @@ ssize_t tpm_read(struct file *file, char __user *buf,
{
struct tpm_chip *chip = file->private_data;
ssize_t ret_size;
int rc;
del_singleshot_timer_sync(&chip->user_read_timer);
flush_work_sync(&chip->work);
@ -1112,8 +1116,11 @@ ssize_t tpm_read(struct file *file, char __user *buf,
ret_size = size;
mutex_lock(&chip->buffer_mutex);
if (copy_to_user(buf, chip->data_buffer, ret_size))
rc = copy_to_user(buf, chip->data_buffer, ret_size);
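/* Wipe the buffer whether or not the copy succeeded, so TPM response
 * data never lingers in the kernel buffer after a faulted copy. */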
memset(chip->data_buffer, 0, ret_size);
if (rc)
ret_size = -EFAULT;
mutex_unlock(&chip->buffer_mutex);
}

View file

@ -396,8 +396,6 @@ static void __exit cleanup_nsc(void)
if (pdev) {
tpm_nsc_remove(&pdev->dev);
platform_device_unregister(pdev);
kfree(pdev);
pdev = NULL;
}
platform_driver_unregister(&nsc_drv);

View file

@ -67,11 +67,11 @@ module_param_named(i915_enable_rc6, i915_enable_rc6, int, 0600);
MODULE_PARM_DESC(i915_enable_rc6,
"Enable power-saving render C-state 6 (default: true)");
unsigned int i915_enable_fbc __read_mostly = 1;
unsigned int i915_enable_fbc __read_mostly = -1;
module_param_named(i915_enable_fbc, i915_enable_fbc, int, 0600);
MODULE_PARM_DESC(i915_enable_fbc,
"Enable frame buffer compression for power savings "
"(default: false)");
"(default: -1 (use per-chip default))");
unsigned int i915_lvds_downclock __read_mostly = 0;
module_param_named(lvds_downclock, i915_lvds_downclock, int, 0400);

View file

@ -1799,6 +1799,7 @@ static void intel_update_fbc(struct drm_device *dev)
struct drm_framebuffer *fb;
struct intel_framebuffer *intel_fb;
struct drm_i915_gem_object *obj;
int enable_fbc;
DRM_DEBUG_KMS("\n");
@ -1839,8 +1840,15 @@ static void intel_update_fbc(struct drm_device *dev)
intel_fb = to_intel_framebuffer(fb);
obj = intel_fb->obj;
if (!i915_enable_fbc) {
DRM_DEBUG_KMS("fbc disabled per module param (default off)\n");
enable_fbc = i915_enable_fbc;
if (enable_fbc < 0) {
DRM_DEBUG_KMS("fbc set to per-chip default\n");
enable_fbc = 1;
if (INTEL_INFO(dev)->gen <= 5)
enable_fbc = 0;
}
if (!enable_fbc) {
DRM_DEBUG_KMS("fbc disabled per module param\n");
dev_priv->no_fbc_reason = FBC_MODULE_PARAM;
goto out_disable;
}
@ -4687,13 +4695,13 @@ static bool intel_choose_pipe_bpp_dither(struct drm_crtc *crtc,
bpc = 6; /* min is 18bpp */
break;
case 24:
bpc = min((unsigned int)8, display_bpc);
bpc = 8;
break;
case 30:
bpc = min((unsigned int)10, display_bpc);
bpc = 10;
break;
case 48:
bpc = min((unsigned int)12, display_bpc);
bpc = 12;
break;
default:
DRM_DEBUG("unsupported depth, assuming 24 bits\n");
@ -4701,10 +4709,12 @@ static bool intel_choose_pipe_bpp_dither(struct drm_crtc *crtc,
break;
}
display_bpc = min(display_bpc, bpc);
DRM_DEBUG_DRIVER("setting pipe bpc to %d (max display bpc %d)\n",
bpc, display_bpc);
*pipe_bpp = bpc * 3;
*pipe_bpp = display_bpc * 3;
return display_bpc != bpc;
}

View file

@ -337,9 +337,6 @@ extern void intel_release_load_detect_pipe(struct intel_encoder *intel_encoder,
struct drm_connector *connector,
struct intel_load_detect_pipe *old);
extern struct drm_connector* intel_sdvo_find(struct drm_device *dev, int sdvoB);
extern int intel_sdvo_supports_hotplug(struct drm_connector *connector);
extern void intel_sdvo_set_hotplug(struct drm_connector *connector, int enable);
extern void intelfb_restore(void);
extern void intel_crtc_fb_gamma_set(struct drm_crtc *crtc, u16 red, u16 green,
u16 blue, int regno);

View file

@ -92,6 +92,11 @@ struct intel_sdvo {
*/
uint16_t attached_output;
/*
* Hotplug activation bits for this device
*/
uint8_t hotplug_active[2];
/**
* This is used to select the color range of RGB outputs in HDMI mode.
* It is only valid when using TMDS encoding and 8 bit per color mode.
@ -1208,74 +1213,20 @@ static bool intel_sdvo_get_capabilities(struct intel_sdvo *intel_sdvo, struct in
return true;
}
/* No use! */
#if 0
struct drm_connector* intel_sdvo_find(struct drm_device *dev, int sdvoB)
{
struct drm_connector *connector = NULL;
struct intel_sdvo *iout = NULL;
struct intel_sdvo *sdvo;
/* find the sdvo connector */
list_for_each_entry(connector, &dev->mode_config.connector_list, head) {
iout = to_intel_sdvo(connector);
if (iout->type != INTEL_OUTPUT_SDVO)
continue;
sdvo = iout->dev_priv;
if (sdvo->sdvo_reg == SDVOB && sdvoB)
return connector;
if (sdvo->sdvo_reg == SDVOC && !sdvoB)
return connector;
}
return NULL;
}
int intel_sdvo_supports_hotplug(struct drm_connector *connector)
static int intel_sdvo_supports_hotplug(struct intel_sdvo *intel_sdvo)
{
u8 response[2];
u8 status;
struct intel_sdvo *intel_sdvo;
DRM_DEBUG_KMS("\n");
if (!connector)
return 0;
intel_sdvo = to_intel_sdvo(connector);
return intel_sdvo_get_value(intel_sdvo, SDVO_CMD_GET_HOT_PLUG_SUPPORT,
&response, 2) && response[0];
}
void intel_sdvo_set_hotplug(struct drm_connector *connector, int on)
static void intel_sdvo_enable_hotplug(struct intel_encoder *encoder)
{
u8 response[2];
u8 status;
struct intel_sdvo *intel_sdvo = to_intel_sdvo(connector);
struct intel_sdvo *intel_sdvo = to_intel_sdvo(&encoder->base);
intel_sdvo_write_cmd(intel_sdvo, SDVO_CMD_GET_ACTIVE_HOT_PLUG, NULL, 0);
intel_sdvo_read_response(intel_sdvo, &response, 2);
if (on) {
intel_sdvo_write_cmd(intel_sdvo, SDVO_CMD_GET_HOT_PLUG_SUPPORT, NULL, 0);
status = intel_sdvo_read_response(intel_sdvo, &response, 2);
intel_sdvo_write_cmd(intel_sdvo, SDVO_CMD_SET_ACTIVE_HOT_PLUG, &response, 2);
} else {
response[0] = 0;
response[1] = 0;
intel_sdvo_write_cmd(intel_sdvo, SDVO_CMD_SET_ACTIVE_HOT_PLUG, &response, 2);
}
intel_sdvo_write_cmd(intel_sdvo, SDVO_CMD_GET_ACTIVE_HOT_PLUG, NULL, 0);
intel_sdvo_read_response(intel_sdvo, &response, 2);
intel_sdvo_write_cmd(intel_sdvo, SDVO_CMD_SET_ACTIVE_HOT_PLUG, &intel_sdvo->hotplug_active, 2);
}
#endif
static bool
intel_sdvo_multifunc_encoder(struct intel_sdvo *intel_sdvo)
@ -2045,6 +1996,7 @@ intel_sdvo_dvi_init(struct intel_sdvo *intel_sdvo, int device)
{
struct drm_encoder *encoder = &intel_sdvo->base.base;
struct drm_connector *connector;
struct intel_encoder *intel_encoder = to_intel_encoder(encoder);
struct intel_connector *intel_connector;
struct intel_sdvo_connector *intel_sdvo_connector;
@ -2062,7 +2014,17 @@ intel_sdvo_dvi_init(struct intel_sdvo *intel_sdvo, int device)
intel_connector = &intel_sdvo_connector->base;
connector = &intel_connector->base;
connector->polled = DRM_CONNECTOR_POLL_CONNECT | DRM_CONNECTOR_POLL_DISCONNECT;
if (intel_sdvo_supports_hotplug(intel_sdvo) & (1 << device)) {
connector->polled = DRM_CONNECTOR_POLL_HPD;
intel_sdvo->hotplug_active[0] |= 1 << device;
/* Some SDVO devices have one-shot hotplug interrupts.
* Ensure that they get re-enabled when an interrupt happens.
*/
intel_encoder->hot_plug = intel_sdvo_enable_hotplug;
intel_sdvo_enable_hotplug(intel_encoder);
}
else
connector->polled = DRM_CONNECTOR_POLL_CONNECT | DRM_CONNECTOR_POLL_DISCONNECT;
encoder->encoder_type = DRM_MODE_ENCODER_TMDS;
connector->connector_type = DRM_MODE_CONNECTOR_DVID;
@ -2569,6 +2531,14 @@ bool intel_sdvo_init(struct drm_device *dev, int sdvo_reg)
if (!intel_sdvo_get_capabilities(intel_sdvo, &intel_sdvo->caps))
goto err;
/* Set up hotplug command - note paranoia about contents of reply.
* We assume that the hardware is in a sane state, and only touch
* the bits we think we understand.
*/
intel_sdvo_get_value(intel_sdvo, SDVO_CMD_GET_ACTIVE_HOT_PLUG,
&intel_sdvo->hotplug_active, 2);
intel_sdvo->hotplug_active[0] &= ~0x3;
if (intel_sdvo_output_setup(intel_sdvo,
intel_sdvo->caps.output_flags) != true) {
DRM_DEBUG_KMS("SDVO output failed to setup on SDVO%c\n",

View file

@ -115,6 +115,7 @@ static int radeon_dp_aux_native_write(struct radeon_connector *radeon_connector,
u8 msg[20];
int msg_bytes = send_bytes + 4;
u8 ack;
unsigned retry;
if (send_bytes > 16)
return -1;
@ -125,20 +126,20 @@ static int radeon_dp_aux_native_write(struct radeon_connector *radeon_connector,
msg[3] = (msg_bytes << 4) | (send_bytes - 1);
memcpy(&msg[4], send, send_bytes);
while (1) {
for (retry = 0; retry < 4; retry++) {
ret = radeon_process_aux_ch(dig_connector->dp_i2c_bus,
msg, msg_bytes, NULL, 0, delay, &ack);
if (ret < 0)
return ret;
if ((ack & AUX_NATIVE_REPLY_MASK) == AUX_NATIVE_REPLY_ACK)
break;
return send_bytes;
else if ((ack & AUX_NATIVE_REPLY_MASK) == AUX_NATIVE_REPLY_DEFER)
udelay(400);
else
return -EIO;
}
return send_bytes;
return -EIO;
}
static int radeon_dp_aux_native_read(struct radeon_connector *radeon_connector,
@ -149,26 +150,29 @@ static int radeon_dp_aux_native_read(struct radeon_connector *radeon_connector,
int msg_bytes = 4;
u8 ack;
int ret;
unsigned retry;
msg[0] = address;
msg[1] = address >> 8;
msg[2] = AUX_NATIVE_READ << 4;
msg[3] = (msg_bytes << 4) | (recv_bytes - 1);
while (1) {
for (retry = 0; retry < 4; retry++) {
ret = radeon_process_aux_ch(dig_connector->dp_i2c_bus,
msg, msg_bytes, recv, recv_bytes, delay, &ack);
if (ret == 0)
return -EPROTO;
if (ret < 0)
return ret;
if ((ack & AUX_NATIVE_REPLY_MASK) == AUX_NATIVE_REPLY_ACK)
return ret;
else if ((ack & AUX_NATIVE_REPLY_MASK) == AUX_NATIVE_REPLY_DEFER)
udelay(400);
else if (ret == 0)
return -EPROTO;
else
return -EIO;
}
return -EIO;
}
static void radeon_write_dpcd_reg(struct radeon_connector *radeon_connector,

View file

@ -1590,48 +1590,6 @@ static u32 evergreen_get_tile_pipe_to_backend_map(struct radeon_device *rdev,
return backend_map;
}
static void evergreen_program_channel_remap(struct radeon_device *rdev)
{
u32 tcp_chan_steer_lo, tcp_chan_steer_hi, mc_shared_chremap, tmp;
tmp = RREG32(MC_SHARED_CHMAP);
switch ((tmp & NOOFCHAN_MASK) >> NOOFCHAN_SHIFT) {
case 0:
case 1:
case 2:
case 3:
default:
/* default mapping */
mc_shared_chremap = 0x00fac688;
break;
}
switch (rdev->family) {
case CHIP_HEMLOCK:
case CHIP_CYPRESS:
case CHIP_BARTS:
tcp_chan_steer_lo = 0x54763210;
tcp_chan_steer_hi = 0x0000ba98;
break;
case CHIP_JUNIPER:
case CHIP_REDWOOD:
case CHIP_CEDAR:
case CHIP_PALM:
case CHIP_SUMO:
case CHIP_SUMO2:
case CHIP_TURKS:
case CHIP_CAICOS:
default:
tcp_chan_steer_lo = 0x76543210;
tcp_chan_steer_hi = 0x0000ba98;
break;
}
WREG32(TCP_CHAN_STEER_LO, tcp_chan_steer_lo);
WREG32(TCP_CHAN_STEER_HI, tcp_chan_steer_hi);
WREG32(MC_SHARED_CHREMAP, mc_shared_chremap);
}
static void evergreen_gpu_init(struct radeon_device *rdev)
{
u32 cc_rb_backend_disable = 0;
@ -2078,8 +2036,6 @@ static void evergreen_gpu_init(struct radeon_device *rdev)
WREG32(DMIF_ADDR_CONFIG, gb_addr_config);
WREG32(HDP_ADDR_CONFIG, gb_addr_config);
evergreen_program_channel_remap(rdev);
num_shader_engines = ((RREG32(GB_ADDR_CONFIG) & NUM_SHADER_ENGINES(3)) >> 12) + 1;
grbm_gfx_index = INSTANCE_BROADCAST_WRITES;

View file

@ -569,36 +569,6 @@ static u32 cayman_get_tile_pipe_to_backend_map(struct radeon_device *rdev,
return backend_map;
}
static void cayman_program_channel_remap(struct radeon_device *rdev)
{
u32 tcp_chan_steer_lo, tcp_chan_steer_hi, mc_shared_chremap, tmp;
tmp = RREG32(MC_SHARED_CHMAP);
switch ((tmp & NOOFCHAN_MASK) >> NOOFCHAN_SHIFT) {
case 0:
case 1:
case 2:
case 3:
default:
/* default mapping */
mc_shared_chremap = 0x00fac688;
break;
}
switch (rdev->family) {
case CHIP_CAYMAN:
default:
//tcp_chan_steer_lo = 0x54763210
tcp_chan_steer_lo = 0x76543210;
tcp_chan_steer_hi = 0x0000ba98;
break;
}
WREG32(TCP_CHAN_STEER_LO, tcp_chan_steer_lo);
WREG32(TCP_CHAN_STEER_HI, tcp_chan_steer_hi);
WREG32(MC_SHARED_CHREMAP, mc_shared_chremap);
}
static u32 cayman_get_disable_mask_per_asic(struct radeon_device *rdev,
u32 disable_mask_per_se,
u32 max_disable_mask_per_se,
@ -842,8 +812,6 @@ static void cayman_gpu_init(struct radeon_device *rdev)
WREG32(DMIF_ADDR_CONFIG, gb_addr_config);
WREG32(HDP_ADDR_CONFIG, gb_addr_config);
cayman_program_channel_remap(rdev);
/* primary versions */
WREG32(CC_RB_BACKEND_DISABLE, cc_rb_backend_disable);
WREG32(CC_SYS_RB_BACKEND_DISABLE, cc_rb_backend_disable);

View file

@ -773,8 +773,8 @@ int r100_copy_blit(struct radeon_device *rdev,
radeon_ring_write(rdev, (0x1fff) | (0x1fff << 16));
radeon_ring_write(rdev, 0);
radeon_ring_write(rdev, (0x1fff) | (0x1fff << 16));
radeon_ring_write(rdev, cur_pages);
radeon_ring_write(rdev, cur_pages);
radeon_ring_write(rdev, num_gpu_pages);
radeon_ring_write(rdev, num_gpu_pages);
radeon_ring_write(rdev, cur_pages | (stride_pixels << 16));
}
radeon_ring_write(rdev, PACKET0(RADEON_DSTCACHE_CTLSTAT, 0));

View file

@ -68,11 +68,11 @@ void radeon_connector_hotplug(struct drm_connector *connector)
if (connector->connector_type == DRM_MODE_CONNECTOR_DisplayPort) {
int saved_dpms = connector->dpms;
if (radeon_hpd_sense(rdev, radeon_connector->hpd.hpd) &&
radeon_dp_needs_link_train(radeon_connector))
drm_helper_connector_dpms(connector, DRM_MODE_DPMS_ON);
else
/* Only turn off the display if it's physically disconnected */
if (!radeon_hpd_sense(rdev, radeon_connector->hpd.hpd))
drm_helper_connector_dpms(connector, DRM_MODE_DPMS_OFF);
else if (radeon_dp_needs_link_train(radeon_connector))
drm_helper_connector_dpms(connector, DRM_MODE_DPMS_ON);
connector->dpms = saved_dpms;
}
}

View file

@ -208,24 +208,26 @@ int radeon_crtc_cursor_move(struct drm_crtc *crtc,
int xorigin = 0, yorigin = 0;
int w = radeon_crtc->cursor_width;
if (x < 0)
xorigin = -x + 1;
if (y < 0)
yorigin = -y + 1;
if (xorigin >= CURSOR_WIDTH)
xorigin = CURSOR_WIDTH - 1;
if (yorigin >= CURSOR_HEIGHT)
yorigin = CURSOR_HEIGHT - 1;
if (ASIC_IS_AVIVO(rdev)) {
/* avivo cursors are offset into the total surface */
x += crtc->x;
y += crtc->y;
}
DRM_DEBUG("x %d y %d c->x %d c->y %d\n", x, y, crtc->x, crtc->y);
if (x < 0) {
xorigin = min(-x, CURSOR_WIDTH - 1);
x = 0;
}
if (y < 0) {
yorigin = min(-y, CURSOR_HEIGHT - 1);
y = 0;
}
if (ASIC_IS_AVIVO(rdev)) {
int i = 0;
struct drm_crtc *crtc_p;
/* avivo cursors are offset into the total surface */
x += crtc->x;
y += crtc->y;
DRM_DEBUG("x %d y %d c->x %d c->y %d\n", x, y, crtc->x, crtc->y);
/* avivo cursor image can't end on 128 pixel boundary or
* go past the end of the frame if both crtcs are enabled
*/
@ -253,16 +255,12 @@ int radeon_crtc_cursor_move(struct drm_crtc *crtc,
radeon_lock_cursor(crtc, true);
if (ASIC_IS_DCE4(rdev)) {
WREG32(EVERGREEN_CUR_POSITION + radeon_crtc->crtc_offset,
((xorigin ? 0 : x) << 16) |
(yorigin ? 0 : y));
WREG32(EVERGREEN_CUR_POSITION + radeon_crtc->crtc_offset, (x << 16) | y);
WREG32(EVERGREEN_CUR_HOT_SPOT + radeon_crtc->crtc_offset, (xorigin << 16) | yorigin);
WREG32(EVERGREEN_CUR_SIZE + radeon_crtc->crtc_offset,
((w - 1) << 16) | (radeon_crtc->cursor_height - 1));
} else if (ASIC_IS_AVIVO(rdev)) {
WREG32(AVIVO_D1CUR_POSITION + radeon_crtc->crtc_offset,
((xorigin ? 0 : x) << 16) |
(yorigin ? 0 : y));
WREG32(AVIVO_D1CUR_POSITION + radeon_crtc->crtc_offset, (x << 16) | y);
WREG32(AVIVO_D1CUR_HOT_SPOT + radeon_crtc->crtc_offset, (xorigin << 16) | yorigin);
WREG32(AVIVO_D1CUR_SIZE + radeon_crtc->crtc_offset,
((w - 1) << 16) | (radeon_crtc->cursor_height - 1));
@ -276,8 +274,8 @@ int radeon_crtc_cursor_move(struct drm_crtc *crtc,
| yorigin));
WREG32(RADEON_CUR_HORZ_VERT_POSN + radeon_crtc->crtc_offset,
(RADEON_CUR_LOCK
| ((xorigin ? 0 : x) << 16)
| (yorigin ? 0 : y)));
| (x << 16)
| y));
/* offset is from DISP(2)_BASE_ADDRESS */
WREG32(RADEON_CUR_OFFSET + radeon_crtc->crtc_offset, (radeon_crtc->legacy_cursor_offset +
(yorigin * 256)));

View file

@ -1507,7 +1507,14 @@ radeon_atom_encoder_dpms(struct drm_encoder *encoder, int mode)
switch (mode) {
case DRM_MODE_DPMS_ON:
args.ucAction = ATOM_ENABLE;
atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
/* workaround for DVOOutputControl on some RS690 systems */
if (radeon_encoder->encoder_id == ENCODER_OBJECT_ID_INTERNAL_DDI) {
u32 reg = RREG32(RADEON_BIOS_3_SCRATCH);
WREG32(RADEON_BIOS_3_SCRATCH, reg & ~ATOM_S3_DFP2I_ACTIVE);
atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
WREG32(RADEON_BIOS_3_SCRATCH, reg);
} else
atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);
if (radeon_encoder->devices & (ATOM_DEVICE_LCD_SUPPORT)) {
args.ucAction = ATOM_LCD_BLON;
atom_execute_table(rdev->mode_info.atom_context, index, (uint32_t *)&args);

View file

@ -536,55 +536,6 @@ static u32 r700_get_tile_pipe_to_backend_map(struct radeon_device *rdev,
return backend_map;
}
static void rv770_program_channel_remap(struct radeon_device *rdev)
{
u32 tcp_chan_steer, mc_shared_chremap, tmp;
bool force_no_swizzle;
switch (rdev->family) {
case CHIP_RV770:
case CHIP_RV730:
force_no_swizzle = false;
break;
case CHIP_RV710:
case CHIP_RV740:
default:
force_no_swizzle = true;
break;
}
tmp = RREG32(MC_SHARED_CHMAP);
switch ((tmp & NOOFCHAN_MASK) >> NOOFCHAN_SHIFT) {
case 0:
case 1:
default:
/* default mapping */
mc_shared_chremap = 0x00fac688;
break;
case 2:
case 3:
if (force_no_swizzle)
mc_shared_chremap = 0x00fac688;
else
mc_shared_chremap = 0x00bbc298;
break;
}
if (rdev->family == CHIP_RV740)
tcp_chan_steer = 0x00ef2a60;
else
tcp_chan_steer = 0x00fac688;
/* RV770 CE has special chremap setup */
if (rdev->pdev->device == 0x944e) {
tcp_chan_steer = 0x00b08b08;
mc_shared_chremap = 0x00b08b08;
}
WREG32(TCP_CHAN_STEER, tcp_chan_steer);
WREG32(MC_SHARED_CHREMAP, mc_shared_chremap);
}
static void rv770_gpu_init(struct radeon_device *rdev)
{
int i, j, num_qd_pipes;
@ -785,8 +736,6 @@ static void rv770_gpu_init(struct radeon_device *rdev)
WREG32(DCP_TILING_CONFIG, (gb_tiling_config & 0xffff));
WREG32(HDP_TILING_CONFIG, (gb_tiling_config & 0xffff));
rv770_program_channel_remap(rdev);
WREG32(CC_RB_BACKEND_DISABLE, cc_rb_backend_disable);
WREG32(CC_GC_SHADER_PIPE_CONFIG, cc_gc_shader_pipe_config);
WREG32(GC_USER_SHADER_PIPE_CONFIG, cc_gc_shader_pipe_config);

View file

@ -36,17 +36,25 @@
#include <linux/cpu.h>
#include <linux/pci.h>
#include <linux/smp.h>
#include <linux/moduleparam.h>
#include <asm/msr.h>
#include <asm/processor.h>
#define DRVNAME "coretemp"
/*
* force_tjmax only matters when TjMax can't be read from the CPU itself.
* When set, it replaces the driver's suboptimal heuristic.
*/
static int force_tjmax;
module_param_named(tjmax, force_tjmax, int, 0444);
MODULE_PARM_DESC(tjmax, "TjMax value in degrees Celsius");
#define BASE_SYSFS_ATTR_NO 2 /* Sysfs Base attr no for coretemp */
#define NUM_REAL_CORES 16 /* Number of Real cores per cpu */
#define CORETEMP_NAME_LENGTH 17 /* String Length of attrs */
#define MAX_CORE_ATTRS 4 /* Maximum no of basic attrs */
#define MAX_THRESH_ATTRS 3 /* Maximum no of Threshold attrs */
#define TOTAL_ATTRS (MAX_CORE_ATTRS + MAX_THRESH_ATTRS)
#define TOTAL_ATTRS (MAX_CORE_ATTRS + 1)
#define MAX_CORE_DATA (NUM_REAL_CORES + BASE_SYSFS_ATTR_NO)
#ifdef CONFIG_SMP
@ -69,8 +77,6 @@
* This value is passed as "id" field to rdmsr/wrmsr functions.
* @status_reg: One of IA32_THERM_STATUS or IA32_PACKAGE_THERM_STATUS,
* from where the temperature values should be read.
* @intrpt_reg: One of IA32_THERM_INTERRUPT or IA32_PACKAGE_THERM_INTERRUPT,
* from where the thresholds are read.
* @attr_size: Total number of per-core attrs displayed in the sysfs.
* @is_pkg_data: If this is 1, the temp_data holds pkgtemp data.
* Otherwise, temp_data holds coretemp data.
@ -79,13 +85,11 @@
struct temp_data {
int temp;
int ttarget;
int tmin;
int tjmax;
unsigned long last_updated;
unsigned int cpu;
u32 cpu_core_id;
u32 status_reg;
u32 intrpt_reg;
int attr_size;
bool is_pkg_data;
bool valid;
@ -143,19 +147,6 @@ static ssize_t show_crit_alarm(struct device *dev,
return sprintf(buf, "%d\n", (eax >> 5) & 1);
}
static ssize_t show_max_alarm(struct device *dev,
struct device_attribute *devattr, char *buf)
{
u32 eax, edx;
struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
struct platform_data *pdata = dev_get_drvdata(dev);
struct temp_data *tdata = pdata->core_data[attr->index];
rdmsr_on_cpu(tdata->cpu, tdata->status_reg, &eax, &edx);
return sprintf(buf, "%d\n", !!(eax & THERM_STATUS_THRESHOLD1));
}
static ssize_t show_tjmax(struct device *dev,
struct device_attribute *devattr, char *buf)
{
@ -174,83 +165,6 @@ static ssize_t show_ttarget(struct device *dev,
return sprintf(buf, "%d\n", pdata->core_data[attr->index]->ttarget);
}
static ssize_t store_ttarget(struct device *dev,
struct device_attribute *devattr,
const char *buf, size_t count)
{
struct platform_data *pdata = dev_get_drvdata(dev);
struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
struct temp_data *tdata = pdata->core_data[attr->index];
u32 eax, edx;
unsigned long val;
int diff;
if (strict_strtoul(buf, 10, &val))
return -EINVAL;
/*
* THERM_MASK_THRESHOLD1 is 7 bits wide. Values are entered in terms
* of milli degree celsius. Hence don't accept val > (127 * 1000)
*/
if (val > tdata->tjmax || val > 127000)
return -EINVAL;
diff = (tdata->tjmax - val) / 1000;
mutex_lock(&tdata->update_lock);
rdmsr_on_cpu(tdata->cpu, tdata->intrpt_reg, &eax, &edx);
eax = (eax & ~THERM_MASK_THRESHOLD1) |
(diff << THERM_SHIFT_THRESHOLD1);
wrmsr_on_cpu(tdata->cpu, tdata->intrpt_reg, eax, edx);
tdata->ttarget = val;
mutex_unlock(&tdata->update_lock);
return count;
}
static ssize_t show_tmin(struct device *dev,
struct device_attribute *devattr, char *buf)
{
struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
struct platform_data *pdata = dev_get_drvdata(dev);
return sprintf(buf, "%d\n", pdata->core_data[attr->index]->tmin);
}
static ssize_t store_tmin(struct device *dev,
struct device_attribute *devattr,
const char *buf, size_t count)
{
struct platform_data *pdata = dev_get_drvdata(dev);
struct sensor_device_attribute *attr = to_sensor_dev_attr(devattr);
struct temp_data *tdata = pdata->core_data[attr->index];
u32 eax, edx;
unsigned long val;
int diff;
if (strict_strtoul(buf, 10, &val))
return -EINVAL;
/*
* THERM_MASK_THRESHOLD0 is 7 bits wide. Values are entered in terms
* of milli degree celsius. Hence don't accept val > (127 * 1000)
*/
if (val > tdata->tjmax || val > 127000)
return -EINVAL;
diff = (tdata->tjmax - val) / 1000;
mutex_lock(&tdata->update_lock);
rdmsr_on_cpu(tdata->cpu, tdata->intrpt_reg, &eax, &edx);
eax = (eax & ~THERM_MASK_THRESHOLD0) |
(diff << THERM_SHIFT_THRESHOLD0);
wrmsr_on_cpu(tdata->cpu, tdata->intrpt_reg, eax, edx);
tdata->tmin = val;
mutex_unlock(&tdata->update_lock);
return count;
}
static ssize_t show_temp(struct device *dev,
struct device_attribute *devattr, char *buf)
{
@ -374,7 +288,6 @@ static int adjust_tjmax(struct cpuinfo_x86 *c, u32 id, struct device *dev)
static int get_tjmax(struct cpuinfo_x86 *c, u32 id, struct device *dev)
{
/* The 100C is default for both mobile and non mobile CPUs */
int err;
u32 eax, edx;
u32 val;
@@ -385,7 +298,8 @@ static int get_tjmax(struct cpuinfo_x86 *c, u32 id, struct device *dev)
*/
err = rdmsr_safe_on_cpu(id, MSR_IA32_TEMPERATURE_TARGET, &eax, &edx);
if (err) {
dev_warn(dev, "Unable to read TjMax from CPU.\n");
if (c->x86_model > 0xe && c->x86_model != 0x1c)
dev_warn(dev, "Unable to read TjMax from CPU %u\n", id);
} else {
val = (eax >> 16) & 0xff;
/*
@@ -393,11 +307,17 @@ static int get_tjmax(struct cpuinfo_x86 *c, u32 id, struct device *dev)
* will be used
*/
if (val) {
dev_info(dev, "TjMax is %d C.\n", val);
dev_dbg(dev, "TjMax is %d degrees C\n", val);
return val * 1000;
}
}
if (force_tjmax) {
dev_notice(dev, "TjMax forced to %d degrees C by user\n",
force_tjmax);
return force_tjmax * 1000;
}
/*
* An assumption is made for early CPUs and unreadable MSR.
* NOTE: the calculated value may not be correct.
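
[Editor's note: the TjMax resolution order that this hunk establishes — MSR first, then the force_tjmax module parameter, then a heuristic — can be summarized in a stand-alone sketch. This is illustrative user-space C under assumed names (resolve_tjmax is not a driver function), and the real driver derives the final fallback from per-model heuristics rather than a constant:]

#include <stdio.h>

/* Priority: MSR contents, then the force_tjmax module parameter, then a
 * heuristic default (hard-coded here purely for the sake of the example). */
static int resolve_tjmax(int msr_readable, unsigned int eax, int force_tjmax)
{
    if (msr_readable) {
        unsigned int val = (eax >> 16) & 0xff;  /* TjMax in degrees C */
        if (val)
            return val * 1000;                  /* millidegrees C */
    }
    if (force_tjmax)
        return force_tjmax * 1000;
    return 100000;                              /* stand-in for the heuristics */
}

int main(void)
{
    printf("%d\n", resolve_tjmax(1, 0x5a0000, 0));  /* MSR says 90 C -> 90000 */
    printf("%d\n", resolve_tjmax(0, 0, 85));        /* forced 85 C  -> 85000 */
    return 0;
}
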
@@ -414,21 +334,6 @@ static void __devinit get_ucode_rev_on_cpu(void *edx)
rdmsr(MSR_IA32_UCODE_REV, eax, *(u32 *)edx);
}
static int get_pkg_tjmax(unsigned int cpu, struct device *dev)
{
int err;
u32 eax, edx, val;
err = rdmsr_safe_on_cpu(cpu, MSR_IA32_TEMPERATURE_TARGET, &eax, &edx);
if (!err) {
val = (eax >> 16) & 0xff;
if (val)
return val * 1000;
}
dev_warn(dev, "Unable to read Pkg-TjMax from CPU:%u\n", cpu);
return 100000; /* Default TjMax: 100 degree celsius */
}
static int create_name_attr(struct platform_data *pdata, struct device *dev)
{
sysfs_attr_init(&pdata->name_attr.attr);
@@ -442,19 +347,14 @@ static int create_core_attrs(struct temp_data *tdata, struct device *dev,
int attr_no)
{
int err, i;
static ssize_t (*rd_ptr[TOTAL_ATTRS]) (struct device *dev,
static ssize_t (*const rd_ptr[TOTAL_ATTRS]) (struct device *dev,
struct device_attribute *devattr, char *buf) = {
show_label, show_crit_alarm, show_temp, show_tjmax,
show_max_alarm, show_ttarget, show_tmin };
static ssize_t (*rw_ptr[TOTAL_ATTRS]) (struct device *dev,
struct device_attribute *devattr, const char *buf,
size_t count) = { NULL, NULL, NULL, NULL, NULL,
store_ttarget, store_tmin };
static const char *names[TOTAL_ATTRS] = {
show_ttarget };
static const char *const names[TOTAL_ATTRS] = {
"temp%d_label", "temp%d_crit_alarm",
"temp%d_input", "temp%d_crit",
"temp%d_max_alarm", "temp%d_max",
"temp%d_max_hyst" };
"temp%d_max" };
for (i = 0; i < tdata->attr_size; i++) {
snprintf(tdata->attr_name[i], CORETEMP_NAME_LENGTH, names[i],
@@ -462,10 +362,6 @@ static int create_core_attrs(struct temp_data *tdata, struct device *dev,
sysfs_attr_init(&tdata->sd_attrs[i].dev_attr.attr);
tdata->sd_attrs[i].dev_attr.attr.name = tdata->attr_name[i];
tdata->sd_attrs[i].dev_attr.attr.mode = S_IRUGO;
if (rw_ptr[i]) {
tdata->sd_attrs[i].dev_attr.attr.mode |= S_IWUSR;
tdata->sd_attrs[i].dev_attr.store = rw_ptr[i];
}
tdata->sd_attrs[i].dev_attr.show = rd_ptr[i];
tdata->sd_attrs[i].index = attr_no;
err = device_create_file(dev, &tdata->sd_attrs[i].dev_attr);
@@ -481,9 +377,9 @@ static int create_core_attrs(struct temp_data *tdata, struct device *dev,
}
static int __devinit chk_ucode_version(struct platform_device *pdev)
static int __cpuinit chk_ucode_version(unsigned int cpu)
{
struct cpuinfo_x86 *c = &cpu_data(pdev->id);
struct cpuinfo_x86 *c = &cpu_data(cpu);
int err;
u32 edx;
@@ -494,17 +390,15 @@ static int __devinit chk_ucode_version(struct platform_device *pdev)
*/
if (c->x86_model == 0xe && c->x86_mask < 0xc) {
/* check for microcode update */
err = smp_call_function_single(pdev->id, get_ucode_rev_on_cpu,
err = smp_call_function_single(cpu, get_ucode_rev_on_cpu,
&edx, 1);
if (err) {
dev_err(&pdev->dev,
"Cannot determine microcode revision of "
"CPU#%u (%d)!\n", pdev->id, err);
pr_err("Cannot determine microcode revision of "
"CPU#%u (%d)!\n", cpu, err);
return -ENODEV;
} else if (edx < 0x39) {
dev_err(&pdev->dev,
"Errata AE18 not fixed, update BIOS or "
"microcode of the CPU!\n");
pr_err("Errata AE18 not fixed, update BIOS or "
"microcode of the CPU!\n");
return -ENODEV;
}
}
@@ -538,8 +432,6 @@ static struct temp_data *init_temp_data(unsigned int cpu, int pkg_flag)
tdata->status_reg = pkg_flag ? MSR_IA32_PACKAGE_THERM_STATUS :
MSR_IA32_THERM_STATUS;
tdata->intrpt_reg = pkg_flag ? MSR_IA32_PACKAGE_THERM_INTERRUPT :
MSR_IA32_THERM_INTERRUPT;
tdata->is_pkg_data = pkg_flag;
tdata->cpu = cpu;
tdata->cpu_core_id = TO_CORE_ID(cpu);
@@ -548,11 +440,11 @@ static struct temp_data *init_temp_data(unsigned int cpu, int pkg_flag)
return tdata;
}
static int create_core_data(struct platform_data *pdata,
struct platform_device *pdev,
static int create_core_data(struct platform_device *pdev,
unsigned int cpu, int pkg_flag)
{
struct temp_data *tdata;
struct platform_data *pdata = platform_get_drvdata(pdev);
struct cpuinfo_x86 *c = &cpu_data(cpu);
u32 eax, edx;
int err, attr_no;
@@ -588,25 +480,21 @@ static int create_core_data(struct platform_data *pdata,
goto exit_free;
/* We can access status register. Get Critical Temperature */
if (pkg_flag)
tdata->tjmax = get_pkg_tjmax(pdev->id, &pdev->dev);
else
tdata->tjmax = get_tjmax(c, cpu, &pdev->dev);
tdata->tjmax = get_tjmax(c, cpu, &pdev->dev);
/*
* Test if we can access the intrpt register. If so, increase the
* 'size' enough to have ttarget/tmin/max_alarm interfaces.
* Initialize ttarget with bits 16:22 of MSR_IA32_THERM_INTERRUPT
* Read the still undocumented bits 8:15 of IA32_TEMPERATURE_TARGET.
* The target temperature is available on older CPUs but not in this
* register. Atoms don't have the register at all.
*/
err = rdmsr_safe_on_cpu(cpu, tdata->intrpt_reg, &eax, &edx);
if (!err) {
tdata->attr_size += MAX_THRESH_ATTRS;
tdata->tmin = tdata->tjmax -
((eax & THERM_MASK_THRESHOLD0) >>
THERM_SHIFT_THRESHOLD0) * 1000;
tdata->ttarget = tdata->tjmax -
((eax & THERM_MASK_THRESHOLD1) >>
THERM_SHIFT_THRESHOLD1) * 1000;
if (c->x86_model > 0xe && c->x86_model != 0x1c) {
err = rdmsr_safe_on_cpu(cpu, MSR_IA32_TEMPERATURE_TARGET,
&eax, &edx);
if (!err) {
tdata->ttarget
= tdata->tjmax - ((eax >> 8) & 0xff) * 1000;
tdata->attr_size++;
}
}
pdata->core_data[attr_no] = tdata;
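
[Editor's note: as a concrete example of the ttarget computation above — with TjMax at 100000 millidegrees C and bits 15:8 of IA32_TEMPERATURE_TARGET reading back as 10, ttarget = 100000 - 10 * 1000 = 90000, i.e. the target temperature is 90 degrees C, ten degrees below the junction maximum. The values are illustrative, not from a specific CPU.]
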
@@ -618,22 +506,20 @@ static int create_core_data(struct platform_data *pdata,
return 0;
exit_free:
pdata->core_data[attr_no] = NULL;
kfree(tdata);
return err;
}
static void coretemp_add_core(unsigned int cpu, int pkg_flag)
{
struct platform_data *pdata;
struct platform_device *pdev = coretemp_get_pdev(cpu);
int err;
if (!pdev)
return;
pdata = platform_get_drvdata(pdev);
err = create_core_data(pdata, pdev, cpu, pkg_flag);
err = create_core_data(pdev, cpu, pkg_flag);
if (err)
dev_err(&pdev->dev, "Adding Core %u failed\n", cpu);
}
@@ -657,11 +543,6 @@ static int __devinit coretemp_probe(struct platform_device *pdev)
struct platform_data *pdata;
int err;
/* Check the microcode version of the CPU */
err = chk_ucode_version(pdev);
if (err)
return err;
/* Initialize the per-package data structures */
pdata = kzalloc(sizeof(struct platform_data), GFP_KERNEL);
if (!pdata)
@@ -671,7 +552,7 @@ static int __devinit coretemp_probe(struct platform_device *pdev)
if (err)
goto exit_free;
pdata->phys_proc_id = TO_PHYS_ID(pdev->id);
pdata->phys_proc_id = pdev->id;
platform_set_drvdata(pdev, pdata);
pdata->hwmon_dev = hwmon_device_register(&pdev->dev);
@@ -723,7 +604,7 @@ static int __cpuinit coretemp_device_add(unsigned int cpu)
mutex_lock(&pdev_list_mutex);
pdev = platform_device_alloc(DRVNAME, cpu);
pdev = platform_device_alloc(DRVNAME, TO_PHYS_ID(cpu));
if (!pdev) {
err = -ENOMEM;
pr_err("Device allocation failed\n");
@@ -743,7 +624,7 @@ static int __cpuinit coretemp_device_add(unsigned int cpu)
}
pdev_entry->pdev = pdev;
pdev_entry->phys_proc_id = TO_PHYS_ID(cpu);
pdev_entry->phys_proc_id = pdev->id;
list_add_tail(&pdev_entry->list, &pdev_list);
mutex_unlock(&pdev_list_mutex);
@@ -804,6 +685,10 @@ static void __cpuinit get_core_online(unsigned int cpu)
return;
if (!pdev) {
/* Check the microcode version of the CPU */
if (chk_ucode_version(cpu))
return;
/*
* Alright, we have DTS support.
* We are bringing the _first_ core in this pkg

View file

@@ -72,7 +72,7 @@ struct ds620_data {
char valid; /* !=0 if following fields are valid */
unsigned long last_updated; /* In jiffies */
u16 temp[3]; /* Register values, word */
s16 temp[3]; /* Register values, word */
};
/*
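
[Editor's note: the u16-to-s16 change above is a one-character fix with real consequences — the DS620 reports temperature as a two's-complement word, so an unsigned type turns sub-zero readings into large positive values. A stand-alone C illustration of the difference (generic, not driver code):]

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint16_t raw = 0xFF00;  /* a register word with the sign bit set */

    /* Interpreted as unsigned, the reading is a huge positive value... */
    printf("as u16: %u\n", (unsigned int)raw);   /* 65280 */
    /* ...interpreted as signed two's complement it is negative, as intended. */
    printf("as s16: %d\n", (int)(int16_t)raw);   /* -256 */
    return 0;
}
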

View file

@@ -329,8 +329,8 @@ static int w83791d_detect(struct i2c_client *client,
struct i2c_board_info *info);
static int w83791d_remove(struct i2c_client *client);
static int w83791d_read(struct i2c_client *client, u8 register);
static int w83791d_write(struct i2c_client *client, u8 register, u8 value);
static int w83791d_read(struct i2c_client *client, u8 reg);
static int w83791d_write(struct i2c_client *client, u8 reg, u8 value);
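
[Editor's note: the rename above is required, not cosmetic — register is a C storage-class keyword, so it cannot legally be used as a parameter name in these prototypes.]
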
static struct w83791d_data *w83791d_update_device(struct device *dev);
#ifdef DEBUG

View file

@@ -435,7 +435,12 @@ static int idedisk_prep_fn(struct request_queue *q, struct request *rq)
if (!(rq->cmd_flags & REQ_FLUSH))
return BLKPREP_OK;
cmd = kzalloc(sizeof(*cmd), GFP_ATOMIC);
if (rq->special) {
cmd = rq->special;
memset(cmd, 0, sizeof(*cmd));
} else {
cmd = kzalloc(sizeof(*cmd), GFP_ATOMIC);
}
/* FIXME: map struct ide_taskfile on rq->cmd[] */
BUG_ON(cmd == NULL);
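
[Editor's note: the hunk above prefers storage already attached to the request (rq->special) over a fresh GFP_ATOMIC allocation, which can fail under memory pressure. A generic sketch of the same reuse-or-allocate pattern, with illustrative names:]

#include <stdlib.h>
#include <string.h>

struct cmd { unsigned char data[64]; };

struct cmd *get_cmd(void *preallocated)
{
    struct cmd *cmd;

    if (preallocated) {
        cmd = preallocated;             /* reuse, but start from a clean state */
        memset(cmd, 0, sizeof(*cmd));
    } else {
        cmd = calloc(1, sizeof(*cmd));  /* may return NULL; caller must check */
    }
    return cmd;
}
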

View file

@@ -287,7 +287,7 @@ void __free_ep(struct kref *kref)
if (test_bit(RELEASE_RESOURCES, &ep->com.flags)) {
cxgb3_remove_tid(ep->com.tdev, (void *)ep, ep->hwtid);
dst_release(ep->dst);
l2t_release(L2DATA(ep->com.tdev), ep->l2t);
l2t_release(ep->com.tdev, ep->l2t);
}
kfree(ep);
}
@@ -1178,7 +1178,7 @@ static int act_open_rpl(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
release_tid(ep->com.tdev, GET_TID(rpl), NULL);
cxgb3_free_atid(ep->com.tdev, ep->atid);
dst_release(ep->dst);
l2t_release(L2DATA(ep->com.tdev), ep->l2t);
l2t_release(ep->com.tdev, ep->l2t);
put_ep(&ep->com);
return CPL_RET_BUF_DONE;
}
@@ -1377,7 +1377,7 @@ static int pass_accept_req(struct t3cdev *tdev, struct sk_buff *skb, void *ctx)
if (!child_ep) {
printk(KERN_ERR MOD "%s - failed to allocate ep entry!\n",
__func__);
l2t_release(L2DATA(tdev), l2t);
l2t_release(tdev, l2t);
dst_release(dst);
goto reject;
}
@@ -1956,7 +1956,7 @@ int iwch_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
if (!err)
goto out;
l2t_release(L2DATA(h->rdev.t3cdev_p), ep->l2t);
l2t_release(h->rdev.t3cdev_p, ep->l2t);
fail4:
dst_release(ep->dst);
fail3:
@@ -2127,7 +2127,7 @@ int iwch_ep_redirect(void *ctx, struct dst_entry *old, struct dst_entry *new,
PDBG("%s ep %p redirect to dst %p l2t %p\n", __func__, ep, new,
l2t);
dst_hold(new);
l2t_release(L2DATA(ep->com.tdev), ep->l2t);
l2t_release(ep->com.tdev, ep->l2t);
ep->l2t = l2t;
dst_release(old);
ep->dst = new;

View file

@@ -1124,11 +1124,8 @@ void wacom_setup_input_capabilities(struct input_dev *input_dev,
for (i = 0; i < 8; i++)
__set_bit(BTN_0 + i, input_dev->keybit);
if (wacom_wac->features.type != WACOM_21UX2) {
input_set_abs_params(input_dev, ABS_RX, 0, 4096, 0, 0);
input_set_abs_params(input_dev, ABS_RY, 0, 4096, 0, 0);
}
input_set_abs_params(input_dev, ABS_RX, 0, 4096, 0, 0);
input_set_abs_params(input_dev, ABS_RY, 0, 4096, 0, 0);
input_set_abs_params(input_dev, ABS_Z, -900, 899, 0, 0);
__set_bit(INPUT_PROP_DIRECT, input_dev->propbit);

View file

@@ -1698,6 +1698,8 @@ static int crypt_ctr(struct dm_target *ti, unsigned int argc, char **argv)
}
ti->num_flush_requests = 1;
ti->discard_zeroes_data_unsupported = 1;
return 0;
bad:

View file

@@ -81,8 +81,10 @@ static int parse_features(struct dm_arg_set *as, struct flakey_c *fc,
* corrupt_bio_byte <Nth_byte> <direction> <value> <bio_flags>
*/
if (!strcasecmp(arg_name, "corrupt_bio_byte")) {
if (!argc)
if (!argc) {
ti->error = "Feature corrupt_bio_byte requires parameters";
return -EINVAL;
}
r = dm_read_arg(_args + 1, as, &fc->corrupt_bio_byte, &ti->error);
if (r)

View file

@@ -449,7 +449,7 @@ static int parse_raid_params(struct raid_set *rs, char **argv,
rs->ti->error = "write_mostly option is only valid for RAID1";
return -EINVAL;
}
if (value > rs->md.raid_disks) {
if (value >= rs->md.raid_disks) {
rs->ti->error = "Invalid write_mostly drive index given";
return -EINVAL;
}
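
[Editor's note: the switch from > to >= above closes an off-by-one — with raid_disks devices the valid write_mostly indices are 0 through raid_disks - 1, so an index equal to raid_disks is out of range and must be rejected as well.]
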

View file

@@ -1238,14 +1238,15 @@ static void dm_table_set_integrity(struct dm_table *t)
return;
template_disk = dm_table_get_integrity_disk(t, true);
if (!template_disk &&
blk_integrity_is_initialized(dm_disk(t->md))) {
if (template_disk)
blk_integrity_register(dm_disk(t->md),
blk_get_integrity(template_disk));
else if (blk_integrity_is_initialized(dm_disk(t->md)))
DMWARN("%s: device no longer has a valid integrity profile",
dm_device_name(t->md));
return;
}
blk_integrity_register(dm_disk(t->md),
blk_get_integrity(template_disk));
else
DMWARN("%s: unable to establish an integrity profile",
dm_device_name(t->md));
}
static int device_flush_capable(struct dm_target *ti, struct dm_dev *dev,
@@ -1282,6 +1283,22 @@ static bool dm_table_supports_flush(struct dm_table *t, unsigned flush)
return 0;
}
static bool dm_table_discard_zeroes_data(struct dm_table *t)
{
struct dm_target *ti;
unsigned i = 0;
/* Ensure that all targets support discard_zeroes_data. */
while (i < dm_table_get_num_targets(t)) {
ti = dm_table_get_target(t, i++);
if (ti->discard_zeroes_data_unsupported)
return 0;
}
return 1;
}
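
[Editor's note: the new helper above encodes a capability-intersection rule — the stacked device may advertise discard_zeroes_data only if every target supports it. A generic stand-alone sketch of that rule (not device-mapper code):]

#include <stdbool.h>
#include <stddef.h>

struct target { bool discard_zeroes_data_unsupported; };

bool all_targets_zero_discards(const struct target *tgts, size_t count)
{
    for (size_t i = 0; i < count; i++)
        if (tgts[i].discard_zeroes_data_unsupported)
            return false;   /* one opt-out disables the feature table-wide */
    return true;
}
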
void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
struct queue_limits *limits)
{
@@ -1304,6 +1321,9 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
}
blk_queue_flush(q, flush);
if (!dm_table_discard_zeroes_data(t))
q->limits.discard_zeroes_data = 0;
dm_table_set_integrity(t);
/*

View file

@@ -61,6 +61,11 @@
static void autostart_arrays(int part);
#endif
/* pers_list is a list of registered personalities protected
* by pers_lock.
* pers_lock does extra service to protect accesses to
* mddev->thread when the mutex cannot be held.
*/
static LIST_HEAD(pers_list);
static DEFINE_SPINLOCK(pers_lock);
@@ -739,7 +744,12 @@ static void mddev_unlock(mddev_t * mddev)
} else
mutex_unlock(&mddev->reconfig_mutex);
/* As we've dropped the mutex we need a spinlock to
* make sure the thread doesn't disappear
*/
spin_lock(&pers_lock);
md_wakeup_thread(mddev->thread);
spin_unlock(&pers_lock);
}
static mdk_rdev_t * find_rdev_nr(mddev_t *mddev, int nr)
@@ -6429,11 +6439,18 @@ mdk_thread_t *md_register_thread(void (*run) (mddev_t *), mddev_t *mddev,
return thread;
}
void md_unregister_thread(mdk_thread_t *thread)
void md_unregister_thread(mdk_thread_t **threadp)
{
mdk_thread_t *thread = *threadp;
if (!thread)
return;
dprintk("interrupting MD-thread pid %d\n", task_pid_nr(thread->tsk));
/* Locking ensures that mddev_unlock does not wake_up a
* non-existent thread
*/
spin_lock(&pers_lock);
*threadp = NULL;
spin_unlock(&pers_lock);
kthread_stop(thread->tsk);
kfree(thread);
@@ -7340,8 +7357,7 @@ static void reap_sync_thread(mddev_t *mddev)
mdk_rdev_t *rdev;
/* resync has finished, collect result */
md_unregister_thread(mddev->sync_thread);
mddev->sync_thread = NULL;
md_unregister_thread(&mddev->sync_thread);
if (!test_bit(MD_RECOVERY_INTR, &mddev->recovery) &&
!test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery)) {
/* success...*/
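
[Editor's note: the md_unregister_thread() signature change above exists so the thread pointer can be cleared under pers_lock before the thread is stopped; mddev_unlock()'s wakeup then sees either a valid thread or NULL, never a freed one. A minimal pthread-flavoured sketch of the pattern, with illustrative names that do not come from md.c:]

#include <pthread.h>
#include <stdlib.h>

static pthread_mutex_t reg_lock = PTHREAD_MUTEX_INITIALIZER;

struct worker { int wakeups; /* thread handle etc. would live here */ };

void wake_worker(struct worker **wp)
{
    pthread_mutex_lock(&reg_lock);
    if (*wp)                 /* pointer is stable while the lock is held */
        (*wp)->wakeups++;    /* stand-in for the real wakeup */
    pthread_mutex_unlock(&reg_lock);
}

void unregister_worker(struct worker **wp)
{
    struct worker *w = *wp;

    if (!w)
        return;
    pthread_mutex_lock(&reg_lock);
    *wp = NULL;              /* wakers now observe NULL, not a stale pointer */
    pthread_mutex_unlock(&reg_lock);
    free(w);                 /* stop and free outside the lock */
}
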

View file

@@ -560,7 +560,7 @@ extern int register_md_personality(struct mdk_personality *p);
extern int unregister_md_personality(struct mdk_personality *p);
extern mdk_thread_t * md_register_thread(void (*run) (mddev_t *mddev),
mddev_t *mddev, const char *name);
extern void md_unregister_thread(mdk_thread_t *thread);
extern void md_unregister_thread(mdk_thread_t **threadp);
extern void md_wakeup_thread(mdk_thread_t *thread);
extern void md_check_recovery(mddev_t *mddev);
extern void md_write_start(mddev_t *mddev, struct bio *bi);

View file

@@ -514,8 +514,7 @@ static int multipath_stop (mddev_t *mddev)
{
multipath_conf_t *conf = mddev->private;
md_unregister_thread(mddev->thread);
mddev->thread = NULL;
md_unregister_thread(&mddev->thread);
blk_sync_queue(mddev->queue); /* the unplug fn references 'conf'*/
mempool_destroy(conf->pool);
kfree(conf->multipaths);

View file

@@ -2562,8 +2562,7 @@ static int stop(mddev_t *mddev)
raise_barrier(conf);
lower_barrier(conf);
md_unregister_thread(mddev->thread);
mddev->thread = NULL;
md_unregister_thread(&mddev->thread);
if (conf->r1bio_pool)
mempool_destroy(conf->r1bio_pool);
kfree(conf->mirrors);

View file

@@ -2955,7 +2955,7 @@ static int run(mddev_t *mddev)
return 0;
out_free_conf:
md_unregister_thread(mddev->thread);
md_unregister_thread(&mddev->thread);
if (conf->r10bio_pool)
mempool_destroy(conf->r10bio_pool);
safe_put_page(conf->tmppage);
@@ -2973,8 +2973,7 @@ static int stop(mddev_t *mddev)
raise_barrier(conf, 0);
lower_barrier(conf);
md_unregister_thread(mddev->thread);
mddev->thread = NULL;
md_unregister_thread(&mddev->thread);
blk_sync_queue(mddev->queue); /* the unplug fn references 'conf'*/
if (conf->r10bio_pool)
mempool_destroy(conf->r10bio_pool);

View file

@@ -4941,8 +4941,7 @@ static int run(mddev_t *mddev)
return 0;
abort:
md_unregister_thread(mddev->thread);
mddev->thread = NULL;
md_unregister_thread(&mddev->thread);
if (conf) {
print_raid5_conf(conf);
free_conf(conf);
@@ -4956,8 +4955,7 @@ static int stop(mddev_t *mddev)
{
raid5_conf_t *conf = mddev->private;
md_unregister_thread(mddev->thread);
mddev->thread = NULL;
md_unregister_thread(&mddev->thread);
if (mddev->queue)
mddev->queue->backing_dev_info.congested_fn = NULL;
free_conf(conf);

View file

@@ -2194,19 +2194,6 @@ static int __init omap_vout_probe(struct platform_device *pdev)
"'%s' Display already enabled\n",
def_display->name);
}
/* set the update mode */
if (def_display->caps &
OMAP_DSS_DISPLAY_CAP_MANUAL_UPDATE) {
if (dssdrv->enable_te)
dssdrv->enable_te(def_display, 0);
if (dssdrv->set_update_mode)
dssdrv->set_update_mode(def_display,
OMAP_DSS_UPDATE_MANUAL);
} else {
if (dssdrv->set_update_mode)
dssdrv->set_update_mode(def_display,
OMAP_DSS_UPDATE_AUTO);
}
}
}

View file

@@ -31,6 +31,7 @@
#include <linux/dma-mapping.h>
#include <linux/mm.h>
#include <linux/sched.h>
#include <linux/slab.h>
#include <media/v4l2-event.h>
#include "isp.h"

View file

@@ -1961,7 +1961,7 @@ static int __uvc_resume(struct usb_interface *intf, int reset)
list_for_each_entry(stream, &dev->streams, list) {
if (stream->intf == intf)
return uvc_video_resume(stream);
return uvc_video_resume(stream, reset);
}
uvc_trace(UVC_TRACE_SUSPEND, "Resume: video streaming USB interface "

View file

@@ -49,7 +49,7 @@ static int uvc_mc_register_entity(struct uvc_video_chain *chain,
if (remote == NULL)
return -EINVAL;
source = (UVC_ENTITY_TYPE(remote) != UVC_TT_STREAMING)
source = (UVC_ENTITY_TYPE(remote) == UVC_TT_STREAMING)
? (remote->vdev ? &remote->vdev->entity : NULL)
: &remote->subdev.entity;
if (source == NULL)

View file

@@ -1104,10 +1104,18 @@ int uvc_video_suspend(struct uvc_streaming *stream)
* buffers, making sure userspace applications are notified of the problem
* instead of waiting forever.
*/
int uvc_video_resume(struct uvc_streaming *stream)
int uvc_video_resume(struct uvc_streaming *stream, int reset)
{
int ret;
/* If the bus has been reset on resume, set the alternate setting to 0.
* This should be the default value, but some devices crash or otherwise
* misbehave if they don't receive a SET_INTERFACE request before any
* other video control request.
*/
if (reset)
usb_set_interface(stream->dev->udev, stream->intfnum, 0);
stream->frozen = 0;
ret = uvc_commit_video(stream, &stream->ctrl);

View file

@@ -638,7 +638,7 @@ extern void uvc_mc_cleanup_entity(struct uvc_entity *entity);
/* Video */
extern int uvc_video_init(struct uvc_streaming *stream);
extern int uvc_video_suspend(struct uvc_streaming *stream);
extern int uvc_video_resume(struct uvc_streaming *stream);
extern int uvc_video_resume(struct uvc_streaming *stream, int reset);
extern int uvc_video_enable(struct uvc_streaming *stream, int enable);
extern int uvc_probe_video(struct uvc_streaming *stream,
struct uvc_streaming_control *probe);

View file

@@ -173,6 +173,17 @@ static void v4l2_device_release(struct device *cd)
media_device_unregister_entity(&vdev->entity);
#endif
/* Do not call v4l2_device_put if there is no release callback set.
* Drivers that have no v4l2_device release callback might free the
* v4l2_dev instance in the video_device release callback below, so we
* must perform this check here.
*
* TODO: In the long run all drivers that use v4l2_device should use the
* v4l2_device release callback. This check will then be unnecessary.
*/
if (v4l2_dev->release == NULL)
v4l2_dev = NULL;
/* Release video_device and perform other
cleanups as needed. */
vdev->release(vdev);

View file

@@ -38,6 +38,7 @@ int v4l2_device_register(struct device *dev, struct v4l2_device *v4l2_dev)
mutex_init(&v4l2_dev->ioctl_lock);
v4l2_prio_init(&v4l2_dev->prio);
kref_init(&v4l2_dev->ref);
get_device(dev);
v4l2_dev->dev = dev;
if (dev == NULL) {
/* If dev == NULL, then name must be filled in by the caller */
@@ -93,6 +94,7 @@ void v4l2_device_disconnect(struct v4l2_device *v4l2_dev)
if (dev_get_drvdata(v4l2_dev->dev) == v4l2_dev)
dev_set_drvdata(v4l2_dev->dev, NULL);
put_device(v4l2_dev->dev);
v4l2_dev->dev = NULL;
}
EXPORT_SYMBOL_GPL(v4l2_device_disconnect);

View file

@@ -273,7 +273,7 @@ static int __devinit jz4740_adc_probe(struct platform_device *pdev)
ct->regs.ack = JZ_REG_ADC_STATUS;
ct->chip.irq_mask = irq_gc_mask_set_bit;
ct->chip.irq_unmask = irq_gc_mask_clr_bit;
ct->chip.irq_ack = irq_gc_ack;
ct->chip.irq_ack = irq_gc_ack_set_bit;
irq_setup_generic_chip(gc, IRQ_MSK(5), 0, 0, IRQ_NOPROBE | IRQ_LEVEL);

View file

@@ -375,12 +375,14 @@ void lis3lv02d_poweron(struct lis3lv02d *lis3)
* both have been read. So the value read will always be correct.
* Set BOOT bit to refresh factory tuning values.
*/
lis3->read(lis3, CTRL_REG2, &reg);
if (lis3->whoami == WAI_12B)
reg |= CTRL2_BDU | CTRL2_BOOT;
else
reg |= CTRL2_BOOT_8B;
lis3->write(lis3, CTRL_REG2, reg);
if (lis3->pdata) {
lis3->read(lis3, CTRL_REG2, &reg);
if (lis3->whoami == WAI_12B)
reg |= CTRL2_BDU | CTRL2_BOOT;
else
reg |= CTRL2_BOOT_8B;
lis3->write(lis3, CTRL_REG2, reg);
}
/* LIS3 power on delay is quite long */
msleep(lis3->pwron_delay / lis3lv02d_get_odr());

View file

@@ -2168,7 +2168,8 @@ void bond_3ad_state_machine_handler(struct work_struct *work)
}
re_arm:
queue_delayed_work(bond->wq, &bond->ad_work, ad_delta_in_ticks);
if (!bond->kill_timers)
queue_delayed_work(bond->wq, &bond->ad_work, ad_delta_in_ticks);
out:
read_unlock(&bond->lock);
}

View file

@@ -1440,7 +1440,8 @@ void bond_alb_monitor(struct work_struct *work)
}
re_arm:
queue_delayed_work(bond->wq, &bond->alb_work, alb_delta_in_ticks);
if (!bond->kill_timers)
queue_delayed_work(bond->wq, &bond->alb_work, alb_delta_in_ticks);
out:
read_unlock(&bond->lock);
}

View file

@@ -774,6 +774,9 @@ static void bond_resend_igmp_join_requests(struct bonding *bond)
read_lock(&bond->lock);
if (bond->kill_timers)
goto out;
/* rejoin all groups on bond device */
__bond_resend_igmp_join_requests(bond->dev);
@@ -787,9 +790,9 @@ static void bond_resend_igmp_join_requests(struct bonding *bond)
__bond_resend_igmp_join_requests(vlan_dev);
}
if (--bond->igmp_retrans > 0)
if ((--bond->igmp_retrans > 0) && !bond->kill_timers)
queue_delayed_work(bond->wq, &bond->mcast_work, HZ/5);
out:
read_unlock(&bond->lock);
}
@@ -2535,7 +2538,7 @@ void bond_mii_monitor(struct work_struct *work)
}
re_arm:
if (bond->params.miimon)
if (bond->params.miimon && !bond->kill_timers)
queue_delayed_work(bond->wq, &bond->mii_work,
msecs_to_jiffies(bond->params.miimon));
out:
@@ -2883,7 +2886,7 @@ void bond_loadbalance_arp_mon(struct work_struct *work)
}
re_arm:
if (bond->params.arp_interval)
if (bond->params.arp_interval && !bond->kill_timers)
queue_delayed_work(bond->wq, &bond->arp_work, delta_in_ticks);
out:
read_unlock(&bond->lock);
@@ -3151,7 +3154,7 @@ void bond_activebackup_arp_mon(struct work_struct *work)
bond_ab_arp_probe(bond);
re_arm:
if (bond->params.arp_interval)
if (bond->params.arp_interval && !bond->kill_timers)
queue_delayed_work(bond->wq, &bond->arp_work, delta_in_ticks);
out:
read_unlock(&bond->lock);
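
[Editor's note: all of the bonding hunks above apply the same guard — a periodic monitor may re-queue its own delayed work only while kill_timers is clear, so a cancelled timer chain cannot re-arm itself during teardown. A compact stand-alone sketch of the guard, with generic names rather than the driver's:]

#include <stdio.h>

struct monitor { int kill_timers; };

static void queue_delayed_work(struct monitor *m)
{
    (void)m;
    printf("re-armed\n");        /* stand-in for the real workqueue call */
}

static void monitor_round(struct monitor *m)
{
    /* ...one round of link monitoring would run here... */
    if (!m->kill_timers)         /* the guard these hunks add */
        queue_delayed_work(m);
}

int main(void)
{
    struct monitor m = { 0 };
    monitor_round(&m);           /* re-arms */
    m.kill_timers = 1;           /* teardown begins */
    monitor_round(&m);           /* silently stops the chain */
    return 0;
}
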

View file

@@ -2123,6 +2123,7 @@ static u8 bnx2x_dcbnl_get_cap(struct net_device *netdev, int capid, u8 *cap)
break;
case DCB_CAP_ATTR_DCBX:
*cap = BNX2X_DCBX_CAPS;
break;
default:
rval = -EINVAL;
break;
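
[Editor's note: the added break above fixes a switch fall-through — without it, a successful DCB_CAP_ATTR_DCBX lookup fell into the default branch and returned -EINVAL.]
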

View file

@@ -4937,7 +4937,7 @@ static void bnx2x_init_def_sb(struct bnx2x *bp)
int igu_seg_id;
int port = BP_PORT(bp);
int func = BP_FUNC(bp);
int reg_offset;
int reg_offset, reg_offset_en5;
u64 section;
int index;
struct hc_sp_status_block_data sp_sb_data;
@@ -4960,6 +4960,8 @@ static void bnx2x_init_def_sb(struct bnx2x *bp)
reg_offset = (port ? MISC_REG_AEU_ENABLE1_FUNC_1_OUT_0 :
MISC_REG_AEU_ENABLE1_FUNC_0_OUT_0);
reg_offset_en5 = (port ? MISC_REG_AEU_ENABLE5_FUNC_1_OUT_0 :
MISC_REG_AEU_ENABLE5_FUNC_0_OUT_0);
for (index = 0; index < MAX_DYNAMIC_ATTN_GRPS; index++) {
int sindex;
/* take care of sig[0]..sig[4] */
@@ -4974,7 +4976,7 @@ static void bnx2x_init_def_sb(struct bnx2x *bp)
* and not 16 between the different groups
*/
bp->attn_group[index].sig[4] = REG_RD(bp,
reg_offset + 0x10 + 0x4*index);
reg_offset_en5 + 0x4*index);
else
bp->attn_group[index].sig[4] = 0;
}
@@ -7619,8 +7621,11 @@ u32 bnx2x_send_unload_req(struct bnx2x *bp, int unload_mode)
u32 emac_base = port ? GRCBASE_EMAC1 : GRCBASE_EMAC0;
u8 *mac_addr = bp->dev->dev_addr;
u32 val;
u16 pmc;
/* The mac address is written to entries 1-4 to
preserve entry 0 which is used by the PMF */
* preserve entry 0 which is used by the PMF
*/
u8 entry = (BP_VN(bp) + 1)*8;
val = (mac_addr[0] << 8) | mac_addr[1];
@@ -7630,6 +7635,11 @@ u32 bnx2x_send_unload_req(struct bnx2x *bp, int unload_mode)
(mac_addr[4] << 8) | mac_addr[5];
EMAC_WR(bp, EMAC_REG_EMAC_MAC_MATCH + entry + 4, val);
/* Enable the PME and clear the status */
pci_read_config_word(bp->pdev, bp->pm_cap + PCI_PM_CTRL, &pmc);
pmc |= PCI_PM_CTRL_PME_ENABLE | PCI_PM_CTRL_PME_STATUS;
pci_write_config_word(bp->pdev, bp->pm_cap + PCI_PM_CTRL, pmc);
reset_code = DRV_MSG_CODE_UNLOAD_REQ_WOL_EN;
} else

View file

@@ -1384,6 +1384,18 @@
Latched ump_tx_parity; [31] MCP Latched scpad_parity; */
#define MISC_REG_AEU_ENABLE4_PXP_0 0xa108
#define MISC_REG_AEU_ENABLE4_PXP_1 0xa1a8
/* [RW 32] fifth 32b for enabling the output for function 0 output0. Mapped
* as follows: [0] PGLUE config_space; [1] PGLUE misc_flr; [2] PGLUE B RBC
* attention [3] PGLUE B RBC parity; [4] ATC attention; [5] ATC parity; [6]
* mstat0 attention; [7] mstat0 parity; [8] mstat1 attention; [9] mstat1
* parity; [31-10] Reserved; */
#define MISC_REG_AEU_ENABLE5_FUNC_0_OUT_0 0xa688
/* [RW 32] Fifth 32b for enabling the output for function 1 output0. Mapped
* as follows: [0] PGLUE config_space; [1] PGLUE misc_flr; [2] PGLUE B RBC
* attention [3] PGLUE B RBC parity; [4] ATC attention; [5] ATC parity; [6]
* mstat0 attention; [7] mstat0 parity; [8] mstat1 attention; [9] mstat1
* parity; [31-10] Reserved; */
#define MISC_REG_AEU_ENABLE5_FUNC_1_OUT_0 0xa6b0
/* [RW 1] set/clr general attention 0; this will set/clr bit 94 in the aeu
128 bit vector */
#define MISC_REG_AEU_GENERAL_ATTN_0 0xa000

View file

@@ -1146,12 +1146,14 @@ static void cxgb_redirect(struct dst_entry *old, struct dst_entry *new)
if (te && te->ctx && te->client && te->client->redirect) {
update_tcb = te->client->redirect(te->ctx, old, new, e);
if (update_tcb) {
rcu_read_lock();
l2t_hold(L2DATA(tdev), e);
rcu_read_unlock();
set_l2t_ix(tdev, tid, e);
}
}
}
l2t_release(L2DATA(tdev), e);
l2t_release(tdev, e);
}
/*
@@ -1264,7 +1266,7 @@ int cxgb3_offload_activate(struct adapter *adapter)
goto out_free;
err = -ENOMEM;
L2DATA(dev) = t3_init_l2t(l2t_capacity);
RCU_INIT_POINTER(dev->l2opt, t3_init_l2t(l2t_capacity));
if (!L2DATA(dev))
goto out_free;
@@ -1298,16 +1300,24 @@ int cxgb3_offload_activate(struct adapter *adapter)
out_free_l2t:
t3_free_l2t(L2DATA(dev));
L2DATA(dev) = NULL;
rcu_assign_pointer(dev->l2opt, NULL);
out_free:
kfree(t);
return err;
}
static void clean_l2_data(struct rcu_head *head)
{
struct l2t_data *d = container_of(head, struct l2t_data, rcu_head);
t3_free_l2t(d);
}
void cxgb3_offload_deactivate(struct adapter *adapter)
{
struct t3cdev *tdev = &adapter->tdev;
struct t3c_data *t = T3C_DATA(tdev);
struct l2t_data *d;
remove_adapter(adapter);
if (list_empty(&adapter_list))
@@ -1315,8 +1325,11 @@ void cxgb3_offload_deactivate(struct adapter *adapter)
free_tid_maps(&t->tid_maps);
T3C_DATA(tdev) = NULL;
t3_free_l2t(L2DATA(tdev));
L2DATA(tdev) = NULL;
rcu_read_lock();
d = L2DATA(tdev);
rcu_read_unlock();
rcu_assign_pointer(tdev->l2opt, NULL);
call_rcu(&d->rcu_head, clean_l2_data);
if (t->nofail_skb)
kfree_skb(t->nofail_skb);
kfree(t);

View file

@@ -300,14 +300,21 @@ static inline void reuse_entry(struct l2t_entry *e, struct neighbour *neigh)
struct l2t_entry *t3_l2t_get(struct t3cdev *cdev, struct neighbour *neigh,
struct net_device *dev)
{
struct l2t_entry *e;
struct l2t_data *d = L2DATA(cdev);
struct l2t_entry *e = NULL;
struct l2t_data *d;
int hash;
u32 addr = *(u32 *) neigh->primary_key;
int ifidx = neigh->dev->ifindex;
int hash = arp_hash(addr, ifidx, d);
struct port_info *p = netdev_priv(dev);
int smt_idx = p->port_id;
rcu_read_lock();
d = L2DATA(cdev);
if (!d)
goto done_rcu;
hash = arp_hash(addr, ifidx, d);
write_lock_bh(&d->lock);
for (e = d->l2tab[hash].first; e; e = e->next)
if (e->addr == addr && e->ifindex == ifidx &&
@@ -338,6 +345,8 @@ struct l2t_entry *t3_l2t_get(struct t3cdev *cdev, struct neighbour *neigh,
}
done:
write_unlock_bh(&d->lock);
done_rcu:
rcu_read_unlock();
return e;
}

View file

@@ -76,6 +76,7 @@ struct l2t_data {
atomic_t nfree; /* number of free entries */
rwlock_t lock;
struct l2t_entry l2tab[0];
struct rcu_head rcu_head; /* to handle rcu cleanup */
};
typedef void (*arp_failure_handler_func)(struct t3cdev * dev,
@@ -99,7 +100,7 @@ static inline void set_arp_failure_handler(struct sk_buff *skb,
/*
* Getting to the L2 data from an offload device.
*/
#define L2DATA(dev) ((dev)->l2opt)
#define L2DATA(cdev) (rcu_dereference((cdev)->l2opt))
#define W_TCB_L2T_IX 0
#define S_TCB_L2T_IX 7
@@ -126,15 +127,22 @@ static inline int l2t_send(struct t3cdev *dev, struct sk_buff *skb,
return t3_l2t_send_slow(dev, skb, e);
}
static inline void l2t_release(struct l2t_data *d, struct l2t_entry *e)
static inline void l2t_release(struct t3cdev *t, struct l2t_entry *e)
{
if (atomic_dec_and_test(&e->refcnt))
struct l2t_data *d;
rcu_read_lock();
d = L2DATA(t);
if (atomic_dec_and_test(&e->refcnt) && d)
t3_l2e_free(d, e);
rcu_read_unlock();
}
static inline void l2t_hold(struct l2t_data *d, struct l2t_entry *e)
{
if (atomic_add_return(1, &e->refcnt) == 1) /* 0 -> 1 transition */
if (d && atomic_add_return(1, &e->refcnt) == 1) /* 0 -> 1 transition */
atomic_dec(&d->nfree);
}
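
[Editor's note: the l2t changes above convert t3cdev->l2opt to RCU — readers may now observe a NULL table during teardown and must dereference the pointer inside a read-side critical section, while the updater unpublishes it and defers the free past a grace period. A kernel-context sketch of the two sides; it uses the real RCU primitives and the diff's clean_l2_data callback, but reader()/updater() are illustrative names and the snippet is not compilable stand-alone:]

/* reader side, as in t3_l2t_get() and l2t_release() */
static void reader(struct t3cdev *tdev)
{
    struct l2t_data *d;

    rcu_read_lock();
    d = rcu_dereference(tdev->l2opt);  /* may be NULL during teardown */
    if (d) {
        /* all uses of d must complete before rcu_read_unlock() */
    }
    rcu_read_unlock();
}

/* updater side, as in cxgb3_offload_deactivate() */
static void updater(struct t3cdev *tdev, struct l2t_data *d)
{
    rcu_assign_pointer(tdev->l2opt, NULL);   /* unpublish the table */
    call_rcu(&d->rcu_head, clean_l2_data);   /* free after a grace period */
}
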

View file

@@ -3715,6 +3715,9 @@ static int __devinit init_one(struct pci_dev *pdev,
setup_debugfs(adapter);
}
/* PCIe EEH recovery on powerpc platforms needs fundamental reset */
pdev->needs_freset = 1;
if (is_offload(adapter))
attach_ulds(adapter);

View file

@@ -636,8 +636,8 @@ static int ibmveth_open(struct net_device *netdev)
netdev_err(netdev, "unable to request irq 0x%x, rc %d\n",
netdev->irq, rc);
do {
rc = h_free_logical_lan(adapter->vdev->unit_address);
} while (H_IS_LONG_BUSY(rc) || (rc == H_BUSY));
lpar_rc = h_free_logical_lan(adapter->vdev->unit_address);
} while (H_IS_LONG_BUSY(lpar_rc) || (lpar_rc == H_BUSY));
goto err_out;
}

View file

@@ -1198,6 +1198,8 @@ static irqreturn_t pch_gbe_intr(int irq, void *data)
iowrite32((int_en & ~PCH_GBE_INT_RX_FIFO_ERR),
&hw->reg->INT_EN);
pch_gbe_stop_receive(adapter);
int_st |= ioread32(&hw->reg->INT_ST);
int_st = int_st & ioread32(&hw->reg->INT_EN);
}
if (int_st & PCH_GBE_INT_RX_DMA_ERR)
adapter->stats.intr_rx_dma_err_count++;
@@ -1217,14 +1219,11 @@ static irqreturn_t pch_gbe_intr(int irq, void *data)
/* Set Pause packet */
pch_gbe_mac_set_pause_packet(hw);
}
if ((int_en & (PCH_GBE_INT_RX_DMA_CMPLT | PCH_GBE_INT_TX_CMPLT))
== 0) {
return IRQ_HANDLED;
}
}
/* When request status is Receive interruption */
if ((int_st & (PCH_GBE_INT_RX_DMA_CMPLT | PCH_GBE_INT_TX_CMPLT))) {
if ((int_st & (PCH_GBE_INT_RX_DMA_CMPLT | PCH_GBE_INT_TX_CMPLT)) ||
(adapter->rx_stop_flag == true)) {
if (likely(napi_schedule_prep(&adapter->napi))) {
/* Enable only Rx Descriptor empty */
atomic_inc(&adapter->irq_sem);
@@ -1384,7 +1383,7 @@ pch_gbe_clean_tx(struct pch_gbe_adapter *adapter,
struct sk_buff *skb;
unsigned int i;
unsigned int cleaned_count = 0;
bool cleaned = false;
bool cleaned = true;
pr_debug("next_to_clean : %d\n", tx_ring->next_to_clean);
@@ -1395,7 +1394,6 @@ pch_gbe_clean_tx(struct pch_gbe_adapter *adapter,
while ((tx_desc->gbec_status & DSC_INIT16) == 0x0000) {
pr_debug("gbec_status:0x%04x\n", tx_desc->gbec_status);
cleaned = true;
buffer_info = &tx_ring->buffer_info[i];
skb = buffer_info->skb;
@@ -1438,8 +1436,10 @@ pch_gbe_clean_tx(struct pch_gbe_adapter *adapter,
tx_desc = PCH_GBE_TX_DESC(*tx_ring, i);
/* weight of a sort for tx, to avoid endless transmit cleanup */
if (cleaned_count++ == PCH_GBE_TX_WEIGHT)
if (cleaned_count++ == PCH_GBE_TX_WEIGHT) {
cleaned = false;
break;
}
}
pr_debug("called pch_gbe_unmap_and_free_tx_resource() %d count\n",
cleaned_count);
@@ -2167,7 +2167,6 @@ static int pch_gbe_napi_poll(struct napi_struct *napi, int budget)
{
struct pch_gbe_adapter *adapter =
container_of(napi, struct pch_gbe_adapter, napi);
struct net_device *netdev = adapter->netdev;
int work_done = 0;
bool poll_end_flag = false;
bool cleaned = false;
@@ -2175,33 +2174,32 @@ static int pch_gbe_napi_poll(struct napi_struct *napi, int budget)
pr_debug("budget : %d\n", budget);
/* Keep link state information with original netdev */
if (!netif_carrier_ok(netdev)) {
pch_gbe_clean_rx(adapter, adapter->rx_ring, &work_done, budget);
cleaned = pch_gbe_clean_tx(adapter, adapter->tx_ring);
if (!cleaned)
work_done = budget;
/* If no Tx and not enough Rx work done,
* exit the polling mode
*/
if (work_done < budget)
poll_end_flag = true;
} else {
pch_gbe_clean_rx(adapter, adapter->rx_ring, &work_done, budget);
if (poll_end_flag) {
napi_complete(napi);
if (adapter->rx_stop_flag) {
adapter->rx_stop_flag = false;
pch_gbe_start_receive(&adapter->hw);
}
pch_gbe_irq_enable(adapter);
} else
if (adapter->rx_stop_flag) {
adapter->rx_stop_flag = false;
pch_gbe_start_receive(&adapter->hw);
int_en = ioread32(&adapter->hw.reg->INT_EN);
iowrite32((int_en | PCH_GBE_INT_RX_FIFO_ERR),
&adapter->hw.reg->INT_EN);
&adapter->hw.reg->INT_EN);
}
cleaned = pch_gbe_clean_tx(adapter, adapter->tx_ring);
if (cleaned)
work_done = budget;
/* If no Tx and not enough Rx work done,
* exit the polling mode
*/
if ((work_done < budget) || !netif_running(netdev))
poll_end_flag = true;
}
if (poll_end_flag) {
napi_complete(napi);
pch_gbe_irq_enable(adapter);
}
pr_debug("poll_end_flag : %d work_done : %d budget : %d\n",
poll_end_flag, work_done, budget);

View file

@@ -239,7 +239,7 @@ static int macvlan_queue_xmit(struct sk_buff *skb, struct net_device *dev)
dest = macvlan_hash_lookup(port, eth->h_dest);
if (dest && dest->mode == MACVLAN_MODE_BRIDGE) {
/* send to lowerdev first for its network taps */
vlan->forward(vlan->lowerdev, skb);
dev_forward_skb(vlan->lowerdev, skb);
return NET_XMIT_SUCCESS;
}

View file

@@ -686,7 +686,7 @@ static void decode_rxts(struct dp83640_private *dp83640,
prune_rx_ts(dp83640);
if (list_empty(&dp83640->rxpool)) {
pr_warning("dp83640: rx timestamp pool is empty\n");
pr_debug("dp83640: rx timestamp pool is empty\n");
goto out;
}
rxts = list_first_entry(&dp83640->rxpool, struct rxts, list);
@@ -709,7 +709,7 @@ static void decode_txts(struct dp83640_private *dp83640,
skb = skb_dequeue(&dp83640->tx_queue);
if (!skb) {
pr_warning("dp83640: have timestamp but tx_queue empty\n");
pr_debug("dp83640: have timestamp but tx_queue empty\n");
return;
}
ns = phy2txts(phy_txts);

View file

@@ -327,12 +327,12 @@ int xenvif_connect(struct xenvif *vif, unsigned long tx_ring_ref,
xenvif_get(vif);
rtnl_lock();
if (netif_running(vif->dev))
xenvif_up(vif);
if (!vif->can_sg && vif->dev->mtu > ETH_DATA_LEN)
dev_set_mtu(vif->dev, ETH_DATA_LEN);
netdev_update_features(vif->dev);
netif_carrier_on(vif->dev);
if (netif_running(vif->dev))
xenvif_up(vif);
rtnl_unlock();
return 0;

View file

@@ -77,7 +77,7 @@ unsigned long pci_cardbus_mem_size = DEFAULT_CARDBUS_MEM_SIZE;
unsigned long pci_hotplug_io_size = DEFAULT_HOTPLUG_IO_SIZE;
unsigned long pci_hotplug_mem_size = DEFAULT_HOTPLUG_MEM_SIZE;
enum pcie_bus_config_types pcie_bus_config = PCIE_BUS_SAFE;
enum pcie_bus_config_types pcie_bus_config = PCIE_BUS_TUNE_OFF;
/*
* The default CLS is used if arch didn't set CLS explicitly and not
@@ -3568,10 +3568,14 @@ static int __init pci_setup(char *str)
pci_hotplug_io_size = memparse(str + 9, &str);
} else if (!strncmp(str, "hpmemsize=", 10)) {
pci_hotplug_mem_size = memparse(str + 10, &str);
} else if (!strncmp(str, "pcie_bus_tune_off", 17)) {
pcie_bus_config = PCIE_BUS_TUNE_OFF;
} else if (!strncmp(str, "pcie_bus_safe", 13)) {
pcie_bus_config = PCIE_BUS_SAFE;
} else if (!strncmp(str, "pcie_bus_perf", 13)) {
pcie_bus_config = PCIE_BUS_PERFORMANCE;
} else if (!strncmp(str, "pcie_bus_peer2peer", 18)) {
pcie_bus_config = PCIE_BUS_PEER2PEER;
} else {
printk(KERN_ERR "PCI: Unknown option `%s'\n",
str);
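
[Editor's note: the new options above are selected via the pci= kernel parameter at boot, for example:

    pci=pcie_bus_safe
    pci=pcie_bus_peer2peer

pcie_bus_safe picks the largest MPS every device below a root port supports, while pcie_bus_peer2peer clamps the whole fabric to the minimum so peer-to-peer DMA cannot exceed any endpoint's capability; see the probe.c hunk below.]
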

View file

@@ -1458,12 +1458,24 @@ static int pcie_bus_configure_set(struct pci_dev *dev, void *data)
*/
void pcie_bus_configure_settings(struct pci_bus *bus, u8 mpss)
{
u8 smpss = mpss;
u8 smpss;
if (!pci_is_pcie(bus->self))
return;
if (pcie_bus_config == PCIE_BUS_TUNE_OFF)
return;
/* FIXME - Peer to peer DMA is possible, though the endpoint would need
* to be aware to the MPS of the destination. To work around this,
* simply force the MPS of the entire system to the smallest possible.
*/
if (pcie_bus_config == PCIE_BUS_PEER2PEER)
smpss = 0;
if (pcie_bus_config == PCIE_BUS_SAFE) {
smpss = mpss;
pcie_find_smpss(bus->self, &smpss);
pci_walk_bus(bus, pcie_find_smpss, &smpss);
}
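
[Editor's note: the smpss forced to 0 above is the PCIe-encoded maximum payload size, where the byte count is 128 << encoding; 0 therefore clamps the whole fabric to 128-byte payloads, the one size every device must support. A stand-alone illustration:]

#include <stdio.h>

int main(void)
{
    /* PCIe Max_Payload_Size encodings 0..5 map to 128..4096 bytes. */
    for (unsigned int enc = 0; enc <= 5; enc++)
        printf("encoding %u -> %4d bytes\n", enc, 128 << enc);
    return 0;
}
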

Some files were not shown because too many files have changed in this diff