License cleanup: add SPDX GPL-2.0 license identifier to files with no license
Many source files in the tree are missing licensing information, which
makes it harder for compliance tools to determine the correct license.
By default all files without license information are under the default
license of the kernel, which is GPL version 2.
Update the files which contain no license information with the 'GPL-2.0'
SPDX license identifier. The SPDX identifier is a legally binding
shorthand, which can be used instead of the full boilerplate text.
This patch is based on work done by Thomas Gleixner, Kate Stewart, and
Philippe Ombredanne.
How this work was done:
Patches were generated and checked against linux-4.14-rc6 for a subset of
the use cases:
- file had no licensing information in it,
- file was a */uapi/* one with no licensing information in it,
- file was a */uapi/* one with existing licensing information.
Further patches will be generated in subsequent months to fix up cases
where non-standard license headers were used, and references to license
had to be inferred by heuristics based on keywords.
The analysis to determine which SPDX License Identifier should be applied to
a file was done in a spreadsheet of side-by-side results from the
output of two independent scanners (ScanCode & Windriver) producing SPDX
tag:value files created by Philippe Ombredanne. Philippe prepared the
base worksheet, and did an initial spot review of a few thousand files.
The 4.13 kernel was the starting point of the analysis with 60,537 files
assessed. Kate Stewart did a file by file comparison of the scanner
results in the spreadsheet to determine which SPDX license identifier(s)
should be applied to the file. She confirmed any determination that was not
immediately clear with lawyers working with the Linux Foundation.
The criteria used to select files for SPDX license identifier tagging were:
- Files considered eligible had to be source code files.
- Make and config files were included as candidates if they contained >5
lines of source.
- The file already had some variant of a license header in it (even if <5
lines).
All documentation files were explicitly excluded.
The following heuristics were used to determine which SPDX license
identifiers to apply.
- when both scanners couldn't find any license traces, file was
considered to have no license information in it, and the top level
COPYING file license applied.
For non */uapi/* files that summary was:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 11139
and resulted in the first patch in this series.
If that file was a */uapi/* path one, it was "GPL-2.0 WITH
Linux-syscall-note"; otherwise it was "GPL-2.0". The results of that were:
SPDX license identifier # files
---------------------------------------------------|-------
GPL-2.0 WITH Linux-syscall-note 930
and resulted in the second patch in this series.
- if a file had some form of licensing information in it, and was one
of the */uapi/* ones, it was denoted with the Linux-syscall-note if
any GPL family license was found in the file or had no licensing in
it (per prior point). Results summary:
SPDX license identifier # files
---------------------------------------------------|------
GPL-2.0 WITH Linux-syscall-note 270
GPL-2.0+ WITH Linux-syscall-note 169
((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause) 21
((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause) 17
LGPL-2.1+ WITH Linux-syscall-note 15
GPL-1.0+ WITH Linux-syscall-note 14
((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause) 5
LGPL-2.0+ WITH Linux-syscall-note 4
LGPL-2.1 WITH Linux-syscall-note 3
((GPL-2.0 WITH Linux-syscall-note) OR MIT) 3
((GPL-2.0 WITH Linux-syscall-note) AND MIT) 1
and that resulted in the third patch in this series.
- when the two scanners agreed on the detected license(s), that became
the concluded license(s).
- when there was disagreement between the two scanners (one detected a
license but the other didn't, or they both detected different
licenses) a manual inspection of the file occurred.
- In most cases a manual inspection of the information in the file
resulted in a clear resolution of the license that should apply (and
which scanner probably needed to revisit its heuristics).
- When it was not immediately clear, the license identifier was
confirmed with lawyers working with the Linux Foundation.
- If there was any question as to the appropriate license identifier,
the file was flagged for further research and to be revisited later
in time.
In total, over 70 hours of logged manual review was done on the
spreadsheet to determine the SPDX license identifiers to apply to the
source files by Kate, Philippe, Thomas and, in some cases, confirmation
by lawyers working with the Linux Foundation.
Kate also obtained a third independent scan of the 4.13 code base from
FOSSology, and compared selected files where the other two scanners
disagreed against that SPDX file, to see if there were new insights. The
Windriver scanner is based on an older version of FOSSology in part, so
they are related.
Thomas did random spot checks in about 500 files from the spreadsheets
for the uapi headers and agreed with the SPDX license identifier in the
files he inspected. For the non-uapi files Thomas did random spot checks
in about 15000 files.
In the initial set of patches against 4.14-rc6, three files were found to have
copy/paste license identifier errors, and have been fixed to reflect the
correct identifier.
Additionally, Philippe spent 10 hours this week doing a detailed manual
inspection and review of the 12,461 files patched in the initial patch
version earlier this week, with:
- a full scancode scan run, collecting the matched texts, detected
license ids and scores
- reviewing anything where there was a license detected (about 500+
files) to ensure that the applied SPDX license was correct
- reviewing anything where there was no detection but the patch license
was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied
SPDX license was correct
This produced a worksheet with 20 files needing minor correction. This
worksheet was then exported into 3 different .csv files for the
different types of files to be modified.
These .csv files were then reviewed by Greg. Thomas wrote a script to
parse the csv files and add the proper SPDX tag to the file, in the
format that the file expected. This script was further refined by Greg
based on the output to detect more types of files automatically and to
distinguish between header and source .c files (which need different
comment types). Finally, Greg ran the script using the .csv files to
generate the patches.
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
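As context for the "different comment types" mentioned above, a minimal sketch
(illustrative only, not taken from the patch itself; the file names are
hypothetical) of how the SPDX tag is typically expressed in the two kinds of
files:

// SPDX-License-Identifier: GPL-2.0
/* foo.c: C sources use the C++-style comment form of the tag. */

/* SPDX-License-Identifier: GPL-2.0 */
/* foo.h: headers (including uapi headers) use the classic block-comment form. */

The file below begins with exactly such a tag.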
// SPDX-License-Identifier: GPL-2.0
#include <linux/kernel.h>

#include <linux/pgtable.h>

#include <linux/string.h>
#include <linux/bitops.h>
#include <linux/smp.h>
#include <linux/sched.h>
#include <linux/sched/clock.h>
#include <linux/thread_info.h>
#include <linux/init.h>
#include <linux/uaccess.h>
#include <linux/delay.h>

#include <asm/cpufeature.h>
#include <asm/msr.h>
#include <asm/bugs.h>
#include <asm/cpu.h>
#include <asm/intel-family.h>
#include <asm/microcode_intel.h>
#include <asm/hwcap2.h>
#include <asm/elf.h>
#include <asm/cpu_device_id.h>
#include <asm/cmdline.h>
#include <asm/traps.h>
#include <asm/resctrl.h>
#include <asm/numa.h>
#include <asm/thermal.h>

#ifdef CONFIG_X86_64
#include <linux/topology.h>
#endif

#include "cpu.h"

#ifdef CONFIG_X86_LOCAL_APIC
#include <asm/mpspec.h>
#include <asm/apic.h>
#endif
enum split_lock_detect_state {
	sld_off = 0,
	sld_warn,
	sld_fatal,
	sld_ratelimit,
};

/*
 * Default to sld_off because most systems do not support split lock detection.
 * sld_state_setup() will switch this to sld_warn on systems that support
 * split lock/bus lock detect, unless there is a command line override.
 */
static enum split_lock_detect_state sld_state __ro_after_init = sld_off;
static u64 msr_test_ctrl_cache __ro_after_init;
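The command line override mentioned in the comment above is parsed later in
this file by sld_state_setup(), which is not part of this excerpt. For
reference, and as an assumption about the exact spelling based on that setup
hook, the boot parameter takes forms along the lines of:

	split_lock_detect=off
	split_lock_detect=warn
	split_lock_detect=fatal
	split_lock_detect=ratelimit:N

which map onto the sld_off, sld_warn, sld_fatal and sld_ratelimit states above.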
x86/split_lock: Don't write MSR_TEST_CTRL on CPUs that aren't whitelisted
Choo! Choo! All aboard the Split Lock Express, with direct service to
Wreckage!
Skip split_lock_verify_msr() if the CPU isn't whitelisted as a possible
SLD-enabled CPU model to avoid writing MSR_TEST_CTRL. MSR_TEST_CTRL
exists, and is writable, on many generations of CPUs. Writing the MSR,
even with '0', can result in bizarre, undocumented behavior.
This fixes a crash on Haswell when resuming from suspend with a live KVM
guest. Because APs use the standard SMP boot flow for resume, they will
go through split_lock_init() and the subsequent RDMSR/WRMSR sequence,
which runs even when sld_state==sld_off to ensure SLD is disabled. On
Haswell (at least, my Haswell), writing MSR_TEST_CTRL with '0' will
succeed and _may_ take the SMT _sibling_ out of VMX root mode.
When KVM has an active guest, KVM performs VMXON as part of CPU onlining
(see kvm_starting_cpu()). Because SMP boot is serialized, the resulting
flow is effectively:
on_each_ap_cpu() {
WRMSR(MSR_TEST_CTRL, 0)
VMXON
}
As a result, the WRMSR can disable VMX on a different CPU that has
already done VMXON. This ultimately results in a #UD on VMPTRLD when
KVM regains control and attempts to run its vCPUs.
The above voodoo was confirmed by reworking KVM's VMXON flow to write
MSR_TEST_CTRL prior to VMXON, and to serialize the sequence as above.
Further verification of the insanity was done by redoing VMXON on all
APs after the initial WRMSR->VMXON sequence. The additional VMXON,
which should VM-Fail, occasionally succeeded, and also eliminated the
unexpected #UD on VMPTRLD.
The damage done by writing MSR_TEST_CTRL doesn't appear to be limited
to VMX, e.g. after suspend with an active KVM guest, subsequent reboots
almost always hang (even when fudging VMXON), a #UD on a random Jcc was
observed, suspend/resume stability is qualitatively poor, and so on and
so forth.
kernel BUG at arch/x86/kvm/x86.c:386!
CPU: 1 PID: 2592 Comm: CPU 6/KVM Tainted: G D
Hardware name: ASUS Q87M-E/Q87M-E, BIOS 1102 03/03/2014
RIP: 0010:kvm_spurious_fault+0xf/0x20
Call Trace:
vmx_vcpu_load_vmcs+0x1fb/0x2b0
vmx_vcpu_load+0x3e/0x160
kvm_arch_vcpu_load+0x48/0x260
finish_task_switch+0x140/0x260
__schedule+0x460/0x720
_cond_resched+0x2d/0x40
kvm_arch_vcpu_ioctl_run+0x82e/0x1ca0
kvm_vcpu_ioctl+0x363/0x5c0
ksys_ioctl+0x88/0xa0
__x64_sys_ioctl+0x16/0x20
do_syscall_64+0x4c/0x170
entry_SYSCALL_64_after_hwframe+0x44/0xa9
Fixes: dbaba47085b0c ("x86/split_lock: Rework the initialization flow of split lock detection")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20200605192605.7439-1-sean.j.christopherson@intel.com
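A minimal sketch of the guard this change describes (the body of
split_lock_verify_msr() itself is not part of this excerpt, and the exact
shape of the fix may differ):

static void split_lock_init(void)
{
	/* Only touch MSR_TEST_CTRL on models known to implement split lock detect. */
	if (cpu_model_supports_sld)
		split_lock_verify_msr(sld_state != sld_off);
}

cpu_model_supports_sld is the flag declared just below, and sld_state is the
state variable declared above.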
/*
 * With a name like MSR_TEST_CTL it should go without saying, but don't touch
 * MSR_TEST_CTL unless the CPU is one of the whitelisted models. Writing it
 * on CPUs that do not support SLD can cause fireworks, even when writing '0'.
 */
static bool cpu_model_supports_sld __ro_after_init;
/*
 * Processors which have self-snooping capability can handle conflicting
 * memory type across CPUs by snooping its own cache. However, there exists
 * CPU models in which having conflicting memory types still leads to
 * unpredictable behavior, machine check errors, or hangs. Clear this
 * feature to prevent its use on machines with known erratas.
 */
static void check_memory_type_self_snoop_errata(struct cpuinfo_x86 *c)
{
	switch (c->x86_model) {
	case INTEL_FAM6_CORE_YONAH:
	case INTEL_FAM6_CORE2_MEROM:
	case INTEL_FAM6_CORE2_MEROM_L:
	case INTEL_FAM6_CORE2_PENRYN:
	case INTEL_FAM6_CORE2_DUNNINGTON:
	case INTEL_FAM6_NEHALEM:
	case INTEL_FAM6_NEHALEM_G:
	case INTEL_FAM6_NEHALEM_EP:
	case INTEL_FAM6_NEHALEM_EX:
	case INTEL_FAM6_WESTMERE:
	case INTEL_FAM6_WESTMERE_EP:
	case INTEL_FAM6_SANDYBRIDGE:
		setup_clear_cpu_cap(X86_FEATURE_SELFSNOOP);
	}
}
static bool ring3mwait_disabled __read_mostly;

static int __init ring3mwait_disable(char *__unused)
{
	ring3mwait_disabled = true;
	return 0;
}
__setup("ring3mwait=disable", ring3mwait_disable);

static void probe_xeon_phi_r3mwait(struct cpuinfo_x86 *c)
{
	/*
	 * Ring 3 MONITOR/MWAIT feature cannot be detected without
	 * cpu model and family comparison.
	 */
	if (c->x86 != 6)
		return;
	switch (c->x86_model) {
	case INTEL_FAM6_XEON_PHI_KNL:
	case INTEL_FAM6_XEON_PHI_KNM:
		break;
	default:
		return;
	}

	if (ring3mwait_disabled)
		return;

	set_cpu_cap(c, X86_FEATURE_RING3MWAIT);
	this_cpu_or(msr_misc_features_shadow,
		    1UL << MSR_MISC_FEATURES_ENABLES_RING3MWAIT_BIT);

	if (c == &boot_cpu_data)
		ELF_HWCAP2 |= HWCAP2_RING3MWAIT;
}
/*
 * Early microcode releases for the Spectre v2 mitigation were broken.
 * Information taken from;
 * - https://newsroom.intel.com/wp-content/uploads/sites/11/2018/03/microcode-update-guidance.pdf
 * - https://kb.vmware.com/s/article/52345
 * - Microcode revisions observed in the wild
 * - Release note from 20180108 microcode release
 */
struct sku_microcode {
	u8 model;
	u8 stepping;
	u32 microcode;
};
static const struct sku_microcode spectre_bad_microcodes[] = {
	{ INTEL_FAM6_KABYLAKE,		0x0B,	0x80 },
	{ INTEL_FAM6_KABYLAKE,		0x0A,	0x80 },
	{ INTEL_FAM6_KABYLAKE,		0x09,	0x80 },
	{ INTEL_FAM6_KABYLAKE_L,	0x0A,	0x80 },
	{ INTEL_FAM6_KABYLAKE_L,	0x09,	0x80 },
	{ INTEL_FAM6_SKYLAKE_X,		0x03,	0x0100013e },
	{ INTEL_FAM6_SKYLAKE_X,		0x04,	0x0200003c },
	{ INTEL_FAM6_BROADWELL,		0x04,	0x28 },
	{ INTEL_FAM6_BROADWELL_G,	0x01,	0x1b },
	{ INTEL_FAM6_BROADWELL_D,	0x02,	0x14 },
	{ INTEL_FAM6_BROADWELL_D,	0x03,	0x07000011 },
	{ INTEL_FAM6_BROADWELL_X,	0x01,	0x0b000025 },
	{ INTEL_FAM6_HASWELL_L,		0x01,	0x21 },
	{ INTEL_FAM6_HASWELL_G,		0x01,	0x18 },
	{ INTEL_FAM6_HASWELL,		0x03,	0x23 },
	{ INTEL_FAM6_HASWELL_X,		0x02,	0x3b },
	{ INTEL_FAM6_HASWELL_X,		0x04,	0x10 },
	{ INTEL_FAM6_IVYBRIDGE_X,	0x04,	0x42a },
	/* Observed in the wild */
	{ INTEL_FAM6_SANDYBRIDGE_X,	0x06,	0x61b },
	{ INTEL_FAM6_SANDYBRIDGE_X,	0x07,	0x712 },
};

static bool bad_spectre_microcode(struct cpuinfo_x86 *c)
{
	int i;

	/*
	 * We know that the hypervisor lie to us on the microcode version so
	 * we may as well hope that it is running the correct version.
	 */
	if (cpu_has(c, X86_FEATURE_HYPERVISOR))
		return false;

	if (c->x86 != 6)
		return false;

	for (i = 0; i < ARRAY_SIZE(spectre_bad_microcodes); i++) {
		if (c->x86_model == spectre_bad_microcodes[i].model &&
		    c->x86_stepping == spectre_bad_microcodes[i].stepping)
			return (c->microcode <= spectre_bad_microcodes[i].microcode);
	}
	return false;
}
x86: delete __cpuinit usage from all x86 files
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. For example, the fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.
Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
and are flagged as __cpuinit -- so if we remove the __cpuinit from
arch specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
content into no-ops as early as possible, since that will get rid
of these warnings. In any case, they are temporary and harmless.
This removes all the arch/x86 uses of the __cpuinit macros from
all C files. x86 only had the one __CPUINIT used in assembly files,
and it wasn't paired off with a .previous or a __FINIT, so we can
delete it directly w/o any corresponding additional change there.
[1] https://lkml.org/lkml/2013/5/20/589
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
Acked-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
static void early_init_intel(struct cpuinfo_x86 *c)
{
	u64 misc_enable;

	/* Unmask CPUID levels if masked: */
	if (c->x86 > 6 || (c->x86 == 6 && c->x86_model >= 0xd)) {
		if (msr_clear_bit(MSR_IA32_MISC_ENABLE,
				  MSR_IA32_MISC_ENABLE_LIMIT_CPUID_BIT) > 0) {
			c->cpuid_level = cpuid_eax(0);
			get_cpu_cap(c);
		}
	}

	if ((c->x86 == 0xf && c->x86_model >= 0x03) ||
		(c->x86 == 0x6 && c->x86_model >= 0x0e))
		set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);

	if (c->x86 >= 6 && !cpu_has(c, X86_FEATURE_IA64))
		c->microcode = intel_get_microcode_revision();

	/* Now if any of them are set, check the blacklist and clear the lot */
	if ((cpu_has(c, X86_FEATURE_SPEC_CTRL) ||
	     cpu_has(c, X86_FEATURE_INTEL_STIBP) ||
	     cpu_has(c, X86_FEATURE_IBRS) || cpu_has(c, X86_FEATURE_IBPB) ||
	     cpu_has(c, X86_FEATURE_STIBP)) && bad_spectre_microcode(c)) {
		pr_warn("Intel Spectre v2 broken microcode detected; disabling Speculation Control\n");
		setup_clear_cpu_cap(X86_FEATURE_IBRS);
		setup_clear_cpu_cap(X86_FEATURE_IBPB);
		setup_clear_cpu_cap(X86_FEATURE_STIBP);
		setup_clear_cpu_cap(X86_FEATURE_SPEC_CTRL);
		setup_clear_cpu_cap(X86_FEATURE_MSR_SPEC_CTRL);
		setup_clear_cpu_cap(X86_FEATURE_INTEL_STIBP);
		setup_clear_cpu_cap(X86_FEATURE_SSBD);
		setup_clear_cpu_cap(X86_FEATURE_SPEC_CTRL_SSBD);
	}

	/*
	 * Atom erratum AAE44/AAF40/AAG38/AAH41:
	 *
	 * A race condition between speculative fetches and invalidating
	 * a large page. This is worked around in microcode, but we
	 * need the microcode to have already been loaded... so if it is
	 * not, recommend a BIOS update and disable large pages.
	 */
	if (c->x86 == 6 && c->x86_model == 0x1c && c->x86_stepping <= 2 &&
	    c->microcode < 0x20e) {
		pr_warn("Atom PSE erratum detected, BIOS microcode update recommended\n");
		clear_cpu_cap(c, X86_FEATURE_PSE);
	}

#ifdef CONFIG_X86_64
	set_cpu_cap(c, X86_FEATURE_SYSENTER32);
#else
	/* Netburst reports 64 bytes clflush size, but does IO in 128 bytes */
	if (c->x86 == 15 && c->x86_cache_alignment == 64)
		c->x86_cache_alignment = 128;
#endif

	/* CPUID workaround for 0F33/0F34 CPU */
	if (c->x86 == 0xF && c->x86_model == 0x3
	    && (c->x86_stepping == 0x3 || c->x86_stepping == 0x4))
		c->x86_phys_bits = 36;

	/*
	 * c->x86_power is 8000_0007 edx. Bit 8 is TSC runs at constant rate
	 * with P/T states and does not stop in deep C-states.
	 *
	 * It is also reliable across cores and sockets. (but not across
	 * cabinets - we turn it off in that case explicitly.)
	 */
	if (c->x86_power & (1 << 8)) {
		set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
		set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC);
	}

	/* Penwell and Cloverview have the TSC which doesn't sleep on S3 */
	if (c->x86 == 6) {
		switch (c->x86_model) {
		case INTEL_FAM6_ATOM_SALTWELL_MID:
		case INTEL_FAM6_ATOM_SALTWELL_TABLET:
		case INTEL_FAM6_ATOM_SILVERMONT_MID:
		case INTEL_FAM6_ATOM_AIRMONT_NP:
			set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC_S3);
			break;
		default:
			break;
		}
	}

	/*
	 * There is a known erratum on Pentium III and Core Solo
	 * and Core Duo CPUs.
	 * " Page with PAT set to WC while associated MTRR is UC
	 * may consolidate to UC "
	 * Because of this erratum, it is better to stick with
	 * setting WC in MTRR rather than using PAT on these CPUs.
	 *
	 * Enable PAT WC only on P4, Core 2 or later CPUs.
	 */
	if (c->x86 == 6 && c->x86_model < 15)
		clear_cpu_cap(c, X86_FEATURE_PAT);

	/*
	 * If fast string is not enabled in IA32_MISC_ENABLE for any reason,
	 * clear the fast string and enhanced fast string CPU capabilities.
	 */
	if (c->x86 > 6 || (c->x86 == 6 && c->x86_model >= 0xd)) {
		rdmsrl(MSR_IA32_MISC_ENABLE, misc_enable);
		if (!(misc_enable & MSR_IA32_MISC_ENABLE_FAST_STRING)) {
			pr_info("Disabled fast string operations\n");
			setup_clear_cpu_cap(X86_FEATURE_REP_GOOD);
			setup_clear_cpu_cap(X86_FEATURE_ERMS);
		}
	}

	/*
	 * Intel Quark Core DevMan_001.pdf section 6.4.11
	 * "The operating system also is required to invalidate (i.e., flush)
	 *  the TLB when any changes are made to any of the page table entries.
	 *  The operating system must reload CR3 to cause the TLB to be flushed"
	 *
	 * As a result, boot_cpu_has(X86_FEATURE_PGE) in arch/x86/include/asm/tlbflush.h
	 * should be false so that __flush_tlb_all() causes CR3 instead of CR4.PGE
	 * to be modified.
	 */
	if (c->x86 == 5 && c->x86_model == 9) {
		pr_info("Disabling PGE capability bit\n");
		setup_clear_cpu_cap(X86_FEATURE_PGE);
	}

	if (c->cpuid_level >= 0x00000001) {
		u32 eax, ebx, ecx, edx;

		cpuid(0x00000001, &eax, &ebx, &ecx, &edx);
		/*
		 * If HTT (EDX[28]) is set EBX[16:23] contain the number of
		 * apicids which are reserved per package. Store the resulting
		 * shift value for the package management code.
		 */
		if (edx & (1U << 28))
			c->x86_coreid_bits = get_count_order((ebx >> 16) & 0xff);
	}
x86/mm/mpx: Work around MPX erratum SKD046
This erratum essentially causes the CPU to forget which privilege
level it is operating on (kernel vs. user) for the purposes of MPX.
This erratum can only be triggered when a system is not using
Supervisor Mode Execution Prevention (SMEP). Our workaround for
the erratum is to ensure that MPX can only be used in cases where
SMEP is present in the processor and is enabled.
This erratum only affects Core processors. Atom is unaffected.
But, there is no architectural way to determine Atom vs. Core.
So, we just apply this workaround to all processors. It's
possible that it will mistakenly disable MPX on some Atom
processors or future unaffected Core processors. There are
currently no processors that have MPX and not SMEP. It would
take something akin to a hypervisor masking SMEP out on an Atom
processor for this to present itself on current hardware.
More details can be found at:
http://www.intel.com/content/dam/www/public/us/en/documents/specification-updates/desktop-6th-gen-core-family-spec-update.pdf
"
SKD046 Branch Instructions May Initialize MPX Bound Registers Incorrectly
Problem:
Depending on the current Intel MPX (Memory Protection
Extensions) configuration, execution of certain branch
instructions (near CALL, near RET, near JMP, and Jcc
instructions) without a BND prefix (F2H) initialize the MPX bound
registers. Due to this erratum, such a branch instruction that is
executed both with CPL = 3 and with CPL < 3 may not use the
correct MPX configuration register (BNDCFGU or BNDCFGS,
respectively) for determining whether to initialize the bound
registers; it may thus initialize the bound registers when it
should not, or fail to initialize them when it should.
Implication:
A branch instruction that has executed both in user mode and in
supervisor mode (from the same linear address) may cause a #BR
(bound range fault) when it should not have or may not cause a
#BR when it should have.
Workaround:
An operating system can
avoid this erratum by setting CR4.SMEP[bit 20] to enable
supervisor-mode execution prevention (SMEP). When SMEP is
enabled, no code can be executed both with CPL = 3 and with CPL < 3.
"
Signed-off-by: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Dave Hansen <dave@sr71.net>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20160512220400.3B35F1BC@viggo.jf.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
	check_memory_type_self_snoop_errata(c);

	/*
	 * Get the number of SMT siblings early from the extended topology
	 * leaf, if available. Otherwise try the legacy SMT detection.
	 */
	if (detect_extended_topology_early(c) < 0)
		detect_ht_early(c);
}
static void bsp_init_intel(struct cpuinfo_x86 *c)
{
	resctrl_cpu_detect(c);
}

#ifdef CONFIG_X86_32
/*
 *	Early probe support logic for ppro memory erratum #50
 *
 *	This is called before we do cpu ident work
 */
int ppro_with_ram_bug(void)
{
	/* Uses data from early_cpu_detect now */
	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
	    boot_cpu_data.x86 == 6 &&
	    boot_cpu_data.x86_model == 1 &&
	    boot_cpu_data.x86_stepping < 8) {
		pr_info("Pentium Pro with Errata#50 detected. Taking evasive action.\n");
		return 1;
	}
	return 0;
}
static void intel_smp_check(struct cpuinfo_x86 *c)
{
	/* calling is from identify_secondary_cpu() ? */
	if (!c->cpu_index)
		return;

	/*
	 * Mask B, Pentium, but not Pentium MMX
	 */
	if (c->x86 == 5 &&
	    c->x86_stepping >= 1 && c->x86_stepping <= 4 &&
	    c->x86_model <= 3) {
		/*
		 * Remember we have B step Pentia with bugs
		 */
		WARN_ONCE(1, "WARNING: SMP operation may be unreliable"
				    "with B stepping processors.\n");
	}
}

static int forcepae;
static int __init forcepae_setup(char *__unused)
{
	forcepae = 1;
	return 1;
}
__setup("forcepae", forcepae_setup);
static void intel_workarounds(struct cpuinfo_x86 *c)
{
#ifdef CONFIG_X86_F00F_BUG
	/*
	 * All models of Pentium and Pentium with MMX technology CPUs
	 * have the F0 0F bug, which lets nonprivileged users lock up the
	 * system. Announce that the fault handler will be checking for it.
	 * The Quark is also family 5, but does not have the same bug.
	 */
	clear_cpu_bug(c, X86_BUG_F00F);
	if (c->x86 == 5 && c->x86_model < 9) {
		static int f00f_workaround_enabled;

		set_cpu_bug(c, X86_BUG_F00F);
		if (!f00f_workaround_enabled) {
			pr_notice("Intel Pentium with F0 0F bug - workaround enabled.\n");
			f00f_workaround_enabled = 1;
		}
	}
#endif

	/*
	 * SEP CPUID bug: Pentium Pro reports SEP but doesn't have it until
	 * model 3 mask 3
	 */
	if ((c->x86<<8 | c->x86_model<<4 | c->x86_stepping) < 0x633)
		clear_cpu_cap(c, X86_FEATURE_SEP);

	/*
	 * PAE CPUID issue: many Pentium M report no PAE but may have a
	 * functionally usable PAE implementation.
	 * Forcefully enable PAE if kernel parameter "forcepae" is present.
	 */
	if (forcepae) {
		pr_warn("PAE forced!\n");
		set_cpu_cap(c, X86_FEATURE_PAE);
		add_taint(TAINT_CPU_OUT_OF_SPEC, LOCKDEP_NOW_UNRELIABLE);
	}

	/*
	 * P4 Xeon erratum 037 workaround.
	 * Hardware prefetcher may cause stale data to be loaded into the cache.
	 */
	if ((c->x86 == 15) && (c->x86_model == 1) && (c->x86_stepping == 1)) {
		if (msr_set_bit(MSR_IA32_MISC_ENABLE,
				MSR_IA32_MISC_ENABLE_PREFETCH_DISABLE_BIT) > 0) {
			pr_info("CPU: C0 stepping P4 Xeon detected.\n");
			pr_info("CPU: Disabling hardware prefetching (Erratum 037)\n");
		}
	}

	/*
	 * See if we have a good local APIC by checking for buggy Pentia,
	 * i.e. all B steppings and the C2 stepping of P54C when using their
	 * integrated APIC (see 11AP erratum in "Pentium Processor
	 * Specification Update").
	 */
	if (boot_cpu_has(X86_FEATURE_APIC) && (c->x86<<8 | c->x86_model<<4) == 0x520 &&
	    (c->x86_stepping < 0x6 || c->x86_stepping == 0xb))
		set_cpu_bug(c, X86_BUG_11AP);

#ifdef CONFIG_X86_INTEL_USERCOPY
	/*
	 * Set up the preferred alignment for movsl bulk memory moves
	 */
	switch (c->x86) {
	case 4:		/* 486: untested */
		break;
	case 5:		/* Old Pentia: untested */
		break;
	case 6:		/* PII/PIII only like movsl with 8-byte alignment */
		movsl_mask.mask = 7;
		break;
	case 15:	/* P4 is OK down to 8-byte alignment */
		movsl_mask.mask = 7;
		break;
	}
#endif

	intel_smp_check(c);
}
#else
static void intel_workarounds(struct cpuinfo_x86 *c)
{
}
#endif
static void srat_detect_node(struct cpuinfo_x86 *c)
{
#ifdef CONFIG_NUMA
	unsigned node;
	int cpu = smp_processor_id();

	/* Don't do the funky fallback heuristics the AMD version employs
	   for now. */
	node = numa_cpu_node(cpu);
	if (node == NUMA_NO_NODE || !node_online(node)) {
		/* reuse the value from init_cpu_to_node() */
		node = cpu_to_node(cpu);
	}
	numa_set_node(cpu, node);
#endif
}

#define MSR_IA32_TME_ACTIVATE		0x982

/* Helpers to access TME_ACTIVATE MSR */
#define TME_ACTIVATE_LOCKED(x)		(x & 0x1)
#define TME_ACTIVATE_ENABLED(x)		(x & 0x2)

#define TME_ACTIVATE_POLICY(x)		((x >> 4) & 0xf)	/* Bits 7:4 */
#define TME_ACTIVATE_POLICY_AES_XTS_128	0

#define TME_ACTIVATE_KEYID_BITS(x)	((x >> 32) & 0xf)	/* Bits 35:32 */

#define TME_ACTIVATE_CRYPTO_ALGS(x)	((x >> 48) & 0xffff)	/* Bits 63:48 */
#define TME_ACTIVATE_CRYPTO_AES_XTS_128	1

/* Values for mktme_status (SW only construct) */
#define MKTME_ENABLED			0
#define MKTME_DISABLED			1
#define MKTME_UNINITIALIZED		2
static int mktme_status = MKTME_UNINITIALIZED;

static void detect_tme(struct cpuinfo_x86 *c)
{
	u64 tme_activate, tme_policy, tme_crypto_algs;
	int keyid_bits = 0, nr_keyids = 0;
	static u64 tme_activate_cpu0 = 0;

	rdmsrl(MSR_IA32_TME_ACTIVATE, tme_activate);

	if (mktme_status != MKTME_UNINITIALIZED) {
		if (tme_activate != tme_activate_cpu0) {
			/* Broken BIOS? */
			pr_err_once("x86/tme: configuration is inconsistent between CPUs\n");
			pr_err_once("x86/tme: MKTME is not usable\n");
			mktme_status = MKTME_DISABLED;

			/* Proceed. We may need to exclude bits from x86_phys_bits. */
		}
	} else {
		tme_activate_cpu0 = tme_activate;
	}

	if (!TME_ACTIVATE_LOCKED(tme_activate) || !TME_ACTIVATE_ENABLED(tme_activate)) {
		pr_info_once("x86/tme: not enabled by BIOS\n");
		mktme_status = MKTME_DISABLED;
		return;
	}

	if (mktme_status != MKTME_UNINITIALIZED)
		goto detect_keyid_bits;

	pr_info("x86/tme: enabled by BIOS\n");

	tme_policy = TME_ACTIVATE_POLICY(tme_activate);
	if (tme_policy != TME_ACTIVATE_POLICY_AES_XTS_128)
		pr_warn("x86/tme: Unknown policy is active: %#llx\n", tme_policy);

	tme_crypto_algs = TME_ACTIVATE_CRYPTO_ALGS(tme_activate);
	if (!(tme_crypto_algs & TME_ACTIVATE_CRYPTO_AES_XTS_128)) {
		pr_err("x86/mktme: No known encryption algorithm is supported: %#llx\n",
		       tme_crypto_algs);
		mktme_status = MKTME_DISABLED;
	}
detect_keyid_bits:
	keyid_bits = TME_ACTIVATE_KEYID_BITS(tme_activate);
	nr_keyids = (1UL << keyid_bits) - 1;
	if (nr_keyids) {
		pr_info_once("x86/mktme: enabled by BIOS\n");
		pr_info_once("x86/mktme: %d KeyIDs available\n", nr_keyids);
	} else {
		pr_info_once("x86/mktme: disabled by BIOS\n");
	}

	if (mktme_status == MKTME_UNINITIALIZED) {
		/* MKTME is usable */
		mktme_status = MKTME_ENABLED;
	}

	/*
	 * KeyID bits effectively lower the number of physical address
	 * bits. Update cpuinfo_x86::x86_phys_bits accordingly.
	 */
	c->x86_phys_bits -= keyid_bits;
}

static void init_cpuid_fault(struct cpuinfo_x86 *c)
{
	u64 msr;

	if (!rdmsrl_safe(MSR_PLATFORM_INFO, &msr)) {
		if (msr & MSR_PLATFORM_INFO_CPUID_FAULT)
			set_cpu_cap(c, X86_FEATURE_CPUID_FAULT);
	}
}

static void init_intel_misc_features(struct cpuinfo_x86 *c)
{
	u64 msr;

	if (rdmsrl_safe(MSR_MISC_FEATURES_ENABLES, &msr))
		return;

	/* Clear all MISC features */
	this_cpu_write(msr_misc_features_shadow, 0);

	/* Check features and update capabilities and shadow control bits */
	init_cpuid_fault(c);
	probe_xeon_phi_r3mwait(c);

	msr = this_cpu_read(msr_misc_features_shadow);
	wrmsrl(MSR_MISC_FEATURES_ENABLES, msr);
}

static void split_lock_init(void);
static void bus_lock_init(void);
static void init_intel(struct cpuinfo_x86 *c)
{
	early_init_intel(c);

	intel_workarounds(c);

	/*
	 * Detect the extended topology information if available. This
	 * will reinitialise the initial_apicid which will be used
	 * in init_intel_cacheinfo()
	 */
	detect_extended_topology(c);

	if (!cpu_has(c, X86_FEATURE_XTOPOLOGY)) {
		/*
		 * let's use the legacy cpuid vector 0x1 and 0x4 for topology
		 * detection.
		 */
		detect_num_cpu_cores(c);
#ifdef CONFIG_X86_32
		detect_ht(c);
#endif
	}

	init_intel_cacheinfo(c);

	if (c->cpuid_level > 9) {
		unsigned eax = cpuid_eax(10);
		/* Check for version and the number of counters */
		if ((eax & 0xff) && (((eax>>8) & 0xff) > 1))
			set_cpu_cap(c, X86_FEATURE_ARCH_PERFMON);
	}

	if (cpu_has(c, X86_FEATURE_XMM2))
		set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);

	if (boot_cpu_has(X86_FEATURE_DS)) {
		unsigned int l1, l2;

		rdmsr(MSR_IA32_MISC_ENABLE, l1, l2);
		if (!(l1 & (1<<11)))
			set_cpu_cap(c, X86_FEATURE_BTS);
		if (!(l1 & (1<<12)))
			set_cpu_cap(c, X86_FEATURE_PEBS);
	}

	if (c->x86 == 6 && boot_cpu_has(X86_FEATURE_CLFLUSH) &&
x86 idle: Repair large-server 50-watt idle-power regression
Linux 3.10 changed the timing of how thread_info->flags is touched:
x86: Use generic idle loop
(7d1a941731fabf27e5fb6edbebb79fe856edb4e5)
This caused Intel NHM-EX and WSM-EX servers to experience a large number
of immediate MONITOR/MWAIT break wakeups, which caused cpuidle to demote
from deep C-states to shallow C-states, which caused these platforms
to experience a significant increase in idle power.
Note that this issue was already present before the commit above,
however, it wasn't seen often enough to be noticed in power measurements.
Here we extend an errata workaround from the Core2 EX "Dunnington"
to NHM-EX and WSM-EX, to prevent these immediate
returns from MWAIT, reducing idle power on these platforms.
While only acpi_idle ran on Dunnington, intel_idle
may also run on these two newer systems.
As of today, there are no other models that are known
to need this tweak.
Link: http://lkml.kernel.org/r/CAJvTdK=%2BaNN66mYpCGgbHGCHhYQAKx-vB0kJSWjVpsNb_hOAtQ@mail.gmail.com
Signed-off-by: Len Brown <len.brown@intel.com>
Link: http://lkml.kernel.org/r/baff264285f6e585df757d58b17788feabc68918.1387403066.git.len.brown@intel.com
Cc: <stable@vger.kernel.org> # 3.12.x, 3.11.x, 3.10.x
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
	    (c->x86_model == 29 || c->x86_model == 46 || c->x86_model == 47))
		set_cpu_bug(c, X86_BUG_CLFLUSH_MONITOR);

	if (c->x86 == 6 && boot_cpu_has(X86_FEATURE_MWAIT) &&
	    ((c->x86_model == INTEL_FAM6_ATOM_GOLDMONT)))
		set_cpu_bug(c, X86_BUG_MONITOR);

#ifdef CONFIG_X86_64
	if (c->x86 == 15)
		c->x86_cache_alignment = c->x86_clflush_size * 2;
	if (c->x86 == 6)
		set_cpu_cap(c, X86_FEATURE_REP_GOOD);
#else
	/*
	 * Names for the Pentium II/Celeron processors
	 * detectable only by also checking the cache size.
	 * Dixon is NOT a Celeron.
	 */
	if (c->x86 == 6) {
		unsigned int l2 = c->x86_cache_size;
		char *p = NULL;

		switch (c->x86_model) {
		case 5:
			if (l2 == 0)
				p = "Celeron (Covington)";
			else if (l2 == 256)
				p = "Mobile Pentium II (Dixon)";
			break;

		case 6:
			if (l2 == 128)
				p = "Celeron (Mendocino)";
			else if (c->x86_stepping == 0 || c->x86_stepping == 5)
				p = "Celeron-A";
			break;

		case 8:
			if (l2 == 128)
				p = "Celeron (Coppermine)";
			break;
		}

		if (p)
			strcpy(c->x86_model_id, p);
	}

	if (c->x86 == 15)
		set_cpu_cap(c, X86_FEATURE_P4);
	if (c->x86 == 6)
		set_cpu_cap(c, X86_FEATURE_P3);
#endif

	/* Work around errata */
	srat_detect_node(c);

	init_ia32_feat_ctl(c);

	if (cpu_has(c, X86_FEATURE_TME))
		detect_tme(c);

	init_intel_misc_features(c);

	if (tsx_ctrl_state == TSX_CTRL_ENABLE)
		tsx_enable();
	else if (tsx_ctrl_state == TSX_CTRL_DISABLE)
		tsx_disable();
	else if (tsx_ctrl_state == TSX_CTRL_RTM_ALWAYS_ABORT)
		tsx_clear_cpuid();

	split_lock_init();
	bus_lock_init();

	intel_init_thermal(c);
}

#ifdef CONFIG_X86_32
x86: delete __cpuinit usage from all x86 files
The __cpuinit type of throwaway sections might have made sense
some time ago when RAM was more constrained, but now the savings
do not offset the cost and complications. The fix in
commit 5e427ec2d0 ("x86: Fix bit corruption at CPU resume time")
is a good example of the nasty type of bugs that can be created
with improper use of the various __init prefixes.
After a discussion on LKML[1] it was decided that cpuinit should go
the way of devinit and be phased out. Once all the users are gone,
we can then finally remove the macros themselves from linux/init.h.
Note that some harmless section mismatch warnings may result, since
notify_cpu_starting() and cpu_up() are arch independent (kernel/cpu.c)
and are flagged as __cpuinit -- so if we remove the __cpuinit from
arch-specific callers, we will also get section mismatch warnings.
As an intermediate step, we intend to turn the linux/init.h cpuinit
content into no-ops as early as possible, since that will get rid
of these warnings. In any case, they are temporary and harmless.
This removes all the arch/x86 uses of the __cpuinit macros from
all C files. x86 only had the one __CPUINIT used in assembly files,
and it wasn't paired off with a .previous or a __FINIT, so we can
delete it directly w/o any corresponding additional change there.
[1] https://lkml.org/lkml/2013/5/20/589
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: x86@kernel.org
Acked-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2013-06-18 22:23:59 +00:00
|
|
|
static unsigned int intel_size_cache(struct cpuinfo_x86 *c, unsigned int size)
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
2008-02-22 22:09:42 +00:00
|
|
|
/*
|
|
|
|
* Intel PIII Tualatin. This comes in two flavours.
|
2005-04-16 22:20:36 +00:00
|
|
|
* One has 256kb of cache, the other 512. We have no way
|
|
|
|
* to determine which, so we use a boottime override
|
|
|
|
* for the 512kb model, and assume 256 otherwise.
|
|
|
|
*/
|
|
|
|
if ((c->x86 == 6) && (c->x86_model == 11) && (size == 0))
|
|
|
|
size = 256;
|
2014-10-07 00:19:49 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Intel Quark SoC X1000 contains a 4-way set associative
|
|
|
|
* 16K cache with a 16 byte cache line and 256 lines per tag
|
|
|
|
*/
|
|
|
|
if ((c->x86 == 5) && (c->x86_model == 9))
|
|
|
|
size = 16;
|
2005-04-16 22:20:36 +00:00
|
|
|
return size;
|
|
|
|
}
|
2008-09-09 23:40:35 +00:00
|
|
|
#endif
|
2005-04-16 22:20:36 +00:00
|
|
|
|
x86/tlb_info: get last level TLB entry number of CPU
For 4KB pages, an x86 CPU has one or two TLB levels: the first level is split
into data and instruction TLBs, and the second level is a shared TLB for both
data and instructions.
For huge pages there is usually just one level, separated into 2MB/4MB and
1GB entries.
Although each level's TLB size matters for performance tuning, for general,
coarse optimization the last-level TLB entry count is sufficient; in fact,
the last level always has the largest number of entries.
This patch records the largest TLB entry count and will use it in future TLB
optimizations.
Following Borislav's suggestion, everything except the tlb_ll[i/d]_* arrays --
the other functions and data -- is released after system boot.
To be friendly to all x86 vendors, vendor-specific code was moved into the
vendor-specific files.
Signed-off-by: Alex Shi <alex.shi@intel.com>
Link: http://lkml.kernel.org/r/1340845344-27557-2-git-send-email-alex.shi@intel.com
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
2012-06-28 01:02:16 +00:00
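The entry counts recorded by this patch are meant to feed later TLB-flush tuning. As a rough, hypothetical illustration only (the helper below is invented for this note and is not part of the patch), a flush path could compare the number of pages it is about to invalidate against the recorded last-level data-TLB capacity:

	/*
	 * Hypothetical sketch: prefer a full TLB flush once the range to be
	 * invalidated exceeds the last-level data-TLB capacity recorded in
	 * tlb_lld_4k[ENTRIES] by intel_tlb_lookup() further below.
	 */
	static bool prefer_full_flush(unsigned long nr_pages)
	{
		return nr_pages > tlb_lld_4k[ENTRIES];
	}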
|
|
|
#define TLB_INST_4K 0x01
|
|
|
|
#define TLB_INST_4M 0x02
|
|
|
|
#define TLB_INST_2M_4M 0x03
|
|
|
|
|
|
|
|
#define TLB_INST_ALL 0x05
|
|
|
|
#define TLB_INST_1G 0x06
|
|
|
|
|
|
|
|
#define TLB_DATA_4K 0x11
|
|
|
|
#define TLB_DATA_4M 0x12
|
|
|
|
#define TLB_DATA_2M_4M 0x13
|
|
|
|
#define TLB_DATA_4K_4M 0x14
|
|
|
|
|
|
|
|
#define TLB_DATA_1G 0x16
|
|
|
|
|
|
|
|
#define TLB_DATA0_4K 0x21
|
|
|
|
#define TLB_DATA0_4M 0x22
|
|
|
|
#define TLB_DATA0_2M_4M 0x23
|
|
|
|
|
|
|
|
#define STLB_4K 0x41
|
x86, cpu: Detect more TLB configuration
The Intel Software Developer’s Manual covers a few more TLB
configurations exposed as CPUID 2 descriptors:
61H Instruction TLB: 4 KByte pages, fully associative, 48 entries
63H Data TLB: 1 GByte pages, 4-way set associative, 4 entries
76H Instruction TLB: 2M/4M pages, fully associative, 8 entries
B5H Instruction TLB: 4KByte pages, 8-way set associative, 64 entries
B6H Instruction TLB: 4KByte pages, 8-way set associative, 128 entries
C1H Shared 2nd-Level TLB: 4 KByte/2MByte pages, 8-way associative, 1024 entries
C2H DTLB: 2 MByte/4 MByte pages, 4-way associative, 16 entries
Let's detect them as well.
Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Link: http://lkml.kernel.org/r/1387801018-14499-1-git-send-email-kirill.shutemov@linux.intel.com
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
2013-12-23 12:16:58 +00:00
|
|
|
#define STLB_4K_2M 0x42
|
2012-06-28 01:02:16 +00:00
|
|
|
|
2013-06-18 22:23:59 +00:00
|
|
|
static const struct _tlb_table intel_tlb_table[] = {
|
2012-06-28 01:02:16 +00:00
|
|
|
{ 0x01, TLB_INST_4K, 32, " TLB_INST 4 KByte pages, 4-way set associative" },
|
|
|
|
{ 0x02, TLB_INST_4M, 2, " TLB_INST 4 MByte pages, full associative" },
|
|
|
|
{ 0x03, TLB_DATA_4K, 64, " TLB_DATA 4 KByte pages, 4-way set associative" },
|
|
|
|
{ 0x04, TLB_DATA_4M, 8, " TLB_DATA 4 MByte pages, 4-way set associative" },
|
|
|
|
{ 0x05, TLB_DATA_4M, 32, " TLB_DATA 4 MByte pages, 4-way set associative" },
|
|
|
|
{ 0x0b, TLB_INST_4M, 4, " TLB_INST 4 MByte pages, 4-way set associative" },
|
2019-09-15 09:09:25 +00:00
|
|
|
{ 0x4f, TLB_INST_4K, 32, " TLB_INST 4 KByte pages" },
|
2012-06-28 01:02:16 +00:00
|
|
|
{ 0x50, TLB_INST_ALL, 64, " TLB_INST 4 KByte and 2-MByte or 4-MByte pages" },
|
|
|
|
{ 0x51, TLB_INST_ALL, 128, " TLB_INST 4 KByte and 2-MByte or 4-MByte pages" },
|
|
|
|
{ 0x52, TLB_INST_ALL, 256, " TLB_INST 4 KByte and 2-MByte or 4-MByte pages" },
|
|
|
|
{ 0x55, TLB_INST_2M_4M, 7, " TLB_INST 2-MByte or 4-MByte pages, fully associative" },
|
|
|
|
{ 0x56, TLB_DATA0_4M, 16, " TLB_DATA0 4 MByte pages, 4-way set associative" },
|
|
|
|
{ 0x57, TLB_DATA0_4K, 16, " TLB_DATA0 4 KByte pages, 4-way associative" },
|
|
|
|
{ 0x59, TLB_DATA0_4K, 16, " TLB_DATA0 4 KByte pages, fully associative" },
|
|
|
|
{ 0x5a, TLB_DATA0_2M_4M, 32, " TLB_DATA0 2-MByte or 4 MByte pages, 4-way set associative" },
|
|
|
|
{ 0x5b, TLB_DATA_4K_4M, 64, " TLB_DATA 4 KByte and 4 MByte pages" },
|
|
|
|
{ 0x5c, TLB_DATA_4K_4M, 128, " TLB_DATA 4 KByte and 4 MByte pages" },
|
|
|
|
{ 0x5d, TLB_DATA_4K_4M, 256, " TLB_DATA 4 KByte and 4 MByte pages" },
|
2013-12-23 12:16:58 +00:00
|
|
|
{ 0x61, TLB_INST_4K, 48, " TLB_INST 4 KByte pages, full associative" },
|
|
|
|
{ 0x63, TLB_DATA_1G, 4, " TLB_DATA 1 GByte pages, 4-way set associative" },
|
2018-04-23 16:14:25 +00:00
|
|
|
{ 0x6b, TLB_DATA_4K, 256, " TLB_DATA 4 KByte pages, 8-way associative" },
|
|
|
|
{ 0x6c, TLB_DATA_2M_4M, 128, " TLB_DATA 2 MByte or 4 MByte pages, 8-way associative" },
|
|
|
|
{ 0x6d, TLB_DATA_1G, 16, " TLB_DATA 1 GByte pages, fully associative" },
|
2013-12-23 12:16:58 +00:00
|
|
|
{ 0x76, TLB_INST_2M_4M, 8, " TLB_INST 2-MByte or 4-MByte pages, fully associative" },
|
2012-06-28 01:02:16 +00:00
|
|
|
{ 0xb0, TLB_INST_4K, 128, " TLB_INST 4 KByte pages, 4-way set associative" },
|
|
|
|
{ 0xb1, TLB_INST_2M_4M, 4, " TLB_INST 2M pages, 4-way, 8 entries or 4M pages, 4-way entries" },
|
|
|
|
{ 0xb2, TLB_INST_4K, 64, " TLB_INST 4KByte pages, 4-way set associative" },
|
|
|
|
{ 0xb3, TLB_DATA_4K, 128, " TLB_DATA 4 KByte pages, 4-way set associative" },
|
|
|
|
{ 0xb4, TLB_DATA_4K, 256, " TLB_DATA 4 KByte pages, 4-way associative" },
|
2015-02-21 22:41:50 +00:00
|
|
|
{ 0xb5, TLB_INST_4K, 64, " TLB_INST 4 KByte pages, 8-way set associative" },
|
|
|
|
{ 0xb6, TLB_INST_4K, 128, " TLB_INST 4 KByte pages, 8-way set associative" },
|
2012-06-28 01:02:16 +00:00
|
|
|
{ 0xba, TLB_DATA_4K, 64, " TLB_DATA 4 KByte pages, 4-way associative" },
|
|
|
|
{ 0xc0, TLB_DATA_4K_4M, 8, " TLB_DATA 4 KByte and 4 MByte pages, 4-way associative" },
|
2013-12-23 12:16:58 +00:00
|
|
|
{ 0xc1, STLB_4K_2M, 1024, " STLB 4 KByte and 2 MByte pages, 8-way associative" },
|
2019-09-15 09:09:25 +00:00
|
|
|
{ 0xc2, TLB_DATA_2M_4M, 16, " TLB_DATA 2 MByte/4MByte pages, 4-way associative" },
|
2012-06-28 01:02:16 +00:00
|
|
|
{ 0xca, STLB_4K, 512, " STLB 4 KByte pages, 4-way associative" },
|
|
|
|
{ 0x00, 0, 0 }
|
|
|
|
};
|
|
|
|
|
2013-06-18 22:23:59 +00:00
|
|
|
static void intel_tlb_lookup(const unsigned char desc)
|
2012-06-28 01:02:16 +00:00
|
|
|
{
|
|
|
|
unsigned char k;
|
|
|
|
if (desc == 0)
|
|
|
|
return;
|
|
|
|
|
|
|
|
/* look up this descriptor in the table */
|
2019-09-15 09:09:25 +00:00
|
|
|
for (k = 0; intel_tlb_table[k].descriptor != desc &&
|
|
|
|
intel_tlb_table[k].descriptor != 0; k++)
|
2012-06-28 01:02:16 +00:00
|
|
|
;
|
|
|
|
|
|
|
|
if (intel_tlb_table[k].tlb_type == 0)
|
|
|
|
return;
|
|
|
|
|
|
|
|
switch (intel_tlb_table[k].tlb_type) {
|
|
|
|
case STLB_4K:
|
|
|
|
if (tlb_lli_4k[ENTRIES] < intel_tlb_table[k].entries)
|
|
|
|
tlb_lli_4k[ENTRIES] = intel_tlb_table[k].entries;
|
|
|
|
if (tlb_lld_4k[ENTRIES] < intel_tlb_table[k].entries)
|
|
|
|
tlb_lld_4k[ENTRIES] = intel_tlb_table[k].entries;
|
|
|
|
break;
|
2013-12-23 12:16:58 +00:00
|
|
|
case STLB_4K_2M:
|
|
|
|
if (tlb_lli_4k[ENTRIES] < intel_tlb_table[k].entries)
|
|
|
|
tlb_lli_4k[ENTRIES] = intel_tlb_table[k].entries;
|
|
|
|
if (tlb_lld_4k[ENTRIES] < intel_tlb_table[k].entries)
|
|
|
|
tlb_lld_4k[ENTRIES] = intel_tlb_table[k].entries;
|
|
|
|
if (tlb_lli_2m[ENTRIES] < intel_tlb_table[k].entries)
|
|
|
|
tlb_lli_2m[ENTRIES] = intel_tlb_table[k].entries;
|
|
|
|
if (tlb_lld_2m[ENTRIES] < intel_tlb_table[k].entries)
|
|
|
|
tlb_lld_2m[ENTRIES] = intel_tlb_table[k].entries;
|
|
|
|
if (tlb_lli_4m[ENTRIES] < intel_tlb_table[k].entries)
|
|
|
|
tlb_lli_4m[ENTRIES] = intel_tlb_table[k].entries;
|
|
|
|
if (tlb_lld_4m[ENTRIES] < intel_tlb_table[k].entries)
|
|
|
|
tlb_lld_4m[ENTRIES] = intel_tlb_table[k].entries;
|
|
|
|
break;
|
2012-06-28 01:02:16 +00:00
|
|
|
case TLB_INST_ALL:
|
|
|
|
if (tlb_lli_4k[ENTRIES] < intel_tlb_table[k].entries)
|
|
|
|
tlb_lli_4k[ENTRIES] = intel_tlb_table[k].entries;
|
|
|
|
if (tlb_lli_2m[ENTRIES] < intel_tlb_table[k].entries)
|
|
|
|
tlb_lli_2m[ENTRIES] = intel_tlb_table[k].entries;
|
|
|
|
if (tlb_lli_4m[ENTRIES] < intel_tlb_table[k].entries)
|
|
|
|
tlb_lli_4m[ENTRIES] = intel_tlb_table[k].entries;
|
|
|
|
break;
|
|
|
|
case TLB_INST_4K:
|
|
|
|
if (tlb_lli_4k[ENTRIES] < intel_tlb_table[k].entries)
|
|
|
|
tlb_lli_4k[ENTRIES] = intel_tlb_table[k].entries;
|
|
|
|
break;
|
|
|
|
case TLB_INST_4M:
|
|
|
|
if (tlb_lli_4m[ENTRIES] < intel_tlb_table[k].entries)
|
|
|
|
tlb_lli_4m[ENTRIES] = intel_tlb_table[k].entries;
|
|
|
|
break;
|
|
|
|
case TLB_INST_2M_4M:
|
|
|
|
if (tlb_lli_2m[ENTRIES] < intel_tlb_table[k].entries)
|
|
|
|
tlb_lli_2m[ENTRIES] = intel_tlb_table[k].entries;
|
|
|
|
if (tlb_lli_4m[ENTRIES] < intel_tlb_table[k].entries)
|
|
|
|
tlb_lli_4m[ENTRIES] = intel_tlb_table[k].entries;
|
|
|
|
break;
|
|
|
|
case TLB_DATA_4K:
|
|
|
|
case TLB_DATA0_4K:
|
|
|
|
if (tlb_lld_4k[ENTRIES] < intel_tlb_table[k].entries)
|
|
|
|
tlb_lld_4k[ENTRIES] = intel_tlb_table[k].entries;
|
|
|
|
break;
|
|
|
|
case TLB_DATA_4M:
|
|
|
|
case TLB_DATA0_4M:
|
|
|
|
if (tlb_lld_4m[ENTRIES] < intel_tlb_table[k].entries)
|
|
|
|
tlb_lld_4m[ENTRIES] = intel_tlb_table[k].entries;
|
|
|
|
break;
|
|
|
|
case TLB_DATA_2M_4M:
|
|
|
|
case TLB_DATA0_2M_4M:
|
|
|
|
if (tlb_lld_2m[ENTRIES] < intel_tlb_table[k].entries)
|
|
|
|
tlb_lld_2m[ENTRIES] = intel_tlb_table[k].entries;
|
|
|
|
if (tlb_lld_4m[ENTRIES] < intel_tlb_table[k].entries)
|
|
|
|
tlb_lld_4m[ENTRIES] = intel_tlb_table[k].entries;
|
|
|
|
break;
|
|
|
|
case TLB_DATA_4K_4M:
|
|
|
|
if (tlb_lld_4k[ENTRIES] < intel_tlb_table[k].entries)
|
|
|
|
tlb_lld_4k[ENTRIES] = intel_tlb_table[k].entries;
|
|
|
|
if (tlb_lld_4m[ENTRIES] < intel_tlb_table[k].entries)
|
|
|
|
tlb_lld_4m[ENTRIES] = intel_tlb_table[k].entries;
|
|
|
|
break;
|
2013-12-23 12:16:58 +00:00
|
|
|
case TLB_DATA_1G:
|
|
|
|
if (tlb_lld_1g[ENTRIES] < intel_tlb_table[k].entries)
|
|
|
|
tlb_lld_1g[ENTRIES] = intel_tlb_table[k].entries;
|
2012-06-28 01:02:16 +00:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2013-06-18 22:23:59 +00:00
|
|
|
static void intel_detect_tlb(struct cpuinfo_x86 *c)
|
2012-06-28 01:02:16 +00:00
|
|
|
{
|
|
|
|
int i, j, n;
|
|
|
|
unsigned int regs[4];
|
|
|
|
unsigned char *desc = (unsigned char *)regs;
|
2012-08-06 17:00:37 +00:00
|
|
|
|
|
|
|
if (c->cpuid_level < 2)
|
|
|
|
return;
|
|
|
|
|
2012-06-28 01:02:16 +00:00
|
|
|
/* Number of times to iterate */
|
|
|
|
n = cpuid_eax(2) & 0xFF;
|
|
|
|
|
|
|
|
for (i = 0 ; i < n ; i++) {
|
|
|
|
cpuid(2, &regs[0], &regs[1], &regs[2], &regs[3]);
|
|
|
|
|
|
|
|
/* If bit 31 is set, this is an unknown format */
|
|
|
|
for (j = 0 ; j < 3 ; j++)
|
|
|
|
if (regs[j] & (1 << 31))
|
|
|
|
regs[j] = 0;
|
|
|
|
|
|
|
|
/* Byte 0 is level count, not a descriptor */
|
|
|
|
for (j = 1 ; j < 16 ; j++)
|
|
|
|
intel_tlb_lookup(desc[j]);
|
|
|
|
}
|
|
|
|
}
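For reference, the same leaf-2 walk can be reproduced from userspace. The sketch below is illustrative only and assumes GCC/Clang's <cpuid.h>; it prints the raw descriptor bytes that intel_tlb_lookup() matches against intel_tlb_table[]:

	#include <cpuid.h>
	#include <stdio.h>

	int main(void)
	{
		unsigned int regs[4];
		unsigned char *desc = (unsigned char *)regs;
		int i, j, n;

		__cpuid(2, regs[0], regs[1], regs[2], regs[3]);
		n = regs[0] & 0xFF;	/* number of times to execute CPUID(2) */

		for (i = 0; i < n; i++) {
			__cpuid(2, regs[0], regs[1], regs[2], regs[3]);

			/* a set bit 31 marks a register with no valid descriptors */
			for (j = 0; j < 4; j++)
				if (regs[j] & (1u << 31))
					regs[j] = 0;

			/* byte 0 is the iteration count, not a descriptor */
			for (j = 1; j < 16; j++)
				if (desc[j])
					printf("descriptor 0x%02x\n", desc[j]);
		}
		return 0;
	}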
|
|
|
|
|
2013-06-18 22:23:59 +00:00
|
|
|
static const struct cpu_dev intel_cpu_dev = {
|
2005-04-16 22:20:36 +00:00
|
|
|
.c_vendor = "Intel",
|
2008-02-22 22:09:42 +00:00
|
|
|
.c_ident = { "GenuineIntel" },
|
2008-09-09 23:40:35 +00:00
|
|
|
#ifdef CONFIG_X86_32
|
2013-10-21 08:35:20 +00:00
|
|
|
.legacy_models = {
|
|
|
|
{ .family = 4, .model_names =
|
2008-02-22 22:09:42 +00:00
|
|
|
{
|
|
|
|
[0] = "486 DX-25/33",
|
|
|
|
[1] = "486 DX-50",
|
|
|
|
[2] = "486 SX",
|
|
|
|
[3] = "486 DX/2",
|
|
|
|
[4] = "486 SL",
|
|
|
|
[5] = "486 SX/2",
|
|
|
|
[7] = "486 DX/2-WB",
|
|
|
|
[8] = "486 DX/4",
|
2005-04-16 22:20:36 +00:00
|
|
|
[9] = "486 DX/4-WB"
|
|
|
|
}
|
|
|
|
},
|
2013-10-21 08:35:20 +00:00
|
|
|
{ .family = 5, .model_names =
|
2008-02-22 22:09:42 +00:00
|
|
|
{
|
|
|
|
[0] = "Pentium 60/66 A-step",
|
|
|
|
[1] = "Pentium 60/66",
|
2005-04-16 22:20:36 +00:00
|
|
|
[2] = "Pentium 75 - 200",
|
2008-02-22 22:09:42 +00:00
|
|
|
[3] = "OverDrive PODP5V83",
|
2005-04-16 22:20:36 +00:00
|
|
|
[4] = "Pentium MMX",
|
2008-02-22 22:09:42 +00:00
|
|
|
[7] = "Mobile Pentium 75 - 200",
|
2014-10-07 00:19:49 +00:00
|
|
|
[8] = "Mobile Pentium MMX",
|
|
|
|
[9] = "Quark SoC X1000",
|
2005-04-16 22:20:36 +00:00
|
|
|
}
|
|
|
|
},
|
2013-10-21 08:35:20 +00:00
|
|
|
{ .family = 6, .model_names =
|
2008-02-22 22:09:42 +00:00
|
|
|
{
|
2005-04-16 22:20:36 +00:00
|
|
|
[0] = "Pentium Pro A-step",
|
2008-02-22 22:09:42 +00:00
|
|
|
[1] = "Pentium Pro",
|
|
|
|
[3] = "Pentium II (Klamath)",
|
|
|
|
[4] = "Pentium II (Deschutes)",
|
|
|
|
[5] = "Pentium II (Deschutes)",
|
2005-04-16 22:20:36 +00:00
|
|
|
[6] = "Mobile Pentium II",
|
2008-02-22 22:09:42 +00:00
|
|
|
[7] = "Pentium III (Katmai)",
|
|
|
|
[8] = "Pentium III (Coppermine)",
|
2005-04-16 22:20:36 +00:00
|
|
|
[10] = "Pentium III (Cascades)",
|
|
|
|
[11] = "Pentium III (Tualatin)",
|
|
|
|
}
|
|
|
|
},
|
2013-10-21 08:35:20 +00:00
|
|
|
{ .family = 15, .model_names =
|
2005-04-16 22:20:36 +00:00
|
|
|
{
|
|
|
|
[0] = "Pentium 4 (Unknown)",
|
|
|
|
[1] = "Pentium 4 (Willamette)",
|
|
|
|
[2] = "Pentium 4 (Northwood)",
|
|
|
|
[4] = "Pentium 4 (Foster)",
|
|
|
|
[5] = "Pentium 4 (Foster)",
|
|
|
|
}
|
|
|
|
},
|
|
|
|
},
|
2013-10-21 08:35:20 +00:00
|
|
|
.legacy_cache_size = intel_size_cache,
|
2008-09-09 23:40:35 +00:00
|
|
|
#endif
|
2012-06-28 01:02:16 +00:00
|
|
|
.c_detect_tlb = intel_detect_tlb,
|
x86: use ELF section to list CPU vendor specific code
Replace the hardcoded list of initialization functions for each CPU
vendor by a list in an ELF section, which is read at initialization in
arch/x86/kernel/cpu/cpu.c to fill the cpu_devs[] array. The ELF
section, named .x86cpuvendor.init, is reclaimed after boot, and
contains entries of type "struct cpu_vendor_dev" which associates a
vendor number with a pointer to a "struct cpu_dev" structure.
This first modification makes it possible to remove all the
VENDOR_init_cpu() functions.
This patch also removes the hardcoded calls to early_init_amd() and
early_init_intel(). Instead, we add a "c_early_init" member to the
cpu_dev structure, which is then called if not NULL by the generic CPU
initialization code. Unfortunately, in early_cpu_detect(), this_cpu is
not yet set, so we have to use the cpu_devs[] array directly.
This patch is part of the Linux Tiny project, and is needed for a
further patch that will allow disabling compilation of unused CPU
support code.
Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
2008-02-15 11:00:23 +00:00
|
|
|
.c_early_init = early_init_intel,
|
2020-05-05 22:36:15 +00:00
|
|
|
.c_bsp_init = bsp_init_intel,
|
2005-04-16 22:20:36 +00:00
|
|
|
.c_init = init_intel,
|
2008-09-04 19:09:45 +00:00
|
|
|
.c_x86_vendor = X86_VENDOR_INTEL,
|
2005-04-16 22:20:36 +00:00
|
|
|
};
|
|
|
|
|
2008-09-04 19:09:45 +00:00
|
|
|
cpu_dev_register(intel_cpu_dev);
|
2020-01-26 20:05:35 +00:00
|
|
|
|
|
|
|
#undef pr_fmt
|
|
|
|
#define pr_fmt(fmt) "x86/split lock detection: " fmt
|
|
|
|
|
|
|
|
static const struct {
|
|
|
|
const char *option;
|
|
|
|
enum split_lock_detect_state state;
|
|
|
|
} sld_options[] __initconst = {
|
|
|
|
{ "off", sld_off },
|
|
|
|
{ "warn", sld_warn },
|
|
|
|
{ "fatal", sld_fatal },
|
2021-04-19 21:49:56 +00:00
|
|
|
{ "ratelimit:", sld_ratelimit },
|
2020-01-26 20:05:35 +00:00
|
|
|
};
|
|
|
|
|
2021-04-19 21:49:56 +00:00
|
|
|
static struct ratelimit_state bld_ratelimit;
|
|
|
|
|
2020-01-26 20:05:35 +00:00
|
|
|
static inline bool match_option(const char *arg, int arglen, const char *opt)
|
|
|
|
{
|
2021-04-19 21:49:56 +00:00
|
|
|
int len = strlen(opt), ratelimit;
|
|
|
|
|
|
|
|
if (strncmp(arg, opt, len))
|
|
|
|
return false;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Min ratelimit is 1 bus lock/sec.
|
|
|
|
* Max ratelimit is 1000 bus locks/sec.
|
|
|
|
*/
|
|
|
|
if (sscanf(arg, "ratelimit:%d", &ratelimit) == 1 &&
|
|
|
|
ratelimit > 0 && ratelimit <= 1000) {
|
|
|
|
ratelimit_state_init(&bld_ratelimit, HZ, ratelimit);
|
|
|
|
ratelimit_set_flags(&bld_ratelimit, RATELIMIT_MSG_ON_RELEASE);
|
|
|
|
return true;
|
|
|
|
}
|
2020-01-26 20:05:35 +00:00
|
|
|
|
2021-04-19 21:49:56 +00:00
|
|
|
return len == arglen;
|
2020-01-26 20:05:35 +00:00
|
|
|
}
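The accepted forms of the split_lock_detect= boot parameter are therefore the exact strings in sld_options[] plus a bounded "ratelimit:N". A standalone sketch of the same bounds check (illustrative only, not kernel code):

	#include <stdio.h>

	/* Mirrors the sscanf() parse above: 1 <= N <= 1000 bus locks/sec. */
	static int parse_ratelimit(const char *arg)
	{
		int ratelimit;

		if (sscanf(arg, "ratelimit:%d", &ratelimit) == 1 &&
		    ratelimit > 0 && ratelimit <= 1000)
			return ratelimit;
		return -1;	/* rejected; exact-match options handle the rest */
	}

	int main(void)
	{
		printf("%d\n", parse_ratelimit("ratelimit:10"));	/* 10 */
		printf("%d\n", parse_ratelimit("ratelimit:5000"));	/* -1, above the cap */
		printf("%d\n", parse_ratelimit("warn"));		/* -1, plain option */
		return 0;
	}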
|
|
|
|
|
2020-03-25 03:09:23 +00:00
|
|
|
static bool split_lock_verify_msr(bool on)
|
|
|
|
{
|
|
|
|
u64 ctrl, tmp;
|
|
|
|
|
|
|
|
if (rdmsrl_safe(MSR_TEST_CTRL, &ctrl))
|
|
|
|
return false;
|
|
|
|
if (on)
|
|
|
|
ctrl |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
|
|
|
|
else
|
|
|
|
ctrl &= ~MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
|
|
|
|
if (wrmsrl_safe(MSR_TEST_CTRL, ctrl))
|
|
|
|
return false;
|
|
|
|
rdmsrl(MSR_TEST_CTRL, tmp);
|
|
|
|
return ctrl == tmp;
|
|
|
|
}
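split_lock_verify_msr() performs this read-modify-write-verify from kernel context; for debugging, the same bit can be inspected from userspace. A sketch follows (illustration only; it assumes the msr driver is loaded and root privileges, and the MSR index 0x33 and bit position 29 are taken from the kernel's MSR_TEST_CTRL and MSR_TEST_CTRL_SPLIT_LOCK_DETECT definitions, i.e. they are assumptions of this note):

	#include <fcntl.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		uint64_t val;
		int fd = open("/dev/cpu/0/msr", O_RDONLY);

		/* pread() offset selects the MSR index: 0x33 = MSR_TEST_CTRL (assumed) */
		if (fd < 0 || pread(fd, &val, sizeof(val), 0x33) != sizeof(val)) {
			perror("MSR_TEST_CTRL");
			return 1;
		}
		printf("split lock detect: %s\n", (val >> 29) & 1 ? "on" : "off");
		close(fd);
		return 0;
	}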
|
|
|
|
|
2021-03-22 13:53:24 +00:00
|
|
|
static void __init sld_state_setup(void)
|
2020-01-26 20:05:35 +00:00
|
|
|
{
|
2020-03-25 03:09:23 +00:00
|
|
|
enum split_lock_detect_state state = sld_warn;
|
2020-01-26 20:05:35 +00:00
|
|
|
char arg[20];
|
|
|
|
int i, ret;
|
|
|
|
|
2021-03-22 13:53:24 +00:00
|
|
|
if (!boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT) &&
|
|
|
|
!boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT))
|
2020-03-25 03:09:23 +00:00
|
|
|
return;
|
2020-01-26 20:05:35 +00:00
|
|
|
|
|
|
|
ret = cmdline_find_option(boot_command_line, "split_lock_detect",
|
|
|
|
arg, sizeof(arg));
|
|
|
|
if (ret >= 0) {
|
|
|
|
for (i = 0; i < ARRAY_SIZE(sld_options); i++) {
|
|
|
|
if (match_option(arg, ret, sld_options[i].option)) {
|
2020-03-25 03:09:23 +00:00
|
|
|
state = sld_options[i].state;
|
2020-01-26 20:05:35 +00:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
2021-03-22 13:53:24 +00:00
|
|
|
sld_state = state;
|
|
|
|
}
|
2020-01-26 20:05:35 +00:00
|
|
|
|
2021-03-22 13:53:24 +00:00
|
|
|
static void __init __split_lock_setup(void)
|
|
|
|
{
|
|
|
|
if (!split_lock_verify_msr(false)) {
|
|
|
|
pr_info("MSR access failed: Disabled\n");
|
2020-03-25 03:09:23 +00:00
|
|
|
return;
|
2020-01-26 20:05:35 +00:00
|
|
|
}
|
2020-03-25 03:09:23 +00:00
|
|
|
|
2020-03-25 03:09:24 +00:00
|
|
|
rdmsrl(MSR_TEST_CTRL, msr_test_ctrl_cache);
|
|
|
|
|
2020-03-25 03:09:23 +00:00
|
|
|
if (!split_lock_verify_msr(true)) {
|
|
|
|
pr_info("MSR access failed: Disabled\n");
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2021-03-22 13:53:24 +00:00
|
|
|
/* Restore the MSR to its cached value. */
|
|
|
|
wrmsrl(MSR_TEST_CTRL, msr_test_ctrl_cache);
|
|
|
|
|
2020-03-25 03:09:23 +00:00
|
|
|
setup_force_cpu_cap(X86_FEATURE_SPLIT_LOCK_DETECT);
|
2020-01-26 20:05:35 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
2020-03-25 03:09:23 +00:00
|
|
|
* MSR_TEST_CTRL is per core, but we treat it like a per CPU MSR. Locking
|
|
|
|
* is not implemented as one thread could undo the setting of the other
|
|
|
|
* thread immediately after dropping the lock anyway.
|
2020-01-26 20:05:35 +00:00
|
|
|
*/
|
2020-03-25 03:09:23 +00:00
|
|
|
static void sld_update_msr(bool on)
|
2020-01-26 20:05:35 +00:00
|
|
|
{
|
2020-03-25 03:09:24 +00:00
|
|
|
u64 test_ctrl_val = msr_test_ctrl_cache;
|
2020-01-26 20:05:35 +00:00
|
|
|
|
|
|
|
if (on)
|
|
|
|
test_ctrl_val |= MSR_TEST_CTRL_SPLIT_LOCK_DETECT;
|
|
|
|
|
2020-03-25 03:09:23 +00:00
|
|
|
wrmsrl(MSR_TEST_CTRL, test_ctrl_val);
|
2020-01-26 20:05:35 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static void split_lock_init(void)
|
|
|
|
{
|
2021-04-19 21:49:56 +00:00
|
|
|
/*
|
|
|
|
* #DB for bus lock handles ratelimit and #AC for split lock is
|
|
|
|
* disabled.
|
|
|
|
*/
|
|
|
|
if (sld_state == sld_ratelimit) {
|
|
|
|
split_lock_verify_msr(false);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
x86/split_lock: Don't write MSR_TEST_CTRL on CPUs that aren't whitelisted
Choo! Choo! All aboard the Split Lock Express, with direct service to
Wreckage!
Skip split_lock_verify_msr() if the CPU isn't whitelisted as a possible
SLD-enabled CPU model to avoid writing MSR_TEST_CTRL. MSR_TEST_CTRL
exists, and is writable, on many generations of CPUs. Writing the MSR,
even with '0', can result in bizarre, undocumented behavior.
This fixes a crash on Haswell when resuming from suspend with a live KVM
guest. Because APs use the standard SMP boot flow for resume, they will
go through split_lock_init() and the subsequent RDMSR/WRMSR sequence,
which runs even when sld_state==sld_off to ensure SLD is disabled. On
Haswell (at least, my Haswell), writing MSR_TEST_CTRL with '0' will
succeed and _may_ take the SMT _sibling_ out of VMX root mode.
When KVM has an active guest, KVM performs VMXON as part of CPU onlining
(see kvm_starting_cpu()). Because SMP boot is serialized, the resulting
flow is effectively:
on_each_ap_cpu() {
WRMSR(MSR_TEST_CTRL, 0)
VMXON
}
As a result, the WRMSR can disable VMX on a different CPU that has
already done VMXON. This ultimately results in a #UD on VMPTRLD when
KVM regains control and attempts to run its vCPUs.
The above voodoo was confirmed by reworking KVM's VMXON flow to write
MSR_TEST_CTRL prior to VMXON, and to serialize the sequence as above.
Further verification of the insanity was done by redoing VMXON on all
APs after the initial WRMSR->VMXON sequence. The additional VMXON,
which should VM-Fail, occasionally succeeded, and also eliminated the
unexpected #UD on VMPTRLD.
The damage done by writing MSR_TEST_CTRL doesn't appear to be limited
to VMX, e.g. after suspend with an active KVM guest, subsequent reboots
almost always hang (even when fudging VMXON), a #UD on a random Jcc was
observed, suspend/resume stability is qualitatively poor, and so on and
so forth.
kernel BUG at arch/x86/kvm/x86.c:386!
CPU: 1 PID: 2592 Comm: CPU 6/KVM Tainted: G D
Hardware name: ASUS Q87M-E/Q87M-E, BIOS 1102 03/03/2014
RIP: 0010:kvm_spurious_fault+0xf/0x20
Call Trace:
vmx_vcpu_load_vmcs+0x1fb/0x2b0
vmx_vcpu_load+0x3e/0x160
kvm_arch_vcpu_load+0x48/0x260
finish_task_switch+0x140/0x260
__schedule+0x460/0x720
_cond_resched+0x2d/0x40
kvm_arch_vcpu_ioctl_run+0x82e/0x1ca0
kvm_vcpu_ioctl+0x363/0x5c0
ksys_ioctl+0x88/0xa0
__x64_sys_ioctl+0x16/0x20
do_syscall_64+0x4c/0x170
entry_SYSCALL_64_after_hwframe+0x44/0xa9
Fixes: dbaba47085b0c ("x86/split_lock: Rework the initialization flow of split lock detection")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20200605192605.7439-1-sean.j.christopherson@intel.com
2020-06-05 19:26:05 +00:00
|
|
|
if (cpu_model_supports_sld)
|
|
|
|
split_lock_verify_msr(sld_state != sld_off);
|
2020-01-26 20:05:35 +00:00
|
|
|
}
|
|
|
|
|
2020-04-10 11:54:00 +00:00
|
|
|
static void split_lock_warn(unsigned long ip)
|
2020-01-26 20:05:35 +00:00
|
|
|
{
|
|
|
|
pr_warn_ratelimited("#AC: %s/%d took a split_lock trap at address: 0x%lx\n",
|
2020-04-10 11:54:00 +00:00
|
|
|
current->comm, current->pid, ip);
|
2020-01-26 20:05:35 +00:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Disable the split lock detection for this task so it can make
|
|
|
|
* progress and set TIF_SLD so the detection is re-enabled via
|
|
|
|
* switch_to_sld() when the task is scheduled out.
|
|
|
|
*/
|
2020-03-25 03:09:23 +00:00
|
|
|
sld_update_msr(false);
|
2020-01-26 20:05:35 +00:00
|
|
|
set_tsk_thread_flag(current, TIF_SLD);
|
2020-04-10 11:54:00 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
bool handle_guest_split_lock(unsigned long ip)
|
|
|
|
{
|
|
|
|
if (sld_state == sld_warn) {
|
|
|
|
split_lock_warn(ip);
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
pr_warn_once("#AC: %s/%d %s split_lock trap at address: 0x%lx\n",
|
|
|
|
current->comm, current->pid,
|
|
|
|
sld_state == sld_fatal ? "fatal" : "bogus", ip);
|
|
|
|
|
|
|
|
current->thread.error_code = 0;
|
|
|
|
current->thread.trap_nr = X86_TRAP_AC;
|
|
|
|
force_sig_fault(SIGBUS, BUS_ADRALN, NULL);
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
EXPORT_SYMBOL_GPL(handle_guest_split_lock);
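A caller of this export, in hedged outline form (modelled on how a hypervisor such as KVM can consume it, not a verbatim copy of any module's code), lets the host policy decide whether a guest-triggered split-lock #AC is warned about or forwarded:

	static int example_handle_guest_ac(unsigned long guest_rip)
	{
		if (handle_guest_split_lock(guest_rip))
			return 1;	/* sld_warn: logged and rate-limited, resume the guest */

		/* sld_fatal (or unexpected #AC): SIGBUS has been queued for current */
		return 0;
	}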
|
|
|
|
|
2021-03-22 13:53:24 +00:00
|
|
|
static void bus_lock_init(void)
|
|
|
|
{
|
|
|
|
u64 val;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Warn and fatal are handled by #AC for split lock if #AC for
|
|
|
|
* split lock is supported.
|
|
|
|
*/
|
|
|
|
if (!boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT) ||
|
|
|
|
(boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT) &&
|
|
|
|
(sld_state == sld_warn || sld_state == sld_fatal)) ||
|
|
|
|
sld_state == sld_off)
|
|
|
|
return;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Enable #DB for bus lock. All bus locks are handled in #DB except
|
|
|
|
* split locks, which are handled in #AC in the fatal case.
|
|
|
|
*/
|
|
|
|
rdmsrl(MSR_IA32_DEBUGCTLMSR, val);
|
|
|
|
val |= DEBUGCTLMSR_BUS_LOCK_DETECT;
|
|
|
|
wrmsrl(MSR_IA32_DEBUGCTLMSR, val);
|
|
|
|
}
|
|
|
|
|
2020-04-10 11:54:00 +00:00
|
|
|
bool handle_user_split_lock(struct pt_regs *regs, long error_code)
|
|
|
|
{
|
|
|
|
if ((regs->flags & X86_EFLAGS_AC) || sld_state == sld_fatal)
|
|
|
|
return false;
|
|
|
|
split_lock_warn(regs->ip);
|
2020-01-26 20:05:35 +00:00
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
2021-03-22 13:53:24 +00:00
|
|
|
void handle_bus_lock(struct pt_regs *regs)
|
|
|
|
{
|
|
|
|
switch (sld_state) {
|
|
|
|
case sld_off:
|
|
|
|
break;
|
2021-04-19 21:49:56 +00:00
|
|
|
case sld_ratelimit:
|
|
|
|
/* Enforce no more than bld_ratelimit bus locks/sec. */
|
|
|
|
while (!__ratelimit(&bld_ratelimit))
|
|
|
|
msleep(20);
|
|
|
|
/* Warn on the bus lock. */
|
|
|
|
fallthrough;
|
2021-03-22 13:53:24 +00:00
|
|
|
case sld_warn:
|
|
|
|
pr_warn_ratelimited("#DB: %s/%d took a bus_lock trap at address: 0x%lx\n",
|
|
|
|
current->comm, current->pid, regs->ip);
|
|
|
|
break;
|
|
|
|
case sld_fatal:
|
|
|
|
force_sig_fault(SIGBUS, BUS_ADRALN, NULL);
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2020-01-26 20:05:35 +00:00
|
|
|
/*
|
|
|
|
* This function is called only when switching between tasks with
|
|
|
|
* different split-lock detection modes. It sets the MSR for the
|
|
|
|
* mode of the new task. This is right most of the time, but since
|
|
|
|
* the MSR is shared by hyperthreads on a physical core there can
|
|
|
|
* be glitches when the two threads need different modes.
|
|
|
|
*/
|
|
|
|
void switch_to_sld(unsigned long tifn)
|
|
|
|
{
|
2020-03-25 03:09:23 +00:00
|
|
|
sld_update_msr(!(tifn & _TIF_SLD));
|
2020-01-26 20:05:35 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
2020-04-16 20:57:53 +00:00
|
|
|
* Bits in the IA32_CORE_CAPABILITIES are not architectural, so they should
|
|
|
|
* only be trusted if it is confirmed that a CPU model implements a
|
|
|
|
* specific feature at a particular bit position.
|
|
|
|
*
|
|
|
|
* The possible driver data field values:
|
|
|
|
*
|
|
|
|
* - 0: CPU models that are known to have the per-core split-lock detection
|
|
|
|
* feature even though they do not enumerate IA32_CORE_CAPABILITIES.
|
|
|
|
*
|
|
|
|
* - 1: CPU models which may enumerate IA32_CORE_CAPABILITIES and if so use
|
|
|
|
* bit 5 to enumerate the per-core split-lock detection feature.
|
2020-01-26 20:05:35 +00:00
|
|
|
*/
|
|
|
|
static const struct x86_cpu_id split_lock_cpu_ids[] __initconst = {
|
2020-04-16 20:57:52 +00:00
|
|
|
X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_X, 0),
|
|
|
|
X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_L, 0),
|
2020-04-30 23:46:35 +00:00
|
|
|
X86_MATCH_INTEL_FAM6_MODEL(ICELAKE_D, 0),
|
2020-04-16 20:57:54 +00:00
|
|
|
X86_MATCH_INTEL_FAM6_MODEL(ATOM_TREMONT, 1),
|
|
|
|
X86_MATCH_INTEL_FAM6_MODEL(ATOM_TREMONT_D, 1),
|
|
|
|
X86_MATCH_INTEL_FAM6_MODEL(ATOM_TREMONT_L, 1),
|
2020-04-30 23:46:35 +00:00
|
|
|
X86_MATCH_INTEL_FAM6_MODEL(TIGERLAKE_L, 1),
|
|
|
|
X86_MATCH_INTEL_FAM6_MODEL(TIGERLAKE, 1),
|
2020-07-24 23:45:20 +00:00
|
|
|
X86_MATCH_INTEL_FAM6_MODEL(SAPPHIRERAPIDS_X, 1),
|
|
|
|
X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE, 1),
|
2021-02-01 19:00:07 +00:00
|
|
|
X86_MATCH_INTEL_FAM6_MODEL(ALDERLAKE_L, 1),
|
2020-01-26 20:05:35 +00:00
|
|
|
{}
|
|
|
|
};
|
|
|
|
|
2021-03-22 13:53:24 +00:00
|
|
|
static void __init split_lock_setup(struct cpuinfo_x86 *c)
|
2020-01-26 20:05:35 +00:00
|
|
|
{
|
2020-04-16 20:57:53 +00:00
|
|
|
const struct x86_cpu_id *m;
|
|
|
|
u64 ia32_core_caps;
|
|
|
|
|
|
|
|
if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
|
|
|
|
return;
|
2020-01-26 20:05:35 +00:00
|
|
|
|
2020-04-16 20:57:53 +00:00
|
|
|
m = x86_match_cpu(split_lock_cpu_ids);
|
|
|
|
if (!m)
|
2020-01-26 20:05:35 +00:00
|
|
|
return;
|
2020-04-16 20:57:53 +00:00
|
|
|
|
|
|
|
switch (m->driver_data) {
|
|
|
|
case 0:
|
|
|
|
break;
|
|
|
|
case 1:
|
|
|
|
if (!cpu_has(c, X86_FEATURE_CORE_CAPABILITIES))
|
|
|
|
return;
|
2020-01-26 20:05:35 +00:00
|
|
|
rdmsrl(MSR_IA32_CORE_CAPS, ia32_core_caps);
|
2020-04-16 20:57:53 +00:00
|
|
|
if (!(ia32_core_caps & MSR_IA32_CORE_CAPS_SPLIT_LOCK_DETECT))
|
|
|
|
return;
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
return;
|
2020-01-26 20:05:35 +00:00
|
|
|
}
|
|
|
|
|
x86/split_lock: Don't write MSR_TEST_CTRL on CPUs that aren't whitelisted
Choo! Choo! All aboard the Split Lock Express, with direct service to
Wreckage!
Skip split_lock_verify_msr() if the CPU isn't whitelisted as a possible
SLD-enabled CPU model to avoid writing MSR_TEST_CTRL. MSR_TEST_CTRL
exists, and is writable, on many generations of CPUs. Writing the MSR,
even with '0', can result in bizarre, undocumented behavior.
This fixes a crash on Haswell when resuming from suspend with a live KVM
guest. Because APs use the standard SMP boot flow for resume, they will
go through split_lock_init() and the subsequent RDMSR/WRMSR sequence,
which runs even when sld_state==sld_off to ensure SLD is disabled. On
Haswell (at least, my Haswell), writing MSR_TEST_CTRL with '0' will
succeed and _may_ take the SMT _sibling_ out of VMX root mode.
When KVM has an active guest, KVM performs VMXON as part of CPU onlining
(see kvm_starting_cpu()). Because SMP boot is serialized, the resulting
flow is effectively:
on_each_ap_cpu() {
WRMSR(MSR_TEST_CTRL, 0)
VMXON
}
As a result, the WRMSR can disable VMX on a different CPU that has
already done VMXON. This ultimately results in a #UD on VMPTRLD when
KVM regains control and attempts to run its vCPUs.
The above voodoo was confirmed by reworking KVM's VMXON flow to write
MSR_TEST_CTRL prior to VMXON, and to serialize the sequence as above.
Further verification of the insanity was done by redoing VMXON on all
APs after the initial WRMSR->VMXON sequence. The additional VMXON,
which should VM-Fail, occasionally succeeded, and also eliminated the
unexpected #UD on VMPTRLD.
The damage done by writing MSR_TEST_CTRL doesn't appear to be limited
to VMX, e.g. after suspend with an active KVM guest, subsequent reboots
almost always hang (even when fudging VMXON), a #UD on a random Jcc was
observed, suspend/resume stability is qualitatively poor, and so on and
so forth.
kernel BUG at arch/x86/kvm/x86.c:386!
CPU: 1 PID: 2592 Comm: CPU 6/KVM Tainted: G D
Hardware name: ASUS Q87M-E/Q87M-E, BIOS 1102 03/03/2014
RIP: 0010:kvm_spurious_fault+0xf/0x20
Call Trace:
vmx_vcpu_load_vmcs+0x1fb/0x2b0
vmx_vcpu_load+0x3e/0x160
kvm_arch_vcpu_load+0x48/0x260
finish_task_switch+0x140/0x260
__schedule+0x460/0x720
_cond_resched+0x2d/0x40
kvm_arch_vcpu_ioctl_run+0x82e/0x1ca0
kvm_vcpu_ioctl+0x363/0x5c0
ksys_ioctl+0x88/0xa0
__x64_sys_ioctl+0x16/0x20
do_syscall_64+0x4c/0x170
entry_SYSCALL_64_after_hwframe+0x44/0xa9
Fixes: dbaba47085b0c ("x86/split_lock: Rework the initialization flow of split lock detection")
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/20200605192605.7439-1-sean.j.christopherson@intel.com
2020-06-05 19:26:05 +00:00
        cpu_model_supports_sld = true;
        __split_lock_setup();
}

static void sld_state_show(void)
{
        if (!boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT) &&
            !boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT))
                return;

        switch (sld_state) {
        case sld_off:
                pr_info("disabled\n");
                break;
        case sld_warn:
                if (boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT))
                        pr_info("#AC: crashing the kernel on kernel split_locks and warning on user-space split_locks\n");
                else if (boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT))
                        pr_info("#DB: warning on user-space bus_locks\n");
                break;
        case sld_fatal:
                if (boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT)) {
                        pr_info("#AC: crashing the kernel on kernel split_locks and sending SIGBUS on user-space split_locks\n");
                } else if (boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT)) {
                        pr_info("#DB: sending SIGBUS on user-space bus_locks%s\n",
                                boot_cpu_has(X86_FEATURE_SPLIT_LOCK_DETECT) ?
                                " from non-WB" : "");
                }
                break;
        case sld_ratelimit:
                if (boot_cpu_has(X86_FEATURE_BUS_LOCK_DETECT))
                        pr_info("#DB: setting system wide bus lock rate limit to %u/sec\n", bld_ratelimit.burst);
                break;
        }
}

void __init sld_setup(struct cpuinfo_x86 *c)
{
        split_lock_setup(c);
        sld_state_setup();
        sld_state_show();
}

#define X86_HYBRID_CPU_TYPE_ID_SHIFT 24

/**
 * get_this_hybrid_cpu_type() - Get the type of this hybrid CPU
 *
 * Returns the CPU type [31:24] (i.e., Atom or Core) of a CPU in
 * a hybrid processor. If the processor is not hybrid, returns 0.
 */
u8 get_this_hybrid_cpu_type(void)
{
        if (!cpu_feature_enabled(X86_FEATURE_HYBRID_CPU))
                return 0;

        return cpuid_eax(0x0000001a) >> X86_HYBRID_CPU_TYPE_ID_SHIFT;
}
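A minimal, hypothetical caller could look like the sketch below. Per Intel's
documentation of CPUID leaf 0x1A, the type field reads 0x40 for an Intel Core
(performance) core and 0x20 for an Intel Atom (efficiency) core; both
constants and the function name are written out here for illustration rather
than taken from a header.

        /* Hypothetical example: log the type of the core this runs on. */
        static void report_hybrid_core_type(void)
        {
                switch (get_this_hybrid_cpu_type()) {
                case 0x40:                      /* Intel Core (P-core) */
                        pr_info("running on a performance core\n");
                        break;
                case 0x20:                      /* Intel Atom (E-core) */
                        pr_info("running on an efficiency core\n");
                        break;
                default:                        /* 0: not a hybrid processor */
                        pr_info("not a hybrid processor\n");
                        break;
                }
        }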