linux-stable/arch/x86/kernel/head_32.S

/* SPDX-License-Identifier: GPL-2.0 */
/*
*
* Copyright (C) 1991, 1992 Linus Torvalds
*
* Enhanced CPU detection and feature setting code by Mike Jagdis
* and Martin Mares, November 1997.
*/
.text
#include <linux/threads.h>
#include <linux/init.h>
#include <linux/linkage.h>
#include <asm/segment.h>
#include <asm/page_types.h>
#include <asm/pgtable_types.h>
#include <asm/cache.h>
#include <asm/thread_info.h>
#include <asm/asm-offsets.h>
#include <asm/setup.h>
#include <asm/processor-flags.h>
#include <asm/msr-index.h>
#include <asm/cpufeatures.h>
#include <asm/percpu.h>
#include <asm/nops.h>
#include <asm/bootparam.h>
#include <asm/export.h>
#include <asm/pgtable_32.h>
/* Physical address */
#define pa(X) ((X) - __PAGE_OFFSET)
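/*
* Minimal worked example: with the default 3G/1G split,
* __PAGE_OFFSET = 0xC0000000, so pa(0xC0100000) evaluates to
* 0xC0100000 - 0xC0000000 = 0x00100000 -- plain subtraction turns a
* kernel virtual address into the physical address it maps to.
*/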
/*
* References to members of the new_cpu_data structure.
*/
#define X86 new_cpu_data+CPUINFO_x86
#define X86_VENDOR new_cpu_data+CPUINFO_x86_vendor
#define X86_MODEL new_cpu_data+CPUINFO_x86_model
#define X86_STEPPING new_cpu_data+CPUINFO_x86_stepping
#define X86_HARD_MATH new_cpu_data+CPUINFO_hard_math
#define X86_CPUID new_cpu_data+CPUINFO_cpuid_level
#define X86_CAPABILITY new_cpu_data+CPUINFO_x86_capability
#define X86_VENDOR_ID new_cpu_data+CPUINFO_x86_vendor_id
#define SIZEOF_PTREGS 17*4
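/*
* 17*4 matches the 32-bit struct pt_regs layout: bx, cx, dx, si, di, bp,
* ax, ds, es, fs, gs, orig_ax, ip, cs, flags, sp, ss -- 17 four-byte
* slots, 68 bytes in total.
*/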
/*
* Worst-case size of the kernel mapping we need to make:
* a relocatable kernel can live anywhere in lowmem, so we need to be able
* to map all of lowmem.
*/
KERNEL_PAGES = LOWMEM_PAGES
INIT_MAP_SIZE = PAGE_TABLE_SIZE(KERNEL_PAGES) * PAGE_SIZE
RESERVE_BRK(pagetables, INIT_MAP_SIZE)
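/*
* Rough scale, assuming the default 3G/1G split without PAE: lowmem spans
* 1 GiB = 262144 4 KiB pages, which takes 262144/1024 = 256 page tables,
* so INIT_MAP_SIZE reserves 256 * 4096 bytes = 1 MiB of brk space.
*/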
/*
* 32-bit kernel entrypoint; only used by the boot CPU. On entry,
* %esi points to the real-mode code as a 32-bit pointer.
* CS and DS must be 4 GB flat segments, but we don't depend on
* any particular GDT layout, because we load our own as soon as we
* can.
*/
__HEAD
SYM_CODE_START(startup_32)
movl pa(initial_stack),%ecx
/*
* Set segments to known values.
*/
lgdt pa(boot_gdt_descr)
movl $(__BOOT_DS),%eax
movl %eax,%ds
movl %eax,%es
movl %eax,%fs
movl %eax,%gs
movl %eax,%ss
leal -__PAGE_OFFSET(%ecx),%esp
/*
* Clear BSS first so that there are no surprises...
*/
cld
xorl %eax,%eax
movl $pa(__bss_start),%edi
movl $pa(__bss_stop),%ecx
subl %edi,%ecx
shrl $2,%ecx
rep ; stosl
/*
* Copy bootup parameters out of the way.
* Note: %esi still has the pointer to the real-mode data.
* With kexec as the boot loader, the parameter segment might be loaded
* beyond the kernel image and might not even be addressable by the early
* boot page tables (the kexec-on-panic case). Hence copy the parameters
* out of the way before initializing the page tables.
*/
movl $pa(boot_params),%edi
movl $(PARAM_SIZE/4),%ecx
cld
rep
movsl
movl pa(boot_params) + NEW_CL_POINTER,%esi
andl %esi,%esi
jz 1f # No command line
movl $pa(boot_command_line),%edi
movl $(COMMAND_LINE_SIZE/4),%ecx
rep
movsl
1:
#ifdef CONFIG_OLPC
/* save OFW's pgdir table for later use when calling into OFW */
movl %cr3, %eax
movl %eax, pa(olpc_ofw_pgd)
#endif
#ifdef CONFIG_MICROCODE
/* Early load ucode on BSP. */
call load_ucode_bsp
#endif
/* Create early pagetables. */
call mk_early_pgtbl_32
/* Do early initialization of the fixmap area */
movl $pa(initial_pg_fixmap)+PDE_IDENT_ATTR,%eax
#ifdef CONFIG_X86_PAE
#define KPMDS (((-__PAGE_OFFSET) >> 30) & 3) /* Number of kernel PMDs */
movl %eax,pa(initial_pg_pmd+0x1000*KPMDS-8)
#else
movl %eax,pa(initial_page_table+0xffc)
#endif
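/*
* KPMDS worked example with the default __PAGE_OFFSET of 0xC0000000:
* -0xC0000000 is 0x40000000 as a 32-bit value, >> 30 gives 1, & 3 gives
* 1 -- a single kernel PMD covers the kernel's 1 GiB of address space.
*/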
jmp .Ldefault_entry
SYM_CODE_END(startup_32)
#ifdef CONFIG_HOTPLUG_CPU
/*
* Boot CPU0 entry point. It's called from play_dead(). Everything has been
* set up already except the stack, so we only set up the stack here and
* then call start_secondary().
*/
SYM_FUNC_START(start_cpu0)
movl initial_stack, %ecx
movl %ecx, %esp
call *(initial_code)
1: jmp 1b
SYM_FUNC_END(start_cpu0)
#endif
/*
* Non-boot CPU entry point; entered from trampoline.S
* We can't lgdt here, because lgdt itself uses a data segment, but
* we know the trampoline has already loaded the boot_gdt for us.
*
* If CPU hotplug is not supported, this code can go into the init
* section, which will be freed later.
*/
SYM_FUNC_START(startup_32_smp)
cld
movl $(__BOOT_DS),%eax
movl %eax,%ds
movl %eax,%es
movl %eax,%fs
movl %eax,%gs
movl pa(initial_stack),%ecx
movl %eax,%ss
leal -__PAGE_OFFSET(%ecx),%esp
#ifdef CONFIG_MICROCODE
/* Early load ucode on AP. */
call load_ucode_ap
#endif
.Ldefault_entry:
movl $(CR0_STATE & ~X86_CR0_PG),%eax
movl %eax,%cr0
/*
* We want to start out with EFLAGS unambiguously cleared. Some BIOSes leave
* bits like NT set. This would confuse the debugger if this code is traced. So
* initialize them properly now before switching to protected mode. That means
* DF in particular (even though we have cleared it earlier after copying the
* command line) because GCC expects it.
*/
pushl $0
popfl
/*
* New page tables may be in 4 MB page mode and may be using global pages.
*
* NOTE! If we are on a 486 we may have no cr4 at all! Specifically, cr4 exists
* if and only if CPUID exists and has flags other than the FPU flag set.
*/
movl $-1,pa(X86_CPUID) # preset CPUID level
movl $X86_EFLAGS_ID,%ecx
pushl %ecx
popfl # set EFLAGS=ID
pushfl
popl %eax # get EFLAGS
testl $X86_EFLAGS_ID,%eax # did EFLAGS.ID remain set?
jz .Lenable_paging # hw disallowed setting of ID bit
# which means no CPUID and no CR4
xorl %eax,%eax
cpuid
movl %eax,pa(X86_CPUID) # save largest std CPUID function
movl $1,%eax
cpuid
andl $~1,%edx # Ignore CPUID.FPU
jz .Lenable_paging # No flags or only CPUID.FPU = no CR4
movl pa(mmu_cr4_features),%eax
movl %eax,%cr4
testb $X86_CR4_PAE, %al # check if PAE is enabled
jz .Lenable_paging
/* Check if extended functions are implemented */
movl $0x80000000, %eax
cpuid
/* Value must be in the range 0x80000001 to 0x8000ffff */
subl $0x80000001, %eax
cmpl $(0x8000ffff-0x80000001), %eax
ja .Lenable_paging
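/*
* The subl/cmpl/ja sequence above is the usual unsigned range check:
* subtracting the lower bound makes any value below it wrap to a huge
* unsigned number, so a single unsigned compare against (high - low)
* rejects both directions of being out of range.
*/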
/* Clear bogus XD_DISABLE bits */
call verify_cpu
mov $0x80000001, %eax
cpuid
/* Execute Disable bit supported? */
btl $(X86_FEATURE_NX & 31), %edx
jnc .Lenable_paging
/* Setup EFER (Extended Feature Enable Register) */
movl $MSR_EFER, %ecx
rdmsr
btsl $_EFER_NX, %eax
/* Make changes effective */
wrmsr
.Lenable_paging:
/*
* Enable paging
*/
movl $pa(initial_page_table), %eax
movl %eax,%cr3 /* set the page table pointer.. */
movl $CR0_STATE,%eax
movl %eax,%cr0 /* ..and set paging (PG) bit */
ljmp $__BOOT_CS,$1f /* Clear prefetch and normalize %eip */
1:
/* Shift the stack pointer to a virtual address */
addl $__PAGE_OFFSET, %esp
/*
* Start the 32-bit system setup. We need to redo some of the things done
* in 16-bit mode for the "real" operations.
*/
movl setup_once_ref,%eax
andl %eax,%eax
jz 1f # Did we do this already?
call *%eax
1:
/*
* Check if it is 486
*/
movb $4,X86 # at least 486
cmpl $-1,X86_CPUID
je .Lis486
/* get vendor info */
xorl %eax,%eax # call CPUID with 0 -> return vendor ID
cpuid
movl %eax,X86_CPUID # save CPUID level
movl %ebx,X86_VENDOR_ID # lo 4 chars
movl %edx,X86_VENDOR_ID+4 # next 4 chars
movl %ecx,X86_VENDOR_ID+8 # last 4 chars
orl %eax,%eax # do we have processor info as well?
je .Lis486
movl $1,%eax # Use the CPUID instruction to get CPU type
cpuid
movb %al,%cl # save reg for future use
andb $0x0f,%ah # mask processor family
movb %ah,X86
andb $0xf0,%al # mask model
shrb $4,%al
movb %al,X86_MODEL
andb $0x0f,%cl # mask stepping
movb %cl,X86_STEPPING
movl %edx,X86_CAPABILITY
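/*
* CPUID leaf 1 packs EAX as: stepping in bits 3:0, model in bits 7:4,
* family in bits 11:8 -- e.g. EAX = 0x0633 decodes as family 6, model 3,
* stepping 3. (The extended family/model fields are handled later in C.)
*/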
.Lis486:
movl $0x50022,%ecx # set AM, WP, NE and MP
movl %cr0,%eax
andl $0x80000011,%eax # Save PG,PE,ET
orl %ecx,%eax
movl %eax,%cr0
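/*
* 0x50022 is CR0.AM (1 << 18) | CR0.WP (1 << 16) | CR0.NE (1 << 5) |
* CR0.MP (1 << 1) = 0x40000 + 0x10000 + 0x20 + 0x2, while the 0x80000011
* mask preserves PG (bit 31), ET (bit 4) and PE (bit 0).
*/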
lgdt early_gdt_descr
ljmp $(__KERNEL_CS),$1f
1: movl $(__KERNEL_DS),%eax # reload all the segment registers
movl %eax,%ss # after changing gdt.
movl $(__USER_DS),%eax # DS/ES contains default USER segment
movl %eax,%ds
movl %eax,%es
movl $(__KERNEL_PERCPU), %eax
movl %eax,%fs # set this cpu's percpu
movl $(__KERNEL_STACK_CANARY),%eax
movl %eax,%gs
xorl %eax,%eax # Clear LDT
lldt %ax
call *(initial_code)
1: jmp 1b
SYM_FUNC_END(startup_32_smp)
#include "verify_cpu.S"
/*
* setup_once
*
* The setup work we only want to run on the BSP.
*
* Warning: %esi is live across this function.
*/
__INIT
setup_once:
#ifdef CONFIG_STACKPROTECTOR
/*
* Configure the stack canary. The linker can't handle this by
* relocation, so manually set the base address in the stack canary
* segment descriptor.
*/
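/*
* An x86 segment descriptor scatters its 32-bit base: base[15:0] sits at
* descriptor bytes 2-3, base[23:16] at byte 4 and base[31:24] at byte 7,
* which is what the +2, +4 and +7 offsets below patch.
*/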
movl $gdt_page,%eax
movl $stack_canary,%ecx
movw %cx, 8 * GDT_ENTRY_STACK_CANARY + 2(%eax)
shrl $16, %ecx
movb %cl, 8 * GDT_ENTRY_STACK_CANARY + 4(%eax)
movb %ch, 8 * GDT_ENTRY_STACK_CANARY + 7(%eax)
#endif
andl $0,setup_once_ref /* Once is enough, thanks */
ret
SYM_FUNC_START(early_idt_handler_array)
# 36(%esp) %eflags
# 32(%esp) %cs
# 28(%esp) %eip
# 24(%esp) error code
i = 0
.rept NUM_EXCEPTION_VECTORS
.if ((EXCEPTION_ERRCODE_MASK >> i) & 1) == 0
pushl $0 # Dummy error code, to make stack frame uniform
.endif
pushl $i # 20(%esp) Vector number
jmp early_idt_handler_common
i = i + 1
.fill early_idt_handler_array + i*EARLY_IDT_HANDLER_SIZE - ., 1, 0xcc # pad each entry to a fixed stride with int3 (0xcc)
.endr
SYM_FUNC_END(early_idt_handler_array)
SYM_CODE_START_LOCAL(early_idt_handler_common)
/*
* The stack is the hardware frame, an error code or zero, and the
* vector number.
*/
cld
incl %ss:early_recursion_flag
/* The vector number is in pt_regs->gs */
cld
pushl %fs /* pt_regs->fs (__fsh varies by model) */
pushl %es /* pt_regs->es (__esh varies by model) */
pushl %ds /* pt_regs->ds (__dsh varies by model) */
pushl %eax /* pt_regs->ax */
pushl %ebp /* pt_regs->bp */
pushl %edi /* pt_regs->di */
pushl %esi /* pt_regs->si */
pushl %edx /* pt_regs->dx */
pushl %ecx /* pt_regs->cx */
pushl %ebx /* pt_regs->bx */
/* Fix up DS and ES */
movl $(__KERNEL_DS), %ecx
movl %ecx, %ds
movl %ecx, %es
/* Load the vector number into EDX */
movl PT_GS(%esp), %edx
/* Load GS into pt_regs->gs (and maybe clobber __gsh) */
movw %gs, PT_GS(%esp)
movl %esp, %eax /* args are pt_regs (EAX), trapnr (EDX) */
call early_fixup_exception
popl %ebx /* pt_regs->bx */
popl %ecx /* pt_regs->cx */
popl %edx /* pt_regs->dx */
popl %esi /* pt_regs->si */
popl %edi /* pt_regs->di */
popl %ebp /* pt_regs->bp */
popl %eax /* pt_regs->ax */
popl %ds /* pt_regs->ds (always ignores __dsh) */
popl %es /* pt_regs->es (always ignores __esh) */
popl %fs /* pt_regs->fs (always ignores __fsh) */
popl %gs /* pt_regs->gs (always ignores __gsh) */
decl %ss:early_recursion_flag
addl $4, %esp /* pop pt_regs->orig_ax */
iret
SYM_CODE_END(early_idt_handler_common)
/* This is the default interrupt "handler" :-) */
SYM_FUNC_START(early_ignore_irq)
cld
#ifdef CONFIG_PRINTK
pushl %eax
pushl %ecx
pushl %edx
pushl %es
pushl %ds
movl $(__KERNEL_DS),%eax
movl %eax,%ds
movl %eax,%es
cmpl $2,early_recursion_flag
je hlt_loop
incl early_recursion_flag
pushl 16(%esp)
pushl 24(%esp)
pushl 32(%esp)
pushl 40(%esp)
pushl $int_msg
call printk
call dump_stack
addl $(5*4),%esp
popl %ds
popl %es
popl %edx
popl %ecx
popl %eax
#endif
iret
hlt_loop:
hlt
jmp hlt_loop
SYM_FUNC_END(early_ignore_irq)
__INITDATA
.align 4
SYM_DATA(early_recursion_flag, .long 0)
__REFDATA
.align 4
SYM_DATA(initial_code, .long i386_start_kernel)
SYM_DATA(setup_once_ref, .long setup_once)
#ifdef CONFIG_PAGE_TABLE_ISOLATION
#define PGD_ALIGN (2 * PAGE_SIZE)
#define PTI_USER_PGD_FILL 1024
#else
#define PGD_ALIGN (PAGE_SIZE)
#define PTI_USER_PGD_FILL 0
#endif
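/*
* With page table isolation the PGD area is doubled: aligning to two
* pages and appending 1024 extra 4-byte slots leaves room for the
* user-space shadow PGD that sits one page after the kernel one.
*/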
/*
* BSS section
*/
__PAGE_ALIGNED_BSS
.align PGD_ALIGN
#ifdef CONFIG_X86_PAE
.globl initial_pg_pmd
initial_pg_pmd:
.fill 1024*KPMDS,4,0
#else
.globl initial_page_table
initial_page_table:
.fill 1024,4,0
#endif
.align PGD_ALIGN
initial_pg_fixmap:
.fill 1024,4,0
.globl swapper_pg_dir
.align PGD_ALIGN
swapper_pg_dir:
.fill 1024,4,0
.fill PTI_USER_PGD_FILL,4,0
.globl empty_zero_page
empty_zero_page:
.fill 4096,1,0
EXPORT_SYMBOL(empty_zero_page)
/*
* This starts the data section.
*/
#ifdef CONFIG_X86_PAE
__PAGE_ALIGNED_DATA
/* Page-aligned for the benefit of paravirt? */
.align PGD_ALIGN
SYM_DATA_START(initial_page_table)
.long pa(initial_pg_pmd+PGD_IDENT_ATTR),0 /* low identity map */
# if KPMDS == 3
.long pa(initial_pg_pmd+PGD_IDENT_ATTR),0
.long pa(initial_pg_pmd+PGD_IDENT_ATTR+0x1000),0
.long pa(initial_pg_pmd+PGD_IDENT_ATTR+0x2000),0
# elif KPMDS == 2
.long 0,0
.long pa(initial_pg_pmd+PGD_IDENT_ATTR),0
.long pa(initial_pg_pmd+PGD_IDENT_ATTR+0x1000),0
# elif KPMDS == 1
.long 0,0
.long 0,0
.long pa(initial_pg_pmd+PGD_IDENT_ATTR),0
# else
# error "Kernel PMDs should be 1, 2 or 3"
# endif
.align PAGE_SIZE /* needs to be page-sized too */
#ifdef CONFIG_PAGE_TABLE_ISOLATION
/*
* PTI needs another page so sync_initial_pagetable() works correctly
* and does not scribble over the data which is placed behind the
* actual initial_page_table. See clone_pgd_range().
*/
.fill 1024, 4, 0
#endif
SYM_DATA_END(initial_page_table)
#endif
.data
.balign 4
/*
* The SIZEOF_PTREGS gap is a convention which helps the in-kernel unwinder
* reliably detect the end of the stack.
*/
SYM_DATA(initial_stack,
.long init_thread_union + THREAD_SIZE -
SIZEOF_PTREGS - TOP_OF_KERNEL_STACK_PADDING)
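/*
* Worked example, assuming 8 KiB kernel stacks (THREAD_SIZE = 8192): the
* initial %esp sits SIZEOF_PTREGS (68 bytes) plus the top-of-stack
* padding below the end of init_thread_union, so a pt_regs-sized gap
* always marks the end of the stack for the unwinder.
*/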
__INITRODATA
int_msg:
.asciz "Unknown interrupt or fault at: %p %p %p\n"
#include "../../x86/xen/xen-head.S"
/*
* The IDT and GDT 'descriptors' are strange 48-bit objects used only by
* the lidt and lgdt instructions. They are not like usual segment
* descriptors - each consists of a 16-bit size and a 32-bit linear
* address value:
*/
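/*
* For example, early_gdt_descr below describes a GDT_ENTRIES-entry table:
* its 16-bit field holds GDT_ENTRIES*8-1 (a limit is the size in bytes
* minus one) and its 32-bit field holds the table's linear address.
*/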
.data
ALIGN
# early boot GDT descriptor (must use 1:1 address mapping)
.word 0 # 32 bit align gdt_desc.address
SYM_DATA_START_LOCAL(boot_gdt_descr)
.word __BOOT_DS+7
.long boot_gdt - __PAGE_OFFSET
SYM_DATA_END(boot_gdt_descr)
# boot GDT descriptor (later on used by CPU#0):
.word 0 # 32 bit align gdt_desc.address
SYM_DATA_START(early_gdt_descr)
	.word GDT_ENTRIES*8-1
	.long gdt_page			/* Overwritten for secondary CPUs */
SYM_DATA_END(early_gdt_descr)
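/*
 * For reference: the limit is GDT_ENTRIES descriptors of 8 bytes
 * each, minus one, since the lgdt limit is inclusive. The base
 * initially points at gdt_page, the boot CPU's GDT; secondary
 * CPUs rewrite the base to their own per-CPU GDT before the
 * descriptor is loaded.
 */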
/*
* The boot_gdt must mirror the equivalent in setup.S and is
* used only for booting.
*/
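/*
 * (The cache-line alignment below is presumably so that the small
 * boot GDT does not straddle a cache line during early segment
 * loads.)
 */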
	.align L1_CACHE_BYTES
SYM_DATA_START(boot_gdt)
	.fill GDT_ENTRY_BOOT_CS,8,0
	.quad 0x00cf9a000000ffff	/* kernel 4GB code at 0x00000000 */
	.quad 0x00cf92000000ffff	/* kernel 4GB data at 0x00000000 */
SYM_DATA_END(boot_gdt)
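/*
 * Decoding the two descriptors above, for reference:
 * 0x00cf9a000000ffff = base 0x00000000, limit 0xfffff in 4 KiB
 * units (i.e. a flat 4 GiB segment), access byte 0x9a (present,
 * DPL 0, code, execute/read), flags 0xc (4 KiB granularity,
 * 32-bit default operand size). 0x00cf92000000ffff differs only
 * in the access byte: 0x92 is a present, DPL 0, read/write data
 * segment.
 */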