arch: Remove Itanium (IA-64) architecture

The Itanium architecture is obsolete, and an informal survey [0] reveals
that any residual use of Itanium hardware in production is mostly HP-UX
or OpenVMS based. The use of Linux on Itanium appears to be limited to
enthusiasts who occasionally boot a fresh Linux kernel to see whether
things are still working as intended, and perhaps to churn out some
distro packages that are rarely used in practice.

None of the original companies behind Itanium still produce or support
any hardware or software for the architecture, and it is listed as
'Orphaned' in the MAINTAINERS file: apparently, none of the engineers
who contributed on behalf of those companies (nor anyone else, for that
matter) have been willing to support or maintain the architecture
upstream or even be responsible for applying the odd fix. The Intel
firmware team removed all IA-64 support from the Tianocore/EDK2
reference implementation of EFI in 2018. (Itanium is the original
architecture for which EFI was developed, and the way Linux supports it
deviates significantly from other architectures.) Some distros, such as
Debian and Gentoo, still maintain [unofficial] ia64 ports, but many
others dropped support years ago.

While the argument is being made [1] that there is a 'for the common
good' angle to being able to build and run existing projects such as the
Grid Community Toolkit [2] on Itanium for interoperability testing, the
fact remains that none of those projects are known to be deployed on
Linux/ia64, and very few people actually have access to such a system in
the first place. Even if there were ways imaginable in which Linux/ia64
could be put to good use today, what matters is whether anyone is
actually doing that, and this does not appear to be the case.

There are no emulators widely available, and so boot testing Itanium is
generally infeasible for ordinary contributors. GCC still supports IA-64
but its compile farm [3] no longer has any IA-64 machines. GLIBC would
like to get rid of IA-64 [4] too because it would permit some overdue
code cleanups. In summary, the benefits to the ecosystem of having IA-64
be part of it are mostly theoretical, whereas the maintenance overhead
of keeping it supported is real.

So let's rip off the band-aid and remove the IA-64 arch code entirely.
This follows the timeline proposed by the Debian/ia64 maintainer [5],
which removes support in a controlled manner, leaving IA-64 in a known
good state in the most recent LTS release. Other projects will follow
once the kernel support is removed.

[0] https://lore.kernel.org/all/CAMj1kXFCMh_578jniKpUtx_j8ByHnt=s7S+yQ+vGbKt9ud7+kQ@mail.gmail.com/
[1] https://lore.kernel.org/all/0075883c-7c51-00f5-2c2d-5119c1820410@web.de/
[2] https://gridcf.org/gct-docs/latest/index.html
[3] https://cfarm.tetaneutral.net/machines/list/
[4] https://lore.kernel.org/all/87bkiilpc4.fsf@mid.deneb.enyo.de/
[5] https://lore.kernel.org/all/ff58a3e76e5102c94bb5946d99187b358def688a.camel@physik.fu-berlin.de/

Acked-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
commit cf8e865810 (parent a0334bf78b)
Author: Ard Biesheuvel <ardb@kernel.org>
Date: 2022-10-20 15:54:33 +02:00
357 changed files with 45 additions and 64955 deletions

@@ -1,246 +0,0 @@
==================================
Memory Attribute Aliasing on IA-64
==================================
Bjorn Helgaas <bjorn.helgaas@hp.com>
May 4, 2006
Memory Attributes
=================
Itanium supports several attributes for virtual memory references.
The attribute is part of the virtual translation, i.e., it is
contained in the TLB entry. The ones of most interest to the Linux
kernel are:
== ======================
WB Write-back (cacheable)
UC Uncacheable
WC Write-coalescing
== ======================
System memory typically uses the WB attribute. The UC attribute is
used for memory-mapped I/O devices. The WC attribute is uncacheable
like UC is, but writes may be delayed and combined to increase
performance for things like frame buffers.
The Itanium architecture requires that we avoid accessing the same
page with both a cacheable mapping and an uncacheable mapping[1].
The design of the chipset determines which attributes are supported
on which regions of the address space. For example, some chipsets
support either WB or UC access to main memory, while others support
only WB access.
Memory Map
==========
Platform firmware describes the physical memory map and the
supported attributes for each region. At boot-time, the kernel uses
the EFI GetMemoryMap() interface. ACPI can also describe memory
devices and the attributes they support, but Linux/ia64 currently
doesn't use this information.
The kernel uses the efi_memmap table returned from GetMemoryMap() to
learn the attributes supported by each region of physical address
space. Unfortunately, this table does not completely describe the
address space because some machines omit some or all of the MMIO
regions from the map.
The kernel maintains another table, kern_memmap, which describes the
memory Linux is actually using and the attribute for each region.
This contains only system memory; it does not contain MMIO space.
The kern_memmap table typically contains only a subset of the system
memory described by the efi_memmap. Linux/ia64 can't use all memory
in the system because of constraints imposed by the identity mapping
scheme.
The efi_memmap table is preserved unmodified because the original
boot-time information is required for kexec.
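For reference, an entry of the efi_memmap table looks roughly like the
descriptor below. This is a simplified sketch modeled on the kernel's EFI
memory descriptor; padding and the virtual-address field are omitted::

	typedef struct {
		u32 type;	/* conventional memory, MMIO, ... */
		u64 phys_addr;	/* start of the region */
		u64 num_pages;	/* region length, in 4KiB EFI pages */
		u64 attribute;	/* supported attributes: EFI_MEMORY_WB/UC/WC bits */
	} efi_memory_desc_sketch_t;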
Kernel Identity Mappings
========================
Linux/ia64 identity mappings are done with large pages, currently
either 16MB or 64MB, referred to as "granules." Cacheable mappings
are speculative[2], so the processor can read any location in the
page at any time, independent of the programmer's intentions. This
means that to avoid attribute aliasing, Linux can create a cacheable
identity mapping only when the entire granule supports cacheable
access.
Therefore, kern_memmap contains only full granule-sized regions that
can be referenced safely by an identity mapping.
Uncacheable mappings are not speculative, so the processor will
generate UC accesses only to locations explicitly referenced by
software. This allows UC identity mappings to cover granules that
are only partially populated, or populated with a combination of UC
and WB regions.
User Mappings
=============
User mappings are typically done with 16K or 64K pages. The smaller
page size allows more flexibility because only 16K or 64K has to be
homogeneous with respect to memory attributes.
Potential Attribute Aliasing Cases
==================================
There are several ways the kernel creates new mappings:
mmap of /dev/mem
----------------
This uses remap_pfn_range(), which creates user mappings. These
mappings may be either WB or UC. If the region being mapped
happens to be in kern_memmap, meaning that it may also be mapped
by a kernel identity mapping, the user mapping must use the same
attribute as the kernel mapping.
If the region is not in kern_memmap, the user mapping should use
an attribute reported as being supported in the EFI memory map.
Since the EFI memory map does not describe MMIO on some
machines, this should use an uncacheable mapping as a fallback.
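Summarized as code, the rule reads as follows. This is a minimal sketch
of the decision, not the kernel's actual routine; kern_mem_attribute()
and efi_mem_attribute() are stand-ins for lookups in kern_memmap and the
EFI memory map::

	pgprot_t devmem_access_prot(unsigned long pfn, unsigned long size,
				    pgprot_t prot)
	{
		unsigned long phys = pfn << PAGE_SHIFT;
		u64 attr = kern_mem_attribute(phys, size);

		if (attr & EFI_MEMORY_WB)
			return prot;			/* match the kernel WB identity mapping */
		if (attr)
			return pgprot_noncached(prot);	/* match the kernel UC identity mapping */

		if (efi_mem_attribute(phys, size) & EFI_MEMORY_WB)
			return prot;			/* EFI reports WB as supported */

		/* The EFI map may omit MMIO entirely: uncacheable is the safe fallback. */
		return pgprot_noncached(prot);
	}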
mmap of /sys/class/pci_bus/.../legacy_mem
-----------------------------------------
This is very similar to mmap of /dev/mem, except that legacy_mem
only allows mmap of the one megabyte "legacy MMIO" area for a
specific PCI bus. Typically this is the first megabyte of
physical address space, but it may be different on machines with
several VGA devices.
"X" uses this to access VGA frame buffers. Using legacy_mem
rather than /dev/mem allows multiple instances of X to talk to
different VGA cards.
The /dev/mem mmap constraints apply.
mmap of /proc/bus/pci/.../??.?
------------------------------
This is an MMIO mmap of PCI functions, which additionally may or
may not be requested as using the WC attribute.
If WC is requested, and the region in kern_memmap is either WC
or UC, and the EFI memory map designates the region as WC, then
the WC mapping is allowed.
Otherwise, the user mapping must use the same attribute as the
kernel mapping.
read/write of /dev/mem
----------------------
This uses copy_from_user(), which implicitly uses a kernel
identity mapping. This is obviously safe for things in
kern_memmap.
There may be corner cases of things that are not in kern_memmap,
but could be accessed this way. For example, registers in MMIO
space are not in kern_memmap, but could be accessed with a UC
mapping. This would not cause attribute aliasing. But
registers typically can be accessed only with four-byte or
eight-byte accesses, and the copy_from_user() path doesn't allow
any control over the access size, so this would be dangerous.
ioremap()
---------
This returns a mapping for use inside the kernel.
If the region is in kern_memmap, we should use the attribute
specified there.
If the EFI memory map reports that the entire granule supports
WB, we should use that (granules that are partially reserved
or occupied by firmware do not appear in kern_memmap).
If the granule contains non-WB memory, but we can cover the
region safely with kernel page table mappings, we can use
ioremap_page_range() as most other architectures do.
Failing all of the above, we have to fall back to a UC mapping.
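The chain of fallbacks can be sketched in C. The helper names below are
illustrative stand-ins for the checks described above, not the exact
arch/ia64 implementation::

	void __iomem *ioremap_sketch(unsigned long phys_addr, unsigned long size)
	{
		u64 attr = kern_mem_attribute(phys_addr, size);

		if (attr & EFI_MEMORY_WB)		/* kern_memmap says WB */
			return (void __iomem *) phys_to_virt(phys_addr);
		if (attr & EFI_MEMORY_UC)		/* kern_memmap says UC */
			return uc_identity_mapping(phys_addr);

		/*
		 * Granules partially reserved or occupied by firmware do not
		 * appear in kern_memmap, so also ask the EFI map whether the
		 * whole granule supports WB.
		 */
		if (efi_granule_supports_wb(phys_addr, size))
			return (void __iomem *) phys_to_virt(phys_addr);

		/* Cover just this region with kernel page-table mappings, if possible. */
		if (region_fits_page_tables(phys_addr, size))
			return ioremap_via_page_tables(phys_addr, size);

		return uc_identity_mapping(phys_addr);	/* last resort: UC */
	}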
Past Problem Cases
==================
mmap of various MMIO regions from /dev/mem by "X" on Intel platforms
--------------------------------------------------------------------
The EFI memory map may not report these MMIO regions.
These must be allowed so that X will work. This means that
when the EFI memory map is incomplete, every /dev/mem mmap must
succeed. It may create either WB or UC user mappings, depending
on whether the region is in kern_memmap or the EFI memory map.
mmap of 0x0-0x9FFFF /dev/mem by "hwinfo" on HP sx1000 with VGA enabled
----------------------------------------------------------------------
The EFI memory map reports the following attributes:
=============== ======= ==================
0x00000-0x9FFFF WB only
0xA0000-0xBFFFF UC only (VGA frame buffer)
0xC0000-0xFFFFF WB only
=============== ======= ==================
This mmap is done with user pages, not kernel identity mappings,
so it is safe to use WB mappings.
The kernel VGA driver may ioremap the VGA frame buffer at 0xA0000,
which uses a granule-sized UC mapping. This granule will cover some
WB-only memory, but since UC is non-speculative, the processor will
never generate an uncacheable reference to the WB-only areas unless
the driver explicitly touches them.
mmap of 0x0-0xFFFFF legacy_mem by "X"
-------------------------------------
If the EFI memory map reports that the entire range supports the
same attributes, we can allow the mmap (and we will prefer WB if
supported, as is the case with HP sx[12]000 machines with VGA
disabled).
If EFI reports the range as partly WB and partly UC (as on sx[12]000
machines with VGA enabled), we must fail the mmap because there's no
safe attribute to use.
If EFI reports some of the range but not all (as on Intel firmware
that doesn't report the VGA frame buffer at all), we should fail the
mmap and force the user to map just the specific region of interest.
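These three outcomes reduce to one predicate. In the sketch below,
efi_range_attributes() is a hypothetical helper returning the set of
attributes supported across the entire range, or 0 if the EFI map does
not fully describe it::

	/* Decide whether an mmap of [phys, phys + size) of legacy_mem is allowed. */
	int legacy_mem_mmap_ok(unsigned long phys, unsigned long size)
	{
		u64 attr = efi_range_attributes(phys, size);

		if (attr & EFI_MEMORY_WB)
			return 1;	/* whole range supports WB: allow, prefer WB */
		if (attr & EFI_MEMORY_UC)
			return 1;	/* uniformly UC: allow */
		return 0;		/* mixed or incompletely described: fail the mmap */
	}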
mmap of 0xA0000-0xBFFFF legacy_mem by "X" on HP sx1000 with VGA disabled
------------------------------------------------------------------------
The EFI memory map reports the following attributes::
0x00000-0xFFFFF WB only (no VGA MMIO hole)
This is a special case of the previous case, and the mmap should
fail for the same reason as above.
read of /sys/devices/.../rom
----------------------------
For VGA devices, this may cause an ioremap() of 0xC0000. This
used to be done with a UC mapping, because the VGA frame buffer
at 0xA0000 prevents use of a WB granule. The UC mapping causes
an MCA on HP sx[12]000 chipsets.
We should use WB page table mappings to avoid covering the VGA
frame buffer.
Notes
=====
[1] SDM rev 2.2, vol 2, sec 4.4.1.
[2] SDM rev 2.2, vol 2, sec 4.4.6.

@@ -1,144 +0,0 @@
==========================
EFI Real Time Clock driver
==========================
S. Eranian <eranian@hpl.hp.com>
March 2000
1. Introduction
===============
This document describes the efirtc.c driver provided for
the IA-64 platform.
The purpose of this driver is to supply an API for kernel and user applications
to get access to the Time Service offered by EFI version 0.92.
EFI provides 4 calls one can make once the OS is booted: GetTime(),
SetTime(), GetWakeupTime(), SetWakeupTime() which are all supported by this
driver. We describe those calls as well the design of the driver in the
following sections.
2. Design Decisions
===================
The original idea was to provide a very simple driver to get access to,
at first, the time of day service. This is required in order to access, in a
portable way, the CMOS clock. A program like /sbin/hwclock uses such a clock
to initialize the system view of the time during boot.
Because we wanted to minimize the impact on existing user-level apps using
the CMOS clock, we decided to expose an API that was very similar to the one
used today with the legacy RTC driver (driver/char/rtc.c). However, because
EFI provides simpler services, not all ioctl()s are available. Also,
new ioctl()s have been introduced for things that EFI provides but the
legacy driver does not.
EFI uses a slightly different way of representing the time; notably,
the reference date is different. The year uses the full 4-digit format.
The Epoch is January 1st 1998. For backward compatibility reasons we don't
expose this new way of representing time. Instead we use something very
similar to the struct tm, i.e. struct rtc_time, as used by hwclock.
One of the reasons for doing it this way is to allow for EFI to still evolve
without necessarily impacting any of the user applications. The decoupling
enables flexibility and permits writing wrapper code in case things change.
The driver exposes two interfaces, one via the device file and a set of
ioctl()s. The other is read-only via the /proc filesystem.
As of today we don't offer a /proc/sys interface.
To allow for a uniform interface between the legacy RTC and EFI time service,
we have created the include/linux/rtc.h header file to contain only the
"public" API of the two drivers. The specifics of the legacy RTC are still
in include/linux/mc146818rtc.h.
3. Time of day service
======================
This part of the driver gives access to the time of day service of EFI.
Two ioctl()s, compatible with the legacy RTC calls, are provided:
Read the CMOS clock::
ioctl(d, RTC_RD_TIME, &rtc);
Write the CMOS clock::
ioctl(d, RTC_SET_TIME, &rtc);
The rtc is a pointer to a data structure defined in rtc.h which is close
to a struct tm::
struct rtc_time {
	int tm_sec;
	int tm_min;
	int tm_hour;
	int tm_mday;
	int tm_mon;
	int tm_year;
	int tm_wday;
	int tm_yday;
	int tm_isdst;
};
The driver takes care of converting back and forth between the EFI time and
this format.
Those two ioctl()s can be exercised with the hwclock command:
For reading::
# /sbin/hwclock --show
Mon Mar 6 15:32:32 2000 -0.910248 seconds
For setting::
# /sbin/hwclock --systohc
Root privileges are required to be able to set the time of day.
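The same two ioctl()s can also be exercised from a small C program. A
minimal sketch follows; the /dev/efirtc node name is an assumption, and
tm_year is taken to count from 1900 as in struct tm::

	#include <stdio.h>
	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <linux/rtc.h>

	int main(void)
	{
		struct rtc_time rtc;
		int fd = open("/dev/efirtc", O_RDONLY);	/* device node name assumed */

		if (fd < 0)
			return 1;
		if (ioctl(fd, RTC_RD_TIME, &rtc) == 0)	/* read the clock */
			printf("%04d-%02d-%02d %02d:%02d:%02d\n",
			       rtc.tm_year + 1900, rtc.tm_mon + 1, rtc.tm_mday,
			       rtc.tm_hour, rtc.tm_min, rtc.tm_sec);
		close(fd);
		return 0;
	}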
4. Wakeup Alarm service
=======================
EFI provides an API by which one can program when a machine should wakeup,
i.e. reboot. This is very different from the alarm provided by the legacy
RTC which is some kind of interval timer alarm. For this reason we don't use
the same ioctl()s to get access to the service. Instead we have
introduced two new ioctl()s to the interface of an RTC.
We have added 2 new ioctl()s that are specific to the EFI driver:
Read the current state of the alarm::
ioctl(d, RTC_WKALM_RD, &wkt)
Set the alarm or change its status::
ioctl(d, RTC_WKALM_SET, &wkt)
The wkt structure encapsulates a struct rtc_time + 2 extra fields to get
status information::
struct rtc_wkalrm {
	unsigned char enabled;	/* =1 if alarm is enabled */
	unsigned char pending;	/* =1 if alarm is pending */
	struct rtc_time time;
};
As of today, none of the existing user-level apps supports this feature.
However, writing such a program should not be hard, simply using those
two ioctl()s.
Root privileges are required to be able to set the alarm.
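For illustration, a hedged sketch of arming the alarm from C, using an
already-opened descriptor for the same assumed device node as above::

	#include <string.h>
	#include <sys/ioctl.h>
	#include <linux/rtc.h>

	/* Arm the wakeup alarm for the given time, then read its state back. */
	int arm_wakeup(int fd, const struct rtc_time *when)
	{
		struct rtc_wkalrm wkt;

		memset(&wkt, 0, sizeof(wkt));
		wkt.enabled = 1;			/* =1 arms the alarm */
		wkt.time = *when;
		if (ioctl(fd, RTC_WKALM_SET, &wkt) < 0)
			return -1;
		return ioctl(fd, RTC_WKALM_RD, &wkt);	/* check enabled/pending */
	}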
5. References
=============
Check out the following Web site for more information on EFI:
http://developer.intel.com/technology/efi/

File diff suppressed because it is too large

@@ -1,3 +0,0 @@
.. SPDX-License-Identifier: GPL-2.0
.. kernel-feat:: $srctree/Documentation/features ia64

@@ -1,303 +0,0 @@
===================================
Light-weight System Calls for IA-64
===================================
Started: 13-Jan-2003
Last update: 27-Sep-2003
David Mosberger-Tang
<davidm@hpl.hp.com>
Using the "epc" instruction effectively introduces a new mode of
execution to the ia64 linux kernel. We call this mode the
"fsys-mode". To recap, the normal states of execution are:
- kernel mode:
Both the register stack and the memory stack have been
switched over to kernel memory. The user-level state is saved
in a pt-regs structure at the top of the kernel memory stack.
- user mode:
Both the register stack and the kernel stack are in
user memory. The user-level state is contained in the
CPU registers.
- bank 0 interruption-handling mode:
This is the non-interruptible state which all
interruption-handlers start execution in. The user-level
state remains in the CPU registers and some kernel state may
be stored in bank 0 of registers r16-r31.
In contrast, fsys-mode has the following special properties:
- execution is at privilege level 0 (most-privileged)
- CPU registers may contain a mixture of user-level and kernel-level
state (it is the responsibility of the kernel to ensure that no
security-sensitive kernel-level state is leaked back to
user-level)
- execution is interruptible and preemptible (an fsys-mode handler
can disable interrupts and avoid all other interruption-sources
to avoid preemption)
- neither the memory-stack nor the register-stack can be trusted while
in fsys-mode (they point to the user-level stacks, which may
be invalid, or completely bogus addresses)
In summary, fsys-mode is much more similar to running in user-mode
than it is to running in kernel-mode. Of course, given that the
privilege level is at level 0, this means that fsys-mode requires some
care (see below).
How to tell fsys-mode
=====================
Linux operates in fsys-mode when (a) the privilege level is 0 (most
privileged) and (b) the stacks have NOT been switched to kernel memory
yet. For convenience, the header file <asm-ia64/ptrace.h> provides
three macros::
user_mode(regs)
user_stack(task,regs)
fsys_mode(task,regs)
The "regs" argument is a pointer to a pt_regs structure. The "task"
argument is a pointer to the task structure to which the "regs"
pointer belongs. user_mode() returns TRUE if the CPU state pointed
to by "regs" was executing in user mode (privilege level 3).
user_stack() returns TRUE if the state pointed to by "regs" was
executing on the user-level stack(s). Finally, fsys_mode() returns
TRUE if the CPU state pointed to by "regs" was executing in fsys-mode.
The fsys_mode() macro is equivalent to the expression::
!user_mode(regs) && user_stack(task,regs)
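A plausible shape for fsys_mode() as a statement-expression macro (a
sketch of the idea, not the literal <asm-ia64/ptrace.h> definition)::

	#define fsys_mode(task, regs)						\
	({									\
		struct task_struct *_task = (task);				\
		struct pt_regs *_regs = (regs);					\
		/* privilege level 0, but still on the user-level stacks */	\
		!user_mode(_regs) && user_stack(_task, _regs);			\
	})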
How to write an fsyscall handler
================================
The file arch/ia64/kernel/fsys.S contains a table of fsyscall-handlers
(fsyscall_table). This table contains one entry for each system call.
By default, a system call is handled by fsys_fallback_syscall(). This
routine takes care of entering (full) kernel mode and calling the
normal Linux system call handler. For performance-critical system
calls, it is possible to write a hand-tuned fsyscall_handler. For
example, fsys.S contains fsys_getpid(), which is a hand-tuned version
of the getpid() system call.
The entry and exit-state of an fsyscall handler is as follows:
Machine state on entry to fsyscall handler
------------------------------------------
========= ===============================================================
r10 0
r11 saved ar.pfs (a user-level value)
r15 system call number
r16 "current" task pointer (in normal kernel-mode, this is in r13)
r32-r39 system call arguments
b6 return address (a user-level value)
ar.pfs previous frame-state (a user-level value)
PSR.be cleared to zero (i.e., little-endian byte order is in effect)
- all other registers may contain values passed in from user-mode
========= ===============================================================
Required machine state on exit from fsyscall handler
----------------------------------------------------
========= ===========================================================
r11 saved ar.pfs (as passed into the fsyscall handler)
r15 system call number (as passed into the fsyscall handler)
r32-r39 system call arguments (as passed into the fsyscall handler)
b6 return address (as passed into the fsyscall handler)
ar.pfs previous frame-state (as passed into the fsyscall handler)
========= ===========================================================
Fsyscall handlers can execute with very little overhead, but with that
speed comes a set of restrictions:
* Fsyscall-handlers MUST check for any pending work in the flags
member of the thread-info structure and if any of the
TIF_ALLWORK_MASK flags are set, the handler needs to fall back on
doing a full system call (by calling fsys_fallback_syscall).
* Fsyscall-handlers MUST preserve incoming arguments (r32-r39, r11,
r15, b6, and ar.pfs) because they will be needed in case of a
system call restart. Of course, all "preserved" registers also
must be preserved, in accordance to the normal calling conventions.
* Fsyscall-handlers MUST check argument registers for containing a
NaT value before using them in any way that could trigger a
NaT-consumption fault. If a system call argument is found to
contain a NaT value, an fsyscall-handler may return immediately
with r8=EINVAL, r10=-1.
* Fsyscall-handlers MUST NOT use the "alloc" instruction or perform
any other operation that would trigger mandatory RSE
(register-stack engine) traffic.
* Fsyscall-handlers MUST NOT write to any stacked registers because
it is not safe to assume that user-level called a handler with the
proper number of arguments.
* Fsyscall-handlers need to be careful when accessing per-CPU variables:
unless proper safe-guards are taken (e.g., interruptions are avoided),
execution may be pre-empted and resumed on another CPU at any given
time.
* Fsyscall-handlers must be careful not to leak sensitive kernel
information back to user-level. In particular, before returning to
user-level, care needs to be taken to clear any scratch registers
that could contain sensitive information (note that the current
task pointer is not considered sensitive: it's already exposed
through ar.k6).
* Fsyscall-handlers MUST NOT access user-memory without first
validating access-permission (this can be done typically via
probe.r.fault and/or probe.w.fault) and without guarding against
memory access exceptions (this can be done with the EX() macros
defined by asmmacro.h).
The above restrictions may seem draconian, but remember that it's
possible to trade off some of the restrictions by paying a slightly
higher overhead. For example, if an fsyscall-handler could benefit
from the shadow register bank, it could temporarily disable PSR.i and
PSR.ic, switch to bank 0 (bsw.0) and then use the shadow registers as
needed. In other words, following the above rules yields extremely
fast system call execution (while fully preserving system call
semantics), but there is also a lot of flexibility in handling more
complicated cases.
Signal handling
===============
The delivery of (asynchronous) signals must be delayed until fsys-mode
is exited. This is accomplished with the help of the lower-privilege
transfer trap: arch/ia64/kernel/process.c:do_notify_resume_user()
checks whether the interrupted task was in fsys-mode and, if so, sets
PSR.lp and returns immediately. When fsys-mode is exited via the
"br.ret" instruction that lowers the privilege level, a trap will
occur. The trap handler clears PSR.lp again and returns immediately.
The kernel exit path then checks for and delivers any pending signals.
PSR Handling
============
The "epc" instruction doesn't change the contents of PSR at all. This
is in contrast to a regular interruption, which clears almost all
bits. Because of that, some care needs to be taken to ensure things
work as expected. The following discussion describes how each PSR bit
is handled.
======= =======================================================================
PSR.be Cleared when entering fsys-mode. A srlz.d instruction is used
to ensure the CPU is in little-endian mode before the first
load/store instruction is executed. PSR.be is normally NOT
restored upon return from an fsys-mode handler. In other
words, user-level code must not rely on PSR.be being preserved
across a system call.
PSR.up Unchanged.
PSR.ac Unchanged.
PSR.mfl Unchanged. Note: fsys-mode handlers must not write-registers!
PSR.mfh Unchanged. Note: fsys-mode handlers must not write-registers!
PSR.ic Unchanged. Note: fsys-mode handlers can clear the bit, if needed.
PSR.i Unchanged. Note: fsys-mode handlers can clear the bit, if needed.
PSR.pk Unchanged.
PSR.dt Unchanged.
PSR.dfl Unchanged. Note: fsys-mode handlers must not write-registers!
PSR.dfh Unchanged. Note: fsys-mode handlers must not write-registers!
PSR.sp Unchanged.
PSR.pp Unchanged.
PSR.di Unchanged.
PSR.si Unchanged.
PSR.db Unchanged. The kernel prevents user-level from setting a hardware
breakpoint that triggers at any privilege level other than
3 (user-mode).
PSR.lp Unchanged.
PSR.tb Lazy redirect. If a taken-branch trap occurs while in
fsys-mode, the trap-handler modifies the saved machine state
such that execution resumes in the gate page at
syscall_via_break(), with privilege level 3. Note: the
taken branch would occur on the branch invoking the
fsyscall-handler, at which point, by definition, a syscall
restart is still safe. If the system call number is invalid,
the fsys-mode handler will return directly to user-level. This
return will trigger a taken-branch trap, but since the trap is
taken _after_ restoring the privilege level, the CPU has already
left fsys-mode, so no special treatment is needed.
PSR.rt Unchanged.
PSR.cpl Cleared to 0.
PSR.is Unchanged (guaranteed to be 0 on entry to the gate page).
PSR.mc Unchanged.
PSR.it Unchanged (guaranteed to be 1).
PSR.id Unchanged. Note: the ia64 linux kernel never sets this bit.
PSR.da Unchanged. Note: the ia64 linux kernel never sets this bit.
PSR.dd Unchanged. Note: the ia64 linux kernel never sets this bit.
PSR.ss Lazy redirect. If set, "epc" will cause a Single Step Trap to
be taken. The trap handler then modifies the saved machine
state such that execution resumes in the gate page at
syscall_via_break(), with privilege level 3.
PSR.ri Unchanged.
PSR.ed Unchanged. Note: This bit could only have an effect if an fsys-mode
handler performed a speculative load that gets NaTted. If so, this
would be the normal & expected behavior, so no special treatment is
needed.
PSR.bn Unchanged. Note: fsys-mode handlers may clear the bit, if needed.
Doing so requires clearing PSR.i and PSR.ic as well.
PSR.ia Unchanged. Note: the ia64 linux kernel never sets this bit.
======= =======================================================================
Using fast system calls
=======================
To use fast system calls, userspace applications simply need to call
__kernel_syscall_via_epc(). For example:
-- example fgettimeofday() call --
-- fgettimeofday.S --
::
#include <asm/asmmacro.h>
GLOBAL_ENTRY(fgettimeofday)
.prologue
.save ar.pfs, r11
mov r11 = ar.pfs
.body
mov r2 = 0xa000000000020660;; // gate address
// found by inspection of System.map for the
// __kernel_syscall_via_epc() function. See
// below for how to do this for real.
mov b7 = r2
mov r15 = 1087 // gettimeofday syscall
;;
br.call.sptk.many b6 = b7
;;
.restore sp
mov ar.pfs = r11
br.ret.sptk.many rp;; // return to caller
END(fgettimeofday)
-- end fgettimeofday.S --
In reality, getting the gate address is accomplished by two extra
values passed via the ELF auxiliary vector (include/asm-ia64/elf.h)
* AT_SYSINFO : is the address of __kernel_syscall_via_epc()
* AT_SYSINFO_EHDR : is the address of the kernel gate ELF DSO
The ELF DSO is a pre-linked library that is mapped in by the kernel at
the gate page. It is a proper ELF shared object so, with a dynamic
loader that recognises the library, you should be able to make calls to
the exported functions within it as with any other shared library.
AT_SYSINFO points into the kernel DSO at the
__kernel_syscall_via_epc() function for historical reasons (it was
used before the kernel DSO) and as a convenience.
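A user-space program can retrieve both values from the auxiliary vector
instead of inspecting System.map. The sketch below uses getauxval(), a
glibc convenience that postdates this document; reading /proc/self/auxv
works as well::

	#include <stdio.h>
	#include <sys/auxv.h>

	int main(void)
	{
		unsigned long epc  = getauxval(AT_SYSINFO);	 /* __kernel_syscall_via_epc */
		unsigned long ehdr = getauxval(AT_SYSINFO_EHDR); /* gate ELF DSO header */

		printf("AT_SYSINFO      = %#lx\n", epc);
		printf("AT_SYSINFO_EHDR = %#lx\n", ehdr);
		return 0;
	}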

@@ -1,49 +0,0 @@
===========================================
Linux kernel release for the IA-64 Platform
===========================================
These are the release notes for Linux since version 2.4 for the IA-64
platform. This document provides information specific to IA-64
ONLY; to get additional information about the Linux kernel, also
read the original Linux README provided with the kernel.
Installing the Kernel
=====================
- IA-64 kernel installation is the same as the other platforms, see
original README for details.
Software Requirements
=====================
Compiling and running this kernel requires an IA-64 compliant GCC
compiler, and various software packages also compiled with an
IA-64 compliant GCC compiler.
Configuring the Kernel
======================
Configuration is the same, see original README for details.
Compiling the Kernel
====================
- Compiling this kernel doesn't differ from other platforms, so read
the original README for details, BUT make sure you have an IA-64
compliant GCC compiler.
IA-64 Specifics
===============
- General issues:
* Hardly any performance tuning has been done. Obvious targets
include the library routines (IP checksum, etc.). Less
obvious targets include making sure we don't flush the TLB
needlessly, etc.
* SMP locks cleanup/optimization
* IA32 support. Currently experimental. It mostly works.

@@ -1,19 +0,0 @@
.. SPDX-License-Identifier: GPL-2.0
==================
IA-64 Architecture
==================
.. toctree::
:maxdepth: 1
ia64
aliasing
efirtc
err_inject
fsys
irq-redir
mca
serial
features

@@ -1,80 +0,0 @@
==============================
IRQ affinity on IA64 platforms
==============================
07.01.2002, Erich Focht <efocht@ess.nec.de>
By writing to /proc/irq/IRQ#/smp_affinity the interrupt routing can be
controlled. The behavior on IA64 platforms is slightly different from
that described in Documentation/core-api/irq/irq-affinity.rst for i386 systems.
Because of the usage of SAPIC mode and physical destination mode the
IRQ target is one particular CPU and cannot be a mask of several
CPUs. Only the first non-zero bit is taken into account.
Usage examples
==============
The target CPU has to be specified as a hexadecimal CPU mask. The
first non-zero bit is the selected CPU. This format has been kept for
compatibility reasons with i386.
Set the delivery mode of interrupt 41 to fixed and route the
interrupts to CPU #3 (logical CPU number) (2^3=0x08)::
echo "8" >/proc/irq/41/smp_affinity
Set the default route for IRQ number 41 to CPU 6 in lowest priority
delivery mode (redirectable)::
echo "r 40" >/proc/irq/41/smp_affinity
The output of the command::
cat /proc/irq/IRQ#/smp_affinity
gives the target CPU mask for the specified interrupt vector. If the CPU
mask is preceded by the character "r", the interrupt is redirectable
(i.e. lowest priority mode routing is used), otherwise its route is
fixed.
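The same interface can be driven programmatically. A minimal C sketch of
the echo/cat pair above::

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		char buf[64];
		ssize_t n;
		int fd = open("/proc/irq/41/smp_affinity", O_RDWR);

		if (fd < 0)
			return 1;
		(void) write(fd, "8\n", 2);		/* fixed routing to CPU #3 */
		lseek(fd, 0, SEEK_SET);
		n = read(fd, buf, sizeof(buf) - 1);	/* a leading 'r' means redirectable */
		if (n > 0) {
			buf[n] = '\0';
			printf("smp_affinity: %s", buf);
		}
		close(fd);
		return 0;
	}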
Initialization and default behavior
===================================
If the platform features IRQ redirection (info provided by SAL) all
IO-SAPIC interrupts are initialized with CPU#0 as their default target
and the routing is the so called "lowest priority mode" (actually
fixed SAPIC mode with hint). The XTP chipset registers are used as hints
for the IRQ routing. Currently in Linux XTP registers can have three
values:
- minimal for an idle task,
- normal if any other task runs,
- maximal if the CPU is going to be switched off.
The IRQ is routed to the CPU with lowest XTP register value, the
search begins at the default CPU. Therefore most of the interrupts
will be handled by CPU #0.
If the platform doesn't feature interrupt redirection IOSAPIC fixed
routing is used. The target CPUs are distributed in a round robin
manner. IRQs will be routed only to the selected target CPUs. Check
with::
cat /proc/interrupts
Comments
========
On large (multi-node) systems it is recommended to route the IRQs to
the node to which the corresponding device is connected.
For systems like the NEC AzusA we get IRQ node-affinity for free. This
is because usually the chipsets on each node redirect the interrupts
only to their own CPUs (as they cannot see the XTP registers on the
other nodes).

@@ -1,198 +0,0 @@
=============================================================
An ad-hoc collection of notes on IA64 MCA and INIT processing
=============================================================
Feel free to update it with notes about any area that is not clear.
---
MCA/INIT are completely asynchronous. They can occur at any time, when
the OS is in any state. Including when one of the cpus is already
holding a spinlock. Trying to get any lock from MCA/INIT state is
asking for deadlock. Also the state of structures that are protected
by locks is indeterminate, including linked lists.
---
The complicated ia64 MCA process. All of this is mandated by Intel's
specification for ia64 SAL, error recovery and unwind, it is not as
if we have a choice here.
* MCA occurs on one cpu, usually due to a double bit memory error.
This is the monarch cpu.
* SAL sends an MCA rendezvous interrupt (which is a normal interrupt)
to all the other cpus, the slaves.
* Slave cpus that receive the MCA interrupt call down into SAL, they
end up spinning disabled while the MCA is being serviced.
* If any slave cpu was already spinning disabled when the MCA occurred
then it cannot service the MCA interrupt. SAL waits ~20 seconds then
sends an unmaskable INIT event to the slave cpus that have not
already rendezvoused.
* Because MCA/INIT can be delivered at any time, including when the cpu
is down in PAL in physical mode, the registers at the time of the
event are _completely_ undefined. In particular the MCA/INIT
handlers cannot rely on the thread pointer, PAL physical mode can
(and does) modify TP. It is allowed to do that as long as it resets
TP on return. However MCA/INIT events expose us to these PAL
internal TP changes. Hence curr_task().
* If an MCA/INIT event occurs while the kernel was running (not user
space) and the kernel has called PAL then the MCA/INIT handler cannot
assume that the kernel stack is in a fit state to be used. Mainly
because PAL may or may not maintain the stack pointer internally.
Because the MCA/INIT handlers cannot trust the kernel stack, they
have to use their own, per-cpu stacks. The MCA/INIT stacks are
preformatted with just enough task state to let the relevant handlers
do their job.
* Unlike most other architectures, the ia64 struct task is embedded in
the kernel stack[1]. So switching to a new kernel stack means that
we switch to a new task as well. Because various bits of the kernel
assume that current points into the struct task, switching to a new
stack also means a new value for current.
* Once all slaves have rendezvoused and are spinning disabled, the
monarch is entered. The monarch now tries to diagnose the problem
and decide if it can recover or not.
* Part of the monarch's job is to look at the state of all the other
tasks. The only way to do that on ia64 is to call the unwinder,
as mandated by Intel.
* The starting point for the unwind depends on whether a task is
running or not. That is, whether it is on a cpu or is blocked. The
monarch has to determine whether or not a task is on a cpu before it
knows how to start unwinding it. The tasks that received an MCA or
INIT event are no longer running, they have been converted to blocked
tasks. But (and it's a big but), the cpus that received the MCA
rendezvous interrupt are still running on their normal kernel stacks!
* To distinguish between these two cases, the monarch must know which
tasks are on a cpu and which are not. Hence each slave cpu that
switches to an MCA/INIT stack, registers its new stack using
set_curr_task(), so the monarch can tell that the _original_ task is
no longer running on that cpu. That gives us a decent chance of
getting a valid backtrace of the _original_ task.
* MCA/INIT can be nested, to a depth of 2 on any cpu. In the case of a
nested error, we want diagnostics on the MCA/INIT handler that
failed, not on the task that was originally running. Again this
requires set_curr_task() so the MCA/INIT handlers can register their
own stack as running on that cpu. Then a recursive error gets a
trace of the failing handler's "task".
[1]
My (Keith Owens) original design called for ia64 to separate its
struct task and the kernel stacks. Then the MCA/INIT data would be
chained stacks like i386 interrupt stacks. But that required
radical surgery on the rest of ia64, plus extra hard wired TLB
entries with its associated performance degradation. David
Mosberger vetoed that approach. Which meant that separate kernel
stacks meant separate "tasks" for the MCA/INIT handlers.
---
INIT is less complicated than MCA. Pressing the nmi button or using
the equivalent command on the management console sends INIT to all
cpus. SAL picks one of the cpus as the monarch and the rest are
slaves. All the OS INIT handlers are entered at approximately the same
time. The OS monarch prints the state of all tasks and returns, after
which the slaves return and the system resumes.
At least that is what is supposed to happen. Alas there are broken
versions of SAL out there. Some drive all the cpus as monarchs. Some
drive them all as slaves. Some drive one cpu as monarch, wait for that
cpu to return from the OS then drive the rest as slaves. Some versions
of SAL cannot even cope with returning from the OS, they spin inside
SAL on resume. The OS INIT code has workarounds for some of these
broken SAL symptoms, but some simply cannot be fixed from the OS side.
---
The scheduler hooks used by ia64 (curr_task, set_curr_task) are layer
violations. Unfortunately MCA/INIT start off as massive layer
violations (can occur at _any_ time) and they build from there.
At least ia64 makes an attempt at recovering from hardware errors, but
it is a difficult problem because of the asynchronous nature of these
errors. When processing an unmaskable interrupt we sometimes need
special code to cope with our inability to take any locks.
---
How is ia64 MCA/INIT different from x86 NMI?
* x86 NMI typically gets delivered to one cpu. MCA/INIT gets sent to
all cpus.
* x86 NMI cannot be nested. MCA/INIT can be nested, to a depth of 2
per cpu.
* x86 has a separate struct task which points to one of multiple kernel
stacks. ia64 has the struct task embedded in the single kernel
stack, so switching stack means switching task.
* x86 does not call the BIOS so the NMI handler does not have to worry
about any registers having changed. MCA/INIT can occur while the cpu
is in PAL in physical mode, with undefined registers and an undefined
kernel stack.
* i386 backtrace is not very sensitive to whether a process is running
or not. ia64 unwind is very, very sensitive to whether a process is
running or not.
---
What happens when MCA/INIT is delivered while a cpu is running user
space code?
The user mode registers are stored in the RSE area of the MCA/INIT stack on
entry to the OS and are restored from there on return to SAL, so user
mode registers are preserved across a recoverable MCA/INIT. Since the
OS has no idea what unwind data is available for the user space stack,
MCA/INIT never tries to backtrace user space. Which means that the OS
does not bother making the user space process look like a blocked task,
i.e. the OS does not copy pt_regs and switch_stack to the user space
stack. Also the OS has no idea how big the user space RSE and memory
stacks are, which makes it too risky to copy the saved state to a user
mode stack.
---
How do we get a backtrace on the tasks that were running when MCA/INIT
was delivered?
mca.c:::ia64_mca_modify_original_stack(). That identifies and
verifies the original kernel stack, copies the dirty registers from
the MCA/INIT stack's RSE to the original stack's RSE, copies the
skeleton struct pt_regs and switch_stack to the original stack, fills
in the skeleton structures from the PAL minstate area and updates the
original stack's thread.ksp. That makes the original stack look
exactly like any other blocked task, i.e. it now appears to be
sleeping. To get a backtrace, just start with thread.ksp for the
original task and unwind like any other sleeping task.
---
How do we identify the tasks that were running when MCA/INIT was
delivered?
If the previous task has been verified and converted to a blocked
state, then sos->prev_task on the MCA/INIT stack is updated to point to
the previous task. You can look at that field in dumps or debuggers.
To help distinguish between the handler and the original tasks,
handlers have _TIF_MCA_INIT set in thread_info.flags.
The sos data is always in the MCA/INIT handler stack, at offset
MCA_SOS_OFFSET. You can get that value from mca_asm.h or calculate it
as KERNEL_STACK_SIZE - sizeof(struct pt_regs) - sizeof(struct
ia64_sal_os_state), with 16 byte alignment for all structures.
Also the comm field of the MCA/INIT task is modified to include the pid
of the original task, for humans to use. For example, a comm field of
'MCA 12159' means that pid 12159 was running when the MCA was
delivered.

@@ -1,165 +0,0 @@
==============
Serial Devices
==============
Serial Device Naming
====================
As of 2.6.10, serial devices on ia64 are named based on the
order of ACPI and PCI enumeration. The first device in the
ACPI namespace (if any) becomes /dev/ttyS0, the second becomes
/dev/ttyS1, etc., and PCI devices are named sequentially
starting after the ACPI devices.
Prior to 2.6.10, there were confusing exceptions to this:
- Firmware on some machines (mostly from HP) provides an HCDP
table[1] that tells the kernel about devices that can be used
as a serial console. If the user specified "console=ttyS0"
or the EFI ConOut path contained only UART devices, the
kernel registered the device described by the HCDP as
/dev/ttyS0.
- If there was no HCDP, we assumed there were UARTs at the
legacy COM port addresses (I/O ports 0x3f8 and 0x2f8), so
the kernel registered those as /dev/ttyS0 and /dev/ttyS1.
Any additional ACPI or PCI devices were registered sequentially
after /dev/ttyS0 as they were discovered.
With an HCDP, device names changed depending on EFI configuration
and "console=" arguments. Without an HCDP, device names didn't
change, but we registered devices that might not really exist.
For example, an HP rx1600 with a single built-in serial port
(described in the ACPI namespace) plus an MP[2] (a PCI device) has
these ports:
========== ========== ============ ============ =======
Type MMIO pre-2.6.10 pre-2.6.10 2.6.10+
address
(EFI console (EFI console
on builtin) on MP port)
========== ========== ============ ============ =======
builtin 0xff5e0000 ttyS0 ttyS1 ttyS0
MP UPS 0xf8031000 ttyS1 ttyS2 ttyS1
MP Console 0xf8030000 ttyS2 ttyS0 ttyS2
MP 2 0xf8030010 ttyS3 ttyS3 ttyS3
MP 3 0xf8030038 ttyS4 ttyS4 ttyS4
========== ========== ============ ============ =======
Console Selection
=================
EFI knows what your console devices are, but it doesn't tell the
kernel quite enough to actually locate them. The DIG64 HCDP
table[1] does tell the kernel where potential serial console
devices are, but not all firmware supplies it. Also, EFI supports
multiple simultaneous consoles and doesn't tell the kernel which
should be the "primary" one.
So how do you tell Linux which console device to use?
- If your firmware supplies the HCDP, it is simplest to
configure EFI with a single device (either a UART or a VGA
card) as the console. Then you don't need to tell Linux
anything; the kernel will automatically use the EFI console.
(This works only in 2.6.6 or later; prior to that you had
to specify "console=ttyS0" to get a serial console.)
- Without an HCDP, Linux defaults to a VGA console unless you
specify a "console=" argument.
NOTE: Don't assume that a serial console device will be /dev/ttyS0.
It might be ttyS1, ttyS2, etc. Make sure you have the appropriate
entries in /etc/inittab (for getty) and /etc/securetty (to allow
root login).
Early Serial Console
====================
The kernel can't start using a serial console until it knows where
the device lives. Normally this happens when the driver enumerates
all the serial devices, which can happen a minute or more after the
kernel starts booting.
2.6.10 and later kernels have an "early uart" driver that works
very early in the boot process. The kernel will automatically use
this if the user supplies an argument like "console=uart,io,0x3f8",
or if the EFI console path contains only a UART device and the
firmware supplies an HCDP.
Troubleshooting Serial Console Problems
=======================================
No kernel output after elilo prints "Uncompressing Linux... done":
- You specified "console=ttyS0" but Linux changed the device
to which ttyS0 refers. Configure exactly one EFI console
device[3] and remove the "console=" option.
- The EFI console path contains both a VGA device and a UART.
EFI and elilo use both, but Linux defaults to VGA. Remove
the VGA device from the EFI console path[3].
- Multiple UARTs selected as EFI console devices. EFI and
elilo use all selected devices, but Linux uses only one.
Make sure only one UART is selected in the EFI console
path[3].
- You're connected to an HP MP port[2] but have a non-MP UART
selected as EFI console device. EFI uses the MP as a
console device even when it isn't explicitly selected.
Either move the console cable to the non-MP UART, or change
the EFI console path[3] to the MP UART.
Long pause (60+ seconds) between "Uncompressing Linux... done" and
start of kernel output:
- No early console because you used "console=ttyS<n>". Remove
the "console=" option if your firmware supplies an HCDP.
- If you don't have an HCDP, the kernel doesn't know where
your console lives until the driver discovers serial
devices. Use "console=uart,io,0x3f8" (or appropriate
address for your machine).
Kernel and init script output works fine, but no "login:" prompt:
- Add getty entry to /etc/inittab for console tty. Look for
the "Adding console on ttyS<n>" message that tells you which
device is the console.
"login:" prompt, but can't login as root:
- Add entry to /etc/securetty for console tty.
No ACPI serial devices found in 2.6.17 or later:
- Turn on CONFIG_PNP and CONFIG_PNPACPI. Prior to 2.6.17, ACPI
serial devices were discovered by 8250_acpi. In 2.6.17,
8250_acpi was replaced by the combination of 8250_pnp and
CONFIG_PNPACPI.
[1]
http://www.dig64.org/specifications/agreement
The table was originally defined as the "HCDP" for "Headless
Console/Debug Port." The current version is the "PCDP" for
"Primary Console and Debug Port Devices."
[2]
The HP MP (management processor) is a PCI device that provides
several UARTs. One of the UARTs is often used as a console; the
EFI Boot Manager identifies it as "Acpi(HWP0002,700)/Pci(...)/Uart".
The external connection is usually a 25-pin connector, and a
special dongle converts that to three 9-pin connectors, one of
which is labelled "Console."
[3]
EFI console devices are configured using the EFI Boot Manager
"Boot option maintenance" menu. You may have to interrupt the
boot sequence to use this menu, and you will have to reset the
box after changing console configuration.

@@ -40,12 +40,6 @@ Command Line Switches
supplied here is lower than the number of physically available CPUs, then
those CPUs can not be brought online later.
``additional_cpus=n``
Use this to limit hotpluggable CPUs. This option sets
``cpu_possible_mask = cpu_present_mask + additional_cpus``
This option is limited to the IA64 architecture.
``possible_cpus=n``
This option sets ``possible_cpus`` bits in ``cpu_possible_mask``.

@@ -9935,12 +9935,6 @@ F: Documentation/driver-api/i3c
F: drivers/i3c/
F: include/linux/i3c/
IA64 (Itanium) PLATFORM
L: linux-ia64@vger.kernel.org
S: Orphan
F: Documentation/arch/ia64/
F: arch/ia64/
IBM Operation Panel Input Driver
M: Eddie James <eajames@linux.ibm.com>
L: linux-input@vger.kernel.org
@@ -16269,11 +16263,6 @@ L: linux-i2c@vger.kernel.org
S: Maintained
F: drivers/i2c/muxes/i2c-mux-pca9541.c
PCDP - PRIMARY CONSOLE AND DEBUG PORT
M: Khalid Aziz <khalid@gonehiking.org>
S: Maintained
F: drivers/firmware/pcdp.*
PCI DRIVER FOR AARDVARK (Marvell Armada 3700)
M: Thomas Petazzoni <thomas.petazzoni@bootlin.com>
M: Pali Rohár <pali@kernel.org>

@@ -1088,7 +1088,6 @@ config HAVE_ARCH_COMPAT_MMAP_BASES
config PAGE_SIZE_LESS_THAN_64KB
def_bool y
depends on !ARM64_64K_PAGES
depends on !IA64_PAGE_SIZE_64KB
depends on !PAGE_SIZE_64KB
depends on !PARISC_PAGE_SIZE_64KB
depends on PAGE_SIZE_LESS_THAN_256KB

@@ -1,3 +0,0 @@
# SPDX-License-Identifier: GPL-2.0-only
obj-y += kernel/ mm/
obj-$(CONFIG_IA64_SGI_UV) += uv/

@@ -1,394 +0,0 @@
# SPDX-License-Identifier: GPL-2.0
config PGTABLE_LEVELS
int "Page Table Levels" if !IA64_PAGE_SIZE_64KB
range 3 4 if !IA64_PAGE_SIZE_64KB
default 3
menu "Processor type and features"
config IA64
bool
select ARCH_BINFMT_ELF_EXTRA_PHDRS
select ARCH_HAS_CPU_FINALIZE_INIT
select ARCH_HAS_DMA_MARK_CLEAN
select ARCH_HAS_STRNCPY_FROM_USER
select ARCH_HAS_STRNLEN_USER
select ARCH_MIGHT_HAVE_PC_PARPORT
select ARCH_MIGHT_HAVE_PC_SERIO
select ACPI
select ACPI_NUMA if NUMA
select ARCH_ENABLE_MEMORY_HOTPLUG
select ARCH_ENABLE_MEMORY_HOTREMOVE
select ARCH_SUPPORTS_ACPI
select ACPI_SYSTEM_POWER_STATES_SUPPORT if ACPI
select ARCH_MIGHT_HAVE_ACPI_PDC if ACPI
select FORCE_PCI
select PCI_DOMAINS if PCI
select PCI_MSI
select PCI_SYSCALL if PCI
select HAS_IOPORT
select HAVE_ASM_MODVERSIONS
select HAVE_UNSTABLE_SCHED_CLOCK
select HAVE_EXIT_THREAD
select HAVE_KPROBES
select HAVE_KRETPROBES
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_DYNAMIC_FTRACE if (!ITANIUM)
select HAVE_FUNCTION_TRACER
select HAVE_SETUP_PER_CPU_AREA
select TTY
select HAVE_ARCH_TRACEHOOK
select HAVE_FUNCTION_DESCRIPTORS
select HAVE_VIRT_CPU_ACCOUNTING
select HUGETLB_PAGE_SIZE_VARIABLE if HUGETLB_PAGE
select GENERIC_IRQ_PROBE
select GENERIC_PENDING_IRQ if SMP
select GENERIC_IRQ_SHOW
select GENERIC_IRQ_LEGACY
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select GENERIC_IOMAP
select GENERIC_IOREMAP
select GENERIC_SMP_IDLE_THREAD
select ARCH_TASK_STRUCT_ON_STACK
select ARCH_TASK_STRUCT_ALLOCATOR
select ARCH_THREAD_STACK_ALLOCATOR
select ARCH_CLOCKSOURCE_DATA
select GENERIC_TIME_VSYSCALL
select LEGACY_TIMER_TICK
select SWIOTLB
select SYSCTL_ARCH_UNALIGN_NO_WARN
select HAVE_MOD_ARCH_SPECIFIC
select MODULES_USE_ELF_RELA
select ARCH_USE_CMPXCHG_LOCKREF
select HAVE_ARCH_AUDITSYSCALL
select NEED_DMA_MAP_STATE
select NEED_SG_DMA_LENGTH
select NUMA if !FLATMEM
select PCI_MSI_ARCH_FALLBACKS if PCI_MSI
select ZONE_DMA32
select FUNCTION_ALIGNMENT_32B
default y
help
The Itanium Processor Family is Intel's 64-bit successor to
the 32-bit X86 line. The IA-64 Linux project has a home
page at <http://www.linuxia64.org/> and a mailing list at
<linux-ia64@vger.kernel.org>.
config 64BIT
bool
select ATA_NONSTANDARD if ATA
default y
config MMU
bool
default y
config STACKTRACE_SUPPORT
def_bool y
config GENERIC_LOCKBREAK
def_bool n
config GENERIC_CALIBRATE_DELAY
bool
default y
config DMI
bool
default y
select DMI_SCAN_MACHINE_NON_EFI_FALLBACK
config EFI
bool
select UCS2_STRING
default y
config SCHED_OMIT_FRAME_POINTER
bool
default y
config IA64_UNCACHED_ALLOCATOR
bool
select GENERIC_ALLOCATOR
config ARCH_USES_PG_UNCACHED
def_bool y
depends on IA64_UNCACHED_ALLOCATOR
config AUDIT_ARCH
bool
default y
choice
prompt "Processor type"
default ITANIUM
config ITANIUM
bool "Itanium"
help
Select your IA-64 processor type. The default is Itanium.
This choice is safe for all IA-64 systems, but may not perform
optimally on systems with, say, Itanium 2 or newer processors.
config MCKINLEY
bool "Itanium 2"
help
Select this to configure for an Itanium 2 (McKinley) processor.
endchoice
choice
prompt "Kernel page size"
default IA64_PAGE_SIZE_16KB
config IA64_PAGE_SIZE_4KB
bool "4KB"
help
This lets you select the page size of the kernel. For best IA-64
performance, a page size of 8KB or 16KB is recommended. For best
IA-32 compatibility, a page size of 4KB should be selected (the vast
majority of IA-32 binaries work perfectly fine with a larger page
size). For Itanium 2 or newer systems, a page size of 64KB can also
be selected.
4KB For best IA-32 compatibility
8KB For best IA-64 performance
16KB For best IA-64 performance
64KB Requires Itanium 2 or newer processor.
If you don't know what to do, choose 16KB.
config IA64_PAGE_SIZE_8KB
bool "8KB"
config IA64_PAGE_SIZE_16KB
bool "16KB"
config IA64_PAGE_SIZE_64KB
depends on !ITANIUM
bool "64KB"
endchoice
source "kernel/Kconfig.hz"
config IA64_BRL_EMU
bool
depends on ITANIUM
default y
# align cache-sensitive data to 128 bytes
config IA64_L1_CACHE_SHIFT
int
default "7" if MCKINLEY
default "6" if ITANIUM
config IA64_SGI_UV
bool "SGI-UV support"
help
Selecting this option will add specific support for running on SGI
UV based systems. If you have an SGI UV system or are building a
distro kernel, select this option.
config IA64_HP_SBA_IOMMU
bool "HP SBA IOMMU support"
select DMA_OPS
default y
help
Say Y here to add support for the SBA IOMMU found on HP zx1 and
sx1000 systems. If you're unsure, answer Y.
config IA64_CYCLONE
bool "Cyclone (EXA) Time Source support"
help
Say Y here to enable support for IBM EXA Cyclone time source.
If you're unsure, answer N.
config ARCH_FORCE_MAX_ORDER
int
default "16" if HUGETLB_PAGE
default "10"
config SMP
bool "Symmetric multi-processing support"
help
This enables support for systems with more than one CPU. If you have
a system with only one CPU, say N. If you have a system with more
than one CPU, say Y.
If you say N here, the kernel will run on single and multiprocessor
systems, but will use only one CPU of a multiprocessor system. If
you say Y here, the kernel will run on many, but not all,
single processor systems. On a single processor system, the kernel
will run faster if you say N here.
See also the SMP-HOWTO available at
<http://www.tldp.org/docs.html#howto>.
If you don't know what to do here, say N.
config NR_CPUS
int "Maximum number of CPUs (2-4096)"
range 2 4096
depends on SMP
default "4096"
help
You should set this to the number of CPUs in your system, but
keep in mind that a kernel compiled for, e.g., 2 CPUs will boot but
only use 2 CPUs on a >2 CPU system. Setting this to a value larger
than 64 will cause the use of a CPU mask array, causing a small
performance hit.
config HOTPLUG_CPU
bool "Support for hot-pluggable CPUs"
depends on SMP
default n
help
Say Y here to experiment with turning CPUs off and on. CPUs
can be controlled through /sys/devices/system/cpu/cpu#.
Say N if you want to disable CPU hotplug.
config SCHED_SMT
bool "SMT scheduler support"
depends on SMP
help
Improves the CPU scheduler's decision making when dealing with
Intel IA64 chips with MultiThreading at a cost of slightly increased
overhead in some places. If unsure say N here.
config PERMIT_BSP_REMOVE
bool "Support removal of Bootstrap Processor"
depends on HOTPLUG_CPU
default n
help
Say Y here if your platform SAL will support removal of BSP with HOTPLUG_CPU
support.
config FORCE_CPEI_RETARGET
bool "Force assumption that CPEI can be re-targeted"
depends on PERMIT_BSP_REMOVE
default n
help
Say Y if you need to force the assumption that CPEI can be re-targeted to
any cpu in the system. This hint is available via ACPI 3.0 specifications.
Tiger4 systems are capable of re-directing CPEI to any CPU other than BSP.
This option is useful for enabling this feature on older BIOSes as well.
You can also enable this by using boot command line option force_cpei=1.
config ARCH_SELECT_MEMORY_MODEL
def_bool y
config ARCH_FLATMEM_ENABLE
def_bool y
config ARCH_SPARSEMEM_ENABLE
def_bool y
select SPARSEMEM_VMEMMAP_ENABLE
config ARCH_SPARSEMEM_DEFAULT
def_bool y
depends on ARCH_SPARSEMEM_ENABLE
config NUMA
bool "NUMA support"
depends on !FLATMEM
select SMP
select USE_PERCPU_NUMA_NODE_ID
help
Say Y to compile the kernel to support NUMA (Non-Uniform Memory
Access). This option is for configuring high-end multiprocessor
server systems. If in doubt, say N.
config NODES_SHIFT
int "Max num nodes shift(3-10)"
range 3 10
default "10"
depends on NUMA
help
This option specifies the maximum number of nodes in your SSI system.
MAX_NUMNODES will be 2^(this value); e.g., the default of 10 allows 1024 nodes.
If in doubt, use the default.
config HAVE_ARCH_NODEDATA_EXTENSION
def_bool y
depends on NUMA
config HAVE_MEMORYLESS_NODES
def_bool NUMA
config ARCH_PROC_KCORE_TEXT
def_bool y
depends on PROC_KCORE
config IA64_MCA_RECOVERY
bool "MCA recovery from errors other than TLB."
config IA64_PALINFO
tristate "/proc/pal support"
help
If you say Y here, you are able to get PAL (Processor Abstraction
Layer) information in /proc/pal. This contains useful information
about the processors in your systems, such as cache and TLB sizes
and the PAL firmware version in use.
To use this option, you have to ensure that the "/proc file system
support" (CONFIG_PROC_FS) is enabled, too.
config IA64_MC_ERR_INJECT
tristate "MC error injection support"
help
Adds support for MC error injection. If enabled, the kernel
will provide a sysfs interface for user applications to
call MC error injection PAL procedures to inject various errors.
This is a useful tool for MCA testing.
If you're unsure, do not select this option.
config IA64_ESI
bool "ESI (Extensible SAL Interface) support"
help
If you say Y here, support is built into the kernel to
make ESI calls. ESI calls are used to support vendor-specific
firmware extensions, such as the ability to inject memory-errors
for test-purposes. If you're unsure, say N.
config IA64_HP_AML_NFW
bool "Support ACPI AML calls to native firmware"
help
This driver installs a global ACPI Operation Region handler for
region 0xA1. AML methods can use this OpRegion to call arbitrary
native firmware functions. The driver installs the OpRegion
handler if there is an HPQ5001 device or if the user supplies
the "force" module parameter, e.g., with the "aml_nfw.force"
kernel command line option.
endmenu
config ARCH_SUPPORTS_KEXEC
def_bool !SMP || HOTPLUG_CPU
config ARCH_SUPPORTS_CRASH_DUMP
def_bool IA64_MCA_RECOVERY && (!SMP || HOTPLUG_CPU)
menu "Power management and ACPI options"
source "kernel/power/Kconfig"
source "drivers/acpi/Kconfig"
if PM
menu "CPU Frequency scaling"
source "drivers/cpufreq/Kconfig"
endmenu
endif
endmenu
config MSPEC
tristate "Memory special operations driver"
depends on IA64
select IA64_UNCACHED_ALLOCATOR
help
If you have an ia64 and you want to enable memory special
operations support (formerly known as fetchop), say Y here,
otherwise say N.


@@ -1,55 +0,0 @@
# SPDX-License-Identifier: GPL-2.0
choice
prompt "Physical memory granularity"
default IA64_GRANULE_64MB
config IA64_GRANULE_16MB
bool "16MB"
help
IA-64 identity-mapped regions use a large page size called "granules".
Select "16MB" for a small granule size.
Select "64MB" for a large granule size. This is the current default.
config IA64_GRANULE_64MB
bool "64MB"
depends on BROKEN
endchoice
config IA64_PRINT_HAZARDS
bool "Print possible IA-64 dependency violations to console"
depends on DEBUG_KERNEL
help
Selecting this option prints more information for Illegal Dependency
Faults, that is, for Read-after-Write (RAW), Write-after-Write (WAW),
or Write-after-Read (WAR) violations. This option is ignored if you
are compiling for an Itanium A step processor
(CONFIG_ITANIUM_ASTEP_SPECIFIC). If you're unsure, select Y.
config DISABLE_VHPT
bool "Disable VHPT"
depends on DEBUG_KERNEL
help
The Virtual Hash Page Table (VHPT) enhances virtual address
translation performance. Normally you want the VHPT active but you
can select this option to disable the VHPT for debugging. If you're
unsure, answer N.
config IA64_DEBUG_CMPXCHG
bool "Turn on compare-and-exchange bug checking (slow!)"
depends on DEBUG_KERNEL && PRINTK
help
Selecting this option turns on bug checking for the IA-64
compare-and-exchange instructions. This is slow! Itaniums
from step B3 or later don't have this problem. If you're unsure,
select N.
config IA64_DEBUG_IRQ
bool "Turn on irq debug checks (slow!)"
depends on DEBUG_KERNEL
help
Selecting this option turns on bug checking for the IA-64 irq_save
and restore instructions. It's useful for tracking down spinlock
problems, but slow! If you're unsure, select N.


@@ -1,82 +0,0 @@
#
# ia64/Makefile
#
# This file is included by the global makefile so that you can add your own
# architecture-specific flags and dependencies.
#
# This file is subject to the terms and conditions of the GNU General Public
# License. See the file "COPYING" in the main directory of this archive
# for more details.
#
# Copyright (C) 1998-2004 by David Mosberger-Tang <davidm@hpl.hp.com>
#
KBUILD_DEFCONFIG := generic_defconfig
NM := $(CROSS_COMPILE)nm -B
CHECKFLAGS += -D__ia64=1 -D__ia64__=1 -D_LP64 -D__LP64__
OBJCOPYFLAGS := --strip-all
LDFLAGS_vmlinux := -static
KBUILD_AFLAGS_KERNEL := -mconstant-gp
EXTRA :=
cflags-y := -pipe $(EXTRA) -ffixed-r13 -mfixed-range=f12-f15,f32-f127 \
-frename-registers -fno-optimize-sibling-calls
KBUILD_CFLAGS_KERNEL := -mconstant-gp
GAS_STATUS = $(shell $(srctree)/arch/ia64/scripts/check-gas "$(CC)" "$(OBJDUMP)")
KBUILD_CPPFLAGS += $(shell $(srctree)/arch/ia64/scripts/toolchain-flags "$(CC)" "$(OBJDUMP)" "$(READELF)")
ifeq ($(GAS_STATUS),buggy)
$(error Sorry, you need a newer version of the assembler, one that is built from \
a source-tree that post-dates 18-Dec-2002. You can find a pre-compiled \
static binary of such an assembler at: \
\
ftp://ftp.hpl.hp.com/pub/linux-ia64/gas-030124.tar.gz)
endif
quiet_cmd_gzip = GZIP $@
cmd_gzip = cat $(real-prereqs) | $(KGZIP) -n -f -9 > $@
quiet_cmd_objcopy = OBJCOPY $@
cmd_objcopy = $(OBJCOPY) $(OBJCOPYFLAGS) $(OBJCOPYFLAGS_$(@F)) $< $@
KBUILD_CFLAGS += $(cflags-y)
libs-y += arch/ia64/lib/
drivers-y += arch/ia64/pci/ arch/ia64/hp/common/
PHONY += compressed check
all: compressed unwcheck
compressed: vmlinux.gz
vmlinuz: vmlinux.gz
vmlinux.gz: vmlinux.bin FORCE
$(call if_changed,gzip)
vmlinux.bin: vmlinux FORCE
$(call if_changed,objcopy)
unwcheck: vmlinux
-$(Q)READELF=$(READELF) $(PYTHON3) $(srctree)/arch/ia64/scripts/unwcheck.py $<
archheaders:
$(Q)$(MAKE) $(build)=arch/ia64/kernel/syscalls all
CLEAN_FILES += vmlinux.gz
install: KBUILD_IMAGE := vmlinux.gz
install:
$(call cmd,install)
define archhelp
echo '* compressed - Build compressed kernel image'
echo ' install - Install compressed kernel image'
echo '* unwcheck - Check vmlinux for invalid unwind info'
endef


@@ -1,102 +0,0 @@
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
CONFIG_LOG_BUF_SHIFT=16
CONFIG_PROFILING=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_PARTITION_ADVANCED=y
CONFIG_SGI_PARTITION=y
CONFIG_SMP=y
CONFIG_NR_CPUS=2
CONFIG_PREEMPT=y
CONFIG_IA64_PALINFO=y
CONFIG_BINFMT_MISC=m
CONFIG_ACPI_BUTTON=m
CONFIG_ACPI_FAN=m
CONFIG_ACPI_PROCESSOR=m
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_INET=y
# CONFIG_IPV6 is not set
CONFIG_BLK_DEV_LOOP=m
CONFIG_BLK_DEV_NBD=m
CONFIG_BLK_DEV_RAM=m
CONFIG_ATA=m
CONFIG_ATA_GENERIC=m
CONFIG_ATA_PIIX=m
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_LOGGING=y
CONFIG_SCSI_SPI_ATTRS=m
CONFIG_SCSI_QLOGIC_1280=y
CONFIG_MD=y
CONFIG_BLK_DEV_MD=m
CONFIG_MD_LINEAR=m
CONFIG_MD_RAID0=m
CONFIG_MD_RAID1=m
CONFIG_MD_RAID10=m
CONFIG_MD_MULTIPATH=m
CONFIG_BLK_DEV_DM=m
CONFIG_DM_CRYPT=m
CONFIG_DM_SNAPSHOT=m
CONFIG_DM_MIRROR=m
CONFIG_DM_ZERO=m
CONFIG_NETDEVICES=y
CONFIG_DUMMY=y
CONFIG_INPUT_EVDEV=y
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_SHARE_IRQ=y
# CONFIG_HW_RANDOM is not set
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_EFI=y
CONFIG_I2C=y
CONFIG_I2C_CHARDEV=y
CONFIG_AGP=m
CONFIG_AGP_I460=m
CONFIG_DRM=m
CONFIG_DRM_R128=m
CONFIG_SOUND=m
CONFIG_SND=m
CONFIG_SND_SEQUENCER=m
CONFIG_SND_MIXER_OSS=m
CONFIG_SND_PCM_OSS=m
CONFIG_SND_CS4281=m
CONFIG_USB_HIDDEV=y
CONFIG_USB=m
CONFIG_USB_MON=m
CONFIG_USB_UHCI_HCD=m
CONFIG_USB_ACM=m
CONFIG_USB_PRINTER=m
CONFIG_USB_STORAGE=m
CONFIG_EXT2_FS=y
CONFIG_EXT3_FS=y
CONFIG_XFS_FS=y
CONFIG_XFS_QUOTA=y
CONFIG_XFS_POSIX_ACL=y
CONFIG_AUTOFS_FS=m
CONFIG_ISO9660_FS=m
CONFIG_JOLIET=y
CONFIG_UDF_FS=m
CONFIG_VFAT_FS=y
CONFIG_PROC_KCORE=y
CONFIG_TMPFS=y
CONFIG_HUGETLBFS=y
CONFIG_NFS_FS=m
CONFIG_NFS_V4=m
CONFIG_NFSD=m
CONFIG_NFSD_V4=y
CONFIG_CIFS=m
CONFIG_CIFS_XATTR=y
CONFIG_CIFS_POSIX=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ISO8859_1=y
CONFIG_NLS_UTF8=m
CONFIG_MAGIC_SYSRQ=y
CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_MUTEXES=y
CONFIG_CRYPTO_MD5=y
CONFIG_CRYPTO_DES=y


@@ -1,206 +0,0 @@
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=20
CONFIG_CGROUPS=y
CONFIG_CPUSETS=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_KALLSYMS_ALL=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODVERSIONS=y
CONFIG_PARTITION_ADVANCED=y
CONFIG_SGI_PARTITION=y
CONFIG_MCKINLEY=y
CONFIG_IA64_PAGE_SIZE_64KB=y
CONFIG_IA64_CYCLONE=y
CONFIG_SMP=y
CONFIG_HOTPLUG_CPU=y
CONFIG_IA64_MCA_RECOVERY=y
CONFIG_IA64_PALINFO=y
CONFIG_KEXEC=y
CONFIG_CRASH_DUMP=y
CONFIG_BINFMT_MISC=m
CONFIG_ACPI_BUTTON=m
CONFIG_ACPI_FAN=m
CONFIG_ACPI_DOCK=y
CONFIG_ACPI_PROCESSOR=m
CONFIG_HOTPLUG_PCI=y
CONFIG_HOTPLUG_PCI_ACPI=y
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_SYN_COOKIES=y
# CONFIG_IPV6 is not set
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
CONFIG_CONNECTOR=y
# CONFIG_PNP_DEBUG_MESSAGES is not set
CONFIG_BLK_DEV_LOOP=m
CONFIG_BLK_DEV_NBD=m
CONFIG_BLK_DEV_RAM=y
CONFIG_SGI_XP=m
CONFIG_ATA=y
CONFIG_ATA_GENERIC=y
CONFIG_PATA_CMD64X=y
CONFIG_ATA_PIIX=y
CONFIG_BLK_DEV_SD=y
CONFIG_CHR_DEV_ST=m
CONFIG_BLK_DEV_SR=m
CONFIG_CHR_DEV_SG=m
CONFIG_SCSI_FC_ATTRS=y
CONFIG_SCSI_SYM53C8XX_2=y
CONFIG_SCSI_QLOGIC_1280=y
CONFIG_SATA_VITESSE=y
CONFIG_MD=y
CONFIG_BLK_DEV_MD=m
CONFIG_MD_LINEAR=m
CONFIG_MD_RAID0=m
CONFIG_MD_RAID1=m
CONFIG_MD_MULTIPATH=m
CONFIG_BLK_DEV_DM=m
CONFIG_DM_CRYPT=m
CONFIG_DM_SNAPSHOT=m
CONFIG_DM_MIRROR=m
CONFIG_DM_ZERO=m
CONFIG_DM_MULTIPATH=m
CONFIG_FUSION=y
CONFIG_FUSION_SPI=y
CONFIG_FUSION_FC=m
CONFIG_FUSION_SAS=y
CONFIG_NETDEVICES=y
CONFIG_DUMMY=m
CONFIG_NETCONSOLE=y
CONFIG_TIGON3=y
CONFIG_NET_TULIP=y
CONFIG_TULIP=m
CONFIG_E100=m
CONFIG_E1000=y
CONFIG_IGB=y
# CONFIG_SERIO_SERPORT is not set
CONFIG_GAMEPORT=m
CONFIG_SERIAL_NONSTANDARD=y
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_8250_NR_UARTS=6
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_SHARE_IRQ=y
# CONFIG_HW_RANDOM is not set
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_EFI=y
CONFIG_HPET=y
CONFIG_AGP=m
CONFIG_AGP_I460=m
CONFIG_AGP_HP_ZX1=m
CONFIG_DRM=m
CONFIG_DRM_TDFX=m
CONFIG_DRM_R128=m
CONFIG_DRM_RADEON=m
CONFIG_DRM_MGA=m
CONFIG_DRM_SIS=m
CONFIG_SOUND=m
CONFIG_SND=m
CONFIG_SND_SEQUENCER=m
CONFIG_SND_SEQ_DUMMY=m
CONFIG_SND_MIXER_OSS=m
CONFIG_SND_PCM_OSS=m
CONFIG_SND_SEQUENCER_OSS=y
CONFIG_SND_VERBOSE_PRINTK=y
CONFIG_SND_DUMMY=m
CONFIG_SND_VIRMIDI=m
CONFIG_SND_MTPAV=m
CONFIG_SND_SERIAL_U16550=m
CONFIG_SND_MPU401=m
CONFIG_SND_CS4281=m
CONFIG_SND_CS46XX=m
CONFIG_SND_EMU10K1=m
CONFIG_SND_FM801=m
CONFIG_HID_GYRATION=m
CONFIG_HID_PANTHERLORD=m
CONFIG_HID_PETALYNX=m
CONFIG_HID_SAMSUNG=m
CONFIG_HID_SONY=m
CONFIG_HID_SUNPLUS=m
CONFIG_USB=m
CONFIG_USB_MON=m
CONFIG_USB_EHCI_HCD=m
CONFIG_USB_OHCI_HCD=m
CONFIG_USB_UHCI_HCD=m
CONFIG_USB_STORAGE=m
CONFIG_INFINIBAND=m
CONFIG_INFINIBAND_MTHCA=m
CONFIG_INFINIBAND_IPOIB=m
CONFIG_INTEL_IOMMU=y
CONFIG_MSPEC=m
CONFIG_EXT2_FS=y
CONFIG_EXT2_FS_XATTR=y
CONFIG_EXT2_FS_POSIX_ACL=y
CONFIG_EXT2_FS_SECURITY=y
CONFIG_EXT3_FS=y
CONFIG_EXT3_FS_POSIX_ACL=y
CONFIG_EXT3_FS_SECURITY=y
CONFIG_REISERFS_FS=y
CONFIG_REISERFS_FS_XATTR=y
CONFIG_REISERFS_FS_POSIX_ACL=y
CONFIG_REISERFS_FS_SECURITY=y
CONFIG_XFS_FS=y
CONFIG_AUTOFS_FS=m
CONFIG_ISO9660_FS=m
CONFIG_JOLIET=y
CONFIG_UDF_FS=m
CONFIG_VFAT_FS=y
CONFIG_NTFS_FS=m
CONFIG_PROC_KCORE=y
CONFIG_TMPFS=y
CONFIG_HUGETLBFS=y
CONFIG_NFS_FS=m
CONFIG_NFS_V4=m
CONFIG_NFSD=m
CONFIG_NFSD_V4=y
CONFIG_CIFS=m
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_CODEPAGE_737=m
CONFIG_NLS_CODEPAGE_775=m
CONFIG_NLS_CODEPAGE_850=m
CONFIG_NLS_CODEPAGE_852=m
CONFIG_NLS_CODEPAGE_855=m
CONFIG_NLS_CODEPAGE_857=m
CONFIG_NLS_CODEPAGE_860=m
CONFIG_NLS_CODEPAGE_861=m
CONFIG_NLS_CODEPAGE_862=m
CONFIG_NLS_CODEPAGE_863=m
CONFIG_NLS_CODEPAGE_864=m
CONFIG_NLS_CODEPAGE_865=m
CONFIG_NLS_CODEPAGE_866=m
CONFIG_NLS_CODEPAGE_869=m
CONFIG_NLS_CODEPAGE_936=m
CONFIG_NLS_CODEPAGE_950=m
CONFIG_NLS_CODEPAGE_932=m
CONFIG_NLS_CODEPAGE_949=m
CONFIG_NLS_CODEPAGE_874=m
CONFIG_NLS_ISO8859_8=m
CONFIG_NLS_CODEPAGE_1250=m
CONFIG_NLS_CODEPAGE_1251=m
CONFIG_NLS_ISO8859_1=y
CONFIG_NLS_ISO8859_2=m
CONFIG_NLS_ISO8859_3=m
CONFIG_NLS_ISO8859_4=m
CONFIG_NLS_ISO8859_5=m
CONFIG_NLS_ISO8859_6=m
CONFIG_NLS_ISO8859_7=m
CONFIG_NLS_ISO8859_9=m
CONFIG_NLS_ISO8859_13=m
CONFIG_NLS_ISO8859_14=m
CONFIG_NLS_ISO8859_15=m
CONFIG_NLS_KOI8_R=m
CONFIG_NLS_KOI8_U=m
CONFIG_NLS_UTF8=m
CONFIG_MAGIC_SYSRQ=y
CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_MUTEXES=y
CONFIG_CRYPTO_PCBC=m
CONFIG_CRYPTO_MD5=y
# CONFIG_CRYPTO_ANSI_CPRNG is not set
CONFIG_CRC_T10DIF=y


@@ -1,184 +0,0 @@
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=20
CONFIG_BLK_DEV_INITRD=y
CONFIG_KALLSYMS_ALL=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODVERSIONS=y
CONFIG_PARTITION_ADVANCED=y
CONFIG_SGI_PARTITION=y
CONFIG_MCKINLEY=y
CONFIG_IA64_CYCLONE=y
CONFIG_SMP=y
CONFIG_NR_CPUS=512
CONFIG_HOTPLUG_CPU=y
CONFIG_SPARSEMEM_MANUAL=y
CONFIG_IA64_MCA_RECOVERY=y
CONFIG_IA64_PALINFO=y
CONFIG_BINFMT_MISC=m
CONFIG_ACPI_BUTTON=m
CONFIG_ACPI_FAN=m
CONFIG_ACPI_PROCESSOR=m
CONFIG_HOTPLUG_PCI=y
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_SYN_COOKIES=y
# CONFIG_IPV6 is not set
CONFIG_BLK_DEV_LOOP=m
CONFIG_BLK_DEV_NBD=m
CONFIG_BLK_DEV_RAM=y
CONFIG_ATA=y
CONFIG_ATA_GENERIC=y
CONFIG_PATA_CMD64X=y
CONFIG_ATA_PIIX=y
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
CONFIG_CHR_DEV_ST=m
CONFIG_BLK_DEV_SR=m
CONFIG_CHR_DEV_SG=m
CONFIG_SCSI_FC_ATTRS=y
CONFIG_SCSI_SYM53C8XX_2=y
CONFIG_SCSI_QLOGIC_1280=y
CONFIG_MD=y
CONFIG_BLK_DEV_MD=m
CONFIG_MD_LINEAR=m
CONFIG_MD_RAID0=m
CONFIG_MD_RAID1=m
CONFIG_MD_MULTIPATH=m
CONFIG_BLK_DEV_DM=m
CONFIG_DM_CRYPT=m
CONFIG_DM_SNAPSHOT=m
CONFIG_DM_MIRROR=m
CONFIG_DM_ZERO=m
CONFIG_DM_MULTIPATH=m
CONFIG_FUSION=y
CONFIG_FUSION_SPI=y
CONFIG_FUSION_FC=m
CONFIG_NETDEVICES=y
CONFIG_DUMMY=m
CONFIG_NETCONSOLE=y
CONFIG_TIGON3=y
CONFIG_NET_TULIP=y
CONFIG_TULIP=m
CONFIG_E100=m
CONFIG_E1000=y
# CONFIG_SERIO_SERPORT is not set
CONFIG_GAMEPORT=m
CONFIG_SERIAL_NONSTANDARD=y
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_8250_NR_UARTS=6
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_SHARE_IRQ=y
# CONFIG_HW_RANDOM is not set
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_EFI=y
CONFIG_HPET=y
CONFIG_AGP=m
CONFIG_AGP_I460=m
CONFIG_AGP_HP_ZX1=m
CONFIG_DRM=m
CONFIG_DRM_TDFX=m
CONFIG_DRM_R128=m
CONFIG_DRM_RADEON=m
CONFIG_DRM_MGA=m
CONFIG_DRM_SIS=m
CONFIG_SOUND=m
CONFIG_SND=m
CONFIG_SND_SEQUENCER=m
CONFIG_SND_SEQ_DUMMY=m
CONFIG_SND_MIXER_OSS=m
CONFIG_SND_PCM_OSS=m
CONFIG_SND_SEQUENCER_OSS=y
CONFIG_SND_VERBOSE_PRINTK=y
CONFIG_SND_DUMMY=m
CONFIG_SND_VIRMIDI=m
CONFIG_SND_MTPAV=m
CONFIG_SND_SERIAL_U16550=m
CONFIG_SND_MPU401=m
CONFIG_SND_CS4281=m
CONFIG_SND_CS46XX=m
CONFIG_SND_EMU10K1=m
CONFIG_SND_FM801=m
CONFIG_USB=m
CONFIG_USB_MON=m
CONFIG_USB_EHCI_HCD=m
CONFIG_USB_OHCI_HCD=m
CONFIG_USB_UHCI_HCD=m
CONFIG_USB_STORAGE=m
CONFIG_INFINIBAND=m
CONFIG_INFINIBAND_MTHCA=m
CONFIG_INFINIBAND_IPOIB=m
CONFIG_EXT2_FS=y
CONFIG_EXT2_FS_XATTR=y
CONFIG_EXT2_FS_POSIX_ACL=y
CONFIG_EXT2_FS_SECURITY=y
CONFIG_EXT3_FS=y
CONFIG_EXT3_FS_POSIX_ACL=y
CONFIG_EXT3_FS_SECURITY=y
CONFIG_REISERFS_FS=y
CONFIG_REISERFS_FS_XATTR=y
CONFIG_REISERFS_FS_POSIX_ACL=y
CONFIG_REISERFS_FS_SECURITY=y
CONFIG_XFS_FS=y
CONFIG_AUTOFS_FS=y
CONFIG_ISO9660_FS=m
CONFIG_JOLIET=y
CONFIG_UDF_FS=m
CONFIG_VFAT_FS=y
CONFIG_NTFS_FS=m
CONFIG_PROC_KCORE=y
CONFIG_TMPFS=y
CONFIG_HUGETLBFS=y
CONFIG_NFS_FS=m
CONFIG_NFS_V4=m
CONFIG_NFSD=m
CONFIG_NFSD_V4=y
CONFIG_CIFS=m
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_CODEPAGE_737=m
CONFIG_NLS_CODEPAGE_775=m
CONFIG_NLS_CODEPAGE_850=m
CONFIG_NLS_CODEPAGE_852=m
CONFIG_NLS_CODEPAGE_855=m
CONFIG_NLS_CODEPAGE_857=m
CONFIG_NLS_CODEPAGE_860=m
CONFIG_NLS_CODEPAGE_861=m
CONFIG_NLS_CODEPAGE_862=m
CONFIG_NLS_CODEPAGE_863=m
CONFIG_NLS_CODEPAGE_864=m
CONFIG_NLS_CODEPAGE_865=m
CONFIG_NLS_CODEPAGE_866=m
CONFIG_NLS_CODEPAGE_869=m
CONFIG_NLS_CODEPAGE_936=m
CONFIG_NLS_CODEPAGE_950=m
CONFIG_NLS_CODEPAGE_932=m
CONFIG_NLS_CODEPAGE_949=m
CONFIG_NLS_CODEPAGE_874=m
CONFIG_NLS_ISO8859_8=m
CONFIG_NLS_CODEPAGE_1250=m
CONFIG_NLS_CODEPAGE_1251=m
CONFIG_NLS_ISO8859_1=y
CONFIG_NLS_ISO8859_2=m
CONFIG_NLS_ISO8859_3=m
CONFIG_NLS_ISO8859_4=m
CONFIG_NLS_ISO8859_5=m
CONFIG_NLS_ISO8859_6=m
CONFIG_NLS_ISO8859_7=m
CONFIG_NLS_ISO8859_9=m
CONFIG_NLS_ISO8859_13=m
CONFIG_NLS_ISO8859_14=m
CONFIG_NLS_ISO8859_15=m
CONFIG_NLS_KOI8_R=m
CONFIG_NLS_KOI8_U=m
CONFIG_NLS_UTF8=m
CONFIG_MAGIC_SYSRQ=y
CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_MUTEXES=y
CONFIG_CRYPTO_MD5=y


@@ -1,169 +0,0 @@
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=20
CONFIG_BLK_DEV_INITRD=y
CONFIG_KALLSYMS_ALL=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
CONFIG_MODVERSIONS=y
CONFIG_MODULE_SRCVERSION_ALL=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_PARTITION_ADVANCED=y
CONFIG_SGI_PARTITION=y
CONFIG_MCKINLEY=y
CONFIG_IA64_PAGE_SIZE_64KB=y
CONFIG_IA64_CYCLONE=y
CONFIG_SMP=y
CONFIG_NR_CPUS=16
CONFIG_HOTPLUG_CPU=y
CONFIG_PERMIT_BSP_REMOVE=y
CONFIG_FORCE_CPEI_RETARGET=y
CONFIG_IA64_MCA_RECOVERY=y
CONFIG_IA64_PALINFO=y
CONFIG_KEXEC=y
CONFIG_BINFMT_MISC=m
CONFIG_ACPI_BUTTON=m
CONFIG_ACPI_FAN=m
CONFIG_ACPI_PROCESSOR=m
CONFIG_HOTPLUG_PCI=y
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
CONFIG_SYN_COOKIES=y
# CONFIG_IPV6 is not set
CONFIG_BLK_DEV_LOOP=m
CONFIG_BLK_DEV_NBD=m
CONFIG_BLK_DEV_RAM=y
CONFIG_ATA=y
CONFIG_ATA_GENERIC=y
CONFIG_PATA_CMD64X=y
CONFIG_ATA_PIIX=y
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
CONFIG_CHR_DEV_ST=m
CONFIG_BLK_DEV_SR=m
CONFIG_CHR_DEV_SG=m
CONFIG_SCSI_FC_ATTRS=y
CONFIG_SCSI_SYM53C8XX_2=y
CONFIG_SCSI_QLOGIC_1280=y
CONFIG_MD=y
CONFIG_BLK_DEV_MD=m
CONFIG_MD_LINEAR=m
CONFIG_MD_RAID0=m
CONFIG_MD_RAID1=m
CONFIG_MD_MULTIPATH=m
CONFIG_BLK_DEV_DM=m
CONFIG_DM_CRYPT=m
CONFIG_DM_SNAPSHOT=m
CONFIG_DM_MIRROR=m
CONFIG_DM_ZERO=m
CONFIG_FUSION=y
CONFIG_FUSION_SPI=y
CONFIG_FUSION_FC=y
CONFIG_FUSION_CTL=y
CONFIG_NETDEVICES=y
CONFIG_DUMMY=m
CONFIG_NETCONSOLE=y
CONFIG_TIGON3=y
CONFIG_NET_TULIP=y
CONFIG_TULIP=m
CONFIG_E100=m
CONFIG_E1000=y
# CONFIG_SERIO_SERPORT is not set
CONFIG_GAMEPORT=m
CONFIG_SERIAL_NONSTANDARD=y
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_8250_NR_UARTS=6
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_SHARE_IRQ=y
# CONFIG_HW_RANDOM is not set
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_EFI=y
CONFIG_HPET=y
CONFIG_AGP=m
CONFIG_AGP_I460=m
CONFIG_DRM=m
CONFIG_DRM_TDFX=m
CONFIG_DRM_R128=m
CONFIG_DRM_RADEON=m
CONFIG_DRM_MGA=m
CONFIG_DRM_SIS=m
CONFIG_USB=y
CONFIG_USB_EHCI_HCD=m
CONFIG_USB_OHCI_HCD=m
CONFIG_USB_UHCI_HCD=y
CONFIG_USB_STORAGE=m
CONFIG_EXT2_FS=y
CONFIG_EXT2_FS_XATTR=y
CONFIG_EXT2_FS_POSIX_ACL=y
CONFIG_EXT2_FS_SECURITY=y
CONFIG_EXT3_FS=y
CONFIG_EXT3_FS_POSIX_ACL=y
CONFIG_EXT3_FS_SECURITY=y
CONFIG_REISERFS_FS=y
CONFIG_REISERFS_FS_XATTR=y
CONFIG_REISERFS_FS_POSIX_ACL=y
CONFIG_REISERFS_FS_SECURITY=y
CONFIG_XFS_FS=y
CONFIG_AUTOFS_FS=y
CONFIG_ISO9660_FS=m
CONFIG_JOLIET=y
CONFIG_UDF_FS=m
CONFIG_VFAT_FS=y
CONFIG_NTFS_FS=m
CONFIG_PROC_KCORE=y
CONFIG_TMPFS=y
CONFIG_HUGETLBFS=y
CONFIG_NFS_FS=m
CONFIG_NFS_V4=m
CONFIG_NFSD=m
CONFIG_NFSD_V4=y
CONFIG_CIFS=m
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_CODEPAGE_737=m
CONFIG_NLS_CODEPAGE_775=m
CONFIG_NLS_CODEPAGE_850=m
CONFIG_NLS_CODEPAGE_852=m
CONFIG_NLS_CODEPAGE_855=m
CONFIG_NLS_CODEPAGE_857=m
CONFIG_NLS_CODEPAGE_860=m
CONFIG_NLS_CODEPAGE_861=m
CONFIG_NLS_CODEPAGE_862=m
CONFIG_NLS_CODEPAGE_863=m
CONFIG_NLS_CODEPAGE_864=m
CONFIG_NLS_CODEPAGE_865=m
CONFIG_NLS_CODEPAGE_866=m
CONFIG_NLS_CODEPAGE_869=m
CONFIG_NLS_CODEPAGE_936=m
CONFIG_NLS_CODEPAGE_950=m
CONFIG_NLS_CODEPAGE_932=m
CONFIG_NLS_CODEPAGE_949=m
CONFIG_NLS_CODEPAGE_874=m
CONFIG_NLS_ISO8859_8=m
CONFIG_NLS_CODEPAGE_1250=m
CONFIG_NLS_CODEPAGE_1251=m
CONFIG_NLS_ISO8859_1=y
CONFIG_NLS_ISO8859_2=m
CONFIG_NLS_ISO8859_3=m
CONFIG_NLS_ISO8859_4=m
CONFIG_NLS_ISO8859_5=m
CONFIG_NLS_ISO8859_6=m
CONFIG_NLS_ISO8859_7=m
CONFIG_NLS_ISO8859_9=m
CONFIG_NLS_ISO8859_13=m
CONFIG_NLS_ISO8859_14=m
CONFIG_NLS_ISO8859_15=m
CONFIG_NLS_KOI8_R=m
CONFIG_NLS_KOI8_U=m
CONFIG_NLS_UTF8=m
CONFIG_MAGIC_SYSRQ=y
CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_MUTEXES=y
CONFIG_IA64_GRANULE_16MB=y
CONFIG_CRYPTO_PCBC=m
CONFIG_CRYPTO_MD5=y


@@ -1,148 +0,0 @@
CONFIG_SYSVIPC=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_KPROBES=y
CONFIG_MODULES=y
CONFIG_PARTITION_ADVANCED=y
CONFIG_MCKINLEY=y
CONFIG_SMP=y
CONFIG_NR_CPUS=16
CONFIG_HOTPLUG_CPU=y
CONFIG_FLATMEM_MANUAL=y
CONFIG_IA64_MCA_RECOVERY=y
CONFIG_IA64_PALINFO=y
CONFIG_CRASH_DUMP=y
CONFIG_BINFMT_MISC=y
CONFIG_HOTPLUG_PCI=y
CONFIG_HOTPLUG_PCI_ACPI=y
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_INET=y
CONFIG_IP_MULTICAST=y
# CONFIG_IPV6 is not set
CONFIG_NETFILTER=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_RAM=y
CONFIG_ATA=y
CONFIG_ATA_GENERIC=y
CONFIG_PATA_CMD64X=y
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
CONFIG_CHR_DEV_ST=y
CONFIG_BLK_DEV_SR=y
CONFIG_CHR_DEV_SG=y
CONFIG_SCSI_CONSTANTS=y
CONFIG_SCSI_LOGGING=y
CONFIG_SCSI_FC_ATTRS=y
CONFIG_SCSI_SYM53C8XX_2=y
CONFIG_SCSI_QLOGIC_1280=y
CONFIG_FUSION=y
CONFIG_FUSION_SPI=y
CONFIG_FUSION_FC=y
CONFIG_FUSION_CTL=m
CONFIG_NETDEVICES=y
CONFIG_DUMMY=y
CONFIG_TIGON3=y
CONFIG_NET_TULIP=y
CONFIG_TULIP=y
CONFIG_TULIP_MWI=y
CONFIG_TULIP_MMIO=y
CONFIG_TULIP_NAPI=y
CONFIG_TULIP_NAPI_HW_MITIGATION=y
CONFIG_E100=y
CONFIG_E1000=y
CONFIG_INPUT_JOYDEV=y
CONFIG_INPUT_EVDEV=y
# CONFIG_INPUT_KEYBOARD is not set
# CONFIG_INPUT_MOUSE is not set
# CONFIG_SERIO_I8042 is not set
# CONFIG_SERIO_SERPORT is not set
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
CONFIG_SERIAL_8250_NR_UARTS=8
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_SHARE_IRQ=y
# CONFIG_HW_RANDOM is not set
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_EFI=y
CONFIG_I2C_CHARDEV=y
CONFIG_AGP=y
CONFIG_AGP_HP_ZX1=y
CONFIG_DRM=y
CONFIG_DRM_RADEON=y
CONFIG_FB_RADEON=y
CONFIG_FB_RADEON_DEBUG=y
CONFIG_LOGO=y
# CONFIG_LOGO_LINUX_MONO is not set
# CONFIG_LOGO_LINUX_VGA16 is not set
CONFIG_SOUND=y
CONFIG_SND=y
CONFIG_SND_SEQUENCER=y
CONFIG_SND_MIXER_OSS=y
CONFIG_SND_PCM_OSS=y
CONFIG_SND_SEQUENCER_OSS=y
CONFIG_SND_FM801=y
CONFIG_USB_HIDDEV=y
CONFIG_USB=y
CONFIG_USB_MON=y
CONFIG_USB_EHCI_HCD=y
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_UHCI_HCD=y
CONFIG_USB_STORAGE=y
CONFIG_EXT2_FS=y
CONFIG_EXT2_FS_XATTR=y
CONFIG_EXT3_FS=y
CONFIG_ISO9660_FS=y
CONFIG_JOLIET=y
CONFIG_UDF_FS=y
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
CONFIG_PROC_KCORE=y
CONFIG_TMPFS=y
CONFIG_HUGETLBFS=y
CONFIG_NFS_FS=y
CONFIG_NFS_V4=y
CONFIG_NFSD=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_CODEPAGE_737=y
CONFIG_NLS_CODEPAGE_775=y
CONFIG_NLS_CODEPAGE_850=y
CONFIG_NLS_CODEPAGE_852=y
CONFIG_NLS_CODEPAGE_855=y
CONFIG_NLS_CODEPAGE_857=y
CONFIG_NLS_CODEPAGE_860=y
CONFIG_NLS_CODEPAGE_861=y
CONFIG_NLS_CODEPAGE_862=y
CONFIG_NLS_CODEPAGE_863=y
CONFIG_NLS_CODEPAGE_864=y
CONFIG_NLS_CODEPAGE_865=y
CONFIG_NLS_CODEPAGE_866=y
CONFIG_NLS_CODEPAGE_869=y
CONFIG_NLS_CODEPAGE_936=y
CONFIG_NLS_CODEPAGE_950=y
CONFIG_NLS_CODEPAGE_932=y
CONFIG_NLS_CODEPAGE_949=y
CONFIG_NLS_CODEPAGE_874=y
CONFIG_NLS_ISO8859_8=y
CONFIG_NLS_CODEPAGE_1251=y
CONFIG_NLS_ISO8859_1=y
CONFIG_NLS_ISO8859_2=y
CONFIG_NLS_ISO8859_3=y
CONFIG_NLS_ISO8859_4=y
CONFIG_NLS_ISO8859_5=y
CONFIG_NLS_ISO8859_6=y
CONFIG_NLS_ISO8859_7=y
CONFIG_NLS_ISO8859_9=y
CONFIG_NLS_ISO8859_13=y
CONFIG_NLS_ISO8859_14=y
CONFIG_NLS_ISO8859_15=y
CONFIG_NLS_KOI8_R=y
CONFIG_NLS_KOI8_U=y
CONFIG_NLS_UTF8=y
CONFIG_MAGIC_SYSRQ=y
CONFIG_DEBUG_KERNEL=y
CONFIG_DEBUG_MUTEXES=y
CONFIG_IA64_PRINT_HAZARDS=y
CONFIG_CRYPTO_ECB=m
CONFIG_CRYPTO_PCBC=m


@@ -1,10 +0,0 @@
# SPDX-License-Identifier: GPL-2.0-only
#
# ia64/platform/hp/common/Makefile
#
# Copyright (C) 2002 Hewlett Packard
# Copyright (C) Alex Williamson (alex_williamson@hp.com)
#
obj-$(CONFIG_IA64_HP_SBA_IOMMU) += sba_iommu.o
obj-$(CONFIG_IA64_HP_AML_NFW) += aml_nfw.o


@@ -1,232 +0,0 @@
// SPDX-License-Identifier: GPL-2.0-only
/*
* OpRegion handler to allow AML to call native firmware
*
* (c) Copyright 2007 Hewlett-Packard Development Company, L.P.
* Bjorn Helgaas <bjorn.helgaas@hp.com>
*
* This driver implements HP Open Source Review Board proposal 1842,
* which was approved on 9/20/2006.
*
* For technical documentation, see the HP SPPA Firmware EAS, Appendix F.
*
* ACPI does not define a mechanism for AML methods to call native firmware
* interfaces such as PAL or SAL. This OpRegion handler adds such a mechanism.
* After the handler is installed, an AML method can call native firmware by
* storing the arguments and firmware entry point to specific offsets in the
* OpRegion. When AML reads the "return value" offset from the OpRegion, this
* handler loads up the arguments, makes the firmware call, and returns the
* result.
*/
#include <linux/module.h>
#include <linux/acpi.h>
#include <asm/sal.h>
MODULE_AUTHOR("Bjorn Helgaas <bjorn.helgaas@hp.com>");
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("ACPI opregion handler for native firmware calls");
static bool force_register;
module_param_named(force, force_register, bool, 0);
MODULE_PARM_DESC(force, "Install opregion handler even without HPQ5001 device");
#define AML_NFW_SPACE 0xA1
struct ia64_pdesc {
void *ip;
void *gp;
};
/*
* N.B. The layout of this structure is defined in the HP SPPA FW EAS, and
* the member offsets are embedded in AML methods.
*/
struct ia64_nfw_context {
u64 arg[8];
struct ia64_sal_retval ret;
u64 ip;
u64 gp;
u64 pad[2];
};
static void *virt_map(u64 address)
{
if (address & (1UL << 63))
return (void *) (__IA64_UNCACHED_OFFSET | address);
return __va(address);
}
static void aml_nfw_execute(struct ia64_nfw_context *c)
{
struct ia64_pdesc virt_entry;
ia64_sal_handler entry;
virt_entry.ip = virt_map(c->ip);
virt_entry.gp = virt_map(c->gp);
entry = (ia64_sal_handler) &virt_entry;
IA64_FW_CALL(entry, c->ret,
c->arg[0], c->arg[1], c->arg[2], c->arg[3],
c->arg[4], c->arg[5], c->arg[6], c->arg[7]);
}
static void aml_nfw_read_arg(u8 *offset, u32 bit_width, u64 *value)
{
switch (bit_width) {
case 8:
*value = *(u8 *)offset;
break;
case 16:
*value = *(u16 *)offset;
break;
case 32:
*value = *(u32 *)offset;
break;
case 64:
*value = *(u64 *)offset;
break;
}
}
static void aml_nfw_write_arg(u8 *offset, u32 bit_width, u64 *value)
{
switch (bit_width) {
case 8:
*(u8 *) offset = *value;
break;
case 16:
*(u16 *) offset = *value;
break;
case 32:
*(u32 *) offset = *value;
break;
case 64:
*(u64 *) offset = *value;
break;
}
}
static acpi_status aml_nfw_handler(u32 function, acpi_physical_address address,
u32 bit_width, u64 *value, void *handler_context,
void *region_context)
{
struct ia64_nfw_context *context = handler_context;
u8 *offset = (u8 *) context + address;
if (bit_width != 8 && bit_width != 16 &&
bit_width != 32 && bit_width != 64)
return AE_BAD_PARAMETER;
if (address + (bit_width >> 3) > sizeof(struct ia64_nfw_context))
return AE_BAD_PARAMETER;
switch (function) {
case ACPI_READ:
if (address == offsetof(struct ia64_nfw_context, ret))
aml_nfw_execute(context);
aml_nfw_read_arg(offset, bit_width, value);
break;
case ACPI_WRITE:
aml_nfw_write_arg(offset, bit_width, value);
break;
}
return AE_OK;
}
static struct ia64_nfw_context global_context;
static int global_handler_registered;
static int aml_nfw_add_global_handler(void)
{
acpi_status status;
if (global_handler_registered)
return 0;
status = acpi_install_address_space_handler(ACPI_ROOT_OBJECT,
AML_NFW_SPACE, aml_nfw_handler, NULL, &global_context);
if (ACPI_FAILURE(status))
return -ENODEV;
global_handler_registered = 1;
printk(KERN_INFO "Global 0x%02X opregion handler registered\n",
AML_NFW_SPACE);
return 0;
}
static int aml_nfw_remove_global_handler(void)
{
acpi_status status;
if (!global_handler_registered)
return 0;
status = acpi_remove_address_space_handler(ACPI_ROOT_OBJECT,
AML_NFW_SPACE, aml_nfw_handler);
if (ACPI_FAILURE(status))
return -ENODEV;
global_handler_registered = 0;
printk(KERN_INFO "Global 0x%02X opregion handler removed\n",
AML_NFW_SPACE);
return 0;
}
static int aml_nfw_add(struct acpi_device *device)
{
/*
* We would normally allocate a new context structure and install
* the address space handler for the specific device we found.
* But the HP-UX implementation shares a single global context
* and always puts the handler at the root, so we'll do the same.
*/
return aml_nfw_add_global_handler();
}
static void aml_nfw_remove(struct acpi_device *device)
{
aml_nfw_remove_global_handler();
}
static const struct acpi_device_id aml_nfw_ids[] = {
{"HPQ5001", 0},
{"", 0}
};
static struct acpi_driver acpi_aml_nfw_driver = {
.name = "native firmware",
.ids = aml_nfw_ids,
.ops = {
.add = aml_nfw_add,
.remove = aml_nfw_remove,
},
};
static int __init aml_nfw_init(void)
{
int result;
if (force_register)
aml_nfw_add_global_handler();
result = acpi_bus_register_driver(&acpi_aml_nfw_driver);
if (result < 0) {
aml_nfw_remove_global_handler();
return result;
}
return 0;
}
static void __exit aml_nfw_exit(void)
{
acpi_bus_unregister_driver(&acpi_aml_nfw_driver);
aml_nfw_remove_global_handler();
}
module_init(aml_nfw_init);
module_exit(aml_nfw_exit);
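For reference, the OpRegion protocol above only works because AML and the handler agree on the byte offsets of the struct ia64_nfw_context members. A minimal host-side C sketch that prints those offsets (an illustration only: the four signed 64-bit words of ia64_sal_retval are an assumption mirroring the kernel's asm/sal.h, and this program is not part of the driver):

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

/* Host-side mirror of the OpRegion layout defined above. */
struct ia64_sal_retval { int64_t status, v0, v1, v2; };	/* assumed layout */

struct ia64_nfw_context {
	uint64_t arg[8];
	struct ia64_sal_retval ret;
	uint64_t ip;
	uint64_t gp;
	uint64_t pad[2];
};

int main(void)
{
	/* AML stores arguments and the entry point at these offsets, then
	 * reads 'ret', which fires aml_nfw_execute() in the ACPI_READ case. */
	printf("arg[0] at offset %zu\n", offsetof(struct ia64_nfw_context, arg));
	printf("ret    at offset %zu\n", offsetof(struct ia64_nfw_context, ret));
	printf("ip     at offset %zu\n", offsetof(struct ia64_nfw_context, ip));
	printf("gp     at offset %zu\n", offsetof(struct ia64_nfw_context, gp));
	return 0;
}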

File diff suppressed because it is too large.


@@ -1,6 +0,0 @@
# SPDX-License-Identifier: GPL-2.0
generated-y += syscall_table.h
generic-y += agp.h
generic-y += kvm_para.h
generic-y += mcs_spinlock.h
generic-y += vtime.h


@@ -1,49 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* IA64 specific ACPICA environments and implementation
*
* Copyright (C) 2014, Intel Corporation
* Author: Lv Zheng <lv.zheng@intel.com>
*/
#ifndef _ASM_IA64_ACENV_H
#define _ASM_IA64_ACENV_H
#include <asm/intrinsics.h>
#define COMPILER_DEPENDENT_INT64 long
#define COMPILER_DEPENDENT_UINT64 unsigned long
/* Asm macros */
static inline int
ia64_acpi_acquire_global_lock(unsigned int *lock)
{
unsigned int old, new, val;
do {
old = *lock;
new = (((old & ~0x3) + 2) + ((old >> 1) & 0x1));
val = ia64_cmpxchg4_acq(lock, new, old);
} while (unlikely (val != old));
return (new < 3) ? -1 : 0;
}
static inline int
ia64_acpi_release_global_lock(unsigned int *lock)
{
unsigned int old, new, val;
do {
old = *lock;
new = old & ~0x3;
val = ia64_cmpxchg4_acq(lock, new, old);
} while (unlikely (val != old));
return old & 0x1;
}
#define ACPI_ACQUIRE_GLOBAL_LOCK(facs, Acq) \
((Acq) = ia64_acpi_acquire_global_lock(&facs->global_lock))
#define ACPI_RELEASE_GLOBAL_LOCK(facs, Acq) \
((Acq) = ia64_acpi_release_global_lock(&facs->global_lock))
#endif /* _ASM_IA64_ACENV_H */
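The lock word manipulated above follows the ACPI global-lock encoding: bit 0 is the "pending" flag and bit 1 the "owned" flag. A single-threaded C sketch of the same arithmetic (illustrative only; the cmpxchg retry loop that makes it safe against concurrent firmware access is elided):

#include <stdio.h>
#include <stdint.h>

/* Set 'owned' (bit 1); if it was already owned, set 'pending' (bit 0)
 * instead. Returns -1 when the lock was taken, 0 when the caller must
 * wait, mirroring ia64_acpi_acquire_global_lock(). */
static int acquire(uint32_t *lock)
{
	uint32_t old = *lock;
	uint32_t new = ((old & ~0x3u) + 2) + ((old >> 1) & 0x1);

	*lock = new;	/* the kernel does this with ia64_cmpxchg4_acq */
	return (new < 3) ? -1 : 0;
}

/* Clear both bits; return the old 'pending' flag so the caller knows
 * whether to signal waiters, as in ia64_acpi_release_global_lock(). */
static int release(uint32_t *lock)
{
	uint32_t old = *lock;

	*lock = old & ~0x3u;
	return old & 0x1;
}

int main(void)
{
	uint32_t lock = 0;
	int r;

	r = acquire(&lock);
	printf("first acquire:  %d, lock=%u\n", r, lock);	/* -1, lock=2 */
	r = acquire(&lock);
	printf("second acquire: %d, lock=%u\n", r, lock);	/*  0, lock=3 */
	r = release(&lock);
	printf("release:        %d, lock=%u\n", r, lock);	/*  1, lock=0 */
	return 0;
}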


@@ -1,17 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* (c) Copyright 2003, 2006 Hewlett-Packard Development Company, L.P.
* Alex Williamson <alex.williamson@hp.com>
* Bjorn Helgaas <bjorn.helgaas@hp.com>
*
* Vendor specific extensions to ACPI.
*/
#ifndef _ASM_IA64_ACPI_EXT_H
#define _ASM_IA64_ACPI_EXT_H
#include <linux/types.h>
extern acpi_status hp_acpi_csr_space (acpi_handle, u64 *base, u64 *length);
#endif /* _ASM_IA64_ACPI_EXT_H */


@@ -1,110 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Copyright (C) 1999 VA Linux Systems
* Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
* Copyright (C) 2000,2001 J.I. Lee <jung-ik.lee@intel.com>
* Copyright (C) 2001,2002 Paul Diefenbaugh <paul.s.diefenbaugh@intel.com>
*/
#ifndef _ASM_ACPI_H
#define _ASM_ACPI_H
#ifdef __KERNEL__
#include <acpi/proc_cap_intel.h>
#include <linux/init.h>
#include <linux/numa.h>
#include <asm/numa.h>
extern int acpi_lapic;
#define acpi_disabled 0 /* ACPI always enabled on IA64 */
#define acpi_noirq 0 /* ACPI always enabled on IA64 */
#define acpi_pci_disabled 0 /* ACPI PCI always enabled on IA64 */
#define acpi_strict 1 /* no ACPI spec workarounds on IA64 */
static inline bool acpi_has_cpu_in_madt(void)
{
return !!acpi_lapic;
}
#define acpi_processor_cstate_check(x) (x) /* no idle limits on IA64 :) */
static inline void disable_acpi(void) { }
int acpi_request_vector (u32 int_type);
int acpi_gsi_to_irq (u32 gsi, unsigned int *irq);
/* Low-level suspend routine. */
extern int acpi_suspend_lowlevel(void);
static inline unsigned long acpi_get_wakeup_address(void)
{
return 0;
}
/*
* Record the cpei override flag and current logical cpu. This is
* useful for CPU removal.
*/
extern unsigned int can_cpei_retarget(void);
extern unsigned int is_cpu_cpei_target(unsigned int cpu);
extern void set_cpei_target_cpu(unsigned int cpu);
extern unsigned int get_cpei_target_cpu(void);
extern void prefill_possible_map(void);
#ifdef CONFIG_ACPI_HOTPLUG_CPU
extern int additional_cpus;
#else
#define additional_cpus 0
#endif
#ifdef CONFIG_ACPI_NUMA
#if MAX_NUMNODES > 256
#define MAX_PXM_DOMAINS MAX_NUMNODES
#else
#define MAX_PXM_DOMAINS (256)
#endif
extern int pxm_to_nid_map[MAX_PXM_DOMAINS];
extern int __initdata nid_to_pxm_map[MAX_NUMNODES];
#endif
static inline bool arch_has_acpi_pdc(void) { return true; }
static inline void arch_acpi_set_proc_cap_bits(u32 *cap)
{
*cap |= ACPI_PROC_CAP_EST_CAPABILITY_SMP;
}
#ifdef CONFIG_ACPI_NUMA
extern cpumask_t early_cpu_possible_map;
#define for_each_possible_early_cpu(cpu) \
for_each_cpu((cpu), &early_cpu_possible_map)
static inline void per_cpu_scan_finalize(int min_cpus, int reserve_cpus)
{
int low_cpu, high_cpu;
int cpu;
int next_nid = 0;
low_cpu = cpumask_weight(&early_cpu_possible_map);
high_cpu = max(low_cpu, min_cpus);
high_cpu = min(high_cpu + reserve_cpus, NR_CPUS);
for (cpu = low_cpu; cpu < high_cpu; cpu++) {
cpumask_set_cpu(cpu, &early_cpu_possible_map);
if (node_cpuid[cpu].nid == NUMA_NO_NODE) {
node_cpuid[cpu].nid = next_nid;
next_nid++;
if (next_nid >= num_online_nodes())
next_nid = 0;
}
}
}
extern void acpi_numa_fixup(void);
#endif /* CONFIG_ACPI_NUMA */
#endif /*__KERNEL__*/
#endif /*_ASM_ACPI_H*/
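To make the round-robin in per_cpu_scan_finalize() above concrete: the possible-CPU map is padded out to at least min_cpus plus reserve_cpus (capped at NR_CPUS), and every CPU still lacking a node is then dealt out across the online nodes. A simplified C sketch of that assignment, with plain arrays standing in for cpumasks (the node layout is a made-up example):

#include <stdio.h>

#define NR_CPUS		8
#define NUMA_NO_NODE	(-1)

int main(void)
{
	/* CPUs 0-3 were described by firmware; 4-7 are hot-add reserve. */
	int nid[NR_CPUS] = { 0, 0, 1, 1, NUMA_NO_NODE, NUMA_NO_NODE,
			     NUMA_NO_NODE, NUMA_NO_NODE };
	int online_nodes = 2, next_nid = 0;

	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		if (nid[cpu] != NUMA_NO_NODE)
			continue;
		nid[cpu] = next_nid++;		/* deal nodes out round-robin */
		if (next_nid >= online_nodes)
			next_nid = 0;
	}
	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		printf("cpu%d -> node %d\n", cpu, nid[cpu]);
	return 0;
}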


@@ -1 +0,0 @@
#include <generated/asm-offsets.h>


@@ -1,30 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_ASM_PROTOTYPES_H
#define _ASM_IA64_ASM_PROTOTYPES_H
#include <asm/cacheflush.h>
#include <asm/checksum.h>
#include <asm/esi.h>
#include <asm/ftrace.h>
#include <asm/page.h>
#include <asm/pal.h>
#include <asm/string.h>
#include <linux/uaccess.h>
#include <asm/unwind.h>
#include <asm/xor.h>
extern const char ia64_ivt[];
signed int __divsi3(signed int, unsigned int);
signed int __modsi3(signed int, unsigned int);
signed long long __divdi3(signed long long, unsigned long long);
signed long long __moddi3(signed long long, unsigned long long);
unsigned int __udivsi3(unsigned int, unsigned int);
unsigned int __umodsi3(unsigned int, unsigned int);
unsigned long long __udivdi3(unsigned long long, unsigned long long);
unsigned long long __umoddi3(unsigned long long, unsigned long long);
#endif /* _ASM_IA64_ASM_PROTOTYPES_H */


@@ -1,136 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_ASMMACRO_H
#define _ASM_IA64_ASMMACRO_H
/*
* Copyright (C) 2000-2001, 2003-2004 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*/
#define ENTRY(name) \
.align 32; \
.proc name; \
name:
#define ENTRY_MIN_ALIGN(name) \
.align 16; \
.proc name; \
name:
#define GLOBAL_ENTRY(name) \
.global name; \
ENTRY(name)
#define END(name) \
.endp name
/*
* Helper macros to make unwind directives more readable:
*/
/* prologue_gr: */
#define ASM_UNW_PRLG_RP 0x8
#define ASM_UNW_PRLG_PFS 0x4
#define ASM_UNW_PRLG_PSP 0x2
#define ASM_UNW_PRLG_PR 0x1
#define ASM_UNW_PRLG_GRSAVE(ninputs) (32+(ninputs))
/*
* Helper macros for accessing user memory.
*
* When adding any new .section/.previous entries here, make sure to
* also add it to the DISCARD section in arch/ia64/kernel/gate.lds.S or
* unpleasant things will happen.
*/
.section "__ex_table", "a" // declare section & section attributes
.previous
# define EX(y,x...) \
.xdata4 "__ex_table", 99f-., y-.; \
[99:] x
# define EXCLR(y,x...) \
.xdata4 "__ex_table", 99f-., y-.+4; \
[99:] x
/*
* Tag MCA recoverable instruction ranges.
*/
.section "__mca_table", "a" // declare section & section attributes
.previous
# define MCA_RECOVER_RANGE(y) \
.xdata4 "__mca_table", y-., 99f-.; \
[99:]
/*
* Mark instructions that need a load of a virtual address patched to be
* a load of a physical address. We use this either in critical performance
* path (ivt.S - TLB miss processing) or in places where it might not be
* safe to use a "tpa" instruction (mca_asm.S - error recovery).
*/
.section ".data..patch.vtop", "a" // declare section & section attributes
.previous
#define LOAD_PHYSICAL(pr, reg, obj) \
[1:](pr)movl reg = obj; \
.xdata4 ".data..patch.vtop", 1b-.
/*
* For now, we always put in the McKinley E9 workaround. On CPUs that don't need it,
* we'll patch out the work-around bundles with NOPs, so their impact is minimal.
*/
#define DO_MCKINLEY_E9_WORKAROUND
#ifdef DO_MCKINLEY_E9_WORKAROUND
.section ".data..patch.mckinley_e9", "a"
.previous
/* workaround for Itanium 2 Errata 9: */
# define FSYS_RETURN \
.xdata4 ".data..patch.mckinley_e9", 1f-.; \
1:{ .mib; \
nop.m 0; \
mov r16=ar.pfs; \
br.call.sptk.many b7=2f;; \
}; \
2:{ .mib; \
nop.m 0; \
mov ar.pfs=r16; \
br.ret.sptk.many b6;; \
}
#else
# define FSYS_RETURN br.ret.sptk.many b6
#endif
/*
* If physical stack register size is different from DEF_NUM_STACK_REG,
* dynamically patch the kernel for correct size.
*/
.section ".data..patch.phys_stack_reg", "a"
.previous
#define LOAD_PHYS_STACK_REG_SIZE(reg) \
[1:] adds reg=IA64_NUM_PHYS_STACK_REG*8+8,r0; \
.xdata4 ".data..patch.phys_stack_reg", 1b-.
/*
* Up until early 2004, use of .align within a function caused bad unwind info.
* TEXT_ALIGN(n) expands into ".align n" if a fixed GAS is available or into nothing
* otherwise.
*/
#ifdef HAVE_WORKING_TEXT_ALIGN
# define TEXT_ALIGN(n) .align n
#else
# define TEXT_ALIGN(n)
#endif
#ifdef HAVE_SERIALIZE_DIRECTIVE
# define dv_serialize_data .serialize.data
# define dv_serialize_instruction .serialize.instruction
#else
# define dv_serialize_data
# define dv_serialize_instruction
#endif
#endif /* _ASM_IA64_ASMMACRO_H */


@@ -1,216 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_ATOMIC_H
#define _ASM_IA64_ATOMIC_H
/*
* Atomic operations that C can't guarantee us. Useful for
* resource counting etc..
*
* NOTE: don't mess with the types below! The "unsigned long" and
* "int" types were carefully placed so as to ensure proper operation
* of the macros.
*
* Copyright (C) 1998, 1999, 2002-2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <linux/types.h>
#include <asm/intrinsics.h>
#include <asm/barrier.h>
#define ATOMIC64_INIT(i) { (i) }
#define arch_atomic_read(v) READ_ONCE((v)->counter)
#define arch_atomic64_read(v) READ_ONCE((v)->counter)
#define arch_atomic_set(v,i) WRITE_ONCE(((v)->counter), (i))
#define arch_atomic64_set(v,i) WRITE_ONCE(((v)->counter), (i))
#define ATOMIC_OP(op, c_op) \
static __inline__ int \
ia64_atomic_##op (int i, atomic_t *v) \
{ \
__s32 old, new; \
CMPXCHG_BUGCHECK_DECL \
\
do { \
CMPXCHG_BUGCHECK(v); \
old = arch_atomic_read(v); \
new = old c_op i; \
} while (ia64_cmpxchg(acq, v, old, new, sizeof(atomic_t)) != old); \
return new; \
}
#define ATOMIC_FETCH_OP(op, c_op) \
static __inline__ int \
ia64_atomic_fetch_##op (int i, atomic_t *v) \
{ \
__s32 old, new; \
CMPXCHG_BUGCHECK_DECL \
\
do { \
CMPXCHG_BUGCHECK(v); \
old = arch_atomic_read(v); \
new = old c_op i; \
} while (ia64_cmpxchg(acq, v, old, new, sizeof(atomic_t)) != old); \
return old; \
}
#define ATOMIC_OPS(op, c_op) \
ATOMIC_OP(op, c_op) \
ATOMIC_FETCH_OP(op, c_op)
ATOMIC_OPS(add, +)
ATOMIC_OPS(sub, -)
#ifdef __OPTIMIZE__
#define __ia64_atomic_const(i) \
static const int __ia64_atomic_p = __builtin_constant_p(i) ? \
((i) == 1 || (i) == 4 || (i) == 8 || (i) == 16 || \
(i) == -1 || (i) == -4 || (i) == -8 || (i) == -16) : 0;\
__ia64_atomic_p
#else
#define __ia64_atomic_const(i) 0
#endif
#define arch_atomic_add_return(i,v) \
({ \
int __ia64_aar_i = (i); \
__ia64_atomic_const(i) \
? ia64_fetch_and_add(__ia64_aar_i, &(v)->counter) \
: ia64_atomic_add(__ia64_aar_i, v); \
})
#define arch_atomic_sub_return(i,v) \
({ \
int __ia64_asr_i = (i); \
__ia64_atomic_const(i) \
? ia64_fetch_and_add(-__ia64_asr_i, &(v)->counter) \
: ia64_atomic_sub(__ia64_asr_i, v); \
})
#define arch_atomic_fetch_add(i,v) \
({ \
int __ia64_aar_i = (i); \
__ia64_atomic_const(i) \
? ia64_fetchadd(__ia64_aar_i, &(v)->counter, acq) \
: ia64_atomic_fetch_add(__ia64_aar_i, v); \
})
#define arch_atomic_fetch_sub(i,v) \
({ \
int __ia64_asr_i = (i); \
__ia64_atomic_const(i) \
? ia64_fetchadd(-__ia64_asr_i, &(v)->counter, acq) \
: ia64_atomic_fetch_sub(__ia64_asr_i, v); \
})
ATOMIC_FETCH_OP(and, &)
ATOMIC_FETCH_OP(or, |)
ATOMIC_FETCH_OP(xor, ^)
#define arch_atomic_and(i,v) (void)ia64_atomic_fetch_and(i,v)
#define arch_atomic_or(i,v) (void)ia64_atomic_fetch_or(i,v)
#define arch_atomic_xor(i,v) (void)ia64_atomic_fetch_xor(i,v)
#define arch_atomic_fetch_and(i,v) ia64_atomic_fetch_and(i,v)
#define arch_atomic_fetch_or(i,v) ia64_atomic_fetch_or(i,v)
#define arch_atomic_fetch_xor(i,v) ia64_atomic_fetch_xor(i,v)
#undef ATOMIC_OPS
#undef ATOMIC_FETCH_OP
#undef ATOMIC_OP
#define ATOMIC64_OP(op, c_op) \
static __inline__ s64 \
ia64_atomic64_##op (s64 i, atomic64_t *v) \
{ \
s64 old, new; \
CMPXCHG_BUGCHECK_DECL \
\
do { \
CMPXCHG_BUGCHECK(v); \
old = arch_atomic64_read(v); \
new = old c_op i; \
} while (ia64_cmpxchg(acq, v, old, new, sizeof(atomic64_t)) != old); \
return new; \
}
#define ATOMIC64_FETCH_OP(op, c_op) \
static __inline__ s64 \
ia64_atomic64_fetch_##op (s64 i, atomic64_t *v) \
{ \
s64 old, new; \
CMPXCHG_BUGCHECK_DECL \
\
do { \
CMPXCHG_BUGCHECK(v); \
old = arch_atomic64_read(v); \
new = old c_op i; \
} while (ia64_cmpxchg(acq, v, old, new, sizeof(atomic64_t)) != old); \
return old; \
}
#define ATOMIC64_OPS(op, c_op) \
ATOMIC64_OP(op, c_op) \
ATOMIC64_FETCH_OP(op, c_op)
ATOMIC64_OPS(add, +)
ATOMIC64_OPS(sub, -)
#define arch_atomic64_add_return(i,v) \
({ \
s64 __ia64_aar_i = (i); \
__ia64_atomic_const(i) \
? ia64_fetch_and_add(__ia64_aar_i, &(v)->counter) \
: ia64_atomic64_add(__ia64_aar_i, v); \
})
#define arch_atomic64_sub_return(i,v) \
({ \
s64 __ia64_asr_i = (i); \
__ia64_atomic_const(i) \
? ia64_fetch_and_add(-__ia64_asr_i, &(v)->counter) \
: ia64_atomic64_sub(__ia64_asr_i, v); \
})
#define arch_atomic64_fetch_add(i,v) \
({ \
s64 __ia64_aar_i = (i); \
__ia64_atomic_const(i) \
? ia64_fetchadd(__ia64_aar_i, &(v)->counter, acq) \
: ia64_atomic64_fetch_add(__ia64_aar_i, v); \
})
#define arch_atomic64_fetch_sub(i,v) \
({ \
s64 __ia64_asr_i = (i); \
__ia64_atomic_const(i) \
? ia64_fetchadd(-__ia64_asr_i, &(v)->counter, acq) \
: ia64_atomic64_fetch_sub(__ia64_asr_i, v); \
})
ATOMIC64_FETCH_OP(and, &)
ATOMIC64_FETCH_OP(or, |)
ATOMIC64_FETCH_OP(xor, ^)
#define arch_atomic64_and(i,v) (void)ia64_atomic64_fetch_and(i,v)
#define arch_atomic64_or(i,v) (void)ia64_atomic64_fetch_or(i,v)
#define arch_atomic64_xor(i,v) (void)ia64_atomic64_fetch_xor(i,v)
#define arch_atomic64_fetch_and(i,v) ia64_atomic64_fetch_and(i,v)
#define arch_atomic64_fetch_or(i,v) ia64_atomic64_fetch_or(i,v)
#define arch_atomic64_fetch_xor(i,v) ia64_atomic64_fetch_xor(i,v)
#undef ATOMIC64_OPS
#undef ATOMIC64_FETCH_OP
#undef ATOMIC64_OP
#define arch_atomic_add(i,v) (void)arch_atomic_add_return((i), (v))
#define arch_atomic_sub(i,v) (void)arch_atomic_sub_return((i), (v))
#define arch_atomic64_add(i,v) (void)arch_atomic64_add_return((i), (v))
#define arch_atomic64_sub(i,v) (void)arch_atomic64_sub_return((i), (v))
#endif /* _ASM_IA64_ATOMIC_H */
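Each ATOMIC_OP/ATOMIC64_OP above expands to the same pattern: read the counter, compute the new value, and retry a compare-exchange until no other CPU has raced in between; the *_return/*_fetch wrappers only switch to the ia64_fetchadd fast path for the handful of constants it supports. A portable C sketch of the slow path, with GCC's __atomic builtins standing in for ia64_cmpxchg (an assumption for illustration, not the kernel's implementation):

#include <stdio.h>

static int atomic_add_return_sketch(int i, int *counter)
{
	int old, new;

	do {
		old = __atomic_load_n(counter, __ATOMIC_RELAXED);
		new = old + i;
		/* Publish only if *counter still equals 'old'; else retry. */
	} while (!__atomic_compare_exchange_n(counter, &old, new, 0,
					      __ATOMIC_ACQUIRE, __ATOMIC_RELAXED));
	return new;
}

int main(void)
{
	int v = 40;

	printf("%d\n", atomic_add_return_sketch(2, &v));	/* prints 42 */
	return 0;
}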


@@ -1,79 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Memory barrier definitions. This is based on information published
* in the Processor Abstraction Layer and the System Abstraction Layer
* manual.
*
* Copyright (C) 1998-2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
* Copyright (C) 1999 Asit Mallick <asit.k.mallick@intel.com>
* Copyright (C) 1999 Don Dugger <don.dugger@intel.com>
*/
#ifndef _ASM_IA64_BARRIER_H
#define _ASM_IA64_BARRIER_H
#include <linux/compiler.h>
/*
* Macros to force memory ordering. In these descriptions, "previous"
* and "subsequent" refer to program order; "visible" means that all
* architecturally visible effects of a memory access have occurred
* (at a minimum, this means the memory has been read or written).
*
* wmb(): Guarantees that all preceding stores to memory-
* like regions are visible before any subsequent
* stores and that all following stores will be
* visible only after all previous stores.
* rmb(): Like wmb(), but for reads.
* mb(): wmb()/rmb() combo, i.e., all previous memory
* accesses are visible before all subsequent
* accesses and vice versa. This is also known as
* a "fence."
*
* Note: "mb()" and its variants cannot be used as a fence to order
* accesses to memory mapped I/O registers. For that, mf.a needs to
* be used. However, we don't want to always use mf.a because (a)
* it's (presumably) much slower than mf and (b) mf.a is supported for
* sequential memory pages only.
*/
#define mb() ia64_mf()
#define rmb() mb()
#define wmb() mb()
#define dma_rmb() mb()
#define dma_wmb() mb()
# define __smp_mb() mb()
#define __smp_mb__before_atomic() barrier()
#define __smp_mb__after_atomic() barrier()
/*
* IA64 GCC turns volatile stores into st.rel and volatile loads into ld.acq;
* no need for asm trickery!
*/
#define __smp_store_release(p, v) \
do { \
compiletime_assert_atomic_type(*p); \
barrier(); \
WRITE_ONCE(*p, v); \
} while (0)
#define __smp_load_acquire(p) \
({ \
typeof(*p) ___p1 = READ_ONCE(*p); \
compiletime_assert_atomic_type(*p); \
barrier(); \
___p1; \
})
/*
* The group barrier in front of the rsm & ssm are necessary to ensure
* that none of the previous instructions in the same group are
* affected by the rsm/ssm.
*/
#include <asm-generic/barrier.h>
#endif /* _ASM_IA64_BARRIER_H */
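The __smp_store_release()/__smp_load_acquire() pair above depends on exactly the property the comment notes: release stores become st.rel and acquire loads become ld.acq. A minimal C11 message-passing sketch of the same idiom (illustrative; it uses <stdatomic.h> and pthreads, not the kernel macros):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static int payload;		/* plain data, published via 'ready' */
static atomic_int ready;

static void *producer(void *arg)
{
	(void)arg;
	payload = 42;					/* ordinary store */
	atomic_store_explicit(&ready, 1,
			      memory_order_release);	/* st.rel on ia64 */
	return NULL;
}

static void *consumer(void *arg)
{
	(void)arg;
	while (!atomic_load_explicit(&ready,
				     memory_order_acquire))	/* ld.acq */
		;
	printf("saw payload = %d\n", payload);	/* guaranteed to see 42 */
	return NULL;
}

int main(void)
{
	pthread_t p, c;

	pthread_create(&c, NULL, consumer, NULL);
	pthread_create(&p, NULL, producer, NULL);
	pthread_join(p, NULL);
	pthread_join(c, NULL);
	return 0;
}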


@@ -1,453 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_BITOPS_H
#define _ASM_IA64_BITOPS_H
/*
* Copyright (C) 1998-2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*
* 02/06/02 find_next_bit() and find_first_bit() added from Erich Focht's ia64
* O(1) scheduler patch
*/
#ifndef _LINUX_BITOPS_H
#error only <linux/bitops.h> can be included directly
#endif
#include <linux/compiler.h>
#include <linux/types.h>
#include <asm/intrinsics.h>
#include <asm/barrier.h>
/**
* set_bit - Atomically set a bit in memory
* @nr: the bit to set
* @addr: the address to start counting from
*
* This function is atomic and may not be reordered. See __set_bit()
* if you do not require the atomic guarantees.
* Note that @nr may be almost arbitrarily large; this function is not
* restricted to acting on a single-word quantity.
*
* The address must be (at least) "long" aligned.
* Note that there are drivers (e.g., eepro100) which use these operations to
* operate on hw-defined data-structures, so we can't easily change these
* operations to force a bigger alignment.
*
* bit 0 is the LSB of addr; bit 32 is the LSB of (addr+1).
*/
static __inline__ void
set_bit (int nr, volatile void *addr)
{
__u32 bit, old, new;
volatile __u32 *m;
CMPXCHG_BUGCHECK_DECL
m = (volatile __u32 *) addr + (nr >> 5);
bit = 1 << (nr & 31);
do {
CMPXCHG_BUGCHECK(m);
old = *m;
new = old | bit;
} while (cmpxchg_acq(m, old, new) != old);
}
/**
* arch___set_bit - Set a bit in memory
* @nr: the bit to set
* @addr: the address to start counting from
*
* Unlike set_bit(), this function is non-atomic and may be reordered.
* If it's called on the same region of memory simultaneously, the effect
* may be that only one operation succeeds.
*/
static __always_inline void
arch___set_bit(unsigned long nr, volatile unsigned long *addr)
{
*((__u32 *) addr + (nr >> 5)) |= (1 << (nr & 31));
}
/**
* clear_bit - Clears a bit in memory
* @nr: Bit to clear
* @addr: Address to start counting from
*
* clear_bit() is atomic and may not be reordered. However, it does
* not contain a memory barrier, so if it is used for locking purposes,
* you should call smp_mb__before_atomic() and/or smp_mb__after_atomic()
* in order to ensure changes are visible on other processors.
*/
static __inline__ void
clear_bit (int nr, volatile void *addr)
{
__u32 mask, old, new;
volatile __u32 *m;
CMPXCHG_BUGCHECK_DECL
m = (volatile __u32 *) addr + (nr >> 5);
mask = ~(1 << (nr & 31));
do {
CMPXCHG_BUGCHECK(m);
old = *m;
new = old & mask;
} while (cmpxchg_acq(m, old, new) != old);
}
/**
* clear_bit_unlock - Clears a bit in memory with release
* @nr: Bit to clear
* @addr: Address to start counting from
*
* clear_bit_unlock() is atomic and may not be reordered. It does
* contain a memory barrier suitable for unlock type operations.
*/
static __inline__ void
clear_bit_unlock (int nr, volatile void *addr)
{
__u32 mask, old, new;
volatile __u32 *m;
CMPXCHG_BUGCHECK_DECL
m = (volatile __u32 *) addr + (nr >> 5);
mask = ~(1 << (nr & 31));
do {
CMPXCHG_BUGCHECK(m);
old = *m;
new = old & mask;
} while (cmpxchg_rel(m, old, new) != old);
}
/**
* __clear_bit_unlock - Non-atomically clears a bit in memory with release
* @nr: Bit to clear
* @addr: Address to start counting from
*
* Similarly to clear_bit_unlock, the implementation uses a store
* with release semantics. See also arch_spin_unlock().
*/
static __inline__ void
__clear_bit_unlock(int nr, void *addr)
{
__u32 * const m = (__u32 *) addr + (nr >> 5);
__u32 const new = *m & ~(1 << (nr & 31));
ia64_st4_rel_nta(m, new);
}
/**
* arch___clear_bit - Clears a bit in memory (non-atomic version)
* @nr: the bit to clear
* @addr: the address to start counting from
*
* Unlike clear_bit(), this function is non-atomic and may be reordered.
* If it's called on the same region of memory simultaneously, the effect
* may be that only one operation succeeds.
*/
static __always_inline void
arch___clear_bit(unsigned long nr, volatile unsigned long *addr)
{
*((__u32 *) addr + (nr >> 5)) &= ~(1 << (nr & 31));
}
/**
* change_bit - Toggle a bit in memory
* @nr: Bit to toggle
* @addr: Address to start counting from
*
* change_bit() is atomic and may not be reordered.
* Note that @nr may be almost arbitrarily large; this function is not
* restricted to acting on a single-word quantity.
*/
static __inline__ void
change_bit (int nr, volatile void *addr)
{
__u32 bit, old, new;
volatile __u32 *m;
CMPXCHG_BUGCHECK_DECL
m = (volatile __u32 *) addr + (nr >> 5);
bit = (1 << (nr & 31));
do {
CMPXCHG_BUGCHECK(m);
old = *m;
new = old ^ bit;
} while (cmpxchg_acq(m, old, new) != old);
}
/**
* arch___change_bit - Toggle a bit in memory
* @nr: the bit to toggle
* @addr: the address to start counting from
*
* Unlike change_bit(), this function is non-atomic and may be reordered.
* If it's called on the same region of memory simultaneously, the effect
* may be that only one operation succeeds.
*/
static __always_inline void
arch___change_bit(unsigned long nr, volatile unsigned long *addr)
{
*((__u32 *) addr + (nr >> 5)) ^= (1 << (nr & 31));
}
/**
* test_and_set_bit - Set a bit and return its old value
* @nr: Bit to set
* @addr: Address to count from
*
* This operation is atomic and cannot be reordered.
* It also implies the acquisition side of the memory barrier.
*/
static __inline__ int
test_and_set_bit (int nr, volatile void *addr)
{
__u32 bit, old, new;
volatile __u32 *m;
CMPXCHG_BUGCHECK_DECL
m = (volatile __u32 *) addr + (nr >> 5);
bit = 1 << (nr & 31);
do {
CMPXCHG_BUGCHECK(m);
old = *m;
new = old | bit;
} while (cmpxchg_acq(m, old, new) != old);
return (old & bit) != 0;
}
/**
* test_and_set_bit_lock - Set a bit and return its old value for lock
* @nr: Bit to set
* @addr: Address to count from
*
* This is the same as test_and_set_bit on ia64
*/
#define test_and_set_bit_lock test_and_set_bit
/**
* arch___test_and_set_bit - Set a bit and return its old value
* @nr: Bit to set
* @addr: Address to count from
*
* This operation is non-atomic and can be reordered.
* If two instances of this operation race, one can appear to succeed
* but actually fail. You must protect multiple accesses with a lock.
*/
static __always_inline bool
arch___test_and_set_bit(unsigned long nr, volatile unsigned long *addr)
{
__u32 *p = (__u32 *) addr + (nr >> 5);
__u32 m = 1 << (nr & 31);
int oldbitset = (*p & m) != 0;
*p |= m;
return oldbitset;
}
/**
* test_and_clear_bit - Clear a bit and return its old value
* @nr: Bit to clear
* @addr: Address to count from
*
* This operation is atomic and cannot be reordered.
* It also implies the acquisition side of the memory barrier.
*/
static __inline__ int
test_and_clear_bit (int nr, volatile void *addr)
{
__u32 mask, old, new;
volatile __u32 *m;
CMPXCHG_BUGCHECK_DECL
m = (volatile __u32 *) addr + (nr >> 5);
mask = ~(1 << (nr & 31));
do {
CMPXCHG_BUGCHECK(m);
old = *m;
new = old & mask;
} while (cmpxchg_acq(m, old, new) != old);
return (old & ~mask) != 0;
}
/**
* arch___test_and_clear_bit - Clear a bit and return its old value
* @nr: Bit to clear
* @addr: Address to count from
*
* This operation is non-atomic and can be reordered.
* If two instances of this operation race, one can appear to succeed
* but actually fail. You must protect multiple accesses with a lock.
*/
static __always_inline bool
arch___test_and_clear_bit(unsigned long nr, volatile unsigned long *addr)
{
__u32 *p = (__u32 *) addr + (nr >> 5);
__u32 m = 1 << (nr & 31);
int oldbitset = (*p & m) != 0;
*p &= ~m;
return oldbitset;
}
/**
* test_and_change_bit - Change a bit and return its old value
* @nr: Bit to change
* @addr: Address to count from
*
* This operation is atomic and cannot be reordered.
* It also implies the acquisition side of the memory barrier.
*/
static __inline__ int
test_and_change_bit (int nr, volatile void *addr)
{
__u32 bit, old, new;
volatile __u32 *m;
CMPXCHG_BUGCHECK_DECL
m = (volatile __u32 *) addr + (nr >> 5);
bit = (1 << (nr & 31));
do {
CMPXCHG_BUGCHECK(m);
old = *m;
new = old ^ bit;
} while (cmpxchg_acq(m, old, new) != old);
return (old & bit) != 0;
}
/**
* arch___test_and_change_bit - Change a bit and return its old value
* @nr: Bit to change
* @addr: Address to count from
*
* This operation is non-atomic and can be reordered.
*/
static __always_inline bool
arch___test_and_change_bit(unsigned long nr, volatile unsigned long *addr)
{
__u32 old, bit = (1 << (nr & 31));
__u32 *m = (__u32 *) addr + (nr >> 5);
old = *m;
*m = old ^ bit;
return (old & bit) != 0;
}
#define arch_test_bit generic_test_bit
#define arch_test_bit_acquire generic_test_bit_acquire
/**
* ffz - find the first zero bit in a long word
* @x: The long word to find the bit in
*
* Returns the bit-number (0..63) of the first (least significant) zero bit.
* Undefined if no zero exists, so code should check against ~0UL first...
*/
static inline unsigned long
ffz (unsigned long x)
{
unsigned long result;
result = ia64_popcnt(x & (~x - 1));
return result;
}
/**
* __ffs - find first bit in word.
* @x: The word to search
*
* Undefined if no bit exists, so code should check against 0 first.
*/
static __inline__ unsigned long
__ffs (unsigned long x)
{
unsigned long result;
result = ia64_popcnt((x-1) & ~x);
return result;
}
#ifdef __KERNEL__
/*
* Return bit number of last (most-significant) bit set. Undefined
* for x==0. Bits are numbered from 0..63 (e.g., ia64_fls(9) == 3).
*/
static inline unsigned long
ia64_fls (unsigned long x)
{
long double d = x;
long exp;
exp = ia64_getf_exp(d);
return exp - 0xffff;
}
/*
* Find the last (most significant) bit set. Returns 0 for x==0 and
* bits are numbered from 1..32 (e.g., fls(9) == 4).
*/
static inline int fls(unsigned int t)
{
unsigned long x = t & 0xffffffffu;
if (!x)
return 0;
x |= x >> 1;
x |= x >> 2;
x |= x >> 4;
x |= x >> 8;
x |= x >> 16;
return ia64_popcnt(x);
}
/*
* Find the last (most significant) bit set. Undefined for x==0.
* Bits are numbered from 0..63 (e.g., __fls(9) == 3).
*/
static inline unsigned long
__fls (unsigned long x)
{
x |= x >> 1;
x |= x >> 2;
x |= x >> 4;
x |= x >> 8;
x |= x >> 16;
x |= x >> 32;
return ia64_popcnt(x) - 1;
}
#include <asm-generic/bitops/fls64.h>
#include <asm-generic/bitops/builtin-ffs.h>
/*
* hweightN: returns the hamming weight (i.e. the number
* of bits set) of a N-bit word
*/
static __inline__ unsigned long __arch_hweight64(unsigned long x)
{
unsigned long result;
result = ia64_popcnt(x);
return result;
}
#define __arch_hweight32(x) ((unsigned int) __arch_hweight64((x) & 0xfffffffful))
#define __arch_hweight16(x) ((unsigned int) __arch_hweight64((x) & 0xfffful))
#define __arch_hweight8(x) ((unsigned int) __arch_hweight64((x) & 0xfful))
#include <asm-generic/bitops/const_hweight.h>
#endif /* __KERNEL__ */
#ifdef __KERNEL__
#include <asm-generic/bitops/non-instrumented-non-atomic.h>
#include <asm-generic/bitops/le.h>
#include <asm-generic/bitops/ext2-atomic-setbit.h>
#include <asm-generic/bitops/sched.h>
#endif /* __KERNEL__ */
#endif /* _ASM_IA64_BITOPS_H */
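The ffz(), __ffs(), and fls() implementations above all reduce to a population count over a carefully built mask. A portable C sketch with __builtin_popcount standing in for ia64_popcnt (an illustrative stand-in), reproducing the examples from the comments:

#include <stdio.h>

static unsigned long ffz_sketch(unsigned long x)	/* first zero bit */
{
	return __builtin_popcountll(x & (~x - 1));
}

static unsigned long ffs0_sketch(unsigned long x)	/* first set bit, like __ffs() */
{
	return __builtin_popcountll((x - 1) & ~x);
}

static int fls_sketch(unsigned int x)	/* last set bit, 1-based; 0 for x == 0 */
{
	if (!x)
		return 0;
	x |= x >> 1;  x |= x >> 2;  x |= x >> 4;	/* smear the MSB down... */
	x |= x >> 8;  x |= x >> 16;
	return __builtin_popcount(x);			/* ...then count the ones */
}

int main(void)
{
	printf("ffz(0x7)   = %lu\n", ffz_sketch(0x7));	/* 3: first zero of 0b0111 */
	printf("__ffs(0x8) = %lu\n", ffs0_sketch(0x8));	/* 3: first one of 0b1000 */
	printf("fls(9)     = %d\n", fls_sketch(9));	/* 4, as documented above */
	return 0;
}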


@@ -1,19 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_BUG_H
#define _ASM_IA64_BUG_H
#ifdef CONFIG_BUG
#define ia64_abort() __builtin_trap()
#define BUG() do { \
printk("kernel BUG at %s:%d!\n", __FILE__, __LINE__); \
barrier_before_unreachable(); \
ia64_abort(); \
} while (0)
/* should this BUG be made generic? */
#define HAVE_ARCH_BUG
#endif
#include <asm-generic/bug.h>
#endif


@ -1,30 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_CACHE_H
#define _ASM_IA64_CACHE_H
/*
* Copyright (C) 1998-2000 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*/
/* Bytes per L1 (data) cache line. */
#define L1_CACHE_SHIFT CONFIG_IA64_L1_CACHE_SHIFT
#define L1_CACHE_BYTES (1 << L1_CACHE_SHIFT)
#ifdef CONFIG_SMP
# define SMP_CACHE_SHIFT L1_CACHE_SHIFT
# define SMP_CACHE_BYTES L1_CACHE_BYTES
#else
/*
* The "aligned" directive can only _increase_ alignment, so this is
* safe and provides an easy way to avoid wasting space on a
* uni-processor:
*/
# define SMP_CACHE_SHIFT 3
# define SMP_CACHE_BYTES (1 << 3)
#endif
#define __read_mostly __section(".data..read_mostly")
#endif /* _ASM_IA64_CACHE_H */


@ -1,39 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_CACHEFLUSH_H
#define _ASM_IA64_CACHEFLUSH_H
/*
* Copyright (C) 2002 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <linux/page-flags.h>
#include <linux/bitops.h>
#include <asm/page.h>
#define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
static inline void flush_dcache_folio(struct folio *folio)
{
clear_bit(PG_arch_1, &folio->flags);
}
#define flush_dcache_folio flush_dcache_folio
static inline void flush_dcache_page(struct page *page)
{
flush_dcache_folio(page_folio(page));
}
extern void flush_icache_range(unsigned long start, unsigned long end);
#define flush_icache_range flush_icache_range
extern void clflush_cache_range(void *addr, int size);
#define flush_icache_user_page(vma, page, user_addr, len) \
do { \
unsigned long _addr = (unsigned long) page_address(page) + ((user_addr) & ~PAGE_MASK); \
flush_icache_range(_addr, _addr + (len)); \
} while (0)
#include <asm-generic/cacheflush.h>
#endif /* _ASM_IA64_CACHEFLUSH_H */


@ -1,63 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_CHECKSUM_H
#define _ASM_IA64_CHECKSUM_H
/*
* Modified 1998, 1999
* David Mosberger-Tang <davidm@hpl.hp.com>, Hewlett-Packard Co
*/
/*
* This is a version of ip_compute_csum() optimized for IP headers,
* which are always checksummed on 4-octet boundaries.
*/
extern __sum16 ip_fast_csum(const void *iph, unsigned int ihl);
/*
* Computes the checksum of the TCP/UDP pseudo-header and returns a
* 16-bit checksum, already complemented.
*/
extern __sum16 csum_tcpudp_magic(__be32 saddr, __be32 daddr,
__u32 len, __u8 proto, __wsum sum);
extern __wsum csum_tcpudp_nofold(__be32 saddr, __be32 daddr,
__u32 len, __u8 proto, __wsum sum);
/*
* Computes the checksum of a memory block at buff, length len,
* and adds in "sum" (32-bit)
*
* returns a 32-bit number suitable for feeding into itself
* or csum_tcpudp_magic
*
* this function must be called with even lengths, except
* for the last fragment, which may be odd
*
* it's best to have buff aligned on a 32-bit boundary
*/
extern __wsum csum_partial(const void *buff, int len, __wsum sum);
/*
* This routine is used for miscellaneous IP-like checksums, mainly in
* icmp.c
*/
extern __sum16 ip_compute_csum(const void *buff, int len);
/*
* Fold a partial checksum without adding pseudo headers.
*/
static inline __sum16 csum_fold(__wsum csum)
{
u32 sum = (__force u32)csum;
sum = (sum & 0xffff) + (sum >> 16);
sum = (sum & 0xffff) + (sum >> 16);
return (__force __sum16)~sum;
}
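Worked example: for csum = 0x00033421, the first fold gives 0x3421 + 0x0003 = 0x3424, the second fold is a no-op, and the complement 0xcbdb is the final 16-bit checksum. The fold runs twice because the first addition can itself carry past bit 15 (e.g. 0xffff + 0xffff = 0x1fffe), and one's-complement arithmetic must wrap that carry back in.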
#define _HAVE_ARCH_IPV6_CSUM 1
struct in6_addr;
extern __sum16 csum_ipv6_magic(const struct in6_addr *saddr,
const struct in6_addr *daddr,
__u32 len, __u8 proto, __wsum csum);
#endif /* _ASM_IA64_CHECKSUM_H */


@ -1,11 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/* IA64-specific clocksource additions */
#ifndef _ASM_IA64_CLOCKSOURCE_H
#define _ASM_IA64_CLOCKSOURCE_H
struct arch_clocksource_data {
void *fsys_mmio; /* used by fsyscall asm code */
};
#endif /* _ASM_IA64_CLOCKSOURCE_H */


@ -1,33 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_CMPXCHG_H
#define _ASM_IA64_CMPXCHG_H
#include <uapi/asm/cmpxchg.h>
#define arch_xchg(ptr, x) \
({(__typeof__(*(ptr))) __arch_xchg((unsigned long) (x), (ptr), sizeof(*(ptr)));})
#define arch_cmpxchg(ptr, o, n) cmpxchg_acq((ptr), (o), (n))
#define arch_cmpxchg64(ptr, o, n) cmpxchg_acq((ptr), (o), (n))
#define arch_cmpxchg_local arch_cmpxchg
#define arch_cmpxchg64_local arch_cmpxchg64
#ifdef CONFIG_IA64_DEBUG_CMPXCHG
# define CMPXCHG_BUGCHECK_DECL int _cmpxchg_bugcheck_count = 128;
# define CMPXCHG_BUGCHECK(v) \
do { \
if (_cmpxchg_bugcheck_count-- <= 0) { \
void *ip; \
extern int _printk(const char *fmt, ...); \
ip = (void *) ia64_getreg(_IA64_REG_IP); \
_printk("CMPXCHG_BUGCHECK: stuck at %p on word %p\n", ip, (v));\
break; \
} \
} while (0)
#else /* !CONFIG_IA64_DEBUG_CMPXCHG */
# define CMPXCHG_BUGCHECK_DECL
# define CMPXCHG_BUGCHECK(v)
#endif /* !CONFIG_IA64_DEBUG_CMPXCHG */
#endif /* _ASM_IA64_CMPXCHG_H */


@ -1,23 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_CPU_H_
#define _ASM_IA64_CPU_H_
#include <linux/device.h>
#include <linux/cpu.h>
#include <linux/topology.h>
#include <linux/percpu.h>
struct ia64_cpu {
struct cpu cpu;
};
DECLARE_PER_CPU(struct ia64_cpu, cpu_devices);
DECLARE_PER_CPU(int, cpu_state);
#ifdef CONFIG_HOTPLUG_CPU
extern int arch_register_cpu(int num);
extern void arch_unregister_cpu(int);
#endif
#endif /* _ASM_IA64_CPU_H_ */


@ -1,21 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/*
* Definitions for measuring cputime on ia64 machines.
*
* Based on <asm-powerpc/cputime.h>.
*
* Copyright (C) 2007 FUJITSU LIMITED
* Copyright (C) 2007 Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
*
* If we have CONFIG_VIRT_CPU_ACCOUNTING_NATIVE, we measure cpu time in nsec.
* Otherwise we measure cpu time in jiffies using the generic definitions.
*/
#ifndef __IA64_CPUTIME_H
#define __IA64_CPUTIME_H
#ifdef CONFIG_VIRT_CPU_ACCOUNTING_NATIVE
extern void arch_vtime_task_switch(struct task_struct *tsk);
#endif /* CONFIG_VIRT_CPU_ACCOUNTING_NATIVE */
#endif /* __IA64_CPUTIME_H */


@ -1,18 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_CURRENT_H
#define _ASM_IA64_CURRENT_H
/*
* Modified 1998-2000
* David Mosberger-Tang <davidm@hpl.hp.com>, Hewlett-Packard Co
*/
#include <asm/intrinsics.h>
/*
* In kernel mode, the thread pointer (r13) is used to point to the current task
* structure.
*/
#define current ((struct task_struct *) ia64_getreg(_IA64_REG_TP))
#endif /* _ASM_IA64_CURRENT_H */


@ -1,16 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef ASM_IA64_CYCLONE_H
#define ASM_IA64_CYCLONE_H
#ifdef CONFIG_IA64_CYCLONE
extern int use_cyclone;
extern void __init cyclone_setup(void);
#else /* CONFIG_IA64_CYCLONE */
#define use_cyclone 0
static inline void cyclone_setup(void)
{
printk(KERN_ERR "Cyclone Counter: System not configured"
" w/ CONFIG_IA64_CYCLONE.\n");
}
#endif /* CONFIG_IA64_CYCLONE */
#endif /* !ASM_IA64_CYCLONE_H */


@ -1,89 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_DELAY_H
#define _ASM_IA64_DELAY_H
/*
* Delay routines using a pre-computed "cycles/usec" value.
*
* Copyright (C) 1998, 1999 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
* Copyright (C) 1999 VA Linux Systems
* Copyright (C) 1999 Walt Drummond <drummond@valinux.com>
* Copyright (C) 1999 Asit Mallick <asit.k.mallick@intel.com>
* Copyright (C) 1999 Don Dugger <don.dugger@intel.com>
*/
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/compiler.h>
#include <asm/intrinsics.h>
#include <asm/processor.h>
static __inline__ void
ia64_set_itm (unsigned long val)
{
ia64_setreg(_IA64_REG_CR_ITM, val);
ia64_srlz_d();
}
static __inline__ unsigned long
ia64_get_itm (void)
{
unsigned long result;
result = ia64_getreg(_IA64_REG_CR_ITM);
ia64_srlz_d();
return result;
}
static __inline__ void
ia64_set_itv (unsigned long val)
{
ia64_setreg(_IA64_REG_CR_ITV, val);
ia64_srlz_d();
}
static __inline__ unsigned long
ia64_get_itv (void)
{
return ia64_getreg(_IA64_REG_CR_ITV);
}
static __inline__ void
ia64_set_itc (unsigned long val)
{
ia64_setreg(_IA64_REG_AR_ITC, val);
ia64_srlz_d();
}
static __inline__ unsigned long
ia64_get_itc (void)
{
unsigned long result;
result = ia64_getreg(_IA64_REG_AR_ITC);
ia64_barrier();
#ifdef CONFIG_ITANIUM
while (unlikely((__s32) result == -1)) {
result = ia64_getreg(_IA64_REG_AR_ITC);
ia64_barrier();
}
#endif
return result;
}
extern void ia64_delay_loop (unsigned long loops);
static __inline__ void
__delay (unsigned long loops)
{
if (unlikely(loops < 1))
return;
ia64_delay_loop (loops - 1);
}
extern void udelay (unsigned long usecs);
#endif /* _ASM_IA64_DELAY_H */


@ -1,14 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
/*
* Arch specific extensions to struct device
*/
#ifndef _ASM_IA64_DEVICE_H
#define _ASM_IA64_DEVICE_H
struct dev_archdata {
};
struct pdev_archdata {
};
#endif /* _ASM_IA64_DEVICE_H */


@ -1 +0,0 @@
#include <asm-generic/div64.h>


@ -1,16 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_DMA_MAPPING_H
#define _ASM_IA64_DMA_MAPPING_H
/*
* Copyright (C) 2003-2004 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*/
extern const struct dma_map_ops *dma_ops;
static inline const struct dma_map_ops *get_arch_dma_ops(void)
{
return dma_ops;
}
#endif /* _ASM_IA64_DMA_MAPPING_H */


@ -1,17 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_DMA_H
#define _ASM_IA64_DMA_H
/*
* Copyright (C) 1998-2002 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <asm/io.h> /* need byte IO */
extern unsigned long MAX_DMA_ADDRESS;
#define free_dma(x)
#endif /* _ASM_IA64_DMA_H */


@ -1,15 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_DMI_H
#define _ASM_DMI_H 1
#include <linux/slab.h>
#include <asm/io.h>
/* Use normal IO mappings for DMI */
#define dmi_early_remap ioremap
#define dmi_early_unmap(x, l) iounmap(x)
#define dmi_remap ioremap
#define dmi_unmap iounmap
#define dmi_alloc(l) kzalloc(l, GFP_ATOMIC)
#endif


@ -1,11 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_EARLY_IOREMAP_H
#define _ASM_IA64_EARLY_IOREMAP_H
extern void __iomem * early_ioremap (unsigned long phys_addr, unsigned long size);
#define early_memremap(phys_addr, size) early_ioremap(phys_addr, size)
extern void early_iounmap (volatile void __iomem *addr, unsigned long size);
#define early_memunmap(addr, size) early_iounmap(addr, size)
#endif


@ -1,13 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_EFI_H
#define _ASM_EFI_H
typedef int (*efi_freemem_callback_t) (u64 start, u64 end, void *arg);
void *efi_get_pal_addr(void);
void efi_map_pal_code(void);
void efi_memmap_walk(efi_freemem_callback_t, void *);
void efi_memmap_walk_uc(efi_freemem_callback_t, void *);
void efi_gettimeofday(struct timespec64 *ts);
#endif


@ -1,233 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_ELF_H
#define _ASM_IA64_ELF_H
/*
* ELF-specific definitions.
*
* Copyright (C) 1998-1999, 2002-2004 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <asm/fpu.h>
#include <asm/page.h>
#include <asm/auxvec.h>
/*
* This is used to ensure we don't load something for the wrong architecture.
*/
#define elf_check_arch(x) ((x)->e_machine == EM_IA_64)
/*
* These are used to set parameters in the core dumps.
*/
#define ELF_CLASS ELFCLASS64
#define ELF_DATA ELFDATA2LSB
#define ELF_ARCH EM_IA_64
#define CORE_DUMP_USE_REGSET
/* Least-significant four bits of ELF header's e_flags are OS-specific. The bits are
interpreted as follows by Linux: */
#define EF_IA_64_LINUX_EXECUTABLE_STACK 0x1 /* is stack (& heap) executable by default? */
#define ELF_EXEC_PAGESIZE PAGE_SIZE
/*
* This is the location that an ET_DYN program is loaded if exec'ed.
* Typical use of this is to invoke "./ld.so someprog" to test out a
* new version of the loader. We need to make sure that it is out of
* the way of the program that it will "exec", and that there is
* sufficient room for the brk.
*/
#define ELF_ET_DYN_BASE (TASK_UNMAPPED_BASE + 0x800000000UL)
#define PT_IA_64_UNWIND 0x70000001
/* IA-64 relocations: */
#define R_IA64_NONE 0x00 /* none */
#define R_IA64_IMM14 0x21 /* symbol + addend, add imm14 */
#define R_IA64_IMM22 0x22 /* symbol + addend, add imm22 */
#define R_IA64_IMM64 0x23 /* symbol + addend, mov imm64 */
#define R_IA64_DIR32MSB 0x24 /* symbol + addend, data4 MSB */
#define R_IA64_DIR32LSB 0x25 /* symbol + addend, data4 LSB */
#define R_IA64_DIR64MSB 0x26 /* symbol + addend, data8 MSB */
#define R_IA64_DIR64LSB 0x27 /* symbol + addend, data8 LSB */
#define R_IA64_GPREL22 0x2a /* @gprel(sym+add), add imm22 */
#define R_IA64_GPREL64I 0x2b /* @gprel(sym+add), mov imm64 */
#define R_IA64_GPREL32MSB 0x2c /* @gprel(sym+add), data4 MSB */
#define R_IA64_GPREL32LSB 0x2d /* @gprel(sym+add), data4 LSB */
#define R_IA64_GPREL64MSB 0x2e /* @gprel(sym+add), data8 MSB */
#define R_IA64_GPREL64LSB 0x2f /* @gprel(sym+add), data8 LSB */
#define R_IA64_LTOFF22 0x32 /* @ltoff(sym+add), add imm22 */
#define R_IA64_LTOFF64I 0x33 /* @ltoff(sym+add), mov imm64 */
#define R_IA64_PLTOFF22 0x3a /* @pltoff(sym+add), add imm22 */
#define R_IA64_PLTOFF64I 0x3b /* @pltoff(sym+add), mov imm64 */
#define R_IA64_PLTOFF64MSB 0x3e /* @pltoff(sym+add), data8 MSB */
#define R_IA64_PLTOFF64LSB 0x3f /* @pltoff(sym+add), data8 LSB */
#define R_IA64_FPTR64I 0x43 /* @fptr(sym+add), mov imm64 */
#define R_IA64_FPTR32MSB 0x44 /* @fptr(sym+add), data4 MSB */
#define R_IA64_FPTR32LSB 0x45 /* @fptr(sym+add), data4 LSB */
#define R_IA64_FPTR64MSB 0x46 /* @fptr(sym+add), data8 MSB */
#define R_IA64_FPTR64LSB 0x47 /* @fptr(sym+add), data8 LSB */
#define R_IA64_PCREL60B 0x48 /* @pcrel(sym+add), brl */
#define R_IA64_PCREL21B 0x49 /* @pcrel(sym+add), ptb, call */
#define R_IA64_PCREL21M 0x4a /* @pcrel(sym+add), chk.s */
#define R_IA64_PCREL21F 0x4b /* @pcrel(sym+add), fchkf */
#define R_IA64_PCREL32MSB 0x4c /* @pcrel(sym+add), data4 MSB */
#define R_IA64_PCREL32LSB 0x4d /* @pcrel(sym+add), data4 LSB */
#define R_IA64_PCREL64MSB 0x4e /* @pcrel(sym+add), data8 MSB */
#define R_IA64_PCREL64LSB 0x4f /* @pcrel(sym+add), data8 LSB */
#define R_IA64_LTOFF_FPTR22 0x52 /* @ltoff(@fptr(s+a)), imm22 */
#define R_IA64_LTOFF_FPTR64I 0x53 /* @ltoff(@fptr(s+a)), imm64 */
#define R_IA64_LTOFF_FPTR32MSB 0x54 /* @ltoff(@fptr(s+a)), 4 MSB */
#define R_IA64_LTOFF_FPTR32LSB 0x55 /* @ltoff(@fptr(s+a)), 4 LSB */
#define R_IA64_LTOFF_FPTR64MSB 0x56 /* @ltoff(@fptr(s+a)), 8 MSB */
#define R_IA64_LTOFF_FPTR64LSB 0x57 /* @ltoff(@fptr(s+a)), 8 LSB */
#define R_IA64_SEGREL32MSB 0x5c /* @segrel(sym+add), data4 MSB */
#define R_IA64_SEGREL32LSB 0x5d /* @segrel(sym+add), data4 LSB */
#define R_IA64_SEGREL64MSB 0x5e /* @segrel(sym+add), data8 MSB */
#define R_IA64_SEGREL64LSB 0x5f /* @segrel(sym+add), data8 LSB */
#define R_IA64_SECREL32MSB 0x64 /* @secrel(sym+add), data4 MSB */
#define R_IA64_SECREL32LSB 0x65 /* @secrel(sym+add), data4 LSB */
#define R_IA64_SECREL64MSB 0x66 /* @secrel(sym+add), data8 MSB */
#define R_IA64_SECREL64LSB 0x67 /* @secrel(sym+add), data8 LSB */
#define R_IA64_REL32MSB 0x6c /* data 4 + REL */
#define R_IA64_REL32LSB 0x6d /* data 4 + REL */
#define R_IA64_REL64MSB 0x6e /* data 8 + REL */
#define R_IA64_REL64LSB 0x6f /* data 8 + REL */
#define R_IA64_LTV32MSB 0x74 /* symbol + addend, data4 MSB */
#define R_IA64_LTV32LSB 0x75 /* symbol + addend, data4 LSB */
#define R_IA64_LTV64MSB 0x76 /* symbol + addend, data8 MSB */
#define R_IA64_LTV64LSB 0x77 /* symbol + addend, data8 LSB */
#define R_IA64_PCREL21BI 0x79 /* @pcrel(sym+add), ptb, call */
#define R_IA64_PCREL22 0x7a /* @pcrel(sym+add), imm22 */
#define R_IA64_PCREL64I 0x7b /* @pcrel(sym+add), imm64 */
#define R_IA64_IPLTMSB 0x80 /* dynamic reloc, imported PLT, MSB */
#define R_IA64_IPLTLSB 0x81 /* dynamic reloc, imported PLT, LSB */
#define R_IA64_COPY 0x84 /* dynamic reloc, data copy */
#define R_IA64_SUB 0x85 /* -symbol + addend, add imm22 */
#define R_IA64_LTOFF22X 0x86 /* LTOFF22, relaxable. */
#define R_IA64_LDXMOV 0x87 /* Use of LTOFF22X. */
#define R_IA64_TPREL14 0x91 /* @tprel(sym+add), add imm14 */
#define R_IA64_TPREL22 0x92 /* @tprel(sym+add), add imm22 */
#define R_IA64_TPREL64I 0x93 /* @tprel(sym+add), add imm64 */
#define R_IA64_TPREL64MSB 0x96 /* @tprel(sym+add), data8 MSB */
#define R_IA64_TPREL64LSB 0x97 /* @tprel(sym+add), data8 LSB */
#define R_IA64_LTOFF_TPREL22 0x9a /* @ltoff(@tprel(s+a)), add imm22 */
#define R_IA64_DTPMOD64MSB 0xa6 /* @dtpmod(sym+add), data8 MSB */
#define R_IA64_DTPMOD64LSB 0xa7 /* @dtpmod(sym+add), data8 LSB */
#define R_IA64_LTOFF_DTPMOD22 0xaa /* @ltoff(@dtpmod(s+a)), imm22 */
#define R_IA64_DTPREL14 0xb1 /* @dtprel(sym+add), imm14 */
#define R_IA64_DTPREL22 0xb2 /* @dtprel(sym+add), imm22 */
#define R_IA64_DTPREL64I 0xb3 /* @dtprel(sym+add), imm64 */
#define R_IA64_DTPREL32MSB 0xb4 /* @dtprel(sym+add), data4 MSB */
#define R_IA64_DTPREL32LSB 0xb5 /* @dtprel(sym+add), data4 LSB */
#define R_IA64_DTPREL64MSB 0xb6 /* @dtprel(sym+add), data8 MSB */
#define R_IA64_DTPREL64LSB 0xb7 /* @dtprel(sym+add), data8 LSB */
#define R_IA64_LTOFF_DTPREL22 0xba /* @ltoff(@dtprel(s+a)), imm22 */
/* IA-64 specific section flags: */
#define SHF_IA_64_SHORT 0x10000000 /* section near gp */
/*
* We use (abuse?) this macro to insert the (empty) vm_area that is
* used to map the register backing store. I don't see any better
* place to do this, but we should discuss this with Linus once we can
* talk to him...
*/
extern void ia64_init_addr_space (void);
#define ELF_PLAT_INIT(_r, load_addr) ia64_init_addr_space()
/* ELF register definitions. This is needed for core dump support. */
/*
* elf_gregset_t contains the application-level state in the following order:
* r0-r31
* NaT bits (for r0-r31; bit N == 1 iff rN is a NaT)
* predicate registers (p0-p63)
* b0-b7
* ip cfm psr
* ar.rsc ar.bsp ar.bspstore ar.rnat
* ar.ccv ar.unat ar.fpsr ar.pfs ar.lc ar.ec ar.csd ar.ssd
*/
#define ELF_NGREG 128 /* we really need just 72 but let's leave some headroom... */
#define ELF_NFPREG 128 /* f0 and f1 could be omitted, but so what... */
/* elf_gregset_t register offsets */
#define ELF_GR_0_OFFSET 0
#define ELF_NAT_OFFSET (32 * sizeof(elf_greg_t))
#define ELF_PR_OFFSET (33 * sizeof(elf_greg_t))
#define ELF_BR_0_OFFSET (34 * sizeof(elf_greg_t))
#define ELF_CR_IIP_OFFSET (42 * sizeof(elf_greg_t))
#define ELF_CFM_OFFSET (43 * sizeof(elf_greg_t))
#define ELF_CR_IPSR_OFFSET (44 * sizeof(elf_greg_t))
#define ELF_GR_OFFSET(i) (ELF_GR_0_OFFSET + i * sizeof(elf_greg_t))
#define ELF_BR_OFFSET(i) (ELF_BR_0_OFFSET + i * sizeof(elf_greg_t))
#define ELF_AR_RSC_OFFSET (45 * sizeof(elf_greg_t))
#define ELF_AR_BSP_OFFSET (46 * sizeof(elf_greg_t))
#define ELF_AR_BSPSTORE_OFFSET (47 * sizeof(elf_greg_t))
#define ELF_AR_RNAT_OFFSET (48 * sizeof(elf_greg_t))
#define ELF_AR_CCV_OFFSET (49 * sizeof(elf_greg_t))
#define ELF_AR_UNAT_OFFSET (50 * sizeof(elf_greg_t))
#define ELF_AR_FPSR_OFFSET (51 * sizeof(elf_greg_t))
#define ELF_AR_PFS_OFFSET (52 * sizeof(elf_greg_t))
#define ELF_AR_LC_OFFSET (53 * sizeof(elf_greg_t))
#define ELF_AR_EC_OFFSET (54 * sizeof(elf_greg_t))
#define ELF_AR_CSD_OFFSET (55 * sizeof(elf_greg_t))
#define ELF_AR_SSD_OFFSET (56 * sizeof(elf_greg_t))
#define ELF_AR_END_OFFSET (57 * sizeof(elf_greg_t))
typedef unsigned long elf_greg_t;
typedef elf_greg_t elf_gregset_t[ELF_NGREG];
typedef struct ia64_fpreg elf_fpreg_t;
typedef elf_fpreg_t elf_fpregset_t[ELF_NFPREG];
struct pt_regs; /* forward declaration... */
extern void ia64_elf_core_copy_regs (struct pt_regs *src, elf_gregset_t dst);
#define ELF_CORE_COPY_REGS(_dest,_regs) ia64_elf_core_copy_regs(_regs, _dest);
/* This macro yields a bitmask that programs can use to figure out
what instruction set this CPU supports. */
#define ELF_HWCAP 0
/* This macro yields a string that ld.so will use to load
implementation specific libraries for optimization. Not terribly
relevant until we have real hardware to play with... */
#define ELF_PLATFORM NULL
#define elf_read_implies_exec(ex, executable_stack) \
((executable_stack!=EXSTACK_DISABLE_X) && ((ex).e_flags & EF_IA_64_LINUX_EXECUTABLE_STACK) != 0)
struct task_struct;
#define GATE_EHDR ((const struct elfhdr *) GATE_ADDR)
/* update AT_VECTOR_SIZE_ARCH if the number of NEW_AUX_ENT entries changes */
#define ARCH_DLINFO \
do { \
extern char __kernel_syscall_via_epc[]; \
NEW_AUX_ENT(AT_SYSINFO, (unsigned long) __kernel_syscall_via_epc); \
NEW_AUX_ENT(AT_SYSINFO_EHDR, (unsigned long) GATE_EHDR); \
} while (0)
/*
* format for entries in the Global Offset Table
*/
struct got_entry {
uint64_t val;
};
/*
* Layout of the Function Descriptor
*/
struct fdesc {
uint64_t addr;
uint64_t gp;
};
#endif /* _ASM_IA64_ELF_H */
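An ia64 function pointer does not point at code: it points at a descriptor like struct fdesc above, pairing the entry address with the gp (global pointer) value the callee expects. A hedged pseudo-C sketch of what an indirect call therefore has to do (demo names are hypothetical; in reality the compiler emits this sequence as a gp load plus an indirect branch):

#include <stdint.h>

struct demo_fdesc {
	uint64_t addr;	/* entry point of the code */
	uint64_t gp;	/* global pointer the callee expects */
};

/* 'Calling through a function pointer' on ia64: load gp from the
 * descriptor, then branch to the descriptor's entry address. This is
 * also why ftrace.h above reads ((struct fnptr *)mcount)->ip to
 * recover the real code address of mcount. */
static uint64_t demo_entry_of(const void *funcptr)
{
	const struct demo_fdesc *fd = funcptr;

	return fd->addr;
}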


@ -1,6 +0,0 @@
#ifndef _ASM_EMERGENCY_RESTART_H
#define _ASM_EMERGENCY_RESTART_H
#include <asm-generic/emergency-restart.h>
#endif /* _ASM_EMERGENCY_RESTART_H */


@ -1,30 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* ESI service calls.
*
* Copyright (c) 2005-2006 Hewlett-Packard Development Company, L.P.
* Alex Williamson <alex.williamson@hp.com>
*/
#ifndef esi_h
#define esi_h
#include <linux/efi.h>
#define ESI_QUERY 0x00000001
#define ESI_OPEN_HANDLE 0x02000000
#define ESI_CLOSE_HANDLE 0x02000001
enum esi_proc_type {
ESI_PROC_SERIALIZED, /* calls need to be serialized */
ESI_PROC_MP_SAFE, /* MP-safe, but not reentrant */
ESI_PROC_REENTRANT /* MP-safe and reentrant */
};
extern struct ia64_sal_retval esi_call_phys (void *, u64 *);
extern int ia64_esi_call(efi_guid_t, struct ia64_sal_retval *,
enum esi_proc_type,
u64, u64, u64, u64, u64, u64, u64, u64);
extern int ia64_esi_call_phys(efi_guid_t, struct ia64_sal_retval *, u64, u64,
u64, u64, u64, u64, u64, u64);
#endif /* esi_h */


@ -1,23 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-only */
#ifndef __ASM_EXCEPTION_H
#define __ASM_EXCEPTION_H
struct pt_regs;
struct exception_table_entry;
extern void ia64_handle_exception(struct pt_regs *regs,
const struct exception_table_entry *e);
#define ia64_done_with_exception(regs) \
({ \
int __ex_ret = 0; \
const struct exception_table_entry *e; \
e = search_exception_tables((regs)->cr_iip + ia64_psr(regs)->ri); \
if (e) { \
ia64_handle_exception(regs, e); \
__ex_ret = 1; \
} \
__ex_ret; \
})
#endif /* __ASM_EXCEPTION_H */


@ -1,12 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_EXTABLE_H
#define _ASM_IA64_EXTABLE_H
#define ARCH_HAS_RELATIVE_EXTABLE
struct exception_table_entry {
int insn; /* location-relative address of insn this fixup is for */
int fixup; /* location-relative continuation addr.; if bit 2 is set, r9 is set to 0 */
};
#endif


@ -1,43 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_FB_H_
#define _ASM_FB_H_
#include <linux/compiler.h>
#include <linux/efi.h>
#include <linux/string.h>
#include <asm/page.h>
struct file;
static inline void fb_pgprotect(struct file *file, struct vm_area_struct *vma,
unsigned long off)
{
if (efi_range_is_wc(vma->vm_start, vma->vm_end - vma->vm_start))
vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
else
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
}
#define fb_pgprotect fb_pgprotect
static inline void fb_memcpy_fromio(void *to, const volatile void __iomem *from, size_t n)
{
memcpy(to, (void __force *)from, n);
}
#define fb_memcpy_fromio fb_memcpy_fromio
static inline void fb_memcpy_toio(volatile void __iomem *to, const void *from, size_t n)
{
memcpy((void __force *)to, from, n);
}
#define fb_memcpy_toio fb_memcpy_toio
static inline void fb_memset_io(volatile void __iomem *addr, int c, size_t n)
{
memset((void __force *)addr, c, n);
}
#define fb_memset fb_memset_io
#include <asm-generic/fb.h>
#endif /* _ASM_FB_H_ */


@ -1,74 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_FPSWA_H
#define _ASM_IA64_FPSWA_H
/*
* Floating-point Software Assist
*
* Copyright (C) 1999 Intel Corporation.
* Copyright (C) 1999 Asit Mallick <asit.k.mallick@intel.com>
* Copyright (C) 1999 Goutham Rao <goutham.rao@intel.com>
*/
typedef struct {
/* 4 * 128 bits */
unsigned long fp_lp[4*2];
} fp_state_low_preserved_t;
typedef struct {
/* 10 * 128 bits */
unsigned long fp_lv[10 * 2];
} fp_state_low_volatile_t;
typedef struct {
/* 16 * 128 bits */
unsigned long fp_hp[16 * 2];
} fp_state_high_preserved_t;
typedef struct {
/* 96 * 128 bits */
unsigned long fp_hv[96 * 2];
} fp_state_high_volatile_t;
/**
* floating point state to be passed to the FP emulation library by
* the trap/fault handler
*/
typedef struct {
unsigned long bitmask_low64;
unsigned long bitmask_high64;
fp_state_low_preserved_t *fp_state_low_preserved;
fp_state_low_volatile_t *fp_state_low_volatile;
fp_state_high_preserved_t *fp_state_high_preserved;
fp_state_high_volatile_t *fp_state_high_volatile;
} fp_state_t;
typedef struct {
unsigned long status;
unsigned long err0;
unsigned long err1;
unsigned long err2;
} fpswa_ret_t;
/**
* function header for the Floating Point software assist
* library. This function is invoked by the Floating point software
* assist trap/fault handler.
*/
typedef fpswa_ret_t (*efi_fpswa_t) (unsigned long trap_type, void *bundle, unsigned long *ipsr,
unsigned long *fsr, unsigned long *isr, unsigned long *preds,
unsigned long *ifs, fp_state_t *fp_state);
/**
* This is the FPSWA library interface as defined by EFI. We need to pass a
* pointer to the interface itself on a call to the assist library
*/
typedef struct {
unsigned int revision;
unsigned int reserved;
efi_fpswa_t fpswa;
} fpswa_interface_t;
extern fpswa_interface_t *fpswa_interface;
#endif /* _ASM_IA64_FPSWA_H */


@ -1,28 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_FTRACE_H
#define _ASM_IA64_FTRACE_H
#ifdef CONFIG_FUNCTION_TRACER
#define MCOUNT_INSN_SIZE 32 /* sizeof mcount call */
#ifndef __ASSEMBLY__
extern void _mcount(unsigned long pfs, unsigned long r1, unsigned long b0, unsigned long r0);
#define mcount _mcount
/* In IA64, MCOUNT_ADDR is set at link time, so it's not a constant at compile time */
#define MCOUNT_ADDR (((struct fnptr *)mcount)->ip)
#define FTRACE_ADDR (((struct fnptr *)ftrace_caller)->ip)
static inline unsigned long ftrace_call_adjust(unsigned long addr)
{
/* second bundle, insn 2 */
return addr - 0x12;
}
struct dyn_arch_ftrace {
};
#endif
#endif /* CONFIG_FUNCTION_TRACER */
#endif /* _ASM_IA64_FTRACE_H */


@ -1,109 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_FUTEX_H
#define _ASM_FUTEX_H
#include <linux/futex.h>
#include <linux/uaccess.h>
#include <asm/errno.h>
#define __futex_atomic_op1(insn, ret, oldval, uaddr, oparg) \
do { \
register unsigned long r8 __asm ("r8") = 0; \
__asm__ __volatile__( \
" mf;; \n" \
"[1:] " insn ";; \n" \
" .xdata4 \"__ex_table\", 1b-., 2f-. \n" \
"[2:]" \
: "+r" (r8), "=r" (oldval) \
: "r" (uaddr), "r" (oparg) \
: "memory"); \
ret = r8; \
} while (0)
#define __futex_atomic_op2(insn, ret, oldval, uaddr, oparg) \
do { \
register unsigned long r8 __asm ("r8") = 0; \
int val, newval; \
do { \
__asm__ __volatile__( \
" mf;; \n" \
"[1:] ld4 %3=[%4];; \n" \
" mov %2=%3 \n" \
insn ";; \n" \
" mov ar.ccv=%2;; \n" \
"[2:] cmpxchg4.acq %1=[%4],%3,ar.ccv;; \n" \
" .xdata4 \"__ex_table\", 1b-., 3f-.\n" \
" .xdata4 \"__ex_table\", 2b-., 3f-.\n" \
"[3:]" \
: "+r" (r8), "=r" (val), "=&r" (oldval), \
"=&r" (newval) \
: "r" (uaddr), "r" (oparg) \
: "memory"); \
if (unlikely (r8)) \
break; \
} while (unlikely (val != oldval)); \
ret = r8; \
} while (0)
static inline int
arch_futex_atomic_op_inuser(int op, int oparg, int *oval, u32 __user *uaddr)
{
int oldval = 0, ret;
if (!access_ok(uaddr, sizeof(u32)))
return -EFAULT;
switch (op) {
case FUTEX_OP_SET:
__futex_atomic_op1("xchg4 %1=[%2],%3", ret, oldval, uaddr,
oparg);
break;
case FUTEX_OP_ADD:
__futex_atomic_op2("add %3=%3,%5", ret, oldval, uaddr, oparg);
break;
case FUTEX_OP_OR:
__futex_atomic_op2("or %3=%3,%5", ret, oldval, uaddr, oparg);
break;
case FUTEX_OP_ANDN:
__futex_atomic_op2("and %3=%3,%5", ret, oldval, uaddr,
~oparg);
break;
case FUTEX_OP_XOR:
__futex_atomic_op2("xor %3=%3,%5", ret, oldval, uaddr, oparg);
break;
default:
ret = -ENOSYS;
}
if (!ret)
*oval = oldval;
return ret;
}
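Stripped of the exception-table plumbing, __futex_atomic_op2 is an ordinary compare-exchange retry loop over a user-space word. A minimal portable sketch of the FUTEX_OP_ADD case (hypothetical helper; real kernel code must also survive page faults on uaddr, which is exactly what the .xdata4 "__ex_table" entries in the asm above provide):

/* Atomically add oparg to *uaddr and report the old value. */
static int demo_futex_add(unsigned int *uaddr, int oparg, int *oval)
{
	unsigned int old = __atomic_load_n(uaddr, __ATOMIC_RELAXED);

	while (!__atomic_compare_exchange_n(uaddr, &old, old + oparg, false,
					    __ATOMIC_ACQUIRE, __ATOMIC_RELAXED))
		;
	*oval = old;
	return 0;
}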
static inline int
futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
u32 oldval, u32 newval)
{
if (!access_ok(uaddr, sizeof(u32)))
return -EFAULT;
{
register unsigned long r8 __asm ("r8") = 0;
unsigned long prev;
__asm__ __volatile__(
" mf;; \n"
" mov ar.ccv=%4;; \n"
"[1:] cmpxchg4.acq %1=[%2],%3,ar.ccv \n"
" .xdata4 \"__ex_table\", 1b-., 2f-. \n"
"[2:]"
: "+r" (r8), "=&r" (prev)
: "r" (uaddr), "r" (newval),
"rO" ((long) (unsigned) oldval)
: "memory");
*uval = prev;
return r8;
}
}
#endif /* _ASM_FUTEX_H */


@ -1,13 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
*
* Copyright (C) 2002,2003 Jun Nakajima <jun.nakajima@intel.com>
* Copyright (C) 2002,2003 Suresh Siddha <suresh.b.siddha@intel.com>
*/
#ifndef _ASM_IA64_GCC_INTRIN_H
#define _ASM_IA64_GCC_INTRIN_H
#include <uapi/asm/gcc_intrin.h>
register unsigned long ia64_r13 asm ("r13") __used;
#endif /* _ASM_IA64_GCC_INTRIN_H */


@ -1,27 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_HARDIRQ_H
#define _ASM_IA64_HARDIRQ_H
/*
* Modified 1998-2002, 2004 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*/
/*
* No irq_cpustat_t for IA-64. The data is held in the per-CPU data structure.
*/
#define __ARCH_IRQ_STAT 1
#define local_softirq_pending_ref ia64_cpu_info.softirq_pending
#include <linux/threads.h>
#include <linux/irq.h>
#include <asm/processor.h>
extern void __iomem *ipi_base_addr;
void ack_bad_irq(unsigned int irq);
#endif /* _ASM_IA64_HARDIRQ_H */


@ -1,34 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_HUGETLB_H
#define _ASM_IA64_HUGETLB_H
#include <asm/page.h>
#define __HAVE_ARCH_HUGETLB_FREE_PGD_RANGE
void hugetlb_free_pgd_range(struct mmu_gather *tlb, unsigned long addr,
unsigned long end, unsigned long floor,
unsigned long ceiling);
#define __HAVE_ARCH_PREPARE_HUGEPAGE_RANGE
int prepare_hugepage_range(struct file *file,
unsigned long addr, unsigned long len);
static inline int is_hugepage_only_range(struct mm_struct *mm,
unsigned long addr,
unsigned long len)
{
return (REGION_NUMBER(addr) == RGN_HPAGE ||
REGION_NUMBER((addr)+(len)-1) == RGN_HPAGE);
}
#define is_hugepage_only_range is_hugepage_only_range
#define __HAVE_ARCH_HUGE_PTEP_CLEAR_FLUSH
static inline pte_t huge_ptep_clear_flush(struct vm_area_struct *vma,
unsigned long addr, pte_t *ptep)
{
return *ptep;
}
#include <asm-generic/hugetlb.h>
#endif /* _ASM_IA64_HUGETLB_H */


@ -1,167 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_HW_IRQ_H
#define _ASM_IA64_HW_IRQ_H
/*
* Copyright (C) 2001-2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <linux/interrupt.h>
#include <linux/sched.h>
#include <linux/types.h>
#include <linux/profile.h>
#include <asm/ptrace.h>
#include <asm/smp.h>
typedef u8 ia64_vector;
/*
 * 0      special
 *
 * 1,3-14 reserved by firmware
 *
 * 15     spurious interrupt (see IVR)
 *
 * 16-255 vectored external interrupts, available for use;
 *        16 is lowest priority, 255 is highest priority,
 *        giving 15 classes of 16 interrupts each.
 */
#define IA64_MIN_VECTORED_IRQ 16
#define IA64_MAX_VECTORED_IRQ 255
#define IA64_NUM_VECTORS 256
#define AUTO_ASSIGN -1
#define IA64_SPURIOUS_INT_VECTOR 0x0f
/*
* Vectors 0x10-0x1f are used for low priority interrupts, e.g. CMCI.
*/
#define IA64_CPEP_VECTOR 0x1c /* corrected platform error polling vector */
#define IA64_CMCP_VECTOR 0x1d /* corrected machine-check polling vector */
#define IA64_CPE_VECTOR 0x1e /* corrected platform error interrupt vector */
#define IA64_CMC_VECTOR 0x1f /* corrected machine-check interrupt vector */
/*
* Vectors 0x20-0x2f are reserved for legacy ISA IRQs.
* Use vectors 0x30-0xe7 as the default device vector range for ia64.
* Platforms may choose to reduce this range in platform_irq_setup, but the
* platform range must fall within
* [IA64_DEF_FIRST_DEVICE_VECTOR..IA64_DEF_LAST_DEVICE_VECTOR]
*/
extern int ia64_first_device_vector;
extern int ia64_last_device_vector;
#ifdef CONFIG_SMP
/* Reserve the lower priority vector than device vectors for "move IRQ" IPI */
#define IA64_IRQ_MOVE_VECTOR 0x30 /* "move IRQ" IPI */
#define IA64_DEF_FIRST_DEVICE_VECTOR 0x31
#else
#define IA64_DEF_FIRST_DEVICE_VECTOR 0x30
#endif
#define IA64_DEF_LAST_DEVICE_VECTOR 0xe7
#define IA64_FIRST_DEVICE_VECTOR ia64_first_device_vector
#define IA64_LAST_DEVICE_VECTOR ia64_last_device_vector
#define IA64_MAX_DEVICE_VECTORS (IA64_DEF_LAST_DEVICE_VECTOR - IA64_DEF_FIRST_DEVICE_VECTOR + 1)
#define IA64_NUM_DEVICE_VECTORS (IA64_LAST_DEVICE_VECTOR - IA64_FIRST_DEVICE_VECTOR + 1)
#define IA64_MCA_RENDEZ_VECTOR 0xe8 /* MCA rendez interrupt */
#define IA64_TIMER_VECTOR 0xef /* use highest-prio group 15 interrupt for timer */
#define IA64_MCA_WAKEUP_VECTOR 0xf0 /* MCA wakeup (must be >MCA_RENDEZ_VECTOR) */
#define IA64_IPI_LOCAL_TLB_FLUSH 0xfc /* SMP flush local TLB */
#define IA64_IPI_RESCHEDULE 0xfd /* SMP reschedule */
#define IA64_IPI_VECTOR 0xfe /* inter-processor interrupt vector */
/* Used for encoding redirected irqs */
#define IA64_IRQ_REDIRECTED (1 << 31)
/* IA64 inter-cpu interrupt related definitions */
#define IA64_IPI_DEFAULT_BASE_ADDR 0xfee00000
/* Delivery modes for inter-cpu interrupts */
enum {
IA64_IPI_DM_INT = 0x0, /* pend an external interrupt */
IA64_IPI_DM_PMI = 0x2, /* pend a PMI */
IA64_IPI_DM_NMI = 0x4, /* pend an NMI (vector 2) */
IA64_IPI_DM_INIT = 0x5, /* pend an INIT interrupt */
IA64_IPI_DM_EXTINT = 0x7, /* pend an 8259-compatible interrupt. */
};
extern __u8 isa_irq_to_vector_map[16];
#define isa_irq_to_vector(x) isa_irq_to_vector_map[(x)]
struct irq_cfg {
ia64_vector vector;
cpumask_t domain;
cpumask_t old_domain;
unsigned move_cleanup_count;
u8 move_in_progress : 1;
};
extern spinlock_t vector_lock;
extern struct irq_cfg irq_cfg[NR_IRQS];
#define irq_to_domain(x) irq_cfg[(x)].domain
DECLARE_PER_CPU(int[IA64_NUM_VECTORS], vector_irq);
extern struct irq_chip irq_type_ia64_lsapic; /* CPU-internal interrupt controller */
#define ia64_register_ipi ia64_native_register_ipi
#define assign_irq_vector ia64_native_assign_irq_vector
#define free_irq_vector ia64_native_free_irq_vector
#define ia64_resend_irq ia64_native_resend_irq
extern void ia64_native_register_ipi(void);
extern int bind_irq_vector(int irq, int vector, cpumask_t domain);
extern int ia64_native_assign_irq_vector (int irq); /* allocate a free vector */
extern void ia64_native_free_irq_vector (int vector);
extern int reserve_irq_vector (int vector);
extern void __setup_vector_irq(int cpu);
extern void ia64_send_ipi (int cpu, int vector, int delivery_mode, int redirect);
extern void destroy_and_reserve_irq (unsigned int irq);
#ifdef CONFIG_SMP
extern int irq_prepare_move(int irq, int cpu);
extern void irq_complete_move(unsigned int irq);
#else
static inline int irq_prepare_move(int irq, int cpu) { return 0; }
static inline void irq_complete_move(unsigned int irq) {}
#endif
static inline void ia64_native_resend_irq(unsigned int vector)
{
ia64_send_ipi(smp_processor_id(), vector, IA64_IPI_DM_INT, 0);
}
/*
* Next follows the irq descriptor interface. On IA-64, each CPU supports 256 interrupt
* vectors. On smaller systems, there is a one-to-one correspondence between interrupt
* vectors and the Linux irq numbers. However, larger systems may have multiple interrupt
* domains meaning that the translation from vector number to irq number depends on the
* interrupt domain that a CPU belongs to. This API abstracts such platform-dependent
* differences and provides a uniform means to translate between vector and irq numbers
* and to obtain the irq descriptor for a given irq number.
*/
/* Extract the IA-64 vector that corresponds to IRQ. */
static inline ia64_vector
irq_to_vector (int irq)
{
return irq_cfg[irq].vector;
}
/*
* Convert the local IA-64 vector to the corresponding irq number. This translation is
* done in the context of the interrupt domain that the currently executing CPU belongs
* to.
*/
static inline unsigned int
local_vector_to_irq (ia64_vector vec)
{
return __this_cpu_read(vector_irq[vec]);
}
#endif /* _ASM_IA64_HW_IRQ_H */


@ -1,8 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_IDLE_H
#define _ASM_IA64_IDLE_H
static inline void enter_idle(void) { }
static inline void exit_idle(void) { }
#endif /* _ASM_IA64_IDLE_H */


@ -1,13 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Compiler-dependent intrinsics.
*
* Copyright (C) 2002-2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*/
#ifndef _ASM_IA64_INTRINSICS_H
#define _ASM_IA64_INTRINSICS_H
#include <uapi/asm/intrinsics.h>
#endif /* _ASM_IA64_INTRINSICS_H */


@ -1,271 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_IO_H
#define _ASM_IA64_IO_H
/*
* This file contains the definitions for the emulated IO instructions
* inb/inw/inl/outb/outw/outl and the "string versions" of the same
* (insb/insw/insl/outsb/outsw/outsl). You can also use "pausing"
* versions of the single-IO instructions (inb_p/inw_p/..).
*
* This file is not meant to be obfuscating: it's just complicated to
* (a) handle it all in a way that makes gcc able to optimize it as
* well as possible and (b) to avoid writing the same thing
* over and over again with slight variations and possibly making a
* mistake somewhere.
*
* Copyright (C) 1998-2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
* Copyright (C) 1999 Asit Mallick <asit.k.mallick@intel.com>
* Copyright (C) 1999 Don Dugger <don.dugger@intel.com>
*/
#include <asm/unaligned.h>
#include <asm/early_ioremap.h>
#define __IA64_UNCACHED_OFFSET RGN_BASE(RGN_UNCACHED)
/*
* The legacy I/O space defined by the ia64 architecture supports only 65536 ports, but
* large machines may have multiple other I/O spaces so we can't place any a priori limit
* on IO_SPACE_LIMIT. These additional spaces are described in ACPI.
*/
#define IO_SPACE_LIMIT 0xffffffffffffffffUL
#define MAX_IO_SPACES_BITS 8
#define MAX_IO_SPACES (1UL << MAX_IO_SPACES_BITS)
#define IO_SPACE_BITS 24
#define IO_SPACE_SIZE (1UL << IO_SPACE_BITS)
#define IO_SPACE_NR(port) ((port) >> IO_SPACE_BITS)
#define IO_SPACE_BASE(space) ((space) << IO_SPACE_BITS)
#define IO_SPACE_PORT(port) ((port) & (IO_SPACE_SIZE - 1))
#define IO_SPACE_SPARSE_ENCODING(p) ((((p) >> 2) << 12) | ((p) & 0xfff))
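Worked example of the sparse encoding: legacy port 0x3f8 (COM1) maps to offset ((0x3f8 >> 2) << 12) | (0x3f8 & 0xfff) = 0xfe000 | 0x3f8 = 0xfe3f8, so each naturally aligned group of four ports lands on its own 4 KiB page of the MMIO aperture, the "sparse" layout used by some chipsets for legacy I/O decoding.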
struct io_space {
unsigned long mmio_base; /* base in MMIO space */
int sparse;
};
extern struct io_space io_space[];
extern unsigned int num_io_spaces;
# ifdef __KERNEL__
/*
* All MMIO iomem cookies are in region 6; anything less is a PIO cookie:
* 0xCxxxxxxxxxxxxxxx MMIO cookie (return from ioremap)
* 0x000000001SPPPPPP PIO cookie (S=space number, P..P=port)
*
* ioread/writeX() uses the leading 1 in PIO cookies (PIO_OFFSET) to catch
* code that uses bare port numbers without the prerequisite pci_iomap().
*/
#define PIO_OFFSET (1UL << (MAX_IO_SPACES_BITS + IO_SPACE_BITS))
#define PIO_MASK (PIO_OFFSET - 1)
#define PIO_RESERVED __IA64_UNCACHED_OFFSET
#define HAVE_ARCH_PIO_SIZE
#include <asm/intrinsics.h>
#include <asm/page.h>
#include <asm-generic/iomap.h>
/*
* Change virtual addresses to physical addresses and vv.
*/
static inline unsigned long
virt_to_phys (volatile void *address)
{
return (unsigned long) address - PAGE_OFFSET;
}
#define virt_to_phys virt_to_phys
static inline void*
phys_to_virt (unsigned long address)
{
return (void *) (address + PAGE_OFFSET);
}
#define phys_to_virt phys_to_virt
#define ARCH_HAS_VALID_PHYS_ADDR_RANGE
extern u64 kern_mem_attribute (unsigned long phys_addr, unsigned long size);
extern int valid_phys_addr_range (phys_addr_t addr, size_t count); /* efi.c */
extern int valid_mmap_phys_addr_range (unsigned long pfn, size_t count);
# endif /* KERNEL */
/*
* Memory fence w/accept. This should never be used in code that is
* not IA-64 specific.
*/
#define __ia64_mf_a() ia64_mfa()
static inline void*
__ia64_mk_io_addr (unsigned long port)
{
struct io_space *space;
unsigned long offset;
space = &io_space[IO_SPACE_NR(port)];
port = IO_SPACE_PORT(port);
if (space->sparse)
offset = IO_SPACE_SPARSE_ENCODING(port);
else
offset = port;
return (void *) (space->mmio_base | offset);
}
/*
* For the in/out routines, we need to do "mf.a" _after_ doing the I/O access to ensure
* that the access has completed before executing other I/O accesses. Since we're doing
* the accesses through an uncachable (UC) translation, the CPU will execute them in
* program order. However, we still need to tell the compiler not to shuffle them around
* during optimization, which is why we use "volatile" pointers.
*/
#define inb inb
static inline unsigned int inb(unsigned long port)
{
volatile unsigned char *addr = __ia64_mk_io_addr(port);
unsigned char ret;
ret = *addr;
__ia64_mf_a();
return ret;
}
#define inw inw
static inline unsigned int inw(unsigned long port)
{
volatile unsigned short *addr = __ia64_mk_io_addr(port);
unsigned short ret;
ret = *addr;
__ia64_mf_a();
return ret;
}
#define inl inl
static inline unsigned int inl(unsigned long port)
{
volatile unsigned int *addr = __ia64_mk_io_addr(port);
unsigned int ret;
ret = *addr;
__ia64_mf_a();
return ret;
}
#define outb outb
static inline void outb(unsigned char val, unsigned long port)
{
volatile unsigned char *addr = __ia64_mk_io_addr(port);
*addr = val;
__ia64_mf_a();
}
#define outw outw
static inline void outw(unsigned short val, unsigned long port)
{
volatile unsigned short *addr = __ia64_mk_io_addr(port);
*addr = val;
__ia64_mf_a();
}
#define outl outl
static inline void outl(unsigned int val, unsigned long port)
{
volatile unsigned int *addr = __ia64_mk_io_addr(port);
*addr = val;
__ia64_mf_a();
}
#define insb insb
static inline void insb(unsigned long port, void *dst, unsigned long count)
{
unsigned char *dp = dst;
while (count--)
*dp++ = inb(port);
}
#define insw insw
static inline void insw(unsigned long port, void *dst, unsigned long count)
{
unsigned short *dp = dst;
while (count--)
put_unaligned(inw(port), dp++);
}
#define insl insl
static inline void insl(unsigned long port, void *dst, unsigned long count)
{
unsigned int *dp = dst;
while (count--)
put_unaligned(inl(port), dp++);
}
#define outsb outsb
static inline void outsb(unsigned long port, const void *src,
unsigned long count)
{
const unsigned char *sp = src;
while (count--)
outb(*sp++, port);
}
#define outsw outsw
static inline void outsw(unsigned long port, const void *src,
unsigned long count)
{
const unsigned short *sp = src;
while (count--)
outw(get_unaligned(sp++), port);
}
#define outsl outsl
static inline void outsl(unsigned long port, const void *src,
unsigned long count)
{
const unsigned int *sp = src;
while (count--)
outl(get_unaligned(sp++), port);
}
# ifdef __KERNEL__
#define _PAGE_IOREMAP pgprot_val(PAGE_KERNEL)
extern void __iomem * ioremap_uc(unsigned long offset, unsigned long size);
#define ioremap_prot ioremap_prot
#define ioremap_cache ioremap
#define ioremap_uc ioremap_uc
#define iounmap iounmap
/*
* String version of IO memory access ops:
*/
extern void memcpy_fromio(void *dst, const volatile void __iomem *src, long n);
extern void memcpy_toio(volatile void __iomem *dst, const void *src, long n);
extern void memset_io(volatile void __iomem *s, int c, long n);
#define memcpy_fromio memcpy_fromio
#define memcpy_toio memcpy_toio
#define memset_io memset_io
#define xlate_dev_mem_ptr xlate_dev_mem_ptr
#include <asm-generic/io.h>
#undef PCI_IOBASE
# endif /* __KERNEL__ */
#endif /* _ASM_IA64_IO_H */


@ -1,22 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_IOMMU_H
#define _ASM_IA64_IOMMU_H 1
#include <linux/acpi.h>
/* 10 seconds */
#define DMAR_OPERATION_TIMEOUT (((cycles_t) local_cpu_data->itc_freq)*10)
extern void no_iommu_init(void);
#ifdef CONFIG_INTEL_IOMMU
extern int force_iommu, no_iommu;
extern int iommu_detected;
static inline int __init
arch_rmrr_sanity_check(struct acpi_dmar_reserved_memory *rmrr) { return 0; }
#else
#define no_iommu (1)
#define iommu_detected (0)
#endif
#endif


@ -1,106 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ASM_IA64_IOSAPIC_H
#define __ASM_IA64_IOSAPIC_H
#define IOSAPIC_REG_SELECT 0x0
#define IOSAPIC_WINDOW 0x10
#define IOSAPIC_EOI 0x40
#define IOSAPIC_VERSION 0x1
/*
* Redirection table entry
*/
#define IOSAPIC_RTE_LOW(i) (0x10+i*2)
#define IOSAPIC_RTE_HIGH(i) (0x11+i*2)
#define IOSAPIC_DEST_SHIFT 16
/*
* Delivery mode
*/
#define IOSAPIC_DELIVERY_SHIFT 8
#define IOSAPIC_FIXED 0x0
#define IOSAPIC_LOWEST_PRIORITY 0x1
#define IOSAPIC_PMI 0x2
#define IOSAPIC_NMI 0x4
#define IOSAPIC_INIT 0x5
#define IOSAPIC_EXTINT 0x7
/*
* Interrupt polarity
*/
#define IOSAPIC_POLARITY_SHIFT 13
#define IOSAPIC_POL_HIGH 0
#define IOSAPIC_POL_LOW 1
/*
* Trigger mode
*/
#define IOSAPIC_TRIGGER_SHIFT 15
#define IOSAPIC_EDGE 0
#define IOSAPIC_LEVEL 1
/*
* Mask bit
*/
#define IOSAPIC_MASK_SHIFT 16
#define IOSAPIC_MASK (1<<IOSAPIC_MASK_SHIFT)
#define IOSAPIC_VECTOR_MASK 0xffffff00
#ifndef __ASSEMBLY__
#define NR_IOSAPICS 256
#define iosapic_pcat_compat_init ia64_native_iosapic_pcat_compat_init
#define __iosapic_read __ia64_native_iosapic_read
#define __iosapic_write __ia64_native_iosapic_write
#define iosapic_get_irq_chip ia64_native_iosapic_get_irq_chip
extern void __init ia64_native_iosapic_pcat_compat_init(void);
extern struct irq_chip *ia64_native_iosapic_get_irq_chip(unsigned long trigger);
static inline unsigned int
__ia64_native_iosapic_read(char __iomem *iosapic, unsigned int reg)
{
writel(reg, iosapic + IOSAPIC_REG_SELECT);
return readl(iosapic + IOSAPIC_WINDOW);
}
static inline void
__ia64_native_iosapic_write(char __iomem *iosapic, unsigned int reg, u32 val)
{
writel(reg, iosapic + IOSAPIC_REG_SELECT);
writel(val, iosapic + IOSAPIC_WINDOW);
}
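These two helpers are the classic index/data (select/window) register pair: software writes a register number to IOSAPIC_REG_SELECT and then reads or writes the value through IOSAPIC_WINDOW, so the whole register file is reachable through two MMIO locations. For example, fetching the version register is writel(IOSAPIC_VERSION, iosapic + IOSAPIC_REG_SELECT) followed by readl(iosapic + IOSAPIC_WINDOW); only EOI, handled just below, has a directly addressed register of its own.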
static inline void iosapic_eoi(char __iomem *iosapic, u32 vector)
{
writel(vector, iosapic + IOSAPIC_EOI);
}
extern void __init iosapic_system_init (int pcat_compat);
extern int iosapic_init (unsigned long address, unsigned int gsi_base);
extern int iosapic_remove (unsigned int gsi_base);
extern int gsi_to_irq (unsigned int gsi);
extern int iosapic_register_intr (unsigned int gsi, unsigned long polarity,
unsigned long trigger);
extern void iosapic_unregister_intr (unsigned int irq);
extern void iosapic_override_isa_irq (unsigned int isa_irq, unsigned int gsi,
unsigned long polarity,
unsigned long trigger);
extern int __init iosapic_register_platform_intr (u32 int_type,
unsigned int gsi,
int pmi_vector,
u16 eid, u16 id,
unsigned long polarity,
unsigned long trigger);
#ifdef CONFIG_NUMA
extern void map_iosapic_to_node (unsigned int, int);
#endif
# endif /* !__ASSEMBLY__ */
#endif /* __ASM_IA64_IOSAPIC_H */


@ -1,37 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_IRQ_H
#define _ASM_IA64_IRQ_H
/*
* Copyright (C) 1999-2000, 2002 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
* Stephane Eranian <eranian@hpl.hp.com>
*
* 11/24/98 S.Eranian updated TIMER_IRQ and irq_canonicalize
* 01/20/99 S.Eranian added keyboard interrupt
* 02/29/00 D.Mosberger moved most things into hw_irq.h
*/
#include <linux/types.h>
#include <linux/cpumask.h>
#include <asm/native/irq.h>
#define NR_IRQS IA64_NATIVE_NR_IRQS
static __inline__ int
irq_canonicalize (int irq)
{
/*
* We do the legacy thing here of pretending that irqs < 16
* are 8259 irqs. This really shouldn't be necessary at all,
* but we keep it here as serial.c still uses it...
*/
return ((irq == 2) ? 9 : irq);
}
extern void set_irq_affinity_info (unsigned int irq, int dest, int redir);
int create_irq(void);
void destroy_irq(unsigned int irq);
#endif /* _ASM_IA64_IRQ_H */


@ -1 +0,0 @@
#include <asm-generic/irq_regs.h>


@ -1,5 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __IA64_INTR_REMAPPING_H
#define __IA64_INTR_REMAPPING_H
#define irq_remapping_enabled 0
#endif


@ -1,95 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* IRQ flags defines.
*
* Copyright (C) 1998-2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
* Copyright (C) 1999 Asit Mallick <asit.k.mallick@intel.com>
* Copyright (C) 1999 Don Dugger <don.dugger@intel.com>
*/
#ifndef _ASM_IA64_IRQFLAGS_H
#define _ASM_IA64_IRQFLAGS_H
#include <asm/pal.h>
#include <asm/kregs.h>
#ifdef CONFIG_IA64_DEBUG_IRQ
extern unsigned long last_cli_ip;
static inline void arch_maybe_save_ip(unsigned long flags)
{
if (flags & IA64_PSR_I)
last_cli_ip = ia64_getreg(_IA64_REG_IP);
}
#else
#define arch_maybe_save_ip(flags) do {} while (0)
#endif
/*
* - clearing psr.i is implicitly serialized (visible by next insn)
* - setting psr.i requires data serialization
* - we need a stop-bit before reading PSR because we sometimes
* write a floating-point register right before reading the PSR
* and that writes to PSR.mfl
*/
static inline unsigned long arch_local_save_flags(void)
{
ia64_stop();
return ia64_getreg(_IA64_REG_PSR);
}
static inline unsigned long arch_local_irq_save(void)
{
unsigned long flags = arch_local_save_flags();
ia64_stop();
ia64_rsm(IA64_PSR_I);
arch_maybe_save_ip(flags);
return flags;
}
static inline void arch_local_irq_disable(void)
{
#ifdef CONFIG_IA64_DEBUG_IRQ
arch_local_irq_save();
#else
ia64_stop();
ia64_rsm(IA64_PSR_I);
#endif
}
static inline void arch_local_irq_enable(void)
{
ia64_stop();
ia64_ssm(IA64_PSR_I);
ia64_srlz_d();
}
static inline void arch_local_irq_restore(unsigned long flags)
{
#ifdef CONFIG_IA64_DEBUG_IRQ
unsigned long old_psr = arch_local_save_flags();
#endif
ia64_intrin_local_irq_restore(flags & IA64_PSR_I);
arch_maybe_save_ip(old_psr & ~flags);
}
static inline bool arch_irqs_disabled_flags(unsigned long flags)
{
return (flags & IA64_PSR_I) == 0;
}
static inline bool arch_irqs_disabled(void)
{
return arch_irqs_disabled_flags(arch_local_save_flags());
}
static inline void arch_safe_halt(void)
{
arch_local_irq_enable();
ia64_pal_halt_light(); /* PAL_HALT_LIGHT */
}
#endif /* _ASM_IA64_IRQFLAGS_H */


@ -1,45 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
#ifndef _IA64_KDEBUG_H
#define _IA64_KDEBUG_H 1
/*
*
* Copyright (C) Intel Corporation, 2005
*
* 2005-Apr Rusty Lynch <rusty.lynch@intel.com> and Anil S Keshavamurthy
* <anil.s.keshavamurthy@intel.com> adopted from
* include/asm-x86_64/kdebug.h
*
* 2005-Oct Keith Owens <kaos@sgi.com>. Expand notify_die to cover more
* events.
*/
enum die_val {
DIE_BREAK = 1,
DIE_FAULT,
DIE_OOPS,
DIE_MACHINE_HALT,
DIE_MACHINE_RESTART,
DIE_MCA_MONARCH_ENTER,
DIE_MCA_MONARCH_PROCESS,
DIE_MCA_MONARCH_LEAVE,
DIE_MCA_SLAVE_ENTER,
DIE_MCA_SLAVE_PROCESS,
DIE_MCA_SLAVE_LEAVE,
DIE_MCA_RENDZVOUS_ENTER,
DIE_MCA_RENDZVOUS_PROCESS,
DIE_MCA_RENDZVOUS_LEAVE,
DIE_MCA_NEW_TIMEOUT,
DIE_INIT_ENTER,
DIE_INIT_MONARCH_ENTER,
DIE_INIT_MONARCH_PROCESS,
DIE_INIT_MONARCH_LEAVE,
DIE_INIT_SLAVE_ENTER,
DIE_INIT_SLAVE_PROCESS,
DIE_INIT_SLAVE_LEAVE,
DIE_KDEBUG_ENTER,
DIE_KDEBUG_LEAVE,
DIE_KDUMP_ENTER,
DIE_KDUMP_LEAVE,
};
#endif


@ -1,46 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_KEXEC_H
#define _ASM_IA64_KEXEC_H
#include <asm/setup.h>
/* Maximum physical address we can use pages from */
#define KEXEC_SOURCE_MEMORY_LIMIT (-1UL)
/* Maximum address we can reach in physical address mode */
#define KEXEC_DESTINATION_MEMORY_LIMIT (-1UL)
/* Maximum address we can use for the control code buffer */
#define KEXEC_CONTROL_MEMORY_LIMIT TASK_SIZE
#define KEXEC_CONTROL_PAGE_SIZE (8192 + 8192 + 4096)
/* The native architecture */
#define KEXEC_ARCH KEXEC_ARCH_IA_64
#define kexec_flush_icache_page(page) do { \
unsigned long page_addr = (unsigned long)page_address(page); \
flush_icache_range(page_addr, page_addr + PAGE_SIZE); \
} while(0)
extern struct kimage *ia64_kimage;
extern const unsigned int relocate_new_kernel_size;
extern void relocate_new_kernel(unsigned long, unsigned long,
struct ia64_boot_param *, unsigned long);
static inline void
crash_setup_regs(struct pt_regs *newregs, struct pt_regs *oldregs)
{
}
extern struct resource efi_memmap_res;
extern struct resource boot_param_res;
extern void kdump_smp_send_stop(void);
extern void kdump_smp_send_init(void);
extern void kexec_disable_iosapic(void);
extern void crash_save_this_cpu(void);
struct rsvd_region;
extern unsigned long kdump_find_rsvd_region(unsigned long size,
struct rsvd_region *rsvd_regions, int n);
extern void kdump_cpu_freeze(struct unw_frame_info *info, void *arg);
extern int kdump_status[];
extern atomic_t kdump_cpu_freezed;
extern atomic_t kdump_in_progress;
#endif /* _ASM_IA64_KEXEC_H */


@ -1,116 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
#ifndef _ASM_KPROBES_H
#define _ASM_KPROBES_H
/*
* Kernel Probes (KProbes)
*
* Copyright (C) IBM Corporation, 2002, 2004
* Copyright (C) Intel Corporation, 2005
*
* 2005-Apr Rusty Lynch <rusty.lynch@intel.com> and Anil S Keshavamurthy
* <anil.s.keshavamurthy@intel.com> adapted from i386
*/
#include <asm-generic/kprobes.h>
#include <asm/break.h>
#define BREAK_INST (long)(__IA64_BREAK_KPROBE << 6)
#ifdef CONFIG_KPROBES
#include <linux/types.h>
#include <linux/ptrace.h>
#include <linux/percpu.h>
#define __ARCH_WANT_KPROBES_INSN_SLOT
#define MAX_INSN_SIZE 2 /* last half is for kprobe-booster */
#define NOP_M_INST (long)(1<<27)
#define BRL_INST(i1, i2) ((long)((0xcL << 37) | /* brl */ \
(0x1L << 12) | /* many */ \
(((i1) & 1) << 36) | ((i2) << 13))) /* imm */
typedef union cmp_inst {
struct {
unsigned long long qp : 6;
unsigned long long p1 : 6;
unsigned long long c : 1;
unsigned long long r2 : 7;
unsigned long long r3 : 7;
unsigned long long p2 : 6;
unsigned long long ta : 1;
unsigned long long x2 : 2;
unsigned long long tb : 1;
unsigned long long opcode : 4;
unsigned long long reserved : 23;
}f;
unsigned long long l;
} cmp_inst_t;
struct kprobe;
typedef struct _bundle {
struct {
unsigned long long template : 5;
unsigned long long slot0 : 41;
unsigned long long slot1_p0 : 64-46;
} quad0;
struct {
unsigned long long slot1_p1 : 41 - (64-46);
unsigned long long slot2 : 41;
} quad1;
} __attribute__((__aligned__(16))) bundle_t;
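The odd-looking bit-field split mirrors the IA-64 bundle format: a 128-bit bundle holds a 5-bit template plus three 41-bit instruction slots (5 + 3 * 41 = 128). Slot 1 straddles the boundary between the two 64-bit words, hence the slot1_p0/slot1_p1 halves of 64-46 = 18 and 41-18 = 23 bits.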
struct prev_kprobe {
struct kprobe *kp;
unsigned long status;
};
#define MAX_PARAM_RSE_SIZE (0x60+0x60/0x3f)
/* per-cpu kprobe control block */
#define ARCH_PREV_KPROBE_SZ 2
struct kprobe_ctlblk {
unsigned long kprobe_status;
unsigned long *bsp;
unsigned long cfm;
atomic_t prev_kprobe_index;
struct prev_kprobe prev_kprobe[ARCH_PREV_KPROBE_SZ];
};
#define kretprobe_blacklist_size 0
#define SLOT0_OPCODE_SHIFT (37)
#define SLOT1_p1_OPCODE_SHIFT (37 - (64-46))
#define SLOT2_OPCODE_SHIFT (37)
#define INDIRECT_CALL_OPCODE (1)
#define IP_RELATIVE_CALL_OPCODE (5)
#define IP_RELATIVE_BRANCH_OPCODE (4)
#define IP_RELATIVE_PREDICT_OPCODE (7)
#define LONG_BRANCH_OPCODE (0xC)
#define LONG_CALL_OPCODE (0xD)
#define flush_insn_slot(p) do { } while (0)
typedef struct kprobe_opcode {
bundle_t bundle;
} kprobe_opcode_t;
/* Architecture specific copy of original instruction*/
struct arch_specific_insn {
/* copy of the instruction to be emulated */
kprobe_opcode_t *insn;
#define INST_FLAG_FIX_RELATIVE_IP_ADDR 1
#define INST_FLAG_FIX_BRANCH_REG 2
#define INST_FLAG_BREAK_INST 4
#define INST_FLAG_BOOSTABLE 8
unsigned long inst_flag;
unsigned short target_br_reg;
unsigned short slot;
};
extern int kprobe_fault_handler(struct pt_regs *regs, int trapnr);
extern int kprobe_exceptions_notify(struct notifier_block *self,
unsigned long val, void *data);
extern void arch_remove_kprobe(struct kprobe *p);
#endif /* CONFIG_KPROBES */
#endif /* _ASM_KPROBES_H */


@ -1,166 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_KREGS_H
#define _ASM_IA64_KREGS_H
/*
* Copyright (C) 2001-2002 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*/
/*
* This file defines the kernel register usage convention used by Linux/ia64.
*/
/*
* Kernel registers:
*/
#define IA64_KR_IO_BASE 0 /* ar.k0: legacy I/O base address */
#define IA64_KR_TSSD 1 /* ar.k1: IVE uses this as the TSSD */
#define IA64_KR_PER_CPU_DATA 3 /* ar.k3: physical per-CPU base */
#define IA64_KR_CURRENT_STACK 4 /* ar.k4: what's mapped in IA64_TR_CURRENT_STACK */
#define IA64_KR_FPU_OWNER 5 /* ar.k5: fpu-owner (UP only, at the moment) */
#define IA64_KR_CURRENT 6 /* ar.k6: "current" task pointer */
#define IA64_KR_PT_BASE 7 /* ar.k7: page table base address (physical) */
#define _IA64_KR_PASTE(x,y) x##y
#define _IA64_KR_PREFIX(n) _IA64_KR_PASTE(ar.k, n)
#define IA64_KR(n) _IA64_KR_PREFIX(IA64_KR_##n)
/*
* Translation registers:
*/
#define IA64_TR_KERNEL 0 /* itr0, dtr0: maps kernel image (code & data) */
#define IA64_TR_PALCODE 1 /* itr1: maps PALcode as required by EFI */
#define IA64_TR_CURRENT_STACK 1 /* dtr1: maps kernel's memory- & register-stacks */
#define IA64_TR_ALLOC_BASE 2 /* itr&dtr: Base of dynamic TR resource*/
#define IA64_TR_ALLOC_MAX 64 /* Max number for dynamic use*/
/* Processor status register bits: */
#define IA64_PSR_BE_BIT 1
#define IA64_PSR_UP_BIT 2
#define IA64_PSR_AC_BIT 3
#define IA64_PSR_MFL_BIT 4
#define IA64_PSR_MFH_BIT 5
#define IA64_PSR_IC_BIT 13
#define IA64_PSR_I_BIT 14
#define IA64_PSR_PK_BIT 15
#define IA64_PSR_DT_BIT 17
#define IA64_PSR_DFL_BIT 18
#define IA64_PSR_DFH_BIT 19
#define IA64_PSR_SP_BIT 20
#define IA64_PSR_PP_BIT 21
#define IA64_PSR_DI_BIT 22
#define IA64_PSR_SI_BIT 23
#define IA64_PSR_DB_BIT 24
#define IA64_PSR_LP_BIT 25
#define IA64_PSR_TB_BIT 26
#define IA64_PSR_RT_BIT 27
/* The following are not affected by save_flags()/restore_flags(): */
#define IA64_PSR_CPL0_BIT 32
#define IA64_PSR_CPL1_BIT 33
#define IA64_PSR_IS_BIT 34
#define IA64_PSR_MC_BIT 35
#define IA64_PSR_IT_BIT 36
#define IA64_PSR_ID_BIT 37
#define IA64_PSR_DA_BIT 38
#define IA64_PSR_DD_BIT 39
#define IA64_PSR_SS_BIT 40
#define IA64_PSR_RI_BIT 41
#define IA64_PSR_ED_BIT 43
#define IA64_PSR_BN_BIT 44
#define IA64_PSR_IA_BIT 45
/* A mask of PSR bits that we generally don't want to inherit across a clone2() or an
execve(). Only list flags here that need to be cleared/set for BOTH clone2() and
execve(). */
#define IA64_PSR_BITS_TO_CLEAR (IA64_PSR_MFL | IA64_PSR_MFH | IA64_PSR_DB | IA64_PSR_LP | \
IA64_PSR_TB | IA64_PSR_ID | IA64_PSR_DA | IA64_PSR_DD | \
IA64_PSR_SS | IA64_PSR_ED | IA64_PSR_IA)
#define IA64_PSR_BITS_TO_SET (IA64_PSR_DFH | IA64_PSR_SP)
#define IA64_PSR_BE (__IA64_UL(1) << IA64_PSR_BE_BIT)
#define IA64_PSR_UP (__IA64_UL(1) << IA64_PSR_UP_BIT)
#define IA64_PSR_AC (__IA64_UL(1) << IA64_PSR_AC_BIT)
#define IA64_PSR_MFL (__IA64_UL(1) << IA64_PSR_MFL_BIT)
#define IA64_PSR_MFH (__IA64_UL(1) << IA64_PSR_MFH_BIT)
#define IA64_PSR_IC (__IA64_UL(1) << IA64_PSR_IC_BIT)
#define IA64_PSR_I (__IA64_UL(1) << IA64_PSR_I_BIT)
#define IA64_PSR_PK (__IA64_UL(1) << IA64_PSR_PK_BIT)
#define IA64_PSR_DT (__IA64_UL(1) << IA64_PSR_DT_BIT)
#define IA64_PSR_DFL (__IA64_UL(1) << IA64_PSR_DFL_BIT)
#define IA64_PSR_DFH (__IA64_UL(1) << IA64_PSR_DFH_BIT)
#define IA64_PSR_SP (__IA64_UL(1) << IA64_PSR_SP_BIT)
#define IA64_PSR_PP (__IA64_UL(1) << IA64_PSR_PP_BIT)
#define IA64_PSR_DI (__IA64_UL(1) << IA64_PSR_DI_BIT)
#define IA64_PSR_SI (__IA64_UL(1) << IA64_PSR_SI_BIT)
#define IA64_PSR_DB (__IA64_UL(1) << IA64_PSR_DB_BIT)
#define IA64_PSR_LP (__IA64_UL(1) << IA64_PSR_LP_BIT)
#define IA64_PSR_TB (__IA64_UL(1) << IA64_PSR_TB_BIT)
#define IA64_PSR_RT (__IA64_UL(1) << IA64_PSR_RT_BIT)
/* The following are not affected by save_flags()/restore_flags(): */
#define IA64_PSR_CPL (__IA64_UL(3) << IA64_PSR_CPL0_BIT)
#define IA64_PSR_IS (__IA64_UL(1) << IA64_PSR_IS_BIT)
#define IA64_PSR_MC (__IA64_UL(1) << IA64_PSR_MC_BIT)
#define IA64_PSR_IT (__IA64_UL(1) << IA64_PSR_IT_BIT)
#define IA64_PSR_ID (__IA64_UL(1) << IA64_PSR_ID_BIT)
#define IA64_PSR_DA (__IA64_UL(1) << IA64_PSR_DA_BIT)
#define IA64_PSR_DD (__IA64_UL(1) << IA64_PSR_DD_BIT)
#define IA64_PSR_SS (__IA64_UL(1) << IA64_PSR_SS_BIT)
#define IA64_PSR_RI (__IA64_UL(3) << IA64_PSR_RI_BIT)
#define IA64_PSR_ED (__IA64_UL(1) << IA64_PSR_ED_BIT)
#define IA64_PSR_BN (__IA64_UL(1) << IA64_PSR_BN_BIT)
#define IA64_PSR_IA (__IA64_UL(1) << IA64_PSR_IA_BIT)
/* User mask bits: */
#define IA64_PSR_UM (IA64_PSR_BE | IA64_PSR_UP | IA64_PSR_AC | IA64_PSR_MFL | IA64_PSR_MFH)
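/*
 * Illustrative sketch, not part of the original header: how the two inherit
 * masks above would typically be applied to a saved psr value when setting
 * up a fresh user context (ia64_sanitize_psr is a hypothetical name):
 */
static inline unsigned long ia64_sanitize_psr(unsigned long psr)
{
	psr &= ~IA64_PSR_BITS_TO_CLEAR;	/* drop debug/single-step state */
	psr |= IA64_PSR_BITS_TO_SET;	/* disable fp-high, secure perfmon */
	return psr;
}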
/* Default Control Register */
#define IA64_DCR_PP_BIT 0 /* privileged performance monitor default */
#define IA64_DCR_BE_BIT 1 /* big-endian default */
#define IA64_DCR_LC_BIT 2 /* ia32 lock-check enable */
#define IA64_DCR_DM_BIT 8 /* defer TLB miss faults */
#define IA64_DCR_DP_BIT 9 /* defer page-not-present faults */
#define IA64_DCR_DK_BIT 10 /* defer key miss faults */
#define IA64_DCR_DX_BIT 11 /* defer key permission faults */
#define IA64_DCR_DR_BIT 12 /* defer access right faults */
#define IA64_DCR_DA_BIT 13 /* defer access bit faults */
#define IA64_DCR_DD_BIT 14 /* defer debug faults */
#define IA64_DCR_PP (__IA64_UL(1) << IA64_DCR_PP_BIT)
#define IA64_DCR_BE (__IA64_UL(1) << IA64_DCR_BE_BIT)
#define IA64_DCR_LC (__IA64_UL(1) << IA64_DCR_LC_BIT)
#define IA64_DCR_DM (__IA64_UL(1) << IA64_DCR_DM_BIT)
#define IA64_DCR_DP (__IA64_UL(1) << IA64_DCR_DP_BIT)
#define IA64_DCR_DK (__IA64_UL(1) << IA64_DCR_DK_BIT)
#define IA64_DCR_DX (__IA64_UL(1) << IA64_DCR_DX_BIT)
#define IA64_DCR_DR (__IA64_UL(1) << IA64_DCR_DR_BIT)
#define IA64_DCR_DA (__IA64_UL(1) << IA64_DCR_DA_BIT)
#define IA64_DCR_DD (__IA64_UL(1) << IA64_DCR_DD_BIT)
/* Interrupt Status Register */
#define IA64_ISR_X_BIT 32 /* execute access */
#define IA64_ISR_W_BIT 33 /* write access */
#define IA64_ISR_R_BIT 34 /* read access */
#define IA64_ISR_NA_BIT 35 /* non-access */
#define IA64_ISR_SP_BIT 36 /* speculative load exception */
#define IA64_ISR_RS_BIT 37 /* mandatory register-stack exception */
#define IA64_ISR_IR_BIT 38 /* invalid register frame exception */
#define IA64_ISR_CODE_MASK 0xf
#define IA64_ISR_X (__IA64_UL(1) << IA64_ISR_X_BIT)
#define IA64_ISR_W (__IA64_UL(1) << IA64_ISR_W_BIT)
#define IA64_ISR_R (__IA64_UL(1) << IA64_ISR_R_BIT)
#define IA64_ISR_NA (__IA64_UL(1) << IA64_ISR_NA_BIT)
#define IA64_ISR_SP (__IA64_UL(1) << IA64_ISR_SP_BIT)
#define IA64_ISR_RS (__IA64_UL(1) << IA64_ISR_RS_BIT)
#define IA64_ISR_IR (__IA64_UL(1) << IA64_ISR_IR_BIT)
/* ISR code field for non-access instructions */
#define IA64_ISR_CODE_TPA 0
#define IA64_ISR_CODE_FC 1
#define IA64_ISR_CODE_PROBE 2
#define IA64_ISR_CODE_TAK 3
#define IA64_ISR_CODE_LFETCH 4
#define IA64_ISR_CODE_PROBEF 5
#endif /* _ASM_IA64_KREGS_H */

arch/ia64/include/asm/libata-portmap.h
@@ -1,9 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ASM_IA64_LIBATA_PORTMAP_H
#define __ASM_IA64_LIBATA_PORTMAP_H
#define ATA_PRIMARY_IRQ(dev) isa_irq_to_vector(14)
#define ATA_SECONDARY_IRQ(dev) isa_irq_to_vector(15)
#endif

arch/ia64/include/asm/linkage.h
@@ -1,19 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __ASM_LINKAGE_H
#define __ASM_LINKAGE_H
#ifndef __ASSEMBLY__
#define asmlinkage CPP_ASMLINKAGE __attribute__((syscall_linkage))
#else
#include <asm/asmmacro.h>
#endif
#define cond_syscall(x) asm(".weak\t" #x "#\n" #x "#\t=\tsys_ni_syscall#")
#define SYSCALL_ALIAS(alias, name) \
asm ( #alias "# = " #name "#\n\t.globl " #alias "#")
#endif

arch/ia64/include/asm/local.h
@@ -1 +0,0 @@
#include <asm-generic/local.h>

arch/ia64/include/asm/mca.h
@@ -1,185 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* File: mca.h
* Purpose: Machine check handling specific defines
*
* Copyright (C) 1999, 2004 Silicon Graphics, Inc.
* Copyright (C) Vijay Chander <vijay@engr.sgi.com>
* Copyright (C) Srinivasa Thirumalachar <sprasad@engr.sgi.com>
* Copyright (C) Russ Anderson <rja@sgi.com>
*/
#ifndef _ASM_IA64_MCA_H
#define _ASM_IA64_MCA_H
#if !defined(__ASSEMBLY__)
#include <linux/percpu.h>
#include <linux/threads.h>
#include <linux/types.h>
#include <asm/ptrace.h>
#define IA64_MCA_RENDEZ_TIMEOUT (20 * 1000) /* value in milliseconds - 20 seconds */
typedef struct ia64_fptr {
unsigned long fp;
unsigned long gp;
} ia64_fptr_t;
typedef union cmcv_reg_u {
u64 cmcv_regval;
struct {
u64 cmcr_vector : 8;
u64 cmcr_reserved1 : 4;
u64 cmcr_ignored1 : 1;
u64 cmcr_reserved2 : 3;
u64 cmcr_mask : 1;
u64 cmcr_ignored2 : 47;
} cmcv_reg_s;
} cmcv_reg_t;
#define cmcv_mask cmcv_reg_s.cmcr_mask
#define cmcv_vector cmcv_reg_s.cmcr_vector
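/*
 * Illustrative sketch, not part of the original header: the accessor macros
 * above let the CMC vector register be manipulated through the union, e.g.:
 *
 *	cmcv_reg_t cmcv;
 *	cmcv.cmcv_regval = ia64_getreg(_IA64_REG_CR_CMCV);
 *	cmcv.cmcv_mask   = 1;	// mask corrected-machine-check interrupts
 */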
enum {
IA64_MCA_RENDEZ_CHECKIN_NOTDONE = 0x0,
IA64_MCA_RENDEZ_CHECKIN_DONE = 0x1,
IA64_MCA_RENDEZ_CHECKIN_INIT = 0x2,
IA64_MCA_RENDEZ_CHECKIN_CONCURRENT_MCA = 0x3,
};
/* Information maintained by the MC infrastructure */
typedef struct ia64_mc_info_s {
u64 imi_mca_handler;
size_t imi_mca_handler_size;
u64 imi_monarch_init_handler;
size_t imi_monarch_init_handler_size;
u64 imi_slave_init_handler;
size_t imi_slave_init_handler_size;
u8 imi_rendez_checkin[NR_CPUS];
} ia64_mc_info_t;
/* Handover state from SAL to OS and vice versa, for both MCA and INIT events.
* Besides the handover state, it also contains some saved registers from the
* time of the event.
* Note: mca_asm.S depends on the precise layout of this structure.
*/
struct ia64_sal_os_state {
/* SAL to OS */
unsigned long os_gp; /* GP of the os registered with the SAL, physical */
unsigned long pal_proc; /* PAL_PROC entry point, physical */
unsigned long sal_proc; /* SAL_PROC entry point, physical */
unsigned long rv_rc; /* MCA - Rendezvous state, INIT - reason code */
unsigned long proc_state_param; /* from R18 */
unsigned long monarch; /* 1 for a monarch event, 0 for a slave */
/* common */
unsigned long sal_ra; /* Return address in SAL, physical */
unsigned long sal_gp; /* GP of the SAL - physical */
struct pal_min_state_area *pal_min_state; /* from R17. physical in asm, virtual in C */
/* Previous values of IA64_KR(CURRENT) and IA64_KR(CURRENT_STACK).
* Note: if the MCA/INIT recovery code wants to resume to a new context
* then it must change these values to reflect the new kernel stack.
*/
unsigned long prev_IA64_KR_CURRENT; /* previous value of IA64_KR(CURRENT) */
unsigned long prev_IA64_KR_CURRENT_STACK;
struct task_struct *prev_task; /* previous task, NULL if it is not useful */
/* Some interrupt registers are not saved in minstate, pt_regs or
* switch_stack. Because MCA/INIT can occur when interrupts are
* disabled, we need to save the additional interrupt registers over
* MCA/INIT and resume.
*/
unsigned long isr;
unsigned long ifa;
unsigned long itir;
unsigned long iipa;
unsigned long iim;
unsigned long iha;
/* OS to SAL */
unsigned long os_status; /* OS status to SAL, enum below */
unsigned long context; /* 0 if return to same context
1 if return to new context */
/* I-resources */
unsigned long iip;
unsigned long ipsr;
unsigned long ifs;
};
enum {
IA64_MCA_CORRECTED = 0x0, /* Error has been corrected by OS_MCA */
	IA64_MCA_WARM_BOOT = -1, /* Warm boot of the system needed from SAL */
	IA64_MCA_COLD_BOOT = -2, /* Cold boot of the system needed from SAL */
IA64_MCA_HALT = -3 /* System to be halted by SAL */
};
enum {
IA64_INIT_RESUME = 0x0, /* Resume after return from INIT */
	IA64_INIT_WARM_BOOT = -1, /* Warm boot of the system needed from SAL */
};
enum {
IA64_MCA_SAME_CONTEXT = 0x0, /* SAL to return to same context */
IA64_MCA_NEW_CONTEXT = -1 /* SAL to return to new context */
};
/* Per-CPU MCA state that is too big for normal per-CPU variables. */
struct ia64_mca_cpu {
u64 mca_stack[KERNEL_STACK_SIZE/8];
u64 init_stack[KERNEL_STACK_SIZE/8];
};
/* Array of physical addresses of each CPU's MCA area. */
extern unsigned long __per_cpu_mca[NR_CPUS];
extern int cpe_vector;
extern int ia64_cpe_irq;
extern void ia64_mca_init(void);
extern void ia64_mca_irq_init(void);
extern void ia64_mca_cpu_init(void *);
extern void ia64_os_mca_dispatch(void);
extern void ia64_os_mca_dispatch_end(void);
extern void ia64_mca_ucmc_handler(struct pt_regs *, struct ia64_sal_os_state *);
extern void ia64_init_handler(struct pt_regs *,
struct switch_stack *,
struct ia64_sal_os_state *);
extern void ia64_os_init_on_kdump(void);
extern void ia64_monarch_init_handler(void);
extern void ia64_slave_init_handler(void);
extern void ia64_mca_cmc_vector_setup(void);
extern int ia64_reg_MCA_extension(int (*fn)(void *, struct ia64_sal_os_state *));
extern void ia64_unreg_MCA_extension(void);
extern unsigned long ia64_get_rnat(unsigned long *);
extern void ia64_set_psr_mc(void);
extern void ia64_mca_printk(const char * fmt, ...)
__attribute__ ((format (printf, 1, 2)));
struct ia64_mca_notify_die {
struct ia64_sal_os_state *sos;
int *monarch_cpu;
int *data;
};
DECLARE_PER_CPU(u64, ia64_mca_pal_base);
#else /* __ASSEMBLY__ */
#define IA64_MCA_CORRECTED 0x0 /* Error has been corrected by OS_MCA */
#define IA64_MCA_WARM_BOOT -1 /* Warm boot of the system needed from SAL */
#define IA64_MCA_COLD_BOOT -2 /* Cold boot of the system needed from SAL */
#define IA64_MCA_HALT -3 /* System to be halted by SAL */
#define IA64_INIT_RESUME 0x0 /* Resume after return from INIT */
#define IA64_INIT_WARM_BOOT -1 /* Warm boot of the system needed from SAL */
#define IA64_MCA_SAME_CONTEXT 0x0 /* SAL to return to same context */
#define IA64_MCA_NEW_CONTEXT -1 /* SAL to return to new context */
#endif /* !__ASSEMBLY__ */
#endif /* _ASM_IA64_MCA_H */

arch/ia64/include/asm/mca_asm.h
@@ -1,245 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* File: mca_asm.h
* Purpose: Machine check handling specific defines
*
* Copyright (C) 1999 Silicon Graphics, Inc.
* Copyright (C) Vijay Chander <vijay@engr.sgi.com>
* Copyright (C) Srinivasa Thirumalachar <sprasad@engr.sgi.com>
* Copyright (C) 2000 Hewlett-Packard Co.
* Copyright (C) 2000 David Mosberger-Tang <davidm@hpl.hp.com>
* Copyright (C) 2002 Intel Corp.
* Copyright (C) 2002 Jenna Hall <jenna.s.hall@intel.com>
* Copyright (C) 2005 Silicon Graphics, Inc
* Copyright (C) 2005 Keith Owens <kaos@sgi.com>
*/
#ifndef _ASM_IA64_MCA_ASM_H
#define _ASM_IA64_MCA_ASM_H
#include <asm/percpu.h>
#define PSR_IC 13
#define PSR_I 14
#define PSR_DT 17
#define PSR_RT 27
#define PSR_MC 35
#define PSR_IT 36
#define PSR_BN 44
/*
 * This macro converts an instruction virtual address to a physical address
* Right now for simulation purposes the virtual addresses are
* direct mapped to physical addresses.
* 1. Lop off bits 61 thru 63 in the virtual address
*/
#define INST_VA_TO_PA(addr) \
dep addr = 0, addr, 61, 3
/*
* This macro converts a data virtual address to a physical address
* Right now for simulation purposes the virtual addresses are
* direct mapped to physical addresses.
* 1. Lop off bits 61 thru 63 in the virtual address
*/
#define DATA_VA_TO_PA(addr) \
tpa addr = addr
/*
* This macro converts a data physical address to a virtual address
* Right now for simulation purposes the virtual addresses are
* direct mapped to physical addresses.
* 1. Put 0x7 in bits 61 thru 63.
*/
#define DATA_PA_TO_VA(addr,temp) \
mov temp = 0x7 ;; \
dep addr = temp, addr, 61, 3
#define GET_THIS_PADDR(reg, var) \
mov reg = IA64_KR(PER_CPU_DATA);; \
addl reg = THIS_CPU(var), reg
/*
* This macro jumps to the instruction at the given virtual address
* and starts execution in physical mode with all the address
* translations turned off.
* 1. Save the current psr
* 2. Make sure that all the upper 32 bits are off
*
* 3. Clear the interrupt enable and interrupt state collection bits
* in the psr before updating the ipsr and iip.
*
* 4. Turn off the instruction, data and rse translation bits of the psr
* and store the new value into ipsr
* Also make sure that the interrupts are disabled.
* Ensure that we are in little endian mode.
* [psr.{rt, it, dt, i, be} = 0]
*
* 5. Get the physical address corresponding to the virtual address
* of the next instruction bundle and put it in iip.
 * (Using magic numbers 24 and 40 in the deposit instruction since
 * the IA64_SDK code directly maps the lower 24 bits of a virtual
 * address to the physical address).
*
* 6. Do an rfi to move the values from ipsr to psr and iip to ip.
*/
#define PHYSICAL_MODE_ENTER(temp1, temp2, start_addr, old_psr) \
mov old_psr = psr; \
;; \
dep old_psr = 0, old_psr, 32, 32; \
\
mov ar.rsc = 0 ; \
;; \
srlz.d; \
mov temp2 = ar.bspstore; \
;; \
DATA_VA_TO_PA(temp2); \
;; \
mov temp1 = ar.rnat; \
;; \
mov ar.bspstore = temp2; \
;; \
mov ar.rnat = temp1; \
mov temp1 = psr; \
mov temp2 = psr; \
;; \
\
dep temp2 = 0, temp2, PSR_IC, 2; \
;; \
mov psr.l = temp2; \
;; \
srlz.d; \
dep temp1 = 0, temp1, 32, 32; \
;; \
dep temp1 = 0, temp1, PSR_IT, 1; \
;; \
dep temp1 = 0, temp1, PSR_DT, 1; \
;; \
dep temp1 = 0, temp1, PSR_RT, 1; \
;; \
dep temp1 = 0, temp1, PSR_I, 1; \
;; \
dep temp1 = 0, temp1, PSR_IC, 1; \
;; \
dep temp1 = -1, temp1, PSR_MC, 1; \
;; \
mov cr.ipsr = temp1; \
;; \
LOAD_PHYSICAL(p0, temp2, start_addr); \
;; \
mov cr.iip = temp2; \
mov cr.ifs = r0; \
DATA_VA_TO_PA(sp); \
DATA_VA_TO_PA(gp); \
;; \
srlz.i; \
;; \
nop 1; \
nop 2; \
nop 1; \
nop 2; \
rfi; \
;;
/*
* This macro jumps to the instruction at the given virtual address
* and starts execution in virtual mode with all the address
* translations turned on.
* 1. Get the old saved psr
*
* 2. Clear the interrupt state collection bit in the current psr.
*
* 3. Set the instruction translation bit back in the old psr
* Note we have to do this since we are right now saving only the
* lower 32-bits of old psr.(Also the old psr has the data and
* rse translation bits on)
*
* 4. Set ipsr to this old_psr with "it" bit set and "bn" = 1.
*
* 5. Reset the current thread pointer (r13).
*
* 6. Set iip to the virtual address of the next instruction bundle.
*
* 7. Do an rfi to move ipsr to psr and iip to ip.
*/
#define VIRTUAL_MODE_ENTER(temp1, temp2, start_addr, old_psr) \
mov temp2 = psr; \
;; \
mov old_psr = temp2; \
;; \
dep temp2 = 0, temp2, PSR_IC, 2; \
;; \
mov psr.l = temp2; \
mov ar.rsc = 0; \
;; \
srlz.d; \
mov r13 = ar.k6; \
mov temp2 = ar.bspstore; \
;; \
DATA_PA_TO_VA(temp2,temp1); \
;; \
mov temp1 = ar.rnat; \
;; \
mov ar.bspstore = temp2; \
;; \
mov ar.rnat = temp1; \
;; \
mov temp1 = old_psr; \
;; \
mov temp2 = 1; \
;; \
dep temp1 = temp2, temp1, PSR_IC, 1; \
;; \
dep temp1 = temp2, temp1, PSR_IT, 1; \
;; \
dep temp1 = temp2, temp1, PSR_DT, 1; \
;; \
dep temp1 = temp2, temp1, PSR_RT, 1; \
;; \
dep temp1 = temp2, temp1, PSR_BN, 1; \
;; \
\
mov cr.ipsr = temp1; \
movl temp2 = start_addr; \
;; \
mov cr.iip = temp2; \
movl gp = __gp \
;; \
DATA_PA_TO_VA(sp, temp1); \
srlz.i; \
;; \
nop 1; \
nop 2; \
nop 1; \
rfi \
;;
/*
* The MCA and INIT stacks in struct ia64_mca_cpu look like normal kernel
* stacks, except that the SAL/OS state and a switch_stack are stored near the
* top of the MCA/INIT stack. To support concurrent entry to MCA or INIT, as
* well as MCA over INIT, each event needs its own SAL/OS state. All entries
* are 16 byte aligned.
*
* +---------------------------+
* | pt_regs |
* +---------------------------+
* | switch_stack |
* +---------------------------+
* | SAL/OS state |
* +---------------------------+
* | 16 byte scratch area |
* +---------------------------+ <-------- SP at start of C MCA handler
* | ..... |
* +---------------------------+
* | RBS for MCA/INIT handler |
* +---------------------------+
* | struct task for MCA/INIT |
* +---------------------------+ <-------- Bottom of MCA/INIT stack
*/
#define ALIGN16(x) ((x)&~15)
#define MCA_PT_REGS_OFFSET ALIGN16(KERNEL_STACK_SIZE-IA64_PT_REGS_SIZE)
#define MCA_SWITCH_STACK_OFFSET ALIGN16(MCA_PT_REGS_OFFSET-IA64_SWITCH_STACK_SIZE)
#define MCA_SOS_OFFSET ALIGN16(MCA_SWITCH_STACK_OFFSET-IA64_SAL_OS_STATE_SIZE)
#define MCA_SP_OFFSET ALIGN16(MCA_SOS_OFFSET-16)
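/*
 * Illustrative example, not part of the original header, assuming purely
 * hypothetical sizes (KERNEL_STACK_SIZE = 0x8000, IA64_PT_REGS_SIZE = 0x2f0,
 * IA64_SWITCH_STACK_SIZE = 0x390, IA64_SAL_OS_STATE_SIZE = 0xa8):
 *
 *	MCA_PT_REGS_OFFSET      = ALIGN16(0x8000 - 0x2f0) = 0x7d10
 *	MCA_SWITCH_STACK_OFFSET = ALIGN16(0x7d10 - 0x390) = 0x7980
 *	MCA_SOS_OFFSET          = ALIGN16(0x7980 - 0xa8)  = 0x78d0
 *	MCA_SP_OFFSET           = ALIGN16(0x78d0 - 16)    = 0x78c0
 *
 * i.e. each area is carved off the top of the stack, 16-byte aligned, in
 * the order shown in the diagram above.
 */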
#endif /* _ASM_IA64_MCA_ASM_H */

arch/ia64/include/asm/meminit.h
@@ -1,59 +0,0 @@
#ifndef meminit_h
#define meminit_h
/*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
/*
* Entries defined so far:
* - boot param structure itself
* - memory map
* - initrd (optional)
* - command line string
* - kernel code & data
* - crash dumping code reserved region
* - Kernel memory map built from EFI memory map
* - ELF core header
*
* More could be added if necessary
*/
#define IA64_MAX_RSVD_REGIONS 9
struct rsvd_region {
u64 start; /* virtual address of beginning of element */
u64 end; /* virtual address of end of element + 1 */
};
extern struct rsvd_region rsvd_region[IA64_MAX_RSVD_REGIONS + 1];
extern void find_memory (void);
extern void reserve_memory (void);
extern void find_initrd (void);
extern int filter_rsvd_memory (u64 start, u64 end, void *arg);
extern int filter_memory (u64 start, u64 end, void *arg);
extern unsigned long efi_memmap_init(u64 *s, u64 *e);
extern int find_max_min_low_pfn (u64, u64, void *);
extern unsigned long vmcore_find_descriptor_size(unsigned long address);
/*
* For rounding an address to the next IA64_GRANULE_SIZE or order
*/
#define GRANULEROUNDDOWN(n) ((n) & ~(IA64_GRANULE_SIZE-1))
#define GRANULEROUNDUP(n) (((n)+IA64_GRANULE_SIZE-1) & ~(IA64_GRANULE_SIZE-1))
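/*
 * Illustrative example, not part of the original header, assuming the common
 * IA64_GRANULE_SIZE of 16MB (1 << 24):
 *
 *	GRANULEROUNDDOWN(0x01234567) = 0x01000000
 *	GRANULEROUNDUP(0x01234567)   = 0x02000000
 */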
#ifdef CONFIG_NUMA
extern void call_pernode_memory (unsigned long start, unsigned long len, void *func);
#else
# define call_pernode_memory(start, len, func) (*func)(start, len, 0)
#endif
#define IGNORE_PFN0 1 /* XXX fix me: ignore pfn 0 until TLB miss handler is updated... */
extern int register_active_ranges(u64 start, u64 len, int nid);
#endif /* meminit_h */

arch/ia64/include/asm/mman.h
@@ -1,18 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
/*
* Based on <asm-i386/mman.h>.
*
* Modified 1998-2000, 2002
* David Mosberger-Tang <davidm@hpl.hp.com>, Hewlett-Packard Co
*/
#ifndef _ASM_IA64_MMAN_H
#define _ASM_IA64_MMAN_H
#include <uapi/asm/mman.h>
#ifndef __ASSEMBLY__
#define arch_mmap_check ia64_mmap_check
int ia64_mmap_check(unsigned long addr, unsigned long len,
unsigned long flags);
#endif
#endif /* _ASM_IA64_MMAN_H */

arch/ia64/include/asm/mmiowb.h
@@ -1,17 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_MMIOWB_H
#define _ASM_IA64_MMIOWB_H
/**
* mmiowb - I/O write barrier
*
* Ensure ordering of I/O space writes. This will make sure that writes
* following the barrier will arrive after all previous writes. For most
* ia64 platforms, this is a simple 'mf.a' instruction.
*/
#define mmiowb() ia64_mfa()
#include <asm-generic/mmiowb.h>
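/*
 * Illustrative sketch, not part of the original header: the classic pattern
 * mmiowb() exists for -- keeping MMIO writes from leaking past a spinlock
 * release (the device and register names are hypothetical):
 *
 *	spin_lock(&hw_lock);
 *	writel(val, ioaddr + REG_DOORBELL);
 *	mmiowb();	// posted write is ordered before the unlock
 *	spin_unlock(&hw_lock);
 */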
#endif /* _ASM_IA64_MMIOWB_H */

arch/ia64/include/asm/mmu.h
@@ -1,14 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __MMU_H
#define __MMU_H
/*
* Type for a context number. We declare it volatile to ensure proper
* ordering when it's accessed outside of spinlock'd critical sections
* (e.g., as done in activate_mm() and init_new_context()).
*/
typedef volatile unsigned long mm_context_t;
typedef unsigned long nv_mm_context_t;
#endif

arch/ia64/include/asm/mmu_context.h
@@ -1,194 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_MMU_CONTEXT_H
#define _ASM_IA64_MMU_CONTEXT_H
/*
* Copyright (C) 1998-2002 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*/
/*
* Routines to manage the allocation of task context numbers. Task context
* numbers are used to reduce or eliminate the need to perform TLB flushes
* due to context switches. Context numbers are implemented using ia-64
* region ids. Since the IA-64 TLB does not consider the region number when
* performing a TLB lookup, we need to assign a unique region id to each
 * region in a process. We use the least significant three bits in a region
* id for this purpose.
*/
#define IA64_REGION_ID_KERNEL 0 /* the kernel's region id (tlb.c depends on this being 0) */
#define ia64_rid(ctx,addr) (((ctx) << 3) | (addr >> 61))
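/*
 * Illustrative example, not part of the original header: for context number
 * 5 and a region-3 user address, ia64_rid(5, 0x6000000000000000UL) yields
 * (5 << 3) | 3 = 0x2b, so each of the eight regions a process can touch is
 * backed by its own region id derived from the single context number.
 */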
# include <asm/page.h>
# ifndef __ASSEMBLY__
#include <linux/compiler.h>
#include <linux/percpu.h>
#include <linux/sched.h>
#include <linux/mm_types.h>
#include <linux/spinlock.h>
#include <asm/processor.h>
#include <asm-generic/mm_hooks.h>
struct ia64_ctx {
spinlock_t lock;
unsigned int next; /* next context number to use */
unsigned int limit; /* available free range */
unsigned int max_ctx; /* max. context value supported by all CPUs */
/* call wrap_mmu_context when next >= max */
unsigned long *bitmap; /* bitmap size is max_ctx+1 */
unsigned long *flushmap;/* pending rid to be flushed */
};
extern struct ia64_ctx ia64_ctx;
DECLARE_PER_CPU(u8, ia64_need_tlb_flush);
extern void mmu_context_init (void);
extern void wrap_mmu_context (struct mm_struct *mm);
/*
* When the context counter wraps around all TLBs need to be flushed because
* an old context number might have been reused. This is signalled by the
* ia64_need_tlb_flush per-CPU variable, which is checked in the routine
* below. Called by activate_mm(). <efocht@ess.nec.de>
*/
static inline void
delayed_tlb_flush (void)
{
extern void local_flush_tlb_all (void);
unsigned long flags;
if (unlikely(__ia64_per_cpu_var(ia64_need_tlb_flush))) {
spin_lock_irqsave(&ia64_ctx.lock, flags);
if (__ia64_per_cpu_var(ia64_need_tlb_flush)) {
local_flush_tlb_all();
__ia64_per_cpu_var(ia64_need_tlb_flush) = 0;
}
spin_unlock_irqrestore(&ia64_ctx.lock, flags);
}
}
static inline nv_mm_context_t
get_mmu_context (struct mm_struct *mm)
{
unsigned long flags;
nv_mm_context_t context = mm->context;
if (likely(context))
goto out;
spin_lock_irqsave(&ia64_ctx.lock, flags);
/* re-check, now that we've got the lock: */
context = mm->context;
if (context == 0) {
cpumask_clear(mm_cpumask(mm));
if (ia64_ctx.next >= ia64_ctx.limit) {
ia64_ctx.next = find_next_zero_bit(ia64_ctx.bitmap,
ia64_ctx.max_ctx, ia64_ctx.next);
ia64_ctx.limit = find_next_bit(ia64_ctx.bitmap,
ia64_ctx.max_ctx, ia64_ctx.next);
if (ia64_ctx.next >= ia64_ctx.max_ctx)
wrap_mmu_context(mm);
}
mm->context = context = ia64_ctx.next++;
__set_bit(context, ia64_ctx.bitmap);
}
spin_unlock_irqrestore(&ia64_ctx.lock, flags);
out:
/*
* Ensure we're not starting to use "context" before any old
* uses of it are gone from our TLB.
*/
delayed_tlb_flush();
return context;
}
/*
* Initialize context number to some sane value. MM is guaranteed to be a
* brand-new address-space, so no TLB flushing is needed, ever.
*/
#define init_new_context init_new_context
static inline int
init_new_context (struct task_struct *p, struct mm_struct *mm)
{
mm->context = 0;
return 0;
}
static inline void
reload_context (nv_mm_context_t context)
{
unsigned long rid;
unsigned long rid_incr = 0;
unsigned long rr0, rr1, rr2, rr3, rr4;
#ifdef CONFIG_HUGETLB_PAGE
unsigned long old_rr4;
old_rr4 = ia64_get_rr(RGN_BASE(RGN_HPAGE));
#endif
rid = context << 3; /* make space for encoding the region number */
rid_incr = 1 << 8;
/* encode the region id, preferred page size, and VHPT enable bit: */
rr0 = (rid << 8) | (PAGE_SHIFT << 2) | 1;
rr1 = rr0 + 1*rid_incr;
rr2 = rr0 + 2*rid_incr;
rr3 = rr0 + 3*rid_incr;
rr4 = rr0 + 4*rid_incr;
#ifdef CONFIG_HUGETLB_PAGE
rr4 = (rr4 & (~(0xfcUL))) | (old_rr4 & 0xfc);
# if RGN_HPAGE != 4
# error "reload_context assumes RGN_HPAGE is 4"
# endif
#endif
ia64_set_rr0_to_rr4(rr0, rr1, rr2, rr3, rr4);
ia64_srlz_i(); /* srlz.i implies srlz.d */
}
/*
* Must be called with preemption off
*/
static inline void
activate_context (struct mm_struct *mm)
{
nv_mm_context_t context;
do {
context = get_mmu_context(mm);
if (!cpumask_test_cpu(smp_processor_id(), mm_cpumask(mm)))
cpumask_set_cpu(smp_processor_id(), mm_cpumask(mm));
reload_context(context);
/*
* in the unlikely event of a TLB-flush by another thread,
* redo the load.
*/
} while (unlikely(context != mm->context));
}
/*
* Switch from address space PREV to address space NEXT.
*/
#define activate_mm activate_mm
static inline void
activate_mm (struct mm_struct *prev, struct mm_struct *next)
{
/*
* We may get interrupts here, but that's OK because interrupt
* handlers cannot touch user-space.
*/
ia64_set_kr(IA64_KR_PT_BASE, __pa(next->pgd));
activate_context(next);
}
#define switch_mm(prev_mm,next_mm,next_task) activate_mm(prev_mm, next_mm)
#include <asm-generic/mmu_context.h>
# endif /* ! __ASSEMBLY__ */
#endif /* _ASM_IA64_MMU_CONTEXT_H */

arch/ia64/include/asm/mmzone.h
@@ -1,35 +0,0 @@
/*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
* Copyright (c) 2000,2003 Silicon Graphics, Inc. All rights reserved.
* Copyright (c) 2002 NEC Corp.
* Copyright (c) 2002 Erich Focht <efocht@ess.nec.de>
* Copyright (c) 2002 Kimio Suganuma <k-suganuma@da.jp.nec.com>
*/
#ifndef _ASM_IA64_MMZONE_H
#define _ASM_IA64_MMZONE_H
#include <linux/numa.h>
#include <asm/page.h>
#include <asm/meminit.h>
#ifdef CONFIG_NUMA
static inline int pfn_to_nid(unsigned long pfn)
{
extern int paddr_to_nid(unsigned long);
int nid = paddr_to_nid(pfn << PAGE_SHIFT);
if (nid < 0)
return 0;
else
return nid;
}
#define MAX_PHYSNODE_ID 2048
#endif /* CONFIG_NUMA */
#define NR_NODE_MEMBLKS (MAX_NUMNODES * 4)
#endif /* _ASM_IA64_MMZONE_H */

arch/ia64/include/asm/module.h
@@ -1,35 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_MODULE_H
#define _ASM_IA64_MODULE_H
#include <asm-generic/module.h>
/*
* IA-64-specific support for kernel module loader.
*
* Copyright (C) 2003 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*/
struct elf64_shdr; /* forward declaration */
struct mod_arch_specific {
/* Used only at module load time. */
struct elf64_shdr *core_plt; /* core PLT section */
struct elf64_shdr *init_plt; /* init PLT section */
struct elf64_shdr *got; /* global offset table */
struct elf64_shdr *opd; /* official procedure descriptors */
struct elf64_shdr *unwind; /* unwind-table section */
unsigned long gp; /* global-pointer for module */
unsigned int next_got_entry; /* index of next available got entry */
/* Used at module run and cleanup time. */
void *core_unw_table; /* core unwind-table cookie returned by unwinder */
void *init_unw_table; /* init unwind-table cookie returned by unwinder */
void *opd_addr; /* symbolize uses .opd to get to actual function */
unsigned long opd_size;
};
#define ARCH_SHF_SMALL SHF_IA_64_SHORT
#endif /* _ASM_IA64_MODULE_H */

arch/ia64/include/asm/module.lds.h
@@ -1,14 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
SECTIONS {
/* Group unwind sections into a single section: */
.IA_64.unwind_info : { *(.IA_64.unwind_info*) }
.IA_64.unwind : { *(.IA_64.unwind*) }
/*
* Create place-holder sections to hold the PLTs, GOT, and
* official procedure-descriptors (.opd).
*/
.core.plt : { BYTE(0) }
.init.plt : { BYTE(0) }
.got : { BYTE(0) }
.opd : { BYTE(0) }
}

arch/ia64/include/asm/msi_def.h
@@ -1,43 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _IA64_MSI_DEF_H
#define _IA64_MSI_DEF_H
/*
* Shifts for APIC-based data
*/
#define MSI_DATA_VECTOR_SHIFT 0
#define MSI_DATA_VECTOR(v) (((u8)v) << MSI_DATA_VECTOR_SHIFT)
#define MSI_DATA_VECTOR_MASK 0xffffff00
#define MSI_DATA_DELIVERY_MODE_SHIFT 8
#define MSI_DATA_DELIVERY_FIXED (0 << MSI_DATA_DELIVERY_MODE_SHIFT)
#define MSI_DATA_DELIVERY_LOWPRI (1 << MSI_DATA_DELIVERY_MODE_SHIFT)
#define MSI_DATA_LEVEL_SHIFT 14
#define MSI_DATA_LEVEL_DEASSERT (0 << MSI_DATA_LEVEL_SHIFT)
#define MSI_DATA_LEVEL_ASSERT (1 << MSI_DATA_LEVEL_SHIFT)
#define MSI_DATA_TRIGGER_SHIFT 15
#define MSI_DATA_TRIGGER_EDGE (0 << MSI_DATA_TRIGGER_SHIFT)
#define MSI_DATA_TRIGGER_LEVEL (1 << MSI_DATA_TRIGGER_SHIFT)
/*
* Shift/mask fields for APIC-based bus address
*/
#define MSI_ADDR_DEST_ID_SHIFT 4
#define MSI_ADDR_HEADER 0xfee00000
#define MSI_ADDR_DEST_ID_MASK 0xfff0000f
#define MSI_ADDR_DEST_ID_CPU(cpu) ((cpu) << MSI_ADDR_DEST_ID_SHIFT)
#define MSI_ADDR_DEST_MODE_SHIFT 2
#define MSI_ADDR_DEST_MODE_PHYS (0 << MSI_ADDR_DEST_MODE_SHIFT)
#define MSI_ADDR_DEST_MODE_LOGIC (1 << MSI_ADDR_DEST_MODE_SHIFT)
#define MSI_ADDR_REDIRECTION_SHIFT 3
#define MSI_ADDR_REDIRECTION_CPU (0 << MSI_ADDR_REDIRECTION_SHIFT)
#define MSI_ADDR_REDIRECTION_LOWPRI (1 << MSI_ADDR_REDIRECTION_SHIFT)
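/*
 * Illustrative sketch, not part of the original header: composing an MSI
 * address/data pair for fixed, edge-triggered delivery of a hypothetical
 * vector 0x31 to physical CPU id 2, using only the macros above:
 *
 *	u32 addr = MSI_ADDR_HEADER | MSI_ADDR_DEST_MODE_PHYS |
 *		   MSI_ADDR_REDIRECTION_CPU | MSI_ADDR_DEST_ID_CPU(2);
 *	u32 data = MSI_DATA_TRIGGER_EDGE | MSI_DATA_LEVEL_ASSERT |
 *		   MSI_DATA_DELIVERY_FIXED | MSI_DATA_VECTOR(0x31);
 */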
#endif/* _IA64_MSI_DEF_H */

arch/ia64/include/asm/native/inst.h
@@ -1,119 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/******************************************************************************
* arch/ia64/include/asm/native/inst.h
*
* Copyright (c) 2008 Isaku Yamahata <yamahata at valinux co jp>
* VA Linux Systems Japan K.K.
*/
#define DO_SAVE_MIN IA64_NATIVE_DO_SAVE_MIN
#define MOV_FROM_IFA(reg) \
mov reg = cr.ifa
#define MOV_FROM_ITIR(reg) \
mov reg = cr.itir
#define MOV_FROM_ISR(reg) \
mov reg = cr.isr
#define MOV_FROM_IHA(reg) \
mov reg = cr.iha
#define MOV_FROM_IPSR(pred, reg) \
(pred) mov reg = cr.ipsr
#define MOV_FROM_IIM(reg) \
mov reg = cr.iim
#define MOV_FROM_IIP(reg) \
mov reg = cr.iip
#define MOV_FROM_IVR(reg, clob) \
mov reg = cr.ivr
#define MOV_FROM_PSR(pred, reg, clob) \
(pred) mov reg = psr
#define MOV_FROM_ITC(pred, pred_clob, reg, clob) \
(pred) mov reg = ar.itc
#define MOV_TO_IFA(reg, clob) \
mov cr.ifa = reg
#define MOV_TO_ITIR(pred, reg, clob) \
(pred) mov cr.itir = reg
#define MOV_TO_IHA(pred, reg, clob) \
(pred) mov cr.iha = reg
#define MOV_TO_IPSR(pred, reg, clob) \
(pred) mov cr.ipsr = reg
#define MOV_TO_IFS(pred, reg, clob) \
(pred) mov cr.ifs = reg
#define MOV_TO_IIP(reg, clob) \
mov cr.iip = reg
#define MOV_TO_KR(kr, reg, clob0, clob1) \
mov IA64_KR(kr) = reg
#define ITC_I(pred, reg, clob) \
(pred) itc.i reg
#define ITC_D(pred, reg, clob) \
(pred) itc.d reg
#define ITC_I_AND_D(pred_i, pred_d, reg, clob) \
(pred_i) itc.i reg; \
(pred_d) itc.d reg
#define THASH(pred, reg0, reg1, clob) \
(pred) thash reg0 = reg1
#define SSM_PSR_IC_AND_DEFAULT_BITS_AND_SRLZ_I(clob0, clob1) \
ssm psr.ic | PSR_DEFAULT_BITS \
;; \
	srlz.i /* guarantee that interruption collection is on */ \
;;
#define SSM_PSR_IC_AND_SRLZ_D(clob0, clob1) \
ssm psr.ic \
;; \
srlz.d
#define RSM_PSR_IC(clob) \
rsm psr.ic
#define SSM_PSR_I(pred, pred_clob, clob) \
(pred) ssm psr.i
#define RSM_PSR_I(pred, clob0, clob1) \
(pred) rsm psr.i
#define RSM_PSR_I_IC(clob0, clob1, clob2) \
rsm psr.i | psr.ic
#define RSM_PSR_DT \
rsm psr.dt
#define RSM_PSR_BE_I(clob0, clob1) \
rsm psr.be | psr.i
#define SSM_PSR_DT_AND_SRLZ_I \
ssm psr.dt \
;; \
srlz.i
#define BSW_0(clob0, clob1, clob2) \
bsw.0
#define BSW_1(clob0, clob1) \
bsw.1
#define COVER \
cover
#define RFI \
rfi

arch/ia64/include/asm/native/irq.h
@@ -1,20 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/******************************************************************************
* arch/ia64/include/asm/native/irq.h
*
* Copyright (c) 2008 Isaku Yamahata <yamahata at valinux co jp>
* VA Linux Systems Japan K.K.
*/
#ifndef _ASM_IA64_NATIVE_IRQ_H
#define _ASM_IA64_NATIVE_IRQ_H
#define NR_VECTORS 256
#if (NR_VECTORS + 32 * NR_CPUS) < 1024
#define IA64_NATIVE_NR_IRQS (NR_VECTORS + 32 * NR_CPUS)
#else
#define IA64_NATIVE_NR_IRQS 1024
#endif
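/*
 * Illustrative example, not part of the original header: with NR_VECTORS
 * fixed at 256, a (hypothetical) NR_CPUS of 16 gives 256 + 32 * 16 = 768
 * IRQs; once NR_CPUS reaches 24 the formula hits 1024 and the count is
 * capped there.
 */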
#endif /* _ASM_IA64_NATIVE_IRQ_H */

arch/ia64/include/asm/native/patchlist.h
@@ -1,24 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0-or-later */
/******************************************************************************
 * arch/ia64/include/asm/native/patchlist.h
*
* Copyright (c) 2008 Isaku Yamahata <yamahata at valinux co jp>
* VA Linux Systems Japan K.K.
*/
#define __paravirt_start_gate_fsyscall_patchlist \
__ia64_native_start_gate_fsyscall_patchlist
#define __paravirt_end_gate_fsyscall_patchlist \
__ia64_native_end_gate_fsyscall_patchlist
#define __paravirt_start_gate_brl_fsys_bubble_down_patchlist \
__ia64_native_start_gate_brl_fsys_bubble_down_patchlist
#define __paravirt_end_gate_brl_fsys_bubble_down_patchlist \
__ia64_native_end_gate_brl_fsys_bubble_down_patchlist
#define __paravirt_start_gate_vtop_patchlist \
__ia64_native_start_gate_vtop_patchlist
#define __paravirt_end_gate_vtop_patchlist \
__ia64_native_end_gate_vtop_patchlist
#define __paravirt_start_gate_mckinley_e9_patchlist \
__ia64_native_start_gate_mckinley_e9_patchlist
#define __paravirt_end_gate_mckinley_e9_patchlist \
__ia64_native_end_gate_mckinley_e9_patchlist

arch/ia64/include/asm/nodedata.h
@@ -1,63 +0,0 @@
/*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
* Copyright (c) 2000 Silicon Graphics, Inc. All rights reserved.
* Copyright (c) 2002 NEC Corp.
* Copyright (c) 2002 Erich Focht <efocht@ess.nec.de>
* Copyright (c) 2002 Kimio Suganuma <k-suganuma@da.jp.nec.com>
*/
#ifndef _ASM_IA64_NODEDATA_H
#define _ASM_IA64_NODEDATA_H
#include <linux/numa.h>
#include <asm/percpu.h>
#include <asm/mmzone.h>
#ifdef CONFIG_NUMA
/*
* Node Data. One of these structures is located on each node of a NUMA system.
*/
struct pglist_data;
struct ia64_node_data {
short active_cpu_count;
short node;
struct pglist_data *pg_data_ptrs[MAX_NUMNODES];
};
/*
* Return a pointer to the node_data structure for the executing cpu.
*/
#define local_node_data (local_cpu_data->node_data)
/*
* Given a node id, return a pointer to the pg_data_t for the node.
*
* NODE_DATA - should be used in all code not related to system
* initialization. It uses pernode data structures to minimize
 * offnode memory references. However, these structures are not
* present during boot. This macro can be used once cpu_init
* completes.
*/
#define NODE_DATA(nid) (local_node_data->pg_data_ptrs[nid])
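/*
 * Illustrative example, not part of the original header: once cpu_init has
 * run, NODE_DATA(nid) is a purely local lookup, e.g.
 *
 *	struct pglist_data *pgdat = NODE_DATA(pfn_to_nid(pfn));
 *
 * touches only the executing cpu's ia64_node_data instead of an off-node
 * global table.
 */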
/*
 * LOCAL_DATA_ADDR - Calculates the address of another node's
 * "local_node_data" during the hot-plug phase. The local_node_data
 * is pointed to by per_cpu_page. The kernel normally uses it only for
 * the executing cpu; when a new node is hot-added, however, the
 * addresses of the other nodes' local data are needed so that all of
 * them can be updated.
*/
#define LOCAL_DATA_ADDR(pgdat) \
((struct ia64_node_data *)((u64)(pgdat) + \
L1_CACHE_ALIGN(sizeof(struct pglist_data))))
#endif /* CONFIG_NUMA */
#endif /* _ASM_IA64_NODEDATA_H */

arch/ia64/include/asm/numa.h
@@ -1,83 +0,0 @@
/*
* This file is subject to the terms and conditions of the GNU General Public
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
* This file contains NUMA specific prototypes and definitions.
*
* 2002/08/05 Erich Focht <efocht@ess.nec.de>
*
*/
#ifndef _ASM_IA64_NUMA_H
#define _ASM_IA64_NUMA_H
#ifdef CONFIG_NUMA
#include <linux/cache.h>
#include <linux/cpumask.h>
#include <linux/numa.h>
#include <linux/smp.h>
#include <linux/threads.h>
#include <asm/mmzone.h>
extern u16 cpu_to_node_map[NR_CPUS] __cacheline_aligned;
extern cpumask_t node_to_cpu_mask[MAX_NUMNODES] __cacheline_aligned;
extern pg_data_t *pgdat_list[MAX_NUMNODES];
/* Stuff below this line could be architecture independent */
extern int num_node_memblks; /* total number of memory chunks */
/*
* List of node memory chunks. Filled when parsing SRAT table to
* obtain information about memory nodes.
*/
struct node_memblk_s {
unsigned long start_paddr;
unsigned long size;
int nid; /* which logical node contains this chunk? */
int bank; /* which mem bank on this node */
};
struct node_cpuid_s {
u16 phys_id; /* id << 8 | eid */
int nid; /* logical node containing this CPU */
};
extern struct node_memblk_s node_memblk[NR_NODE_MEMBLKS];
extern struct node_cpuid_s node_cpuid[NR_CPUS];
/*
* ACPI 2.0 SLIT (System Locality Information Table)
* http://devresource.hp.com/devresource/Docs/TechPapers/IA64/slit.pdf
*
* This is a matrix with "distances" between nodes, they should be
* proportional to the memory access latency ratios.
*/
extern u8 numa_slit[MAX_NUMNODES * MAX_NUMNODES];
#define slit_distance(from,to) (numa_slit[(from) * MAX_NUMNODES + (to)])
extern int __node_distance(int from, int to);
#define node_distance(from,to) __node_distance(from, to)
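/*
 * Illustrative example, not part of the original header: ACPI defines the
 * local distance as 10, so slit_distance(n, n) == 10, and on a hypothetical
 * two-node box with roughly twice the remote memory latency,
 * slit_distance(0, 1) would be around 20.
 */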
extern int paddr_to_nid(unsigned long paddr);
#define local_nodeid (cpu_to_node_map[smp_processor_id()])
#define numa_off 0
extern void map_cpu_to_node(int cpu, int nid);
extern void unmap_cpu_from_node(int cpu, int nid);
extern void numa_clear_node(int cpu);
#else /* !CONFIG_NUMA */
#define map_cpu_to_node(cpu, nid) do{}while(0)
#define unmap_cpu_from_node(cpu, nid) do{}while(0)
#define paddr_to_nid(addr) 0
#define numa_clear_node(cpu) do { } while (0)
#endif /* CONFIG_NUMA */
#endif /* _ASM_IA64_NUMA_H */

arch/ia64/include/asm/page.h
@@ -1,208 +0,0 @@
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef _ASM_IA64_PAGE_H
#define _ASM_IA64_PAGE_H
/*
* Pagetable related stuff.
*
* Copyright (C) 1998, 1999, 2002 Hewlett-Packard Co
* David Mosberger-Tang <davidm@hpl.hp.com>
*/
#include <asm/intrinsics.h>
#include <asm/types.h>
/*
* The top three bits of an IA64 address are its Region Number.
* Different regions are assigned to different purposes.
*/
#define RGN_SHIFT (61)
#define RGN_BASE(r) (__IA64_UL_CONST(r)<<RGN_SHIFT)
#define RGN_BITS (RGN_BASE(-1))
#define RGN_KERNEL 7 /* Identity mapped region */
#define RGN_UNCACHED 6 /* Identity mapped I/O region */
#define RGN_GATE 5 /* Gate page, Kernel text, etc */
#define RGN_HPAGE 4 /* For Huge TLB pages */
/*
* PAGE_SHIFT determines the actual kernel page size.
*/
#if defined(CONFIG_IA64_PAGE_SIZE_4KB)
# define PAGE_SHIFT 12
#elif defined(CONFIG_IA64_PAGE_SIZE_8KB)
# define PAGE_SHIFT 13
#elif defined(CONFIG_IA64_PAGE_SIZE_16KB)
# define PAGE_SHIFT 14
#elif defined(CONFIG_IA64_PAGE_SIZE_64KB)
# define PAGE_SHIFT 16
#else
# error Unsupported page size!
#endif
#define PAGE_SIZE (__IA64_UL_CONST(1) << PAGE_SHIFT)
#define PAGE_MASK (~(PAGE_SIZE - 1))
#define PERCPU_PAGE_SHIFT 18 /* log2() of max. size of per-CPU area */
#define PERCPU_PAGE_SIZE (__IA64_UL_CONST(1) << PERCPU_PAGE_SHIFT)
#ifdef CONFIG_HUGETLB_PAGE
# define HPAGE_REGION_BASE RGN_BASE(RGN_HPAGE)
# define HPAGE_SHIFT hpage_shift
# define HPAGE_SHIFT_DEFAULT 28 /* check ia64 SDM for architecture supported size */
# define HPAGE_SIZE (__IA64_UL_CONST(1) << HPAGE_SHIFT)
# define HPAGE_MASK (~(HPAGE_SIZE - 1))
# define HAVE_ARCH_HUGETLB_UNMAPPED_AREA
#endif /* CONFIG_HUGETLB_PAGE */
#ifdef __ASSEMBLY__
# define __pa(x) ((x) - PAGE_OFFSET)
# define __va(x) ((x) + PAGE_OFFSET)
#else /* !__ASSEMBLY */
# define STRICT_MM_TYPECHECKS
extern void clear_page (void *page);
extern void copy_page (void *to, void *from);
/*
* clear_user_page() and copy_user_page() can't be inline functions because
* flush_dcache_page() can't be defined until later...
*/
#define clear_user_page(addr, vaddr, page) \
do { \
clear_page(addr); \
flush_dcache_page(page); \
} while (0)
#define copy_user_page(to, from, vaddr, page) \
do { \
copy_page((to), (from)); \
flush_dcache_page(page); \
} while (0)
#define vma_alloc_zeroed_movable_folio(vma, vaddr) \
({ \
struct folio *folio = vma_alloc_folio( \
GFP_HIGHUSER_MOVABLE | __GFP_ZERO, 0, vma, vaddr, false); \
if (folio) \
flush_dcache_folio(folio); \
folio; \
})
#define virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
#include <asm-generic/memory_model.h>
#define page_to_phys(page) (page_to_pfn(page) << PAGE_SHIFT)
#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
#define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT)
typedef union ia64_va {
struct {
unsigned long off : 61; /* intra-region offset */
unsigned long reg : 3; /* region number */
} f;
unsigned long l;
void *p;
} ia64_va;
/*
* Note: These macros depend on the fact that PAGE_OFFSET has all
* region bits set to 1 and all other bits set to zero. They are
* expressed in this way to ensure they result in a single "dep"
* instruction.
*/
#define __pa(x) ({ia64_va _v; _v.l = (long) (x); _v.f.reg = 0; _v.l;})
#define __va(x) ({ia64_va _v; _v.l = (long) (x); _v.f.reg = -1; _v.p;})
#define REGION_NUMBER(x) ({ia64_va _v; _v.l = (long) (x); _v.f.reg;})
#define REGION_OFFSET(x) ({ia64_va _v; _v.l = (long) (x); _v.f.off;})
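/*
 * Illustrative example, not part of the original header: __pa() merely
 * clears the region bits and __va() sets them all, so for the
 * identity-mapped kernel region 7:
 *
 *	__pa(0xe000000000045000UL) == 0x0000000000045000UL
 *	__va(0x0000000000045000UL) == (void *)0xe000000000045000UL
 */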
#ifdef CONFIG_HUGETLB_PAGE
# define htlbpage_to_page(x) (((unsigned long) REGION_NUMBER(x) << 61) \
| (REGION_OFFSET(x) >> (HPAGE_SHIFT-PAGE_SHIFT)))
# define HUGETLB_PAGE_ORDER (HPAGE_SHIFT - PAGE_SHIFT)
extern unsigned int hpage_shift;
#endif
static __inline__ int
get_order (unsigned long size)
{
long double d = size - 1;
long order;
order = ia64_getf_exp(d);
order = order - PAGE_SHIFT - 0xffff + 1;
if (order < 0)
order = 0;
return order;
}
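/*
 * Illustrative note, not part of the original header: ia64_getf_exp()
 * extracts the biased exponent (bias 0xffff) of (size - 1) converted to
 * long double, so the code above computes
 * order = max(0, floor(log2(size - 1)) + 1 - PAGE_SHIFT). E.g. with 16KB
 * pages (PAGE_SHIFT = 14), get_order(0x10000) = 15 + 1 - 14 = 2, i.e. four
 * pages.
 */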
#endif /* !__ASSEMBLY__ */
#ifdef STRICT_MM_TYPECHECKS
/*
* These are used to make use of C type-checking..
*/
typedef struct { unsigned long pte; } pte_t;
typedef struct { unsigned long pmd; } pmd_t;
#if CONFIG_PGTABLE_LEVELS == 4
typedef struct { unsigned long pud; } pud_t;
#endif
typedef struct { unsigned long pgd; } pgd_t;
typedef struct { unsigned long pgprot; } pgprot_t;
typedef struct page *pgtable_t;
# define pte_val(x) ((x).pte)
# define pmd_val(x) ((x).pmd)
#if CONFIG_PGTABLE_LEVELS == 4
# define pud_val(x) ((x).pud)
#endif
# define pgd_val(x) ((x).pgd)
# define pgprot_val(x) ((x).pgprot)
# define __pte(x) ((pte_t) { (x) } )
# define __pmd(x) ((pmd_t) { (x) } )
# define __pgprot(x) ((pgprot_t) { (x) } )
#else /* !STRICT_MM_TYPECHECKS */
/*
* .. while these make it easier on the compiler
*/
# ifndef __ASSEMBLY__
typedef unsigned long pte_t;
typedef unsigned long pmd_t;
typedef unsigned long pgd_t;
typedef unsigned long pgprot_t;
typedef struct page *pgtable_t;
# endif
# define pte_val(x) (x)
# define pmd_val(x) (x)
# define pgd_val(x) (x)
# define pgprot_val(x) (x)
# define __pte(x) (x)
# define __pgd(x) (x)
# define __pgprot(x) (x)
#endif /* !STRICT_MM_TYPECHECKS */
#define PAGE_OFFSET RGN_BASE(RGN_KERNEL)
#define VM_DATA_DEFAULT_FLAGS VM_DATA_FLAGS_TSK_EXEC
#define GATE_ADDR RGN_BASE(RGN_GATE)
/*
* 0xa000000000000000+2*PERCPU_PAGE_SIZE
* - 0xa000000000000000+3*PERCPU_PAGE_SIZE remain unmapped (guard page)
*/
#define KERNEL_START (GATE_ADDR+__IA64_UL_CONST(0x100000000))
#define PERCPU_ADDR (-PERCPU_PAGE_SIZE)
#define LOAD_OFFSET (KERNEL_START - KERNEL_TR_PAGE_SIZE)
#define __HAVE_ARCH_GATE_AREA 1
#endif /* _ASM_IA64_PAGE_H */

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff