Merge branch 'linus' into tracing/kmemtrace2

Ingo Molnar 2009-01-06 09:53:05 +01:00
commit 3d7a96f5a4
1327 changed files with 56556 additions and 27848 deletions


@ -369,10 +369,10 @@ P: 1024/8462A731 4C 55 86 34 44 59 A7 99 2B 97 88 4A 88 9A 0D 97
D: sun4 port, Sparc hacker
N: Hugh Blemings
E: hugh@misc.nu
W: http://misc.nu/hugh/
D: Author and maintainer of the Keyspan USB to Serial drivers
S: Po Box 234
E: hugh@blemings.org
W: http://blemings.org/hugh
D: Original author of the Keyspan USB to serial drivers, random PowerPC hacker
S: PO Box 234
S: Belconnen ACT 2616
S: Australia


@ -32,14 +32,16 @@ Contact: linux-usb@vger.kernel.org
Description:
Write:
<channel> [<bpst offset>]
<channel>
to start beaconing on a specific channel, or stop
beaconing if <channel> is -1. Valid channels depends
on the radio controller's supported band groups.
to force a specific channel to be used when beaconing,
or, if <channel> is -1, to prohibit beaconing. If
<channel> is 0, then the default channel selection
algorithm will be used. Valid channels depend on the
radio controller's supported band groups.
<bpst offset> may be used to try and join a specific
beacon group if more than one was found during a scan.
Reading returns the currently active channel, or -1 if
the radio controller is not beaconing.
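For illustration only (the device name 'uwb0' and channel 13 below are
example values, not something this interface mandates), a shell session
using this attribute could look like:
  # force beaconing on channel 13 for the example radio controller uwb0
  echo 13 > /sys/class/uwb_rc/uwb0/beacon
  # stop beaconing
  echo -1 > /sys/class/uwb_rc/uwb0/beacon
  # read back the currently active channel (-1 when not beaconing)
  cat /sys/class/uwb_rc/uwb0/beacon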
What: /sys/class/uwb_rc/uwbN/scan
Date: July 2008


@ -50,16 +50,17 @@ additional_cpus=n (*) Use this to limit hotpluggable cpus. This option sets
cpu_possible_map = cpu_present_map + additional_cpus
(*) Option valid only for following architectures
- x86_64, ia64
- ia64
ia64 and x86_64 use the number of disabled local apics in ACPI tables MADT
to determine the number of potentially hot-pluggable cpus. The implementation
should only rely on this to count the # of cpus, but *MUST* not rely on the
apicid values in those tables for disabled apics. In the event BIOS doesn't
mark such hot-pluggable cpus as disabled entries, one could use this
parameter "additional_cpus=x" to represent those cpus in the cpu_possible_map.
ia64 uses the number of disabled local apics in ACPI tables MADT to
determine the number of potentially hot-pluggable cpus. The implementation
should only rely on this to count the # of cpus, but *MUST* not rely
on the apicid values in those tables for disabled apics. In the event
BIOS doesn't mark such hot-pluggable cpus as disabled entries, one could
use this parameter "additional_cpus=x" to represent those cpus in the
cpu_possible_map.
possible_cpus=n [s390 only] use this to set hotpluggable cpus.
possible_cpus=n [s390,x86_64] use this to set hotpluggable cpus.
This option sets possible_cpus bits in
cpu_possible_map. Thus the number of bits set remains
constant even if the machine gets rebooted.
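As a sketch only (the kernel image name and cpu counts below are invented
example values), these options are appended to the kernel command line by
the boot loader:
  # reserve room for two additional hot-pluggable cpus (ia64/x86_64)
  kernel /vmlinuz-2.6.28 root=/dev/sda1 additional_cpus=2
  # fix the number of possible cpus at eight (s390/x86_64)
  kernel /vmlinuz-2.6.28 root=/dev/sda1 possible_cpus=8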


@ -31,3 +31,51 @@ not defined by include/asm-XXX/topology.h:
2) core_id: 0
3) thread_siblings: just the given CPU
4) core_siblings: just the given CPU
Additionally, cpu topology information is provided under
/sys/devices/system/cpu and includes these files. The internal
source for the output is in brackets ("[]").
kernel_max: the maximum cpu index allowed by the kernel configuration.
[NR_CPUS-1]
offline: cpus that are not online because they have been
HOTPLUGGED off (see cpu-hotplug.txt) or exceed the limit
of cpus allowed by the kernel configuration (kernel_max
above). [~cpu_online_mask + cpus >= NR_CPUS]
online: cpus that are online and being scheduled [cpu_online_mask]
possible: cpus that have been allocated resources and can be
brought online if they are present. [cpu_possible_mask]
present: cpus that have been identified as being present in the
system. [cpu_present_mask]
The format for the above output is compatible with cpulist_parse()
[see <linux/cpumask.h>]. Some examples follow.
In this example, there are 64 cpus in the system but cpus 32-63 exceed
the kernel max which is limited to 0..31 by the NR_CPUS config option
being 32. Note also that cpus 2 and 4-31 are not online but could be
brought online as they are both present and possible.
kernel_max: 31
offline: 2,4-31,32-63
online: 0-1,3
possible: 0-31
present: 0-31
In this example, the NR_CPUS config option is 128, but the kernel was
started with possible_cpus=144. There are 4 cpus in the system and cpu2
was manually taken offline (and is the only cpu that can be brought
online.)
kernel_max: 127
offline: 2,4-127,128-143
online: 0-1,3
possible: 0-127
present: 0-3
See cpu-hotplug.txt for the possible_cpus=NUM kernel start parameter
as well as more information on the various cpumasks.
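A quick way to dump these files on a live system is sketched below (the
paths are exactly the ones listed above; the output naturally depends on
the machine):
  cd /sys/devices/system/cpu
  for f in kernel_max offline online possible present; do
          printf '%-12s %s\n' "$f:" "$(cat $f)"
  done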


@ -310,15 +310,6 @@ Who: Krzysztof Piotr Oledzki <ole@ans.pl>
---------------------------
What: ide-scsi (BLK_DEV_IDESCSI)
When: 2.6.29
Why: The 2.6 kernel supports direct writing to ide CD drives, which
eliminates the need for ide-scsi. The new method is more
efficient in every way.
Who: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
---------------------------
What: i2c_attach_client(), i2c_detach_client(), i2c_driver->detach_client()
When: 2.6.29 (ideally) or 2.6.30 (more likely)
Why: Deprecated by the new (standard) device driver binding model. Use


@ -394,7 +394,6 @@ prototypes:
unsigned long (*get_unmapped_area)(struct file *, unsigned long,
unsigned long, unsigned long, unsigned long);
int (*check_flags)(int);
int (*dir_notify)(struct file *, unsigned long);
};
locking rules:
@ -424,7 +423,6 @@ sendfile: no
sendpage: no
get_unmapped_area: no
check_flags: no
dir_notify: no
->llseek() locking has moved from llseek to the individual llseek
implementations. If your fs is not using generic_file_llseek, you


@ -0,0 +1,132 @@
To support containers, we now allow multiple instances of devpts filesystem,
such that indices of ptys allocated in one instance are independent of indices
allocated in other instances of devpts.
To preserve backward compatibility, this support for multiple instances is
enabled only if:
- CONFIG_DEVPTS_MULTIPLE_INSTANCES=y, and
- '-o newinstance' mount option is specified while mounting devpts
IOW, devpts now supports both single-instance and multi-instance semantics.
If CONFIG_DEVPTS_MULTIPLE_INSTANCES=n, there is no change in behavior and
this is referred to as the "legacy" mode. In this mode, the new mount options
(-o newinstance and -o ptmxmode) will be ignored with a 'bogus option' message
on the console.
If CONFIG_DEVPTS_MULTIPLE_INSTANCES=y and devpts is mounted without the
'newinstance' option (as in current start-up scripts), the new mount binds
to the initial kernel mount of devpts. This mode is referred to as the
'single-instance' mode and the current, single-instance semantics are
preserved, i.e. PTYs are common across the system.
The only difference between this single-instance mode and the legacy mode
is the presence of a new '/dev/pts/ptmx' node with permissions 0000, which
can safely be ignored.
If CONFIG_DEVPTS_MULTIPLE_INSTANCES=y and 'newinstance' option is specified,
the mount is considered to be in the multi-instance mode and a new instance
of the devpts fs is created. Any ptys created in this instance are independent
of ptys in other instances of devpts. Like in the single-instance mode, the
/dev/pts/ptmx node is present. To effectively use the multi-instance mode,
the open of /dev/ptmx must be redirected to '/dev/pts/ptmx' using a symlink
or a bind mount.
Eg: A container startup script could do the following:
$ chmod 0666 /dev/pts/ptmx
$ rm /dev/ptmx
$ ln -s pts/ptmx /dev/ptmx
$ ns_exec -cm /bin/bash
# We are now in new container
$ umount /dev/pts
$ mount -t devpts -o newinstance lxcpts /dev/pts
$ sshd -p 1234
where 'ns_exec -cm /bin/bash' calls clone() with CLONE_NEWNS flag and execs
/bin/bash in the child process. A pty created by the sshd is not visible in
the original mount of /dev/pts.
User-space changes
------------------
In multi-instance mode (i.e. the '-o newinstance' mount option is specified at
least once), the following user-space issues should be noted.
1. If -o newinstance mount option is never used, /dev/pts/ptmx can be ignored
and no change is needed to system-startup scripts.
2. To effectively use multi-instance mode (i.e. -o newinstance is specified),
administrators or startup scripts should "redirect" the open of /dev/ptmx to
/dev/pts/ptmx using either a bind mount or a symlink.
$ mount -t devpts -o newinstance devpts /dev/pts
followed by either
$ rm /dev/ptmx
$ ln -s pts/ptmx /dev/ptmx
$ chmod 666 /dev/pts/ptmx
or
$ mount -o bind /dev/pts/ptmx /dev/ptmx
3. The '/dev/ptmx -> pts/ptmx' symlink is the preferred method since it
enables better error-reporting and treats both single-instance and
multi-instance mounts similarly.
But this method requires that system-startup scripts set the mode of
/dev/pts/ptmx correctly (default mode is 0000). The scripts can set the
mode by either:
- adding the ptmxmode mount option to the devpts entry in /etc/fstab
(an illustrative entry follows this list), or
- using 'chmod 0666 /dev/pts/ptmx'
4. If multi-instance mode mount is needed for containers, but the system
startup scripts have not yet been updated, container-startup scripts
should bind mount /dev/ptmx to /dev/pts/ptmx to avoid breaking single-
instance mounts.
Or, in general, container-startup scripts should use:
mount -t devpts -o newinstance -o ptmxmode=0666 devpts /dev/pts
if [ ! -L /dev/ptmx ]; then
mount -o bind /dev/pts/ptmx /dev/ptmx
fi
When all devpts mounts are multi-instance, /dev/ptmx can permanently be
a symlink to pts/ptmx and the bind mount can be ignored.
5. A multi-instance mount that is not accompanied by the /dev/ptmx to
/dev/pts/ptmx redirection would result in an unusable/unreachable pty.
mount -t devpts -o newinstance lxcpts /dev/pts
immediately followed by:
open("/dev/ptmx")
would create a pty, say /dev/pts/7, in the initial kernel mount.
But /dev/pts/7 would be invisible in the new mount.
6. The permissions for the /dev/pts/ptmx node should be specified when mounting
/dev/pts, using the '-o ptmxmode=%o' mount option (default is 0000).
mount -t devpts -o newinstance -o ptmxmode=0644 devpts /dev/pts
The permissions can later be changed as usual with 'chmod'.
chmod 666 /dev/pts/ptmx
7. A mount of devpts without the 'newinstance' option results in binding to
the initial kernel mount. This behavior, while preserving legacy semantics,
does not provide strict isolation in a container environment, i.e. by
mounting devpts without the 'newinstance' option, a container could
get visibility into the 'host' or root container's devpts.
To work around this and have strict isolation, all mounts of devpts,
including the mount in the root container, should use the newinstance
option.
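As an illustration of the fstab-based variant mentioned in item 3 above
(a sketch; the 0666 mode is only an example value), a distribution could
carry an entry such as:
  # /etc/fstab entry, illustrative values
  devpts  /dev/pts  devpts  newinstance,ptmxmode=0666  0 0
With such an entry in place, /dev/ptmx can simply be the 'pts/ptmx' symlink
and no extra chmod is needed at boot.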


@ -76,13 +76,13 @@ the fdtable structure -
5. Handling of the file structures is special. Since the look-up
of the fd (fget()/fget_light()) are lock-free, it is possible
that look-up may race with the last put() operation on the
file structure. This is avoided using atomic_inc_not_zero()
file structure. This is avoided using atomic_long_inc_not_zero()
on ->f_count :
rcu_read_lock();
file = fcheck_files(files, fd);
if (file) {
if (atomic_inc_not_zero(&file->f_count))
if (atomic_long_inc_not_zero(&file->f_count))
*fput_needed = 1;
else
/* Didn't get the reference, someone's freed */
@ -92,7 +92,7 @@ the fdtable structure -
....
return file;
atomic_inc_not_zero() detects if refcounts is already zero or
atomic_long_inc_not_zero() detects if refcounts is already zero or
goes to zero during increment. If it does, we fail
fget()/fget_light().


@ -31,7 +31,6 @@ Features which OCFS2 does not support yet:
- quotas
- Directory change notification (F_NOTIFY)
- Distributed Caching (F_SETLEASE/F_GETLEASE/break_lease)
- POSIX ACLs
Mount options
=============
@ -79,3 +78,5 @@ inode64 Indicates that Ocfs2 is allowed to create inodes at
bits of significance.
user_xattr (*) Enables Extended User Attributes.
nouser_xattr Disables Extended User Attributes.
acl Enables POSIX Access Control Lists support.
noacl (*) Disables POSIX Access Control Lists support.


@ -95,6 +95,9 @@ no_chk_data_crc skip checking of CRCs on data nodes in order to
of this option is that corruption of the contents
of a file can go unnoticed.
chk_data_crc (*) do not skip checking CRCs on data nodes
compr=none override default compressor and set it to "none"
compr=lzo override default compressor and set it to "lzo"
compr=zlib override default compressor and set it to "zlib"
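For example (a sketch; the UBI volume name and mount point below are
placeholders), the default compressor can be overridden at mount time:
  mount -t ubifs -o compr=lzo ubi0:rootfs /mnt/ubifs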
Quick usage instructions


@ -733,7 +733,6 @@ struct file_operations {
ssize_t (*sendpage) (struct file *, struct page *, int, size_t, loff_t *, int);
unsigned long (*get_unmapped_area)(struct file *, unsigned long, unsigned long, unsigned long, unsigned long);
int (*check_flags)(int);
int (*dir_notify)(struct file *filp, unsigned long arg);
int (*flock) (struct file *, int, struct file_lock *);
ssize_t (*splice_write)(struct pipe_inode_info *, struct file *, size_t, unsigned int);
ssize_t (*splice_read)(struct file *, struct pipe_inode_info *, size_t, unsigned int);
@ -800,8 +799,6 @@ otherwise noted.
check_flags: called by the fcntl(2) system call for F_SETFL command
dir_notify: called by the fcntl(2) system call for F_NOTIFY command
flock: called by the flock(2) system call
splice_write: called by the VFS to splice data from a pipe to a file. This
@ -931,7 +928,7 @@ manipulate dentries:
d_lookup: look up a dentry given its parent and path name component
It looks up the child of that given name from the dcache
hash table. If it is found, the reference count is incremented
and the dentry is returned. The caller must use d_put()
and the dentry is returned. The caller must use dput()
to free the dentry when it finishes using it.
For further information on dentry locking, please refer to the document


@ -97,6 +97,7 @@ Code Seq# Include File Comments
<http://linux01.gwdg.de/~alatham/ppdd.html>
'M' all linux/soundcard.h
'N' 00-1F drivers/usb/scanner.h
'O' 00-02 include/mtd/ubi-user.h UBI
'P' all linux/soundcard.h
'Q' all linux/soundcard.h
'R' 00-1F linux/random.h
@ -142,6 +143,9 @@ Code Seq# Include File Comments
'n' 00-7F linux/ncp_fs.h
'n' E0-FF video/matrox.h matroxfb
'o' 00-1F fs/ocfs2/ocfs2_fs.h OCFS2
'o' 00-03 include/mtd/ubi-user.h conflict! (OCFS2 and UBI overlaps)
'o' 40-41 include/mtd/ubi-user.h UBI
'o' 01-A1 include/linux/dvb/*.h DVB
'p' 00-0F linux/phantom.h conflict! (OpenHaptics needs this)
'p' 00-3F linux/mc146818rtc.h conflict!
'p' 40-7F linux/nvram.h


@ -1,5 +1,9 @@
00-INDEX
- this file: info on the kernel build process
- this file: info on the kernel build process
kbuild.txt
- developer information on kbuild
kconfig.txt
- usage help for make *config
kconfig-language.txt
- specification of Config Language, the language in Kconfig files
makefiles.txt


@ -0,0 +1,126 @@
Environment variables
KCPPFLAGS
--------------------------------------------------
Additional options to pass when preprocessing. The preprocessing options
will be used in all cases where kbuild does preprocessing, including
building C files and assembler files.
KAFLAGS
--------------------------------------------------
Additional options to the assembler.
KCFLAGS
--------------------------------------------------
Additional options to the C compiler.
KBUILD_VERBOSE
--------------------------------------------------
Set the kbuild verbosity. Can be assigned the same values as "V=...".
See make help for the full list.
Setting "V=..." takes precedence over KBUILD_VERBOSE.
KBUILD_EXTMOD
--------------------------------------------------
Set the directory to look for the kernel source when building external
modules.
The directory can be specified in several ways:
1) Use "M=..." on the command line
2) Environment variable KBUILD_EXTMOD
3) Environment variable SUBDIRS
The possibilities are listed in the order they take precedence.
Using "M=..." will always override the others.
KBUILD_OUTPUT
--------------------------------------------------
Specify the output directory when building the kernel.
The output directory can also be specified using "O=...".
Setting "O=..." takes precedence over KBUILD_OUTPUT.
ARCH
--------------------------------------------------
Set ARCH to the architecture to be built.
In most cases the name of the architecture is the same as the
directory name found in the arch/ directory.
But some architectures, such as x86 and sparc, have aliases:
x86: i386 for 32 bit, x86_64 for 64 bit
sparc: sparc for 32 bit, sparc64 for 64 bit
CROSS_COMPILE
--------------------------------------------------
Specify an optional fixed part of the binutils filename.
CROSS_COMPILE can be a part of the filename or the full path.
CROSS_COMPILE is also used for ccache in some setups.
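As an illustration (the toolchain prefix below is an example, not something
this text prescribes), ARCH and CROSS_COMPILE are typically combined on the
command line:
  make ARCH=arm CROSS_COMPILE=arm-none-linux-gnueabi- vmlinux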
CF
--------------------------------------------------
Additional options for sparse.
CF is often used on the command-line like this:
make CF=-Wbitwise C=2
INSTALL_PATH
--------------------------------------------------
INSTALL_PATH specifies where to place the updated kernel and system map
images. Default is /boot, but you can set it to other values.
MODLIB
--------------------------------------------------
Specify where to install modules.
The default value is:
$(INSTALL_MOD_PATH)/lib/modules/$(KERNELRELEASE)
The value can be overridden in which case the default value is ignored.
INSTALL_MOD_PATH
--------------------------------------------------
INSTALL_MOD_PATH specifies a prefix to MODLIB for module directory
relocations required by build roots. This is not defined in the
makefile but the argument can be passed to make if needed.
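For instance (a sketch; the staging directory is a placeholder), modules can
be installed into a build root instead of the running system with:
  make INSTALL_MOD_PATH=/tmp/target-rootfs modules_install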
INSTALL_MOD_STRIP
--------------------------------------------------
INSTALL_MOD_STRIP, if defined, will cause modules to be
stripped after they are installed. If INSTALL_MOD_STRIP is '1', then
the default option --strip-debug will be used. Otherwise,
INSTALL_MOD_STRIP will be used as the options to the strip command.
INSTALL_FW_PATH
--------------------------------------------------
INSTALL_FW_PATH specifies where to install the firmware blobs.
The default value is:
$(INSTALL_MOD_PATH)/lib/firmware
The value can be overridden in which case the default value is ignored.
INSTALL_HDR_PATH
--------------------------------------------------
INSTALL_HDR_PATH specifies where to install user space headers when
executing "make headers_*".
The default value is:
$(objtree)/usr
$(objtree) is the directory where output files are saved.
The output directory is often set using "O=..." on the commandline.
The value can be overridden in which case the default value is ignored.
KBUILD_MODPOST_WARN
--------------------------------------------------
KBUILD_MODPOST_WARN can be set to avoid erroring out in case of undefined
symbols in the final module linking stage.
KBUILD_MODPOST_NOFINAL
--------------------------------------------------
KBUILD_MODPOST_NOFINAL can be set to skip the final link of modules.
This is solely useful to speed up test compiles.
KBUILD_EXTRA_SYMBOLS
--------------------------------------------------
For modules that use symbols from other modules.
See more details in modules.txt.
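A hedged example (the path to the other module's Module.symvers file is a
placeholder): when an external module uses symbols exported by a second
external module, point modpost at that module's symbol file:
  make -C /lib/modules/$(uname -r)/build M=$PWD \
          KBUILD_EXTRA_SYMBOLS=/path/to/other/module/Module.symvers modules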


@ -0,0 +1,188 @@
This file contains some assistance for using "make *config".
Use "make help" to list all of the possible configuration targets.
The xconfig ('qconf') and menuconfig ('mconf') programs also
have embedded help text. Be sure to check it for navigation,
search, and other general help text.
======================================================================
General
--------------------------------------------------
New kernel releases often introduce new config symbols. Often more
important, new kernel releases may rename config symbols. When
this happens, using a previously working .config file and running
"make oldconfig" won't necessarily produce a working new kernel
for you, so you may find that you need to see what NEW kernel
symbols have been introduced.
To see a list of new config symbols when using "make oldconfig", use
cp user/some/old.config .config
yes "" | make oldconfig >conf.new
and the config program will list as (NEW) any new symbols that have
unknown values. Of course, the .config file is also updated with
new (default) values, so you can use:
grep "(NEW)" conf.new
to see the new config symbols or you can 'diff' the previous and
new .config files to see the differences:
diff .config.old .config | less
(Yes, we need something better here.)
======================================================================
menuconfig
--------------------------------------------------
SEARCHING for CONFIG symbols
Searching in menuconfig:
The Search function searches for kernel configuration symbol
names, so you have to know something close to what you are
looking for.
Example:
/hotplug
This lists all config symbols that contain "hotplug",
e.g., HOTPLUG, HOTPLUG_CPU, MEMORY_HOTPLUG.
For search help, enter / followed by TAB-TAB-TAB (to highlight
<Help>) and Enter. This will tell you that you can also use
regular expressions (regexes) in the search string, so if you
are not interested in MEMORY_HOTPLUG, you could try
/^hotplug
______________________________________________________________________
Color Themes for 'menuconfig'
It is possible to select different color themes using the variable
MENUCONFIG_COLOR. To select a theme use:
make MENUCONFIG_COLOR=<theme> menuconfig
Available themes are:
mono => selects colors suitable for monochrome displays
blackbg => selects a color scheme with black background
classic => theme with blue background. The classic look
bluetitle => an LCD-friendly version of classic (default)
______________________________________________________________________
Environment variables in 'menuconfig'
KCONFIG_ALLCONFIG
--------------------------------------------------
(partially based on lkml email from/by Rob Landley, re: miniconfig)
--------------------------------------------------
The allyesconfig/allmodconfig/allnoconfig/randconfig variants can
also use the environment variable KCONFIG_ALLCONFIG as a flag or a
filename that contains config symbols that the user requires to be
set to a specific value. If KCONFIG_ALLCONFIG is used without a
filename, "make *config" checks for a file named
"all{yes/mod/no/random}.config" (corresponding to the *config command
that was used) for symbol values that are to be forced. If this file
is not found, it checks for a file named "all.config" to contain forced
values.
This enables you to create "miniature" config (miniconfig) or custom
config files containing just the config symbols that you are interested
in. Then the kernel config system generates the full .config file,
including dependencies of your miniconfig file, based on the miniconfig
file.
This 'KCONFIG_ALLCONFIG' file is a config file which contains
(usually a subset of all) preset config symbols. These variable
settings are still subject to normal dependency checks.
Examples:
KCONFIG_ALLCONFIG=custom-notebook.config make allnoconfig
or
KCONFIG_ALLCONFIG=mini.config make allnoconfig
or
make KCONFIG_ALLCONFIG=mini.config allnoconfig
These examples will disable most options (allnoconfig) but enable or
disable the options that are explicitly listed in the specified
mini-config files.
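For instance, a minimal fragment could look like this (a sketch; the symbols
chosen are arbitrary examples, not ones this text requires):
  # contents of mini.config -- only the symbols you care about; everything
  # else is resolved by the normal Kconfig machinery
  CONFIG_SMP=y
  CONFIG_EXT3_FS=y
  # CONFIG_PCMCIA is not set
used as in the examples above, e.g.:
  KCONFIG_ALLCONFIG=mini.config make allnoconfig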
KCONFIG_NOSILENTUPDATE
--------------------------------------------------
If this variable has a non-blank value, it prevents silent kernel
config updates (requires explicit updates).
KCONFIG_CONFIG
--------------------------------------------------
This environment variable can be used to specify a default kernel config
file name to override the default name of ".config".
KCONFIG_OVERWRITECONFIG
--------------------------------------------------
If you set KCONFIG_OVERWRITECONFIG in the environment, Kconfig will not
break symlinks when .config is a symlink to somewhere else.
KCONFIG_NOTIMESTAMP
--------------------------------------------------
If this environment variable exists and is non-null, the timestamp line
in generated .config files is omitted.
KCONFIG_AUTOCONFIG
--------------------------------------------------
This environment variable can be set to specify the path & name of the
"auto.conf" file. Its default value is "include/config/auto.conf".
KCONFIG_AUTOHEADER
--------------------------------------------------
This environment variable can be set to specify the path & name of the
"autoconf.h" (header) file. Its default value is "include/linux/autoconf.h".
______________________________________________________________________
menuconfig User Interface Options
----------------------------------------------------------------------
MENUCONFIG_MODE
--------------------------------------------------
This mode shows all sub-menus in one large tree.
Example:
MENUCONFIG_MODE=single_menu make menuconfig
======================================================================
xconfig
--------------------------------------------------
Searching in xconfig:
The Search function searches for kernel configuration symbol
names, so you have to know something close to what you are
looking for.
Example:
Ctrl-F hotplug
or
Menu: File, Search, hotplug
lists all config symbol entries that contain "hotplug" in
the symbol name. In this Search dialog, you may change the
config setting for any of the entries that are not grayed out.
You can also enter a different search string without having
to return to the main menu.
======================================================================
gconfig
--------------------------------------------------
Searching in gconfig:
None (gconfig isn't maintained as well as xconfig or menuconfig);
however, gconfig does have a few more viewing choices than
xconfig does.
###


@ -80,12 +80,6 @@ case $1 in
start)
for dev in ${2:-$hdevs}
do
uwb_rc=$(readlink -f $dev/uwb_rc)
if cat $uwb_rc/beacon | grep -q -- "-1"
then
echo 13 0 > $uwb_rc/beacon
echo I: started beaconing on ch 13 on $(basename $uwb_rc) >&2
fi
echo $host_CHID > $dev/wusb_chid
echo I: started host $(basename $dev) >&2
done
@ -95,9 +89,6 @@ case $1 in
do
echo 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 > $dev/wusb_chid
echo I: stopped host $(basename $dev) >&2
uwb_rc=$(readlink -f $dev/uwb_rc)
echo -1 | cat > $uwb_rc/beacon
echo I: stopped beaconing on $(basename $uwb_rc) >&2
done
;;
set-chid)


@ -152,3 +152,4 @@
151 -> ADS Tech Instant HDTV [1421:0380]
152 -> Asus Tiger Rev:1.00 [1043:4857]
153 -> Kworld Plus TV Analog Lite PCI [17de:7128]
154 -> Avermedia AVerTV GO 007 FM Plus [1461:f31d]


@ -41,6 +41,7 @@ chips are known to work:
- 10c4:818a: Silicon Labs USB FM Radio Reference Design
- 06e1:a155: ADS/Tech FM Radio Receiver (formerly Instant FM Music) (RDX-155-EF)
- 1b80:d700: KWorld USB FM Radio SnapMusic Mobile 700 (FM700)
- 10c5:819a: DealExtreme USB Radio
Software


@ -184,7 +184,7 @@ may be NULL if the subdev driver does not support anything from that category.
It looks like this:
struct v4l2_subdev_core_ops {
int (*g_chip_ident)(struct v4l2_subdev *sd, struct v4l2_chip_ident *chip);
int (*g_chip_ident)(struct v4l2_subdev *sd, struct v4l2_dbg_chip_ident *chip);
int (*log_status)(struct v4l2_subdev *sd);
int (*init)(struct v4l2_subdev *sd, u32 val);
...
@ -390,16 +390,18 @@ allocated memory.
You should also set these fields:
- parent: set to the parent device (same device as was used to register
v4l2_device).
- v4l2_dev: set to the v4l2_device parent device.
- name: set to something descriptive and unique.
- fops: set to the file_operations struct.
- fops: set to the v4l2_file_operations struct.
- ioctl_ops: if you use the v4l2_ioctl_ops to simplify ioctl maintenance
(highly recommended to use this and it might become compulsory in the
future!), then set this to your v4l2_ioctl_ops struct.
If you use v4l2_ioctl_ops, then you should set .unlocked_ioctl to
__video_ioctl2 or .ioctl to video_ioctl2 in your file_operations struct.
If you use v4l2_ioctl_ops, then you should set either .unlocked_ioctl or
.ioctl to video_ioctl2 in your v4l2_file_operations struct.
The v4l2_file_operations struct is a subset of file_operations. The main
difference is that the inode argument is omitted since it is never used.
video_device registration
@ -410,7 +412,7 @@ for you.
err = video_register_device(vdev, VFL_TYPE_GRABBER, -1);
if (err) {
video_device_release(vdev); // or kfree(my_vdev);
video_device_release(vdev); /* or kfree(my_vdev); */
return err;
}
@ -516,5 +518,4 @@ void *video_drvdata(struct file *file);
You can go from a video_device struct to the v4l2_device struct using:
struct v4l2_device *v4l2_dev = dev_get_drvdata(vdev->parent);
struct v4l2_device *v4l2_dev = vdev->v4l2_dev;


@ -2049,6 +2049,12 @@ M: mikulas@artax.karlin.mff.cuni.cz
W: http://artax.karlin.mff.cuni.cz/~mikulas/vyplody/hpfs/index-e.cgi
S: Maintained
HSO 3G Modem Driver (hso.c)
P: Denis Joseph Barrow
M: d.barow@option.com
W: http://www.pharscape.org
S: Maintained
HTCPEN TOUCHSCREEN DRIVER
P: Pau Oliva Fora
M: pof@eslack.org
@ -2146,11 +2152,6 @@ M: Gadi Oxman <gadio@netvision.net.il>
L: linux-kernel@vger.kernel.org
S: Maintained
IDE-SCSI DRIVER
L: linux-ide@vger.kernel.org
L: linux-scsi@vger.kernel.org
S: Orphan
IDLE-I7300
P: Andy Henroid
M: andrew.d.henroid@intel.com
@ -2541,8 +2542,6 @@ W: http://kvm.qumranet.com
S: Supported
KERNEL VIRTUAL MACHINE For Itanium (KVM/IA64)
P: Anthony Xu
M: anthony.xu@intel.com
P: Xiantao Zhang
M: xiantao.zhang@intel.com
L: kvm-ia64@vger.kernel.org
@ -2641,13 +2640,13 @@ W: http://www.hansenpartnership.com/voyager
S: Maintained
LINUX FOR POWERPC (32-BIT AND 64-BIT)
P: Paul Mackerras
M: paulus@samba.org
P: Benjamin Herrenschmidt
M: benh@kernel.crashing.org
P: Paul Mackerras
M: paulus@samba.org
W: http://www.penguinppc.org/
L: linuxppc-dev@ozlabs.org
T: git kernel.org:/pub/scm/linux/kernel/git/paulus/powerpc.git
T: git kernel.org:/pub/scm/linux/kernel/git/benh/powerpc.git
S: Supported
LINUX FOR POWER MACINTOSH
@ -4022,10 +4021,12 @@ L: alsa-devel@alsa-project.org (subscribers-only)
W: http://alsa-project.org/main/index.php/ASoC
S: Supported
SPARC (sparc32)
P: William L. Irwin
M: wli@holomorphy.com
SPARC + UltraSPARC (sparc/sparc64)
P: David S. Miller
M: davem@davemloft.net
L: sparclinux@vger.kernel.org
T: git kernel.org:/pub/scm/linux/kernel/git/davem/sparc-2.6.git
T: git kernel.org:/pub/scm/linux/kernel/git/davem/sparc-next-2.6.git
S: Maintained
SPECIALIX IO8+ MULTIPORT SERIAL CARD DRIVER
@ -4309,13 +4310,6 @@ M: dushistov@mail.ru
L: linux-kernel@vger.kernel.org
S: Maintained
UltraSPARC (sparc64)
P: David S. Miller
M: davem@davemloft.net
L: sparclinux@vger.kernel.org
T: git kernel.org:/pub/scm/linux/kernel/git/davem/sparc-2.6.git
S: Maintained
ULTRA-WIDEBAND (UWB) SUBSYSTEM:
P: David Vrabel
M: david.vrabel@csr.com


@ -321,7 +321,8 @@ KALLSYMS = scripts/kallsyms
PERL = perl
CHECK = sparse
CHECKFLAGS := -D__linux__ -Dlinux -D__STDC__ -Dunix -D__unix__ -Wbitwise $(CF)
CHECKFLAGS := -D__linux__ -Dlinux -D__STDC__ -Dunix -D__unix__ \
-Wbitwise -Wno-return-void $(CF)
MODFLAGS = -DMODULE
CFLAGS_MODULE = $(MODFLAGS)
AFLAGS_MODULE = $(MODFLAGS)

README

@ -52,11 +52,11 @@ DOCUMENTATION:
- The Documentation/DocBook/ subdirectory contains several guides for
kernel developers and users. These guides can be rendered in a
number of formats: PostScript (.ps), PDF, and HTML, among others.
After installation, "make psdocs", "make pdfdocs", or "make htmldocs"
will render the documentation in the requested format.
number of formats: PostScript (.ps), PDF, HTML, & man-pages, among others.
After installation, "make psdocs", "make pdfdocs", "make htmldocs",
or "make mandocs" will render the documentation in the requested format.
INSTALLING the kernel:
INSTALLING the kernel source:
- If you install the full sources, put the kernel tarball in a
directory where you have permissions (eg. your home directory) and
@ -187,14 +187,9 @@ CONFIGURING the kernel:
"make randconfig" Create a ./.config file by setting symbol
values to random values.
The allyesconfig/allmodconfig/allnoconfig/randconfig variants can
also use the environment variable KCONFIG_ALLCONFIG to specify a
filename that contains config options that the user requires to be
set to a specific value. If KCONFIG_ALLCONFIG=filename is not used,
"make *config" checks for a file named "all{yes/mod/no/random}.config"
for symbol values that are to be forced. If this file is not found,
it checks for a file named "all.config" to contain forced values.
You can find more information on using the Linux kernel config tools
in Documentation/kbuild/make-configs.txt.
NOTES on "make config":
- having unnecessary drivers will make the kernel bigger, and can
under some circumstances lead to problems: probing for a
@ -231,6 +226,19 @@ COMPILING the kernel:
- If you configured any of the parts of the kernel as `modules', you
will also have to do "make modules_install".
- Verbose kernel compile/build output:
Normally the kernel build system runs in a fairly quiet mode (but not
totally silent). However, sometimes you or other kernel developers need
to see compile, link, or other commands exactly as they are executed.
For this, use "verbose" build mode. This is done by inserting
"V=1" in the "make" command. E.g.:
make V=1 all
To have the build system also tell the reason for the rebuild of each
target, use "V=2". The default is "V=0".
- Keep a backup kernel handy in case something goes wrong. This is
especially true for the development releases, since each new release
contains new code which has not been debugged. Make sure you keep a


@ -45,7 +45,6 @@ extern struct cpuinfo_alpha cpu_data[NR_CPUS];
#define raw_smp_processor_id() (current_thread_info()->cpu)
extern int smp_num_cpus;
#define cpu_possible_map cpu_present_map
extern void arch_send_call_function_single_ipi(int cpu);
extern void arch_send_call_function_ipi(cpumask_t mask);


@ -39,7 +39,24 @@ static inline cpumask_t node_to_cpumask(int node)
return node_cpu_mask;
}
extern struct cpumask node_to_cpumask_map[];
/* FIXME: This is dumb, recalculating every time. But simple. */
static const struct cpumask *cpumask_of_node(int node)
{
int cpu;
cpumask_clear(&node_to_cpumask_map[node]);
for_each_online_cpu(cpu) {
if (cpu_to_node(cpu) == node)
cpumask_set_cpu(cpu, node_to_cpumask_map[node]);
}
return &node_to_cpumask_map[node];
}
#define pcibus_to_cpumask(bus) (cpu_online_map)
#define cpumask_of_pcibus(bus) (cpu_online_mask)
#endif /* !CONFIG_NUMA */
# include <asm-generic/topology.h>


@ -8,7 +8,7 @@ EXTRA_CFLAGS := -Werror -Wno-sign-compare
obj-y := entry.o traps.o process.o init_task.o osf_sys.o irq.o \
irq_alpha.o signal.o setup.o ptrace.o time.o \
alpha_ksyms.o systbls.o err_common.o io.o
alpha_ksyms.o systbls.o err_common.o io.o binfmt_loader.o
obj-$(CONFIG_VGA_HOSE) += console.o
obj-$(CONFIG_SMP) += smp.o


@ -0,0 +1,51 @@
#include <linux/init.h>
#include <linux/fs.h>
#include <linux/file.h>
#include <linux/mm_types.h>
#include <linux/binfmts.h>
#include <linux/a.out.h>
static int load_binary(struct linux_binprm *bprm, struct pt_regs *regs)
{
struct exec *eh = (struct exec *)bprm->buf;
unsigned long loader;
struct file *file;
int retval;
if (eh->fh.f_magic != 0x183 || (eh->fh.f_flags & 0x3000) != 0x3000)
return -ENOEXEC;
if (bprm->loader)
return -ENOEXEC;
allow_write_access(bprm->file);
fput(bprm->file);
bprm->file = NULL;
loader = bprm->vma->vm_end - sizeof(void *);
file = open_exec("/sbin/loader");
retval = PTR_ERR(file);
if (IS_ERR(file))
return retval;
/* Remember if the application is TASO. */
bprm->taso = eh->ah.entry < 0x100000000UL;
bprm->file = file;
bprm->loader = loader;
retval = prepare_binprm(bprm);
if (retval < 0)
return retval;
return search_binary_handler(bprm,regs);
}
static struct linux_binfmt loader_format = {
.load_binary = load_binary,
};
static int __init init_loader_binfmt(void)
{
return register_binfmt(&loader_format);
}
arch_initcall(init_loader_binfmt);


@ -8,7 +8,6 @@
#include <asm/uaccess.h>
static struct fs_struct init_fs = INIT_FS;
static struct signal_struct init_signals = INIT_SIGNALS(init_signals);
static struct sighand_struct init_sighand = INIT_SIGHAND(init_sighand);
struct mm_struct init_mm = INIT_MM(init_mm);


@ -50,12 +50,13 @@ int irq_select_affinity(unsigned int irq)
if (!irq_desc[irq].chip->set_affinity || irq_user_affinity[irq])
return 1;
while (!cpu_possible(cpu) || !cpu_isset(cpu, irq_default_affinity))
while (!cpu_possible(cpu) ||
!cpumask_test_cpu(cpu, irq_default_affinity))
cpu = (cpu < (NR_CPUS-1) ? cpu + 1 : 0);
last_cpu = cpu;
irq_desc[irq].affinity = cpumask_of_cpu(cpu);
irq_desc[irq].chip->set_affinity(irq, cpumask_of_cpu(cpu));
irq_desc[irq].chip->set_affinity(irq, cpumask_of(cpu));
return 0;
}
#endif /* CONFIG_SMP */


@ -94,6 +94,7 @@ common_shutdown_1(void *generic_ptr)
flags |= 0x00040000UL; /* "remain halted" */
*pflags = flags;
cpu_clear(cpuid, cpu_present_map);
cpu_clear(cpuid, cpu_possible_map);
halt();
}
#endif
@ -120,6 +121,7 @@ common_shutdown_1(void *generic_ptr)
#ifdef CONFIG_SMP
/* Wait for the secondaries to halt. */
cpu_clear(boot_cpuid, cpu_present_map);
cpu_clear(boot_cpuid, cpu_possible_map);
while (cpus_weight(cpu_present_map))
barrier();
#endif


@ -79,6 +79,11 @@ int alpha_l3_cacheshape;
unsigned long alpha_verbose_mcheck = CONFIG_VERBOSE_MCHECK_ON;
#endif
#ifdef CONFIG_NUMA
struct cpumask node_to_cpumask_map[MAX_NUMNODES] __read_mostly;
EXPORT_SYMBOL(node_to_cpumask_map);
#endif
/* Which processor we booted from. */
int boot_cpuid;


@ -70,11 +70,6 @@ enum ipi_message_type {
/* Set to a secondary's cpuid when it comes online. */
static int smp_secondary_alive __devinitdata = 0;
/* Which cpus ids came online. */
cpumask_t cpu_online_map;
EXPORT_SYMBOL(cpu_online_map);
int smp_num_probed; /* Internal processor count */
int smp_num_cpus = 1; /* Number that came online. */
EXPORT_SYMBOL(smp_num_cpus);
@ -440,6 +435,7 @@ setup_smp(void)
((char *)cpubase + i*hwrpb->processor_size);
if ((cpu->flags & 0x1cc) == 0x1cc) {
smp_num_probed++;
cpu_set(i, cpu_possible_map);
cpu_set(i, cpu_present_map);
cpu->pal_revision = boot_cpu_palrev;
}
@ -473,6 +469,7 @@ smp_prepare_cpus(unsigned int max_cpus)
/* Nothing to do on a UP box, or when told not to. */
if (smp_num_probed == 1 || max_cpus == 0) {
cpu_possible_map = cpumask_of_cpu(boot_cpuid);
cpu_present_map = cpumask_of_cpu(boot_cpuid);
printk(KERN_INFO "SMP mode deactivated.\n");
return;


@ -177,19 +177,19 @@ cpu_set_irq_affinity(unsigned int irq, cpumask_t affinity)
}
static void
dp264_set_affinity(unsigned int irq, cpumask_t affinity)
dp264_set_affinity(unsigned int irq, const struct cpumask *affinity)
{
spin_lock(&dp264_irq_lock);
cpu_set_irq_affinity(irq, affinity);
cpu_set_irq_affinity(irq, *affinity);
tsunami_update_irq_hw(cached_irq_mask);
spin_unlock(&dp264_irq_lock);
}
static void
clipper_set_affinity(unsigned int irq, cpumask_t affinity)
clipper_set_affinity(unsigned int irq, const struct cpumask *affinity)
{
spin_lock(&dp264_irq_lock);
cpu_set_irq_affinity(irq - 16, affinity);
cpu_set_irq_affinity(irq - 16, *affinity);
tsunami_update_irq_hw(cached_irq_mask);
spin_unlock(&dp264_irq_lock);
}


@ -158,10 +158,10 @@ titan_cpu_set_irq_affinity(unsigned int irq, cpumask_t affinity)
}
static void
titan_set_irq_affinity(unsigned int irq, cpumask_t affinity)
titan_set_irq_affinity(unsigned int irq, const struct cpumask *affinity)
{
spin_lock(&titan_irq_lock);
titan_cpu_set_irq_affinity(irq - 16, affinity);
titan_cpu_set_irq_affinity(irq - 16, *affinity);
titan_update_irq_hw(titan_cached_irq_mask);
spin_unlock(&titan_irq_lock);
}


@ -109,11 +109,11 @@ static void gic_unmask_irq(unsigned int irq)
}
#ifdef CONFIG_SMP
static void gic_set_cpu(unsigned int irq, cpumask_t mask_val)
static void gic_set_cpu(unsigned int irq, const struct cpumask *mask_val)
{
void __iomem *reg = gic_dist_base(irq) + GIC_DIST_TARGET + (gic_irq(irq) & ~3);
unsigned int shift = (irq % 4) * 8;
unsigned int cpu = first_cpu(mask_val);
unsigned int cpu = cpumask_first(mask_val);
u32 val;
spin_lock(&irq_controller_lock);


@ -12,7 +12,6 @@
#include <asm/pgtable.h>
static struct fs_struct init_fs = INIT_FS;
static struct signal_struct init_signals = INIT_SIGNALS(init_signals);
static struct sighand_struct init_sighand = INIT_SIGHAND(init_sighand);
struct mm_struct init_mm = INIT_MM(init_mm);


@ -174,7 +174,7 @@ static void route_irq(struct irq_desc *desc, unsigned int irq, unsigned int cpu)
pr_debug("IRQ%u: moving from cpu%u to cpu%u\n", irq, desc->cpu, cpu);
spin_lock_irq(&desc->lock);
desc->chip->set_affinity(irq, cpumask_of_cpu(cpu));
desc->chip->set_affinity(irq, cpumask_of(cpu));
spin_unlock_irq(&desc->lock);
}


@ -33,16 +33,6 @@
#include <asm/tlbflush.h>
#include <asm/ptrace.h>
/*
* bitmask of present and online CPUs.
* The present bitmask indicates that the CPU is physically present.
* The online bitmask indicates that the CPU is up and running.
*/
cpumask_t cpu_possible_map;
EXPORT_SYMBOL(cpu_possible_map);
cpumask_t cpu_online_map;
EXPORT_SYMBOL(cpu_online_map);
/*
* as from 2.5, kernels no longer have an init_tasks structure
* so we need some other way of telling a new secondary core


@ -178,7 +178,6 @@ static struct clock_event_device clkevt = {
.features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT,
.shift = 32,
.rating = 150,
.cpumask = CPU_MASK_CPU0,
.set_next_event = clkevt32k_next_event,
.set_mode = clkevt32k_mode,
};
@ -206,7 +205,7 @@ void __init at91rm9200_timer_init(void)
clkevt.mult = div_sc(AT91_SLOW_CLOCK, NSEC_PER_SEC, clkevt.shift);
clkevt.max_delta_ns = clockevent_delta2ns(AT91_ST_ALMV, &clkevt);
clkevt.min_delta_ns = clockevent_delta2ns(2, &clkevt) + 1;
clkevt.cpumask = cpumask_of_cpu(0);
clkevt.cpumask = cpumask_of(0);
clockevents_register_device(&clkevt);
/* register clocksource */


@ -91,7 +91,6 @@ static struct clock_event_device pit_clkevt = {
.features = CLOCK_EVT_FEAT_PERIODIC,
.shift = 32,
.rating = 100,
.cpumask = CPU_MASK_CPU0,
.set_mode = pit_clkevt_mode,
};
@ -173,6 +172,7 @@ static void __init at91sam926x_pit_init(void)
/* Set up and register clockevents */
pit_clkevt.mult = div_sc(pit_rate, NSEC_PER_SEC, pit_clkevt.shift);
pit_clkevt.cpumask = cpumask_of(0);
clockevents_register_device(&pit_clkevt);
}


@ -322,7 +322,7 @@ static void __init davinci_timer_init(void)
clockevent_davinci.min_delta_ns =
clockevent_delta2ns(1, &clockevent_davinci);
clockevent_davinci.cpumask = cpumask_of_cpu(0);
clockevent_davinci.cpumask = cpumask_of(0);
clockevents_register_device(&clockevent_davinci);
}


@ -184,7 +184,7 @@ static int __init imx_clockevent_init(unsigned long rate)
clockevent_imx.min_delta_ns =
clockevent_delta2ns(0xf, &clockevent_imx);
clockevent_imx.cpumask = cpumask_of_cpu(0);
clockevent_imx.cpumask = cpumask_of(0);
clockevents_register_device(&clockevent_imx);


@ -487,7 +487,7 @@ static int __init ixp4xx_clockevent_init(void)
clockevent_delta2ns(0xfffffffe, &clockevent_ixp4xx);
clockevent_ixp4xx.min_delta_ns =
clockevent_delta2ns(0xf, &clockevent_ixp4xx);
clockevent_ixp4xx.cpumask = cpumask_of_cpu(0);
clockevent_ixp4xx.cpumask = cpumask_of(0);
clockevents_register_device(&clockevent_ixp4xx);
return 0;


@ -182,7 +182,7 @@ static void __init msm_timer_init(void)
clockevent_delta2ns(0xf0000000 >> clock->shift, ce);
/* 4 gets rounded down to 3 */
ce->min_delta_ns = clockevent_delta2ns(4, ce);
ce->cpumask = cpumask_of_cpu(0);
ce->cpumask = cpumask_of(0);
cs->mult = clocksource_hz2mult(clock->freq, cs->shift);
res = clocksource_register(cs);


@ -173,7 +173,7 @@ static void __init ns9360_timer_init(void)
ns9360_clockevent_device.min_delta_ns =
clockevent_delta2ns(1, &ns9360_clockevent_device);
ns9360_clockevent_device.cpumask = cpumask_of_cpu(0);
ns9360_clockevent_device.cpumask = cpumask_of(0);
clockevents_register_device(&ns9360_clockevent_device);
setup_irq(IRQ_NS9360_TIMER0 + TIMER_CLOCKEVENT,


@ -173,7 +173,7 @@ static __init void omap_init_mpu_timer(unsigned long rate)
clockevent_mpu_timer1.min_delta_ns =
clockevent_delta2ns(1, &clockevent_mpu_timer1);
clockevent_mpu_timer1.cpumask = cpumask_of_cpu(0);
clockevent_mpu_timer1.cpumask = cpumask_of(0);
clockevents_register_device(&clockevent_mpu_timer1);
}


@ -187,7 +187,7 @@ static __init void omap_init_32k_timer(void)
clockevent_32k_timer.min_delta_ns =
clockevent_delta2ns(1, &clockevent_32k_timer);
clockevent_32k_timer.cpumask = cpumask_of_cpu(0);
clockevent_32k_timer.cpumask = cpumask_of(0);
clockevents_register_device(&clockevent_32k_timer);
}


@ -2321,7 +2321,7 @@ static struct clk i2c2_fck = {
};
static struct clk i2chs2_fck = {
.name = "i2chs_fck",
.name = "i2c_fck",
.id = 2,
.parent = &func_96m_ck,
.flags = CLOCK_IN_OMAP243X,
@ -2354,7 +2354,7 @@ static struct clk i2c1_fck = {
};
static struct clk i2chs1_fck = {
.name = "i2chs_fck",
.name = "i2c_fck",
.id = 1,
.parent = &func_96m_ck,
.flags = CLOCK_IN_OMAP243X,


@ -120,7 +120,7 @@ static void __init omap2_gp_clockevent_init(void)
clockevent_gpt.min_delta_ns =
clockevent_delta2ns(1, &clockevent_gpt);
clockevent_gpt.cpumask = cpumask_of_cpu(0);
clockevent_gpt.cpumask = cpumask_of(0);
clockevents_register_device(&clockevent_gpt);
}


@ -122,7 +122,6 @@ static struct clock_event_device ckevt_pxa_osmr0 = {
.features = CLOCK_EVT_FEAT_ONESHOT,
.shift = 32,
.rating = 200,
.cpumask = CPU_MASK_CPU0,
.set_next_event = pxa_osmr0_set_next_event,
.set_mode = pxa_osmr0_set_mode,
};
@ -163,6 +162,7 @@ static void __init pxa_timer_init(void)
clockevent_delta2ns(0x7fffffff, &ckevt_pxa_osmr0);
ckevt_pxa_osmr0.min_delta_ns =
clockevent_delta2ns(MIN_OSCR_DELTA * 2, &ckevt_pxa_osmr0) + 1;
ckevt_pxa_osmr0.cpumask = cpumask_of(0);
cksrc_pxa_oscr0.mult =
clocksource_hz2mult(clock_tick_rate, cksrc_pxa_oscr0.shift);


@ -624,7 +624,7 @@ static struct clock_event_device timer0_clockevent = {
.set_mode = timer_set_mode,
.set_next_event = timer_set_next_event,
.rating = 300,
.cpumask = CPU_MASK_ALL,
.cpumask = cpu_all_mask,
};
static void __init realview_clockevents_init(unsigned int timer_irq)


@ -154,7 +154,7 @@ void __cpuinit local_timer_setup(void)
clk->set_mode = local_timer_set_mode;
clk->set_next_event = local_timer_set_next_event;
clk->irq = IRQ_LOCALTIMER;
clk->cpumask = cpumask_of_cpu(cpu);
clk->cpumask = cpumask_of(cpu);
clk->shift = 20;
clk->mult = div_sc(mpcore_timer_rate, NSEC_PER_SEC, clk->shift);
clk->max_delta_ns = clockevent_delta2ns(0xffffffff, clk);
@ -193,7 +193,7 @@ void __cpuinit local_timer_setup(void)
clk->rating = 200;
clk->set_mode = dummy_timer_set_mode;
clk->broadcast = smp_timer_broadcast;
clk->cpumask = cpumask_of_cpu(cpu);
clk->cpumask = cpumask_of(cpu);
clockevents_register_device(clk);
}


@ -73,7 +73,6 @@ static struct clock_event_device ckevt_sa1100_osmr0 = {
.features = CLOCK_EVT_FEAT_ONESHOT,
.shift = 32,
.rating = 200,
.cpumask = CPU_MASK_CPU0,
.set_next_event = sa1100_osmr0_set_next_event,
.set_mode = sa1100_osmr0_set_mode,
};
@ -110,6 +109,7 @@ static void __init sa1100_timer_init(void)
clockevent_delta2ns(0x7fffffff, &ckevt_sa1100_osmr0);
ckevt_sa1100_osmr0.min_delta_ns =
clockevent_delta2ns(MIN_OSCR_DELTA * 2, &ckevt_sa1100_osmr0) + 1;
ckevt_sa1100_osmr0.cpumask = cpumask_of(0);
cksrc_sa1100_oscr.mult =
clocksource_hz2mult(CLOCK_TICK_RATE, cksrc_sa1100_oscr.shift);


@ -1005,7 +1005,7 @@ static void __init versatile_timer_init(void)
timer0_clockevent.min_delta_ns =
clockevent_delta2ns(0xf, &timer0_clockevent);
timer0_clockevent.cpumask = cpumask_of_cpu(0);
timer0_clockevent.cpumask = cpumask_of(0);
clockevents_register_device(&timer0_clockevent);
}


@ -260,10 +260,10 @@ static void em_stop(void)
static void em_route_irq(int irq, unsigned int cpu)
{
struct irq_desc *desc = irq_desc + irq;
cpumask_t mask = cpumask_of_cpu(cpu);
const struct cpumask *mask = cpumask_of(cpu);
spin_lock_irq(&desc->lock);
desc->affinity = mask;
desc->affinity = *mask;
desc->chip->set_affinity(irq, mask);
spin_unlock_irq(&desc->lock);
}


@ -190,7 +190,7 @@ static int __init mxc_clockevent_init(void)
clockevent_mxc.min_delta_ns =
clockevent_delta2ns(0xff, &clockevent_mxc);
clockevent_mxc.cpumask = cpumask_of_cpu(0);
clockevent_mxc.cpumask = cpumask_of(0);
clockevents_register_device(&clockevent_mxc);


@ -149,7 +149,6 @@ static struct clock_event_device orion_clkevt = {
.features = CLOCK_EVT_FEAT_ONESHOT | CLOCK_EVT_FEAT_PERIODIC,
.shift = 32,
.rating = 300,
.cpumask = CPU_MASK_CPU0,
.set_next_event = orion_clkevt_next_event,
.set_mode = orion_clkevt_mode,
};
@ -199,5 +198,6 @@ void __init orion_time_init(unsigned int irq, unsigned int tclk)
orion_clkevt.mult = div_sc(tclk, NSEC_PER_SEC, orion_clkevt.shift);
orion_clkevt.max_delta_ns = clockevent_delta2ns(0xfffffffe, &orion_clkevt);
orion_clkevt.min_delta_ns = clockevent_delta2ns(1, &orion_clkevt);
orion_clkevt.cpumask = cpumask_of(0);
clockevents_register_device(&orion_clkevt);
}


@ -263,6 +263,11 @@ static inline int fls(unsigned long word)
return 32 - result;
}
static inline int __fls(unsigned long word)
{
return fls(word) - 1;
}
unsigned long find_first_zero_bit(const unsigned long *addr,
unsigned long size);
unsigned long find_next_zero_bit(const unsigned long *addr,


@ -13,7 +13,6 @@
#include <asm/pgtable.h>
static struct fs_struct init_fs = INIT_FS;
static struct signal_struct init_signals = INIT_SIGNALS(init_signals);
static struct sighand_struct init_sighand = INIT_SIGHAND(init_sighand);
struct mm_struct init_mm = INIT_MM(init_mm);


@ -106,7 +106,6 @@ static struct clock_event_device comparator = {
.features = CLOCK_EVT_FEAT_ONESHOT,
.shift = 16,
.rating = 50,
.cpumask = CPU_MASK_CPU0,
.set_next_event = comparator_next_event,
.set_mode = comparator_mode,
};
@ -134,6 +133,7 @@ void __init time_init(void)
comparator.mult = div_sc(counter_hz, NSEC_PER_SEC, comparator.shift);
comparator.max_delta_ns = clockevent_delta2ns((u32)~0, &comparator);
comparator.min_delta_ns = clockevent_delta2ns(50, &comparator) + 1;
comparator.cpumask = cpumask_of(0);
sysreg_write(COMPARE, 0);
timer_irqaction.dev_id = &comparator;


@ -213,6 +213,7 @@ static __inline__ int __test_bit(int nr, const void *addr)
#endif /* __KERNEL__ */
#include <asm-generic/bitops/fls.h>
#include <asm-generic/bitops/__fls.h>
#include <asm-generic/bitops/fls64.h>
#endif /* _BLACKFIN_BITOPS_H */


@ -33,7 +33,6 @@
#include <linux/mqueue.h>
#include <linux/fs.h>
static struct fs_struct init_fs = INIT_FS;
static struct signal_struct init_signals = INIT_SIGNALS(init_signals);
static struct sighand_struct init_sighand = INIT_SIGHAND(init_sighand);


@ -162,7 +162,6 @@ static struct clock_event_device clockevent_bfin = {
.name = "bfin_core_timer",
.features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT,
.shift = 32,
.cpumask = CPU_MASK_CPU0,
.set_next_event = bfin_timer_set_next_event,
.set_mode = bfin_timer_set_mode,
};
@ -193,6 +192,7 @@ static int __init bfin_clockevent_init(void)
clockevent_bfin.mult = div_sc(timer_clk, NSEC_PER_SEC, clockevent_bfin.shift);
clockevent_bfin.max_delta_ns = clockevent_delta2ns(-1, &clockevent_bfin);
clockevent_bfin.min_delta_ns = clockevent_delta2ns(100, &clockevent_bfin);
clockevent_bfin.cpumask = cpumask_of(0);
clockevents_register_device(&clockevent_bfin);
return 0;


@ -325,11 +325,11 @@ static void end_crisv32_irq(unsigned int irq)
{
}
void set_affinity_crisv32_irq(unsigned int irq, cpumask_t dest)
void set_affinity_crisv32_irq(unsigned int irq, const struct cpumask *dest)
{
unsigned long flags;
spin_lock_irqsave(&irq_lock, flags);
irq_allocations[irq - FIRST_IRQ].mask = dest;
irq_allocations[irq - FIRST_IRQ].mask = *dest;
spin_unlock_irqrestore(&irq_lock, flags);
}


@ -29,11 +29,7 @@
spinlock_t cris_atomic_locks[] = { [0 ... LOCK_COUNT - 1] = SPIN_LOCK_UNLOCKED};
/* CPU masks */
cpumask_t cpu_online_map = CPU_MASK_NONE;
EXPORT_SYMBOL(cpu_online_map);
cpumask_t phys_cpu_present_map = CPU_MASK_NONE;
cpumask_t cpu_possible_map;
EXPORT_SYMBOL(cpu_possible_map);
EXPORT_SYMBOL(phys_cpu_present_map);
/* Variables used during SMP boot */


@ -148,6 +148,7 @@ static inline int test_and_change_bit(int nr, volatile unsigned long *addr)
#define ffs kernel_ffs
#include <asm-generic/bitops/fls.h>
#include <asm-generic/bitops/__fls.h>
#include <asm-generic/bitops/fls64.h>
#include <asm-generic/bitops/hweight.h>
#include <asm-generic/bitops/find.h>


@ -4,7 +4,6 @@
#include <linux/cpumask.h>
extern cpumask_t phys_cpu_present_map;
extern cpumask_t cpu_possible_map;
#define raw_smp_processor_id() (current_thread_info()->cpu)


@ -37,7 +37,6 @@
* setup.
*/
static struct fs_struct init_fs = INIT_FS;
static struct signal_struct init_signals = INIT_SIGNALS(init_signals);
static struct sighand_struct init_sighand = INIT_SIGHAND(init_sighand);
struct mm_struct init_mm = INIT_MM(init_mm);


@ -10,7 +10,6 @@
#include <asm/pgtable.h>
static struct fs_struct init_fs = INIT_FS;
static struct signal_struct init_signals = INIT_SIGNALS(init_signals);
static struct sighand_struct init_sighand = INIT_SIGHAND(init_sighand);
struct mm_struct init_mm = INIT_MM(init_mm);


@ -207,6 +207,7 @@ static __inline__ unsigned long __ffs(unsigned long word)
#endif /* __KERNEL__ */
#include <asm-generic/bitops/fls.h>
#include <asm-generic/bitops/__fls.h>
#include <asm-generic/bitops/fls64.h>
#endif /* _H8300_BITOPS_H */


@ -12,7 +12,6 @@
#include <asm/uaccess.h>
#include <asm/pgtable.h>
static struct fs_struct init_fs = INIT_FS;
static struct signal_struct init_signals = INIT_SIGNALS(init_signals);
static struct sighand_struct init_sighand = INIT_SIGHAND(init_sighand);
struct mm_struct init_mm = INIT_MM(init_mm);


@ -687,3 +687,6 @@ config IRQ_PER_CPU
config IOMMU_HELPER
def_bool (IA64_HP_ZX1 || IA64_HP_ZX1_SWIOTLB || IA64_GENERIC || SWIOTLB)
config IOMMU_API
def_bool (DMAR)


@ -22,7 +22,7 @@ hpsim_irq_noop (unsigned int irq)
}
static void
hpsim_set_affinity_noop (unsigned int a, cpumask_t b)
hpsim_set_affinity_noop(unsigned int a, const struct cpumask *b)
{
}


@ -27,7 +27,7 @@ irq_canonicalize (int irq)
}
extern void set_irq_affinity_info (unsigned int irq, int dest, int redir);
bool is_affinity_mask_valid(cpumask_t cpumask);
bool is_affinity_mask_valid(cpumask_var_t cpumask);
#define is_affinity_mask_valid is_affinity_mask_valid


@ -166,8 +166,6 @@ struct saved_vpd {
};
struct kvm_regs {
char *saved_guest;
char *saved_stack;
struct saved_vpd vpd;
/*Arch-regs*/
int mp_state;
@ -200,6 +198,10 @@ struct kvm_regs {
unsigned long fp_psr; /*used for lazy float register */
unsigned long saved_gp;
/*for phycial emulation */
union context saved_guest;
unsigned long reserved[64]; /* for future use */
};
struct kvm_sregs {


@ -23,17 +23,6 @@
#ifndef __ASM_KVM_HOST_H
#define __ASM_KVM_HOST_H
#include <linux/types.h>
#include <linux/mm.h>
#include <linux/kvm.h>
#include <linux/kvm_para.h>
#include <linux/kvm_types.h>
#include <asm/pal.h>
#include <asm/sal.h>
#define KVM_MAX_VCPUS 4
#define KVM_MEMORY_SLOTS 32
/* memory slots that does not exposed to userspace */
#define KVM_PRIVATE_MEM_SLOTS 4
@ -50,70 +39,132 @@
#define EXIT_REASON_EXTERNAL_INTERRUPT 6
#define EXIT_REASON_IPI 7
#define EXIT_REASON_PTC_G 8
#define EXIT_REASON_DEBUG 20
/*Define vmm address space and vm data space.*/
#define KVM_VMM_SIZE (16UL<<20)
#define KVM_VMM_SIZE (__IA64_UL_CONST(16)<<20)
#define KVM_VMM_SHIFT 24
#define KVM_VMM_BASE 0xD000000000000000UL
#define VMM_SIZE (8UL<<20)
#define KVM_VMM_BASE 0xD000000000000000
#define VMM_SIZE (__IA64_UL_CONST(8)<<20)
/*
* Define vm_buffer, used by PAL Services, base address.
* Note: vmbuffer is in the VMM-BLOCK, the size must be < 8M
* Note: vm_buffer is in the VMM-BLOCK, the size must be < 8M
*/
#define KVM_VM_BUFFER_BASE (KVM_VMM_BASE + VMM_SIZE)
#define KVM_VM_BUFFER_SIZE (8UL<<20)
#define KVM_VM_BUFFER_SIZE (__IA64_UL_CONST(8)<<20)
/*Define Virtual machine data layout.*/
#define KVM_VM_DATA_SHIFT 24
#define KVM_VM_DATA_SIZE (1UL << KVM_VM_DATA_SHIFT)
#define KVM_VM_DATA_BASE (KVM_VMM_BASE + KVM_VMM_SIZE)
/*
* kvm guest's data area looks as follow:
*
* +----------------------+ ------- KVM_VM_DATA_SIZE
* | vcpu[n]'s data | | ___________________KVM_STK_OFFSET
* | | | / |
* | .......... | | /vcpu's struct&stack |
* | .......... | | /---------------------|---- 0
* | vcpu[5]'s data | | / vpd |
* | vcpu[4]'s data | |/-----------------------|
* | vcpu[3]'s data | / vtlb |
* | vcpu[2]'s data | /|------------------------|
* | vcpu[1]'s data |/ | vhpt |
* | vcpu[0]'s data |____________________________|
* +----------------------+ |
* | memory dirty log | |
* +----------------------+ |
* | vm's data struct | |
* +----------------------+ |
* | | |
* | | |
* | | |
* | | |
* | | |
* | | |
* | | |
* | vm's p2m table | |
* | | |
* | | |
* | | | |
* vm's data->| | | |
* +----------------------+ ------- 0
 * To support large memory, increase the size of the p2m table.
 * To support more vcpus, ensure there is enough space to hold the
 * vcpus' data.
*/
#define KVM_VM_DATA_SHIFT 26
#define KVM_VM_DATA_SIZE (__IA64_UL_CONST(1) << KVM_VM_DATA_SHIFT)
#define KVM_VM_DATA_BASE (KVM_VMM_BASE + KVM_VM_DATA_SIZE)
#define KVM_P2M_BASE KVM_VM_DATA_BASE
#define KVM_P2M_OFS 0
#define KVM_P2M_SIZE (8UL << 20)
#define KVM_P2M_BASE KVM_VM_DATA_BASE
#define KVM_P2M_SIZE (__IA64_UL_CONST(24) << 20)
#define KVM_VHPT_BASE (KVM_P2M_BASE + KVM_P2M_SIZE)
#define KVM_VHPT_OFS KVM_P2M_SIZE
#define KVM_VHPT_BLOCK_SIZE (2UL << 20)
#define VHPT_SHIFT 18
#define VHPT_SIZE (1UL << VHPT_SHIFT)
#define VHPT_NUM_ENTRIES (1<<(VHPT_SHIFT-5))
#define VHPT_SHIFT 16
#define VHPT_SIZE (__IA64_UL_CONST(1) << VHPT_SHIFT)
#define VHPT_NUM_ENTRIES (__IA64_UL_CONST(1) << (VHPT_SHIFT-5))
#define KVM_VTLB_BASE (KVM_VHPT_BASE+KVM_VHPT_BLOCK_SIZE)
#define KVM_VTLB_OFS (KVM_VHPT_OFS+KVM_VHPT_BLOCK_SIZE)
#define KVM_VTLB_BLOCK_SIZE (1UL<<20)
#define VTLB_SHIFT 17
#define VTLB_SIZE (1UL<<VTLB_SHIFT)
#define VTLB_NUM_ENTRIES (1<<(VTLB_SHIFT-5))
#define VTLB_SHIFT 16
#define VTLB_SIZE (__IA64_UL_CONST(1) << VTLB_SHIFT)
#define VTLB_NUM_ENTRIES (1UL << (VHPT_SHIFT-5))
#define KVM_VPD_BASE (KVM_VTLB_BASE+KVM_VTLB_BLOCK_SIZE)
#define KVM_VPD_OFS (KVM_VTLB_OFS+KVM_VTLB_BLOCK_SIZE)
#define KVM_VPD_BLOCK_SIZE (2UL<<20)
#define VPD_SHIFT 16
#define VPD_SIZE (1UL<<VPD_SHIFT)
#define VPD_SHIFT 16
#define VPD_SIZE (__IA64_UL_CONST(1) << VPD_SHIFT)
#define KVM_VCPU_BASE (KVM_VPD_BASE+KVM_VPD_BLOCK_SIZE)
#define KVM_VCPU_OFS (KVM_VPD_OFS+KVM_VPD_BLOCK_SIZE)
#define KVM_VCPU_BLOCK_SIZE (2UL<<20)
#define VCPU_SHIFT 18
#define VCPU_SIZE (1UL<<VCPU_SHIFT)
#define MAX_VCPU_NUM KVM_VCPU_BLOCK_SIZE/VCPU_SIZE
#define VCPU_STRUCT_SHIFT 16
#define VCPU_STRUCT_SIZE (__IA64_UL_CONST(1) << VCPU_STRUCT_SHIFT)
#define KVM_VM_BASE (KVM_VCPU_BASE+KVM_VCPU_BLOCK_SIZE)
#define KVM_VM_OFS (KVM_VCPU_OFS+KVM_VCPU_BLOCK_SIZE)
#define KVM_VM_BLOCK_SIZE (1UL<<19)
#define KVM_STK_OFFSET VCPU_STRUCT_SIZE
#define KVM_MEM_DIRTY_LOG_BASE (KVM_VM_BASE+KVM_VM_BLOCK_SIZE)
#define KVM_MEM_DIRTY_LOG_OFS (KVM_VM_OFS+KVM_VM_BLOCK_SIZE)
#define KVM_MEM_DIRTY_LOG_SIZE (1UL<<19)
#define KVM_VM_STRUCT_SHIFT 19
#define KVM_VM_STRUCT_SIZE (__IA64_UL_CONST(1) << KVM_VM_STRUCT_SHIFT)
/* Get vpd, vhpt, tlb, vcpu, base*/
#define VPD_ADDR(n) (KVM_VPD_BASE+n*VPD_SIZE)
#define VHPT_ADDR(n) (KVM_VHPT_BASE+n*VHPT_SIZE)
#define VTLB_ADDR(n) (KVM_VTLB_BASE+n*VTLB_SIZE)
#define VCPU_ADDR(n) (KVM_VCPU_BASE+n*VCPU_SIZE)
#define KVM_MEM_DIRY_LOG_SHIFT 19
#define KVM_MEM_DIRTY_LOG_SIZE (__IA64_UL_CONST(1) << KVM_MEM_DIRY_LOG_SHIFT)
#ifndef __ASSEMBLY__
/*Define the max vcpus and memory for Guests.*/
#define KVM_MAX_VCPUS (KVM_VM_DATA_SIZE - KVM_P2M_SIZE - KVM_VM_STRUCT_SIZE -\
KVM_MEM_DIRTY_LOG_SIZE) / sizeof(struct kvm_vcpu_data)
#define KVM_MAX_MEM_SIZE (KVM_P2M_SIZE >> 3 << PAGE_SHIFT)
#define VMM_LOG_LEN 256
#include <linux/types.h>
#include <linux/mm.h>
#include <linux/kvm.h>
#include <linux/kvm_para.h>
#include <linux/kvm_types.h>
#include <asm/pal.h>
#include <asm/sal.h>
#include <asm/page.h>
struct kvm_vcpu_data {
char vcpu_vhpt[VHPT_SIZE];
char vcpu_vtlb[VTLB_SIZE];
char vcpu_vpd[VPD_SIZE];
char vcpu_struct[VCPU_STRUCT_SIZE];
};
struct kvm_vm_data {
char kvm_p2m[KVM_P2M_SIZE];
char kvm_vm_struct[KVM_VM_STRUCT_SIZE];
char kvm_mem_dirty_log[KVM_MEM_DIRTY_LOG_SIZE];
struct kvm_vcpu_data vcpu_data[KVM_MAX_VCPUS];
};
#define VCPU_BASE(n) KVM_VM_DATA_BASE + \
offsetof(struct kvm_vm_data, vcpu_data[n])
#define VM_BASE KVM_VM_DATA_BASE + \
offsetof(struct kvm_vm_data, kvm_vm_struct)
#define KVM_MEM_DIRTY_LOG_BASE KVM_VM_DATA_BASE + \
offsetof(struct kvm_vm_data, kvm_mem_dirty_log)
#define VHPT_BASE(n) (VCPU_BASE(n) + offsetof(struct kvm_vcpu_data, vcpu_vhpt))
#define VTLB_BASE(n) (VCPU_BASE(n) + offsetof(struct kvm_vcpu_data, vcpu_vtlb))
#define VPD_BASE(n) (VCPU_BASE(n) + offsetof(struct kvm_vcpu_data, vcpu_vpd))
#define VCPU_STRUCT_BASE(n) (VCPU_BASE(n) + \
offsetof(struct kvm_vcpu_data, vcpu_struct))
/*IO section definitions*/
#define IOREQ_READ 1
@ -389,6 +440,7 @@ struct kvm_vcpu_arch {
unsigned long opcode;
unsigned long cause;
char log_buf[VMM_LOG_LEN];
union context host;
union context guest;
};
@ -403,20 +455,19 @@ struct kvm_sal_data {
};
struct kvm_arch {
spinlock_t dirty_log_lock;
unsigned long vm_base;
unsigned long metaphysical_rr0;
unsigned long metaphysical_rr4;
unsigned long vmm_init_rr;
unsigned long vhpt_base;
unsigned long vtlb_base;
unsigned long vpd_base;
spinlock_t dirty_log_lock;
struct kvm_ioapic *vioapic;
struct kvm_vm_stat stat;
struct kvm_sal_data rdv_sal_data;
struct list_head assigned_dev_head;
struct dmar_domain *intel_iommu_domain;
struct iommu_domain *iommu_domain;
struct hlist_head irq_ack_notifier_list;
unsigned long irq_sources_bitmap;
@ -512,7 +563,7 @@ struct kvm_pt_regs {
static inline struct kvm_pt_regs *vcpu_regs(struct kvm_vcpu *v)
{
return (struct kvm_pt_regs *) ((unsigned long) v + IA64_STK_OFFSET) - 1;
return (struct kvm_pt_regs *) ((unsigned long) v + KVM_STK_OFFSET) - 1;
}
typedef int kvm_vmm_entry(void);
@ -531,5 +582,6 @@ int kvm_pal_emul(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run);
void kvm_sal_emul(struct kvm_vcpu *vcpu);
static inline void kvm_inject_nmi(struct kvm_vcpu *vcpu) {}
#endif /* __ASSEMBLY__*/
#endif
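
The new layout replaces the old per-region base/offset macros (KVM_VHPT_OFS, KVM_VTLB_OFS, ...) with two plain structures, kvm_vcpu_data and kvm_vm_data, and derives every address as KVM_VM_DATA_BASE plus an offsetof() into struct kvm_vm_data. Below is a minimal user-space sketch of that addressing scheme; the demo_* names, the fixed vcpu count of 4, and the LP64 assumption are illustrative stand-ins, not code from this diff.

#include <stddef.h>
#include <stdio.h>

/* Simplified stand-ins for the kvm_vcpu_data / kvm_vm_data members above. */
struct demo_vcpu_data {
	char vcpu_vhpt[1 << 16];	/* VHPT_SIZE */
	char vcpu_vtlb[1 << 16];	/* VTLB_SIZE */
	char vcpu_vpd[1 << 16];		/* VPD_SIZE */
	char vcpu_struct[1 << 16];	/* VCPU_STRUCT_SIZE */
};

struct demo_vm_data {
	char kvm_p2m[24 << 20];		/* KVM_P2M_SIZE */
	char kvm_vm_struct[1 << 19];	/* KVM_VM_STRUCT_SIZE */
	char kvm_mem_dirty_log[1 << 19];	/* KVM_MEM_DIRTY_LOG_SIZE */
	struct demo_vcpu_data vcpu_data[4];
};

int main(void)
{
	/* KVM_VM_DATA_BASE = KVM_VMM_BASE + KVM_VM_DATA_SIZE per the header above. */
	unsigned long base = 0xD000000004000000UL;

	for (int n = 0; n < 4; n++) {
		unsigned long vcpu_base = base +
			offsetof(struct demo_vm_data, vcpu_data) +
			(unsigned long)n * sizeof(struct demo_vcpu_data);
		unsigned long vhpt_base = vcpu_base +
			offsetof(struct demo_vcpu_data, vcpu_vhpt);

		printf("vcpu %d: data %#lx, vhpt %#lx\n", n, vcpu_base, vhpt_base);
	}
	return 0;
}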


@ -57,7 +57,6 @@ extern struct smp_boot_data {
extern char no_int_routing __devinitdata;
extern cpumask_t cpu_online_map;
extern cpumask_t cpu_core_map[NR_CPUS];
DECLARE_PER_CPU(cpumask_t, cpu_sibling_map);
extern int smp_num_siblings;


@ -34,6 +34,7 @@
* Returns a bitmask of CPUs on Node 'node'.
*/
#define node_to_cpumask(node) (node_to_cpu_mask[node])
#define cpumask_of_node(node) (&node_to_cpu_mask[node])
/*
* Returns the number of the node containing Node 'nid'.
@ -45,7 +46,7 @@
/*
* Returns the number of the first CPU on Node 'node'.
*/
#define node_to_first_cpu(node) (first_cpu(node_to_cpumask(node)))
#define node_to_first_cpu(node) (cpumask_first(cpumask_of_node(node)))
/*
* Determines the node for a given pci bus
@ -55,7 +56,6 @@
void build_cpu_to_node_map(void);
#define SD_CPU_INIT (struct sched_domain) { \
.span = CPU_MASK_NONE, \
.parent = NULL, \
.child = NULL, \
.groups = NULL, \
@ -80,7 +80,6 @@ void build_cpu_to_node_map(void);
/* sched_domains SD_NODE_INIT for IA64 NUMA machines */
#define SD_NODE_INIT (struct sched_domain) { \
.span = CPU_MASK_NONE, \
.parent = NULL, \
.child = NULL, \
.groups = NULL, \
@ -111,6 +110,8 @@ void build_cpu_to_node_map(void);
#define topology_core_id(cpu) (cpu_data(cpu)->core_id)
#define topology_core_siblings(cpu) (cpu_core_map[cpu])
#define topology_thread_siblings(cpu) (per_cpu(cpu_sibling_map, cpu))
#define topology_core_cpumask(cpu) (&cpu_core_map[cpu])
#define topology_thread_cpumask(cpu) (&per_cpu(cpu_sibling_map, cpu))
#define smt_capable() (smp_num_siblings > 1)
#endif
@ -121,6 +122,10 @@ extern void arch_fix_phys_package_id(int num, u32 slot);
node_to_cpumask(pcibus_to_node(bus)) \
)
#define cpumask_of_pcibus(bus) (pcibus_to_node(bus) == -1 ? \
cpu_all_mask : \
cpumask_from_node(pcibus_to_node(bus)))
#include <asm-generic/topology.h>
#endif /* _ASM_IA64_TOPOLOGY_H */
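
The pointer-returning helpers added here (cpumask_of_node(), topology_core_cpumask(), topology_thread_cpumask(), cpumask_of_pcibus()) let callers work on the node's mask in place instead of copying a cpumask_t, which is also how node_to_first_cpu() is now expressed. A hedged sketch of typical use; demo_node_first_online_cpu is a hypothetical helper, not part of this diff.

#include <linux/topology.h>
#include <linux/cpumask.h>

/* Returns the first online CPU on 'node', or nr_cpu_ids if the node has none. */
static int demo_node_first_online_cpu(int node)
{
	return cpumask_first_and(cpu_online_mask, cpumask_of_node(node));
}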


@ -202,7 +202,6 @@ char *__init __acpi_map_table(unsigned long phys_addr, unsigned long size)
Boot-time Table Parsing
-------------------------------------------------------------------------- */
static int total_cpus __initdata;
static int available_cpus __initdata;
struct acpi_table_madt *acpi_madt __initdata;
static u8 has_8259;
@ -1001,7 +1000,7 @@ acpi_map_iosapic(acpi_handle handle, u32 depth, void *context, void **ret)
node = pxm_to_node(pxm);
if (node >= MAX_NUMNODES || !node_online(node) ||
cpus_empty(node_to_cpumask(node)))
cpumask_empty(cpumask_of_node(node)))
return AE_OK;
/* We know a gsi to node mapping! */


@ -17,7 +17,6 @@
#include <asm/uaccess.h>
#include <asm/pgtable.h>
static struct fs_struct init_fs = INIT_FS;
static struct signal_struct init_signals = INIT_SIGNALS(init_signals);
static struct sighand_struct init_sighand = INIT_SIGHAND(init_sighand);
struct mm_struct init_mm = INIT_MM(init_mm);


@ -330,25 +330,25 @@ unmask_irq (unsigned int irq)
static void
iosapic_set_affinity (unsigned int irq, cpumask_t mask)
iosapic_set_affinity(unsigned int irq, const struct cpumask *mask)
{
#ifdef CONFIG_SMP
u32 high32, low32;
int dest, rte_index;
int cpu, dest, rte_index;
int redir = (irq & IA64_IRQ_REDIRECTED) ? 1 : 0;
struct iosapic_rte_info *rte;
struct iosapic *iosapic;
irq &= (~IA64_IRQ_REDIRECTED);
cpus_and(mask, mask, cpu_online_map);
if (cpus_empty(mask))
cpu = cpumask_first_and(cpu_online_mask, mask);
if (cpu >= nr_cpu_ids)
return;
if (irq_prepare_move(irq, first_cpu(mask)))
if (irq_prepare_move(irq, cpu))
return;
dest = cpu_physical_id(first_cpu(mask));
dest = cpu_physical_id(cpu);
if (!iosapic_intr_info[irq].count)
return; /* not an IOSAPIC interrupt */
@ -695,32 +695,31 @@ get_target_cpu (unsigned int gsi, int irq)
#ifdef CONFIG_NUMA
{
int num_cpus, cpu_index, iosapic_index, numa_cpu, i = 0;
cpumask_t cpu_mask;
const struct cpumask *cpu_mask;
iosapic_index = find_iosapic(gsi);
if (iosapic_index < 0 ||
iosapic_lists[iosapic_index].node == MAX_NUMNODES)
goto skip_numa_setup;
cpu_mask = node_to_cpumask(iosapic_lists[iosapic_index].node);
cpus_and(cpu_mask, cpu_mask, domain);
for_each_cpu_mask(numa_cpu, cpu_mask) {
if (!cpu_online(numa_cpu))
cpu_clear(numa_cpu, cpu_mask);
cpu_mask = cpumask_of_node(iosapic_lists[iosapic_index].node);
num_cpus = 0;
for_each_cpu_and(numa_cpu, cpu_mask, &domain) {
if (cpu_online(numa_cpu))
num_cpus++;
}
num_cpus = cpus_weight(cpu_mask);
if (!num_cpus)
goto skip_numa_setup;
/* Use irq assignment to distribute across cpus in node */
cpu_index = irq % num_cpus;
for (numa_cpu = first_cpu(cpu_mask) ; i < cpu_index ; i++)
numa_cpu = next_cpu(numa_cpu, cpu_mask);
for_each_cpu_and(numa_cpu, cpu_mask, &domain)
if (cpu_online(numa_cpu) && i++ >= cpu_index)
break;
if (numa_cpu != NR_CPUS)
if (numa_cpu < nr_cpu_ids)
return cpu_physical_id(numa_cpu);
}
skip_numa_setup:
@ -731,7 +730,7 @@ skip_numa_setup:
* case of NUMA.)
*/
do {
if (++cpu >= NR_CPUS)
if (++cpu >= nr_cpu_ids)
cpu = 0;
} while (!cpu_online(cpu) || !cpu_isset(cpu, domain));
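
The conversion above is the pattern repeated throughout this merge for the other affinity handlers (sn_set_affinity_irq, ia64_set_msi_irq_affinity, dmar_msi_set_affinity, sn_set_msi_irq_affinity): ->set_affinity() now receives const struct cpumask * instead of a cpumask_t by value, so the mask is treated as read-only and the target CPU is chosen with cpumask_first_and() rather than by and-ing into a stack copy. A minimal sketch of the new-style handler; example_set_affinity is a made-up name, not a function from this diff.

#include <linux/cpumask.h>

static void example_set_affinity(unsigned int irq, const struct cpumask *mask)
{
	int cpu = cpumask_first_and(cpu_online_mask, mask);

	if (cpu >= nr_cpu_ids)		/* no online CPU in the requested mask */
		return;

	/* ... program the interrupt destination for 'cpu', as before ... */
}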


@ -112,11 +112,11 @@ void set_irq_affinity_info (unsigned int irq, int hwid, int redir)
}
}
bool is_affinity_mask_valid(cpumask_t cpumask)
bool is_affinity_mask_valid(cpumask_var_t cpumask)
{
if (ia64_platform_is("sn2")) {
/* Only allow one CPU to be specified in the smp_affinity mask */
if (cpus_weight(cpumask) != 1)
if (cpumask_weight(cpumask) != 1)
return false;
}
return true;
@ -133,7 +133,6 @@ unsigned int vectors_in_migration[NR_IRQS];
*/
static void migrate_irqs(void)
{
cpumask_t mask;
irq_desc_t *desc;
int irq, new_cpu;
@ -152,15 +151,14 @@ static void migrate_irqs(void)
if (desc->status == IRQ_PER_CPU)
continue;
cpus_and(mask, irq_desc[irq].affinity, cpu_online_map);
if (any_online_cpu(mask) == NR_CPUS) {
if (cpumask_any_and(&irq_desc[irq].affinity, cpu_online_mask)
>= nr_cpu_ids) {
/*
* Save it for phase 2 processing
*/
vectors_in_migration[irq] = irq;
new_cpu = any_online_cpu(cpu_online_map);
mask = cpumask_of_cpu(new_cpu);
/*
* All three are essential, currently WARN_ON.. maybe panic?
@ -168,7 +166,8 @@ static void migrate_irqs(void)
if (desc->chip && desc->chip->disable &&
desc->chip->enable && desc->chip->set_affinity) {
desc->chip->disable(irq);
desc->chip->set_affinity(irq, mask);
desc->chip->set_affinity(irq,
cpumask_of(new_cpu));
desc->chip->enable(irq);
} else {
WARN_ON((!(desc->chip) || !(desc->chip->disable) ||


@ -49,11 +49,12 @@
static struct irq_chip ia64_msi_chip;
#ifdef CONFIG_SMP
static void ia64_set_msi_irq_affinity(unsigned int irq, cpumask_t cpu_mask)
static void ia64_set_msi_irq_affinity(unsigned int irq,
const cpumask_t *cpu_mask)
{
struct msi_msg msg;
u32 addr, data;
int cpu = first_cpu(cpu_mask);
int cpu = first_cpu(*cpu_mask);
if (!cpu_online(cpu))
return;
@ -166,12 +167,11 @@ void arch_teardown_msi_irq(unsigned int irq)
#ifdef CONFIG_DMAR
#ifdef CONFIG_SMP
static void dmar_msi_set_affinity(unsigned int irq, cpumask_t mask)
static void dmar_msi_set_affinity(unsigned int irq, const struct cpumask *mask)
{
struct irq_cfg *cfg = irq_cfg + irq;
struct msi_msg msg;
int cpu = first_cpu(mask);
int cpu = cpumask_first(mask);
if (!cpu_online(cpu))
return;
@ -187,7 +187,7 @@ static void dmar_msi_set_affinity(unsigned int irq, cpumask_t mask)
msg.address_lo |= MSI_ADDR_DESTID_CPU(cpu_physical_id(cpu));
dmar_msi_write(irq, &msg);
irq_desc[irq].affinity = mask;
irq_desc[irq].affinity = *mask;
}
#endif /* CONFIG_SMP */


@ -131,12 +131,6 @@ struct task_struct *task_for_booting_cpu;
*/
DEFINE_PER_CPU(int, cpu_state);
/* Bitmasks of currently online, and possible CPUs */
cpumask_t cpu_online_map;
EXPORT_SYMBOL(cpu_online_map);
cpumask_t cpu_possible_map = CPU_MASK_NONE;
EXPORT_SYMBOL(cpu_possible_map);
cpumask_t cpu_core_map[NR_CPUS] __cacheline_aligned;
EXPORT_SYMBOL(cpu_core_map);
DEFINE_PER_CPU_SHARED_ALIGNED(cpumask_t, cpu_sibling_map);
@ -688,7 +682,7 @@ int migrate_platform_irqs(unsigned int cpu)
{
int new_cpei_cpu;
irq_desc_t *desc = NULL;
cpumask_t mask;
const struct cpumask *mask;
int retval = 0;
/*
@ -701,7 +695,7 @@ int migrate_platform_irqs(unsigned int cpu)
* Now re-target the CPEI to a different processor
*/
new_cpei_cpu = any_online_cpu(cpu_online_map);
mask = cpumask_of_cpu(new_cpei_cpu);
mask = cpumask_of(new_cpei_cpu);
set_cpei_target_cpu(new_cpei_cpu);
desc = irq_desc + ia64_cpe_irq;
/*


@ -93,13 +93,14 @@ void ia64_account_on_switch(struct task_struct *prev, struct task_struct *next)
now = ia64_get_itc();
delta_stime = cycle_to_cputime(pi->ac_stime + (now - pi->ac_stamp));
account_system_time(prev, 0, delta_stime);
account_system_time_scaled(prev, delta_stime);
if (idle_task(smp_processor_id()) != prev)
account_system_time(prev, 0, delta_stime, delta_stime);
else
account_idle_time(delta_stime);
if (pi->ac_utime) {
delta_utime = cycle_to_cputime(pi->ac_utime);
account_user_time(prev, delta_utime);
account_user_time_scaled(prev, delta_utime);
account_user_time(prev, delta_utime, delta_utime);
}
pi->ac_stamp = ni->ac_stamp = now;
@ -122,8 +123,10 @@ void account_system_vtime(struct task_struct *tsk)
now = ia64_get_itc();
delta_stime = cycle_to_cputime(ti->ac_stime + (now - ti->ac_stamp));
account_system_time(tsk, 0, delta_stime);
account_system_time_scaled(tsk, delta_stime);
if (irq_count() || idle_task(smp_processor_id()) != tsk)
account_system_time(tsk, 0, delta_stime, delta_stime);
else
account_idle_time(delta_stime);
ti->ac_stime = 0;
ti->ac_stamp = now;
@ -143,8 +146,7 @@ void account_process_tick(struct task_struct *p, int user_tick)
if (ti->ac_utime) {
delta_utime = cycle_to_cputime(ti->ac_utime);
account_user_time(p, delta_utime);
account_user_time_scaled(p, delta_utime);
account_user_time(p, delta_utime, delta_utime);
ti->ac_utime = 0;
}
}
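
The three hunks above follow the updated cputime accounting interface: account_user_time() and account_system_time() now take the scaled time as an extra argument (ia64 passes the same value twice), the separate *_scaled() calls are gone, and time spent in the idle task is reported through account_idle_time(). A hedged sketch of the system-time side; demo_account_stime is illustrative, not a function from this diff.

#include <linux/kernel_stat.h>
#include <linux/sched.h>
#include <linux/smp.h>

static void demo_account_stime(struct task_struct *tsk, cputime_t delta_stime)
{
	if (idle_task(smp_processor_id()) != tsk)
		account_system_time(tsk, 0, delta_stime, delta_stime);
	else
		account_idle_time(delta_stime);
}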


@ -219,7 +219,7 @@ static ssize_t show_shared_cpu_map(struct cache_info *this_leaf, char *buf)
cpumask_t shared_cpu_map;
cpus_and(shared_cpu_map, this_leaf->shared_cpu_map, cpu_online_map);
len = cpumask_scnprintf(buf, NR_CPUS+1, shared_cpu_map);
len = cpumask_scnprintf(buf, NR_CPUS+1, &shared_cpu_map);
len += sprintf(buf+len, "\n");
return len;
}


@ -51,8 +51,8 @@ EXTRA_AFLAGS += -Ivirt/kvm -Iarch/ia64/kvm/
common-objs = $(addprefix ../../../virt/kvm/, kvm_main.o ioapic.o \
coalesced_mmio.o irq_comm.o)
ifeq ($(CONFIG_DMAR),y)
common-objs += $(addprefix ../../../virt/kvm/, vtd.o)
ifeq ($(CONFIG_IOMMU_API),y)
common-objs += $(addprefix ../../../virt/kvm/, iommu.o)
endif
kvm-objs := $(common-objs) kvm-ia64.o kvm_fw.o
@ -60,7 +60,7 @@ obj-$(CONFIG_KVM) += kvm.o
CFLAGS_vcpu.o += -mfixed-range=f2-f5,f12-f127
kvm-intel-objs = vmm.o vmm_ivt.o trampoline.o vcpu.o optvfault.o mmio.o \
vtlb.o process.o
vtlb.o process.o kvm_lib.o
#Add link memcpy and memset to avoid possible structure assignment error
kvm-intel-objs += memcpy.o memset.o
obj-$(CONFIG_KVM_INTEL) += kvm-intel.o


@ -24,19 +24,10 @@
#include <linux/autoconf.h>
#include <linux/kvm_host.h>
#include <linux/kbuild.h>
#include "vcpu.h"
#define task_struct kvm_vcpu
#define DEFINE(sym, val) \
asm volatile("\n->" #sym " (%0) " #val : : "i" (val))
#define BLANK() asm volatile("\n->" : :)
#define OFFSET(_sym, _str, _mem) \
DEFINE(_sym, offsetof(_str, _mem));
void foo(void)
{
DEFINE(VMM_TASK_SIZE, sizeof(struct kvm_vcpu));


@ -31,6 +31,7 @@
#include <linux/bitops.h>
#include <linux/hrtimer.h>
#include <linux/uaccess.h>
#include <linux/iommu.h>
#include <linux/intel-iommu.h>
#include <asm/pgtable.h>
@ -180,7 +181,6 @@ int kvm_dev_ioctl_check_extension(long ext)
switch (ext) {
case KVM_CAP_IRQCHIP:
case KVM_CAP_USER_MEMORY:
case KVM_CAP_MP_STATE:
r = 1;
@ -189,7 +189,7 @@ int kvm_dev_ioctl_check_extension(long ext)
r = KVM_COALESCED_MMIO_PAGE_OFFSET;
break;
case KVM_CAP_IOMMU:
r = intel_iommu_found();
r = iommu_found();
break;
default:
r = 0;
@ -439,7 +439,6 @@ int kvm_emulate_halt(struct kvm_vcpu *vcpu)
expires = div64_u64(itc_diff, cyc_per_usec);
kt = ktime_set(0, 1000 * expires);
down_read(&vcpu->kvm->slots_lock);
vcpu->arch.ht_active = 1;
hrtimer_start(p_ht, kt, HRTIMER_MODE_ABS);
@ -452,7 +451,6 @@ int kvm_emulate_halt(struct kvm_vcpu *vcpu)
if (vcpu->arch.mp_state == KVM_MP_STATE_HALTED)
vcpu->arch.mp_state =
KVM_MP_STATE_RUNNABLE;
up_read(&vcpu->kvm->slots_lock);
if (vcpu->arch.mp_state != KVM_MP_STATE_RUNNABLE)
return -EINTR;
@ -476,6 +474,13 @@ static int handle_external_interrupt(struct kvm_vcpu *vcpu,
return 1;
}
static int handle_vcpu_debug(struct kvm_vcpu *vcpu,
struct kvm_run *kvm_run)
{
printk("VMM: %s", vcpu->arch.log_buf);
return 1;
}
static int (*kvm_vti_exit_handlers[])(struct kvm_vcpu *vcpu,
struct kvm_run *kvm_run) = {
[EXIT_REASON_VM_PANIC] = handle_vm_error,
@ -487,6 +492,7 @@ static int (*kvm_vti_exit_handlers[])(struct kvm_vcpu *vcpu,
[EXIT_REASON_EXTERNAL_INTERRUPT] = handle_external_interrupt,
[EXIT_REASON_IPI] = handle_ipi,
[EXIT_REASON_PTC_G] = handle_global_purge,
[EXIT_REASON_DEBUG] = handle_vcpu_debug,
};
@ -698,27 +704,24 @@ out:
return r;
}
/*
* Allocate 16M memory for every vm to hold its specific data.
* Its memory map is defined in kvm_host.h.
*/
static struct kvm *kvm_alloc_kvm(void)
{
struct kvm *kvm;
uint64_t vm_base;
BUG_ON(sizeof(struct kvm) > KVM_VM_STRUCT_SIZE);
vm_base = __get_free_pages(GFP_KERNEL, get_order(KVM_VM_DATA_SIZE));
if (!vm_base)
return ERR_PTR(-ENOMEM);
printk(KERN_DEBUG"kvm: VM data's base Address:0x%lx\n", vm_base);
/* Zero all pages before use! */
memset((void *)vm_base, 0, KVM_VM_DATA_SIZE);
kvm = (struct kvm *)(vm_base + KVM_VM_OFS);
kvm = (struct kvm *)(vm_base +
offsetof(struct kvm_vm_data, kvm_vm_struct));
kvm->arch.vm_base = vm_base;
printk(KERN_DEBUG"kvm: vm's data area:0x%lx\n", vm_base);
return kvm;
}
@ -760,21 +763,12 @@ static void kvm_build_io_pmt(struct kvm *kvm)
static void kvm_init_vm(struct kvm *kvm)
{
long vm_base;
BUG_ON(!kvm);
kvm->arch.metaphysical_rr0 = GUEST_PHYSICAL_RR0;
kvm->arch.metaphysical_rr4 = GUEST_PHYSICAL_RR4;
kvm->arch.vmm_init_rr = VMM_INIT_RR;
vm_base = kvm->arch.vm_base;
if (vm_base) {
kvm->arch.vhpt_base = vm_base + KVM_VHPT_OFS;
kvm->arch.vtlb_base = vm_base + KVM_VTLB_OFS;
kvm->arch.vpd_base = vm_base + KVM_VPD_OFS;
}
/*
*Fill P2M entries for MMIO/IO ranges
*/
@ -838,9 +832,8 @@ static int kvm_vm_ioctl_set_irqchip(struct kvm *kvm, struct kvm_irqchip *chip)
int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
{
int i;
struct vpd *vpd = to_host(vcpu->kvm, vcpu->arch.vpd);
int r;
int i;
vcpu_load(vcpu);
@ -857,18 +850,7 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
vpd->vpr = regs->vpd.vpr;
r = -EFAULT;
r = copy_from_user(&vcpu->arch.guest, regs->saved_guest,
sizeof(union context));
if (r)
goto out;
r = copy_from_user(vcpu + 1, regs->saved_stack +
sizeof(struct kvm_vcpu),
IA64_STK_OFFSET - sizeof(struct kvm_vcpu));
if (r)
goto out;
vcpu->arch.exit_data =
((struct kvm_vcpu *)(regs->saved_stack))->arch.exit_data;
memcpy(&vcpu->arch.guest, &regs->saved_guest, sizeof(union context));
RESTORE_REGS(mp_state);
RESTORE_REGS(vmm_rr);
@ -902,9 +884,8 @@ int kvm_arch_vcpu_ioctl_set_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
set_bit(KVM_REQ_RESUME, &vcpu->requests);
vcpu_put(vcpu);
r = 0;
out:
return r;
return 0;
}
long kvm_arch_vm_ioctl(struct file *filp,
@ -1166,10 +1147,11 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
/*Set entry address for first run.*/
regs->cr_iip = PALE_RESET_ENTRY;
/*Initilize itc offset for vcpus*/
/*Initialize itc offset for vcpus*/
itc_offset = 0UL - ia64_getreg(_IA64_REG_AR_ITC);
for (i = 0; i < MAX_VCPU_NUM; i++) {
v = (struct kvm_vcpu *)((char *)vcpu + VCPU_SIZE * i);
for (i = 0; i < KVM_MAX_VCPUS; i++) {
v = (struct kvm_vcpu *)((char *)vcpu +
sizeof(struct kvm_vcpu_data) * i);
v->arch.itc_offset = itc_offset;
v->arch.last_itc = 0;
}
@ -1183,7 +1165,7 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
vcpu->arch.apic->vcpu = vcpu;
p_ctx->gr[1] = 0;
p_ctx->gr[12] = (unsigned long)((char *)vmm_vcpu + IA64_STK_OFFSET);
p_ctx->gr[12] = (unsigned long)((char *)vmm_vcpu + KVM_STK_OFFSET);
p_ctx->gr[13] = (unsigned long)vmm_vcpu;
p_ctx->psr = 0x1008522000UL;
p_ctx->ar[40] = FPSR_DEFAULT; /*fpsr*/
@ -1218,12 +1200,12 @@ int kvm_arch_vcpu_init(struct kvm_vcpu *vcpu)
vcpu->arch.hlt_timer.function = hlt_timer_fn;
vcpu->arch.last_run_cpu = -1;
vcpu->arch.vpd = (struct vpd *)VPD_ADDR(vcpu->vcpu_id);
vcpu->arch.vpd = (struct vpd *)VPD_BASE(vcpu->vcpu_id);
vcpu->arch.vsa_base = kvm_vsa_base;
vcpu->arch.__gp = kvm_vmm_gp;
vcpu->arch.dirty_log_lock_pa = __pa(&kvm->arch.dirty_log_lock);
vcpu->arch.vhpt.hash = (struct thash_data *)VHPT_ADDR(vcpu->vcpu_id);
vcpu->arch.vtlb.hash = (struct thash_data *)VTLB_ADDR(vcpu->vcpu_id);
vcpu->arch.vhpt.hash = (struct thash_data *)VHPT_BASE(vcpu->vcpu_id);
vcpu->arch.vtlb.hash = (struct thash_data *)VTLB_BASE(vcpu->vcpu_id);
init_ptce_info(vcpu);
r = 0;
@ -1273,12 +1255,22 @@ struct kvm_vcpu *kvm_arch_vcpu_create(struct kvm *kvm,
int r;
int cpu;
BUG_ON(sizeof(struct kvm_vcpu) > VCPU_STRUCT_SIZE/2);
r = -EINVAL;
if (id >= KVM_MAX_VCPUS) {
printk(KERN_ERR"kvm: Can't configure vcpus > %ld",
KVM_MAX_VCPUS);
goto fail;
}
r = -ENOMEM;
if (!vm_base) {
printk(KERN_ERR"kvm: Create vcpu[%d] error!\n", id);
goto fail;
}
vcpu = (struct kvm_vcpu *)(vm_base + KVM_VCPU_OFS + VCPU_SIZE * id);
vcpu = (struct kvm_vcpu *)(vm_base + offsetof(struct kvm_vm_data,
vcpu_data[id].vcpu_struct));
vcpu->kvm = kvm;
cpu = get_cpu();
@ -1374,9 +1366,9 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
{
int i;
int r;
struct vpd *vpd = to_host(vcpu->kvm, vcpu->arch.vpd);
int i;
vcpu_load(vcpu);
for (i = 0; i < 16; i++) {
@ -1391,14 +1383,8 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
regs->vpd.vpsr = vpd->vpsr;
regs->vpd.vpr = vpd->vpr;
r = -EFAULT;
r = copy_to_user(regs->saved_guest, &vcpu->arch.guest,
sizeof(union context));
if (r)
goto out;
r = copy_to_user(regs->saved_stack, (void *)vcpu, IA64_STK_OFFSET);
if (r)
goto out;
memcpy(&regs->saved_guest, &vcpu->arch.guest, sizeof(union context));
SAVE_REGS(mp_state);
SAVE_REGS(vmm_rr);
memcpy(regs->itrs, vcpu->arch.itrs, sizeof(struct thash_data) * NITRS);
@ -1426,10 +1412,9 @@ int kvm_arch_vcpu_ioctl_get_regs(struct kvm_vcpu *vcpu, struct kvm_regs *regs)
SAVE_REGS(metaphysical_saved_rr4);
SAVE_REGS(fp_psr);
SAVE_REGS(saved_gp);
vcpu_put(vcpu);
r = 0;
out:
return r;
return 0;
}
void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu)
@ -1457,6 +1442,9 @@ int kvm_arch_set_memory_region(struct kvm *kvm,
struct kvm_memory_slot *memslot = &kvm->memslots[mem->slot];
unsigned long base_gfn = memslot->base_gfn;
if (base_gfn + npages > (KVM_MAX_MEM_SIZE >> PAGE_SHIFT))
return -ENOMEM;
for (i = 0; i < npages; i++) {
pfn = gfn_to_pfn(kvm, base_gfn + i);
if (!kvm_is_mmio_pfn(pfn)) {
@ -1631,8 +1619,8 @@ static int kvm_ia64_sync_dirty_log(struct kvm *kvm,
struct kvm_memory_slot *memslot;
int r, i;
long n, base;
unsigned long *dirty_bitmap = (unsigned long *)((void *)kvm - KVM_VM_OFS
+ KVM_MEM_DIRTY_LOG_OFS);
unsigned long *dirty_bitmap = (unsigned long *)(kvm->arch.vm_base +
offsetof(struct kvm_vm_data, kvm_mem_dirty_log));
r = -EINVAL;
if (log->slot >= KVM_MEMORY_SLOTS)

arch/ia64/kvm/kvm_lib.c (new file, 15 lines)

@ -0,0 +1,15 @@
/*
* kvm_lib.c: Compile some libraries for kvm-intel module.
*
* Just include the kernel's library code, and disable symbol export.
* Copyright (C) 2008, Intel Corporation.
* Xiantao Zhang (xiantao.zhang@intel.com)
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
*/
#undef CONFIG_MODULES
#include "../../../lib/vsprintf.c"
#include "../../../lib/ctype.c"


@ -24,6 +24,8 @@
#include <asm/asmmacro.h>
#include <asm/types.h>
#include <asm/kregs.h>
#include <asm/kvm_host.h>
#include "asm-offsets.h"
#define KVM_MINSTATE_START_SAVE_MIN \
@ -33,7 +35,7 @@
addl r22 = VMM_RBS_OFFSET,r1; /* compute base of RBS */ \
;; \
lfetch.fault.excl.nt1 [r22]; \
addl r1 = IA64_STK_OFFSET-VMM_PT_REGS_SIZE,r1; /* compute base of memory stack */ \
addl r1 = KVM_STK_OFFSET-VMM_PT_REGS_SIZE, r1; \
mov r23 = ar.bspstore; /* save ar.bspstore */ \
;; \
mov ar.bspstore = r22; /* switch to kernel RBS */\


@ -27,7 +27,8 @@
*/
static inline uint64_t *kvm_host_get_pmt(struct kvm *kvm)
{
return (uint64_t *)(kvm->arch.vm_base + KVM_P2M_OFS);
return (uint64_t *)(kvm->arch.vm_base +
offsetof(struct kvm_vm_data, kvm_p2m));
}
static inline void kvm_set_pmt_entry(struct kvm *kvm, gfn_t gfn,


@ -66,31 +66,25 @@ void lsapic_write(struct kvm_vcpu *v, unsigned long addr,
switch (addr) {
case PIB_OFST_INTA:
/*panic_domain(NULL, "Undefined write on PIB INTA\n");*/
panic_vm(v);
panic_vm(v, "Undefined write on PIB INTA\n");
break;
case PIB_OFST_XTP:
if (length == 1) {
vlsapic_write_xtp(v, val);
} else {
/*panic_domain(NULL,
"Undefined write on PIB XTP\n");*/
panic_vm(v);
panic_vm(v, "Undefined write on PIB XTP\n");
}
break;
default:
if (PIB_LOW_HALF(addr)) {
/*lower half */
/*Lower half */
if (length != 8)
/*panic_domain(NULL,
"Can't LHF write with size %ld!\n",
length);*/
panic_vm(v);
panic_vm(v, "Can't LHF write with size %ld!\n",
length);
else
vlsapic_write_ipi(v, addr, val);
} else { /* upper half
printk("IPI-UHF write %lx\n",addr);*/
panic_vm(v);
} else { /*Upper half */
panic_vm(v, "IPI-UHF write %lx\n", addr);
}
break;
}
@ -108,22 +102,18 @@ unsigned long lsapic_read(struct kvm_vcpu *v, unsigned long addr,
if (length == 1) /* 1 byte load */
; /* There is no i8259, there is no INTA access*/
else
/*panic_domain(NULL,"Undefined read on PIB INTA\n"); */
panic_vm(v);
panic_vm(v, "Undefined read on PIB INTA\n");
break;
case PIB_OFST_XTP:
if (length == 1) {
result = VLSAPIC_XTP(v);
/* printk("read xtp %lx\n", result); */
} else {
/*panic_domain(NULL,
"Undefined read on PIB XTP\n");*/
panic_vm(v);
panic_vm(v, "Undefined read on PIB XTP\n");
}
break;
default:
panic_vm(v);
panic_vm(v, "Undefined addr access for lsapic!\n");
break;
}
return result;
@ -162,7 +152,7 @@ static void mmio_access(struct kvm_vcpu *vcpu, u64 src_pa, u64 *dest,
/* it's necessary to ensure zero extending */
*dest = p->u.ioreq.data & (~0UL >> (64-(s*8)));
} else
panic_vm(vcpu);
panic_vm(vcpu, "Unhandled mmio access returned!\n");
out:
local_irq_restore(psr);
return ;
@ -324,7 +314,9 @@ void emulate_io_inst(struct kvm_vcpu *vcpu, u64 padr, u64 ma)
return;
} else {
inst_type = -1;
panic_vm(vcpu);
panic_vm(vcpu, "Unsupported MMIO access instruction! \
Bundle[0]=0x%lx, Bundle[1]=0x%lx\n",
bundle.i64[0], bundle.i64[1]);
}
size = 1 << size;
@ -335,7 +327,7 @@ void emulate_io_inst(struct kvm_vcpu *vcpu, u64 padr, u64 ma)
if (inst_type == SL_INTEGER)
vcpu_set_gr(vcpu, inst.M1.r1, data, 0);
else
panic_vm(vcpu);
panic_vm(vcpu, "Unsupported instruction type!\n");
}
vcpu_increment_iip(vcpu);


@ -527,7 +527,8 @@ void reflect_interruption(u64 ifa, u64 isr, u64 iim,
vector = vec2off[vec];
if (!(vpsr & IA64_PSR_IC) && (vector != IA64_DATA_NESTED_TLB_VECTOR)) {
panic_vm(vcpu);
panic_vm(vcpu, "Interruption with vector :0x%lx occurs "
"with psr.ic = 0\n", vector);
return;
}
@ -586,7 +587,7 @@ static void set_pal_call_result(struct kvm_vcpu *vcpu)
vcpu_set_gr(vcpu, 10, p->u.pal_data.ret.v1, 0);
vcpu_set_gr(vcpu, 11, p->u.pal_data.ret.v2, 0);
} else
panic_vm(vcpu);
panic_vm(vcpu, "Mis-set for exit reason!\n");
}
static void set_sal_call_data(struct kvm_vcpu *vcpu)
@ -614,7 +615,7 @@ static void set_sal_call_result(struct kvm_vcpu *vcpu)
vcpu_set_gr(vcpu, 10, p->u.sal_data.ret.r10, 0);
vcpu_set_gr(vcpu, 11, p->u.sal_data.ret.r11, 0);
} else
panic_vm(vcpu);
panic_vm(vcpu, "Mis-set for exit reason!\n");
}
void kvm_ia64_handle_break(unsigned long ifa, struct kvm_pt_regs *regs,
@ -680,7 +681,7 @@ static void generate_exirq(struct kvm_vcpu *vcpu)
vpsr = VCPU(vcpu, vpsr);
isr = vpsr & IA64_PSR_RI;
if (!(vpsr & IA64_PSR_IC))
panic_vm(vcpu);
panic_vm(vcpu, "Trying to inject one IRQ with psr.ic=0\n");
reflect_interruption(0, isr, 0, 12, regs); /* EXT IRQ */
}
@ -941,8 +942,20 @@ static void vcpu_do_resume(struct kvm_vcpu *vcpu)
ia64_set_pta(vcpu->arch.vhpt.pta.val);
}
static void vmm_sanity_check(struct kvm_vcpu *vcpu)
{
struct exit_ctl_data *p = &vcpu->arch.exit_data;
if (!vmm_sanity && p->exit_reason != EXIT_REASON_DEBUG) {
panic_vm(vcpu, "Failed to do vmm sanity check,"
"it maybe caused by crashed vmm!!\n\n");
}
}
static void kvm_do_resume_op(struct kvm_vcpu *vcpu)
{
vmm_sanity_check(vcpu); /*Guarantee vcpu running on a healthy vmm!*/
if (test_and_clear_bit(KVM_REQ_RESUME, &vcpu->requests)) {
vcpu_do_resume(vcpu);
return;
@ -968,3 +981,11 @@ void vmm_transition(struct kvm_vcpu *vcpu)
1, 0, 0, 0, 0, 0);
kvm_do_resume_op(vcpu);
}
void vmm_panic_handler(u64 vec)
{
struct kvm_vcpu *vcpu = current_vcpu;
vmm_sanity = 0;
panic_vm(vcpu, "Unexpected interruption occurs in VMM, vector:0x%lx\n",
vec2off[vec]);
}


@ -816,8 +816,9 @@ static void vcpu_set_itc(struct kvm_vcpu *vcpu, u64 val)
unsigned long vitv = VCPU(vcpu, itv);
if (vcpu->vcpu_id == 0) {
for (i = 0; i < MAX_VCPU_NUM; i++) {
v = (struct kvm_vcpu *)((char *)vcpu + VCPU_SIZE * i);
for (i = 0; i < KVM_MAX_VCPUS; i++) {
v = (struct kvm_vcpu *)((char *)vcpu +
sizeof(struct kvm_vcpu_data) * i);
VMX(v, itc_offset) = itc_offset;
VMX(v, last_itc) = 0;
}
@ -1650,7 +1651,8 @@ void vcpu_set_psr(struct kvm_vcpu *vcpu, unsigned long val)
* Otherwise panic
*/
if (val & (IA64_PSR_PK | IA64_PSR_IS | IA64_PSR_VM))
panic_vm(vcpu);
panic_vm(vcpu, "Only support guests with vpsr.pk =0 \
& vpsr.is=0\n");
/*
* For those IA64_PSR bits: id/da/dd/ss/ed/ia
@ -2103,7 +2105,7 @@ void kvm_init_all_rr(struct kvm_vcpu *vcpu)
if (is_physical_mode(vcpu)) {
if (vcpu->arch.mode_flags & GUEST_PHY_EMUL)
panic_vm(vcpu);
panic_vm(vcpu, "Machine Status conflicts!\n");
ia64_set_rr((VRN0 << VRN_SHIFT), vcpu->arch.metaphysical_rr0);
ia64_dv_serialize_data();
@ -2152,10 +2154,70 @@ int vmm_entry(void)
return 0;
}
void panic_vm(struct kvm_vcpu *v)
static void kvm_show_registers(struct kvm_pt_regs *regs)
{
struct exit_ctl_data *p = &v->arch.exit_data;
unsigned long ip = regs->cr_iip + ia64_psr(regs)->ri;
struct kvm_vcpu *vcpu = current_vcpu;
if (vcpu != NULL)
printk("vcpu 0x%p vcpu %d\n",
vcpu, vcpu->vcpu_id);
printk("psr : %016lx ifs : %016lx ip : [<%016lx>]\n",
regs->cr_ipsr, regs->cr_ifs, ip);
printk("unat: %016lx pfs : %016lx rsc : %016lx\n",
regs->ar_unat, regs->ar_pfs, regs->ar_rsc);
printk("rnat: %016lx bspstore: %016lx pr : %016lx\n",
regs->ar_rnat, regs->ar_bspstore, regs->pr);
printk("ldrs: %016lx ccv : %016lx fpsr: %016lx\n",
regs->loadrs, regs->ar_ccv, regs->ar_fpsr);
printk("csd : %016lx ssd : %016lx\n", regs->ar_csd, regs->ar_ssd);
printk("b0 : %016lx b6 : %016lx b7 : %016lx\n", regs->b0,
regs->b6, regs->b7);
printk("f6 : %05lx%016lx f7 : %05lx%016lx\n",
regs->f6.u.bits[1], regs->f6.u.bits[0],
regs->f7.u.bits[1], regs->f7.u.bits[0]);
printk("f8 : %05lx%016lx f9 : %05lx%016lx\n",
regs->f8.u.bits[1], regs->f8.u.bits[0],
regs->f9.u.bits[1], regs->f9.u.bits[0]);
printk("f10 : %05lx%016lx f11 : %05lx%016lx\n",
regs->f10.u.bits[1], regs->f10.u.bits[0],
regs->f11.u.bits[1], regs->f11.u.bits[0]);
printk("r1 : %016lx r2 : %016lx r3 : %016lx\n", regs->r1,
regs->r2, regs->r3);
printk("r8 : %016lx r9 : %016lx r10 : %016lx\n", regs->r8,
regs->r9, regs->r10);
printk("r11 : %016lx r12 : %016lx r13 : %016lx\n", regs->r11,
regs->r12, regs->r13);
printk("r14 : %016lx r15 : %016lx r16 : %016lx\n", regs->r14,
regs->r15, regs->r16);
printk("r17 : %016lx r18 : %016lx r19 : %016lx\n", regs->r17,
regs->r18, regs->r19);
printk("r20 : %016lx r21 : %016lx r22 : %016lx\n", regs->r20,
regs->r21, regs->r22);
printk("r23 : %016lx r24 : %016lx r25 : %016lx\n", regs->r23,
regs->r24, regs->r25);
printk("r26 : %016lx r27 : %016lx r28 : %016lx\n", regs->r26,
regs->r27, regs->r28);
printk("r29 : %016lx r30 : %016lx r31 : %016lx\n", regs->r29,
regs->r30, regs->r31);
}
void panic_vm(struct kvm_vcpu *v, const char *fmt, ...)
{
va_list args;
char buf[256];
struct kvm_pt_regs *regs = vcpu_regs(v);
struct exit_ctl_data *p = &v->arch.exit_data;
va_start(args, fmt);
vsnprintf(buf, sizeof(buf), fmt, args);
va_end(args);
printk(buf);
kvm_show_registers(regs);
p->exit_reason = EXIT_REASON_VM_PANIC;
vmm_transition(v);
/*Never to return*/


@ -737,9 +737,12 @@ void kvm_init_vtlb(struct kvm_vcpu *v);
void kvm_init_vhpt(struct kvm_vcpu *v);
void thash_init(struct thash_cb *hcb, u64 sz);
void panic_vm(struct kvm_vcpu *v);
void panic_vm(struct kvm_vcpu *v, const char *fmt, ...);
extern u64 ia64_call_vsa(u64 proc, u64 arg1, u64 arg2, u64 arg3,
u64 arg4, u64 arg5, u64 arg6, u64 arg7);
extern long vmm_sanity;
#endif
#endif /* __VCPU_H__ */


@ -20,6 +20,7 @@
*/
#include<linux/kernel.h>
#include<linux/module.h>
#include<asm/fpswa.h>
@ -31,6 +32,8 @@ MODULE_LICENSE("GPL");
extern char kvm_ia64_ivt;
extern fpswa_interface_t *vmm_fpswa_interface;
long vmm_sanity = 1;
struct kvm_vmm_info vmm_info = {
.module = THIS_MODULE,
.vmm_entry = vmm_entry,
@ -62,5 +65,31 @@ void vmm_spin_unlock(spinlock_t *lock)
{
_vmm_raw_spin_unlock(lock);
}
static void vcpu_debug_exit(struct kvm_vcpu *vcpu)
{
struct exit_ctl_data *p = &vcpu->arch.exit_data;
long psr;
local_irq_save(psr);
p->exit_reason = EXIT_REASON_DEBUG;
vmm_transition(vcpu);
local_irq_restore(psr);
}
asmlinkage int printk(const char *fmt, ...)
{
struct kvm_vcpu *vcpu = current_vcpu;
va_list args;
int r;
memset(vcpu->arch.log_buf, 0, VMM_LOG_LEN);
va_start(args, fmt);
r = vsnprintf(vcpu->arch.log_buf, VMM_LOG_LEN, fmt, args);
va_end(args);
vcpu_debug_exit(vcpu);
return r;
}
module_init(kvm_vmm_init)
module_exit(kvm_vmm_exit)
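
Taken together, the pieces added in this merge give the VMM a logging path through the host: the printk() override above formats the message into vcpu->arch.log_buf and forces an EXIT_REASON_DEBUG transition via vcpu_debug_exit(), and on the host side handle_vcpu_debug() (added earlier in this diff) prints the buffered string. The fragment below only illustrates a caller running in VMM context; it is not code from the diff.

#include <linux/kvm_host.h>

/* Any printk() issued in VMM context lands in log_buf, not on the console. */
static void demo_vmm_trace(struct kvm_vcpu *vcpu)
{
	printk("vcpu %d: resuming guest\n", vcpu->vcpu_id);
}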

File diff suppressed because it is too large.


@ -183,8 +183,8 @@ void mark_pages_dirty(struct kvm_vcpu *v, u64 pte, u64 ps)
u64 i, dirty_pages = 1;
u64 base_gfn = (pte&_PAGE_PPN_MASK) >> PAGE_SHIFT;
spinlock_t *lock = __kvm_va(v->arch.dirty_log_lock_pa);
void *dirty_bitmap = (void *)v - (KVM_VCPU_OFS + v->vcpu_id * VCPU_SIZE)
+ KVM_MEM_DIRTY_LOG_OFS;
void *dirty_bitmap = (void *)KVM_MEM_DIRTY_LOG_BASE;
dirty_pages <<= ps <= PAGE_SHIFT ? 0 : ps - PAGE_SHIFT;
vmm_spin_lock(lock);


@ -227,14 +227,14 @@ finish_up:
return new_irq_info;
}
static void sn_set_affinity_irq(unsigned int irq, cpumask_t mask)
static void sn_set_affinity_irq(unsigned int irq, const struct cpumask *mask)
{
struct sn_irq_info *sn_irq_info, *sn_irq_info_safe;
nasid_t nasid;
int slice;
nasid = cpuid_to_nasid(first_cpu(mask));
slice = cpuid_to_slice(first_cpu(mask));
nasid = cpuid_to_nasid(cpumask_first(mask));
slice = cpuid_to_slice(cpumask_first(mask));
list_for_each_entry_safe(sn_irq_info, sn_irq_info_safe,
sn_irq_lh[irq], list)


@ -151,7 +151,8 @@ int sn_setup_msi_irq(struct pci_dev *pdev, struct msi_desc *entry)
}
#ifdef CONFIG_SMP
static void sn_set_msi_irq_affinity(unsigned int irq, cpumask_t cpu_mask)
static void sn_set_msi_irq_affinity(unsigned int irq,
const struct cpumask *cpu_mask)
{
struct msi_msg msg;
int slice;
@ -164,7 +165,7 @@ static void sn_set_msi_irq_affinity(unsigned int irq, cpumask_t cpu_mask)
struct sn_pcibus_provider *provider;
unsigned int cpu;
cpu = first_cpu(cpu_mask);
cpu = cpumask_first(cpu_mask);
sn_irq_info = sn_msi_info[irq].sn_irq_info;
if (sn_irq_info == NULL || sn_irq_info->irq_int_bit >= 0)
return;
@ -204,7 +205,7 @@ static void sn_set_msi_irq_affinity(unsigned int irq, cpumask_t cpu_mask)
msg.address_lo = (u32)(bus_addr & 0x00000000ffffffff);
write_msi_msg(irq, &msg);
irq_desc[irq].affinity = cpu_mask;
irq_desc[irq].affinity = *cpu_mask;
}
#endif /* CONFIG_SMP */

Some files were not shown because too many files have changed in this diff.