Automatic merge of /spare/repo/linux-2.6/.git branch HEAD

Commit d7aaf48128, committed by Jeff Garzik on 2005-06-02 18:43:09 -04:00
194 changed files with 4608 additions and 1713 deletions

View File

@ -0,0 +1,128 @@
CPU frequency and voltage scaling statistics in the Linux(TM) kernel
Linux cpufreq-stats driver
- information for users -
Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Contents
1. Introduction
2. Statistics Provided (with example)
3. Configuring cpufreq-stats
1. Introduction
cpufreq-stats is a driver that provides CPU frequency statistics for each CPU.
These statistics are provided in /sysfs as a set of read-only interfaces. This
interface (when configured) will appear in a separate directory under cpufreq
in /sysfs (<sysfs root>/devices/system/cpu/cpuX/cpufreq/stats/) for each CPU.
Various statistics will form read-only files under this directory.
This driver is designed to be independent of any particular cpufreq_driver
that may be running on your CPU. So, it will work with any cpufreq_driver.
2. Statistics Provided (with example)
cpufreq-stats provides the following statistics (explained in detail below):
- time_in_state
- total_trans
- trans_table
All statistics cover the period from the time the stats driver was loaded to
the time when a particular statistic is read. Obviously, the stats driver will
not have any information about frequency transitions that happened before it
was loaded.
--------------------------------------------------------------------------------
<mysystem>:/sys/devices/system/cpu/cpu0/cpufreq/stats # ls -l
total 0
drwxr-xr-x 2 root root 0 May 14 16:06 .
drwxr-xr-x 3 root root 0 May 14 15:58 ..
-r--r--r-- 1 root root 4096 May 14 16:06 time_in_state
-r--r--r-- 1 root root 4096 May 14 16:06 total_trans
-r--r--r-- 1 root root 4096 May 14 16:06 trans_table
--------------------------------------------------------------------------------
- time_in_state
This gives the amount of time spent at each of the frequencies supported by
this CPU. The cat output contains one "<frequency> <time>" pair per line,
meaning this CPU spent <time> usertime units of time at <frequency>. The output
has one line for each of the supported frequencies. One usertime unit here
is 10 ms (similar to other times exported in /proc).
--------------------------------------------------------------------------------
<mysystem>:/sys/devices/system/cpu/cpu0/cpufreq/stats # cat time_in_state
3600000 2089
3400000 136
3200000 34
3000000 67
2800000 172488
--------------------------------------------------------------------------------
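For quick analysis from user space, time_in_state is easy to parse
programmatically. The following is a minimal C sketch (not part of this
patch; the hard-coded cpu0 path and the 64-entry limit are illustrative
assumptions) that prints, for each supported frequency, the share of the
sampled period and the absolute time in seconds, based on the 10 ms
usertime unit described above.
--------------------------------------------------------------------------------
/* print per-frequency residency from time_in_state (illustrative only) */
#include <stdio.h>

int main(void)
{
	const char *path =
		"/sys/devices/system/cpu/cpu0/cpufreq/stats/time_in_state";
	unsigned long freqs[64], times[64], freq, time, total = 0;
	int i, n = 0;
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return 1;
	}
	while (n < 64 && fscanf(f, "%lu %lu", &freq, &time) == 2) {
		freqs[n] = freq;	/* frequency in kHz */
		times[n] = time;	/* residency in 10 ms units */
		total += time;
		n++;
	}
	fclose(f);

	for (i = 0; i < n; i++)
		printf("%8lu kHz: %6.2f%%  (%lu.%02lu s)\n", freqs[i],
		       total ? 100.0 * times[i] / total : 0.0,
		       times[i] / 100, times[i] % 100);
	return 0;
}
--------------------------------------------------------------------------------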
- total_trans
This gives the total number of frequency transitions on this CPU. The cat
output is a single count of all transitions since the stats driver was loaded.
--------------------------------------------------------------------------------
<mysystem>:/sys/devices/system/cpu/cpu0/cpufreq/stats # cat total_trans
20
--------------------------------------------------------------------------------
- trans_table
This gives fine-grained information about all CPU frequency transitions. The
cat output is a two-dimensional matrix, where entry <i,j> (row i, column j)
is the count of transitions from Freq_i to Freq_j. Both the Freq_i rows and
the Freq_j columns are ordered by descending frequency. The output also
contains the actual frequency values for each row and column for better
readability.
--------------------------------------------------------------------------------
<mysystem>:/sys/devices/system/cpu/cpu0/cpufreq/stats # cat trans_table
From : To
: 3600000 3400000 3200000 3000000 2800000
3600000: 0 5 0 0 0
3400000: 4 0 2 0 0
3200000: 0 1 0 2 0
3000000: 0 0 1 0 3
2800000: 0 0 0 2 0
--------------------------------------------------------------------------------
3. Configuring cpufreq-stats
To configure cpufreq-stats in your kernel:
Config Main Menu
Power management options (ACPI, APM) --->
CPU Frequency scaling --->
[*] CPU Frequency scaling
<*> CPU frequency translation statistics
[*] CPU frequency translation statistics details
"CPU Frequency scaling" (CONFIG_CPU_FREQ) should be enabled to configure
cpufreq-stats.
"CPU frequency translation statistics" (CONFIG_CPU_FREQ_STAT) provides the
basic statistics which includes time_in_state and total_trans.
"CPU frequency translation statistics details" (CONFIG_CPU_FREQ_STAT_DETAILS)
provides fine grained cpufreq stats by trans_table. The reason for having a
seperate config option for trans_table is:
- trans_table goes against the traditional /sysfs rule of one value per
interface. It provides a whole bunch of value in a 2 dimensional matrix
form.
Once these two options are enabled and your CPU supports frequency scaling,
you will be able to see the CPU frequency statistics in /sysfs.
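For reference, once the menu entries above are selected, the relevant part of
the resulting kernel .config contains the three symbols named in this section
(a minimal fragment only; CONFIG_CPU_FREQ_STAT may also be built as a module):
--------------------------------------------------------------------------------
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_STAT=y
CONFIG_CPU_FREQ_STAT_DETAILS=y
--------------------------------------------------------------------------------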

View File

@ -239,6 +239,12 @@ L: linux-usb-devel@lists.sourceforge.net
W: http://www.linux-usb.org/SpeedTouch/
S: Maintained
ALI1563 I2C DRIVER
P: Rudolf Marek
M: r.marek@sh.cvut.cz
L: sensors@stimpy.netroedge.com
S: Maintained
ALPHA PORT
P: Richard Henderson
M: rth@twiddle.net
@ -1023,8 +1029,8 @@ W: http://www.ia64-linux.org/
S: Maintained
SN-IA64 (Itanium) SUB-PLATFORM
P: Jesse Barnes
M: jbarnes@sgi.com
P: Greg Edwards
M: edwardsg@sgi.com
L: linux-altix@sgi.com
L: linux-ia64@vger.kernel.org
W: http://www.sgi.com/altix

View File

@ -54,7 +54,7 @@ asmlinkage void ret_from_fork(void);
void default_idle(void)
{
while(1) {
if (need_resched()) {
if (!need_resched()) {
local_irq_enable();
__asm__("sleep");
local_irq_disable();

View File

@ -23,7 +23,7 @@ config X86_ACPI_CPUFREQ
If in doubt, say N.
config ELAN_CPUFREQ
tristate "AMD Elan"
tristate "AMD Elan SC400 and SC410"
select CPU_FREQ_TABLE
depends on X86_ELAN
---help---
@ -38,6 +38,18 @@ config ELAN_CPUFREQ
If in doubt, say N.
config SC520_CPUFREQ
tristate "AMD Elan SC520"
select CPU_FREQ_TABLE
depends on X86_ELAN
---help---
This adds the CPUFreq driver for AMD Elan SC520 processor.
For details, take a look at <file:Documentation/cpu-freq/>.
If in doubt, say N.
config X86_POWERNOW_K6
tristate "AMD Mobile K6-2/K6-3 PowerNow!"
select CPU_FREQ_TABLE

View File

@ -3,6 +3,7 @@ obj-$(CONFIG_X86_POWERNOW_K7) += powernow-k7.o
obj-$(CONFIG_X86_POWERNOW_K8) += powernow-k8.o
obj-$(CONFIG_X86_LONGHAUL) += longhaul.o
obj-$(CONFIG_ELAN_CPUFREQ) += elanfreq.o
obj-$(CONFIG_SC520_CPUFREQ) += sc520_freq.o
obj-$(CONFIG_X86_LONGRUN) += longrun.o
obj-$(CONFIG_X86_GX_SUSPMOD) += gx-suspmod.o
obj-$(CONFIG_X86_SPEEDSTEP_ICH) += speedstep-ich.o

View File

@ -29,6 +29,7 @@
#include <linux/cpufreq.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/pci.h>
#include <asm/msr.h>
#include <asm/timex.h>
@ -119,7 +120,13 @@ static int longhaul_get_cpu_mult(void)
static void do_powersaver(union msr_longhaul *longhaul,
unsigned int clock_ratio_index)
{
struct pci_dev *dev;
unsigned long flags;
unsigned int tmp_mask;
int version;
int i;
u16 pci_cmd;
u16 cmd_state[64];
switch (cpu_model) {
case CPU_EZRA_T:
@ -137,17 +144,58 @@ static void do_powersaver(union msr_longhaul *longhaul,
longhaul->bits.SoftBusRatio4 = (clock_ratio_index & 0x10) >> 4;
longhaul->bits.EnableSoftBusRatio = 1;
longhaul->bits.RevisionKey = 0;
local_irq_disable();
wrmsrl(MSR_VIA_LONGHAUL, longhaul->val);
preempt_disable();
local_irq_save(flags);
/*
* get current pci bus master state for all devices
* and clear bus master bit
*/
dev = NULL;
i = 0;
do {
dev = pci_get_device(PCI_ANY_ID, PCI_ANY_ID, dev);
if (dev != NULL) {
pci_read_config_word(dev, PCI_COMMAND, &pci_cmd);
cmd_state[i++] = pci_cmd;
pci_cmd &= ~PCI_COMMAND_MASTER;
pci_write_config_word(dev, PCI_COMMAND, pci_cmd);
}
} while (dev != NULL);
tmp_mask=inb(0x21); /* works on C3. save mask. */
outb(0xFE,0x21); /* TMR0 only */
outb(0xFF,0x80); /* delay */
local_irq_enable();
__hlt();
wrmsrl(MSR_VIA_LONGHAUL, longhaul->val);
__hlt();
local_irq_disable();
outb(tmp_mask,0x21); /* restore mask */
/* restore pci bus master state for all devices */
dev = NULL;
i = 0;
do {
dev = pci_get_device(PCI_ANY_ID, PCI_ANY_ID, dev);
if (dev != NULL) {
pci_cmd = cmd_state[i++];
pci_write_config_byte(dev, PCI_COMMAND, pci_cmd);
}
} while (dev != NULL);
local_irq_restore(flags);
preempt_enable();
/* disable bus ratio bit */
rdmsrl(MSR_VIA_LONGHAUL, longhaul->val);
longhaul->bits.EnableSoftBusRatio = 0;
longhaul->bits.RevisionKey = version;
local_irq_disable();
wrmsrl(MSR_VIA_LONGHAUL, longhaul->val);
local_irq_enable();
}
/**
@ -578,7 +626,7 @@ static int __init longhaul_cpu_init(struct cpufreq_policy *policy)
longhaul_setup_voltagescaling();
policy->governor = CPUFREQ_DEFAULT_GOVERNOR;
policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
policy->cpuinfo.transition_latency = 200000; /* nsec */
policy->cur = calc_speed(longhaul_get_cpu_mult());
ret = cpufreq_frequency_table_cpuinfo(policy, longhaul_table);

View File

@ -23,6 +23,7 @@
#include <linux/dmi.h>
#include <asm/msr.h>
#include <asm/timer.h>
#include <asm/timex.h>
#include <asm/io.h>
#include <asm/system.h>
@ -586,13 +587,17 @@ static int __init powernow_cpu_init (struct cpufreq_policy *policy)
rdmsrl (MSR_K7_FID_VID_STATUS, fidvidstatus.val);
/* A K7 with powernow technology is set to max frequency by BIOS */
fsb = (10 * cpu_khz) / fid_codes[fidvidstatus.bits.MFID];
/* recalibrate cpu_khz */
result = recalibrate_cpu_khz();
if (result)
return result;
fsb = (10 * cpu_khz) / fid_codes[fidvidstatus.bits.CFID];
if (!fsb) {
printk(KERN_WARNING PFX "can not determine bus frequency\n");
return -EINVAL;
}
dprintk("FSB: %3d.%03d MHz\n", fsb/1000, fsb%1000);
dprintk("FSB: %3dMHz\n", fsb/1000);
if (dmi_check_system(powernow_dmi_table) || acpi_force) {
printk (KERN_INFO PFX "PSB/PST known to be broken. Trying ACPI instead\n");

View File

@ -4,7 +4,7 @@
* GNU general public license version 2. See "COPYING" or
* http://www.gnu.org/licenses/gpl.html
*
* Support : paul.devriendt@amd.com
* Support : mark.langsdorf@amd.com
*
* Based on the powernow-k7.c module written by Dave Jones.
* (C) 2003 Dave Jones <davej@codemonkey.org.uk> on behalf of SuSE Labs
@ -15,12 +15,13 @@
*
* Valuable input gratefully received from Dave Jones, Pavel Machek,
* Dominik Brodowski, and others.
* Originally developed by Paul Devriendt.
* Processor information obtained from Chapter 9 (Power and Thermal Management)
* of the "BIOS and Kernel Developer's Guide for the AMD Athlon 64 and AMD
* Opteron Processors" available for download from www.amd.com
*
* Tables for specific CPUs can be infrerred from
* http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/30430.pdf
* http://www.amd.com/us-en/assets/content_type/white_papers_and_tech_docs/30430.pdf
*/
#include <linux/kernel.h>
@ -30,6 +31,7 @@
#include <linux/cpufreq.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/cpumask.h>
#include <asm/msr.h>
#include <asm/io.h>
@ -42,7 +44,7 @@
#define PFX "powernow-k8: "
#define BFX PFX "BIOS error: "
#define VERSION "version 1.00.09e"
#define VERSION "version 1.40.2"
#include "powernow-k8.h"
/* serialize freq changes */
@ -50,6 +52,10 @@ static DECLARE_MUTEX(fidvid_sem);
static struct powernow_k8_data *powernow_data[NR_CPUS];
#ifndef CONFIG_SMP
static cpumask_t cpu_core_map[1];
#endif
/* Return a frequency in MHz, given an input fid */
static u32 find_freq_from_fid(u32 fid)
{
@ -274,11 +280,18 @@ static int core_voltage_pre_transition(struct powernow_k8_data *data, u32 reqvid
{
u32 rvosteps = data->rvo;
u32 savefid = data->currfid;
u32 maxvid, lo;
dprintk("ph1 (cpu%d): start, currfid 0x%x, currvid 0x%x, reqvid 0x%x, rvo 0x%x\n",
smp_processor_id(),
data->currfid, data->currvid, reqvid, data->rvo);
rdmsr(MSR_FIDVID_STATUS, lo, maxvid);
maxvid = 0x1f & (maxvid >> 16);
dprintk("ph1 maxvid=0x%x\n", maxvid);
if (reqvid < maxvid) /* lower numbers are higher voltages */
reqvid = maxvid;
while (data->currvid > reqvid) {
dprintk("ph1: curr 0x%x, req vid 0x%x\n",
data->currvid, reqvid);
@ -286,8 +299,8 @@ static int core_voltage_pre_transition(struct powernow_k8_data *data, u32 reqvid
return 1;
}
while ((rvosteps > 0) && ((data->rvo + data->currvid) > reqvid)) {
if (data->currvid == 0) {
while ((rvosteps > 0) && ((data->rvo + data->currvid) > reqvid)) {
if (data->currvid == maxvid) {
rvosteps = 0;
} else {
dprintk("ph1: changing vid for rvo, req 0x%x\n",
@ -671,7 +684,7 @@ static int find_psb_table(struct powernow_k8_data *data)
* BIOS and Kernel Developer's Guide, which is available on
* www.amd.com
*/
printk(KERN_ERR PFX "BIOS error - no PSB\n");
printk(KERN_INFO PFX "BIOS error - no PSB or ACPI _PSS objects\n");
return -ENODEV;
}
@ -695,7 +708,7 @@ static int powernow_k8_cpu_init_acpi(struct powernow_k8_data *data)
struct cpufreq_frequency_table *powernow_table;
if (acpi_processor_register_performance(&data->acpi_data, data->cpu)) {
dprintk("register performance failed\n");
dprintk("register performance failed: bad ACPI data\n");
return -EIO;
}
@ -746,22 +759,23 @@ static int powernow_k8_cpu_init_acpi(struct powernow_k8_data *data)
continue;
}
if (fid < HI_FID_TABLE_BOTTOM) {
if (cntlofreq) {
/* if both entries are the same, ignore this
* one...
*/
if ((powernow_table[i].frequency != powernow_table[cntlofreq].frequency) ||
(powernow_table[i].index != powernow_table[cntlofreq].index)) {
printk(KERN_ERR PFX "Too many lo freq table entries\n");
goto err_out_mem;
}
dprintk("double low frequency table entry, ignoring it.\n");
powernow_table[i].frequency = CPUFREQ_ENTRY_INVALID;
continue;
} else
cntlofreq = i;
/* verify only 1 entry from the lo frequency table */
if (fid < HI_FID_TABLE_BOTTOM) {
if (cntlofreq) {
/* if both entries are the same, ignore this
* one...
*/
if ((powernow_table[i].frequency != powernow_table[cntlofreq].frequency) ||
(powernow_table[i].index != powernow_table[cntlofreq].index)) {
printk(KERN_ERR PFX "Too many lo freq table entries\n");
goto err_out_mem;
}
dprintk("double low frequency table entry, ignoring it.\n");
powernow_table[i].frequency = CPUFREQ_ENTRY_INVALID;
continue;
} else
cntlofreq = i;
}
if (powernow_table[i].frequency != (data->acpi_data.states[i].core_frequency * 1000)) {
@ -816,7 +830,7 @@ static int transition_frequency(struct powernow_k8_data *data, unsigned int inde
{
u32 fid;
u32 vid;
int res;
int res, i;
struct cpufreq_freqs freqs;
dprintk("cpu %d transition to index %u\n", smp_processor_id(), index);
@ -841,7 +855,8 @@ static int transition_frequency(struct powernow_k8_data *data, unsigned int inde
}
if ((fid < HI_FID_TABLE_BOTTOM) && (data->currfid < HI_FID_TABLE_BOTTOM)) {
printk("ignoring illegal change in lo freq table-%x to 0x%x\n",
printk(KERN_ERR PFX
"ignoring illegal change in lo freq table-%x to 0x%x\n",
data->currfid, fid);
return 1;
}
@ -850,18 +865,20 @@ static int transition_frequency(struct powernow_k8_data *data, unsigned int inde
smp_processor_id(), fid, vid);
freqs.cpu = data->cpu;
freqs.old = find_khz_freq_from_fid(data->currfid);
freqs.new = find_khz_freq_from_fid(fid);
cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
for_each_cpu_mask(i, cpu_core_map[data->cpu]) {
freqs.cpu = i;
cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
}
down(&fidvid_sem);
res = transition_fid_vid(data, fid, vid);
up(&fidvid_sem);
freqs.new = find_khz_freq_from_fid(data->currfid);
cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
for_each_cpu_mask(i, cpu_core_map[data->cpu]) {
freqs.cpu = i;
cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
}
return res;
}
@ -874,6 +891,7 @@ static int powernowk8_target(struct cpufreq_policy *pol, unsigned targfreq, unsi
u32 checkvid = data->currvid;
unsigned int newstate;
int ret = -EIO;
int i;
/* only run on specific CPU from here on */
oldmask = current->cpus_allowed;
@ -902,22 +920,41 @@ static int powernowk8_target(struct cpufreq_policy *pol, unsigned targfreq, unsi
data->currfid, data->currvid);
if ((checkvid != data->currvid) || (checkfid != data->currfid)) {
printk(KERN_ERR PFX
"error - out of sync, fid 0x%x 0x%x, vid 0x%x 0x%x\n",
checkfid, data->currfid, checkvid, data->currvid);
printk(KERN_INFO PFX
"error - out of sync, fix 0x%x 0x%x, vid 0x%x 0x%x\n",
checkfid, data->currfid, checkvid, data->currvid);
}
if (cpufreq_frequency_table_target(pol, data->powernow_table, targfreq, relation, &newstate))
goto err_out;
down(&fidvid_sem);
for_each_cpu_mask(i, cpu_core_map[pol->cpu]) {
/* make sure the sibling is initialized */
if (!powernow_data[i]) {
ret = 0;
up(&fidvid_sem);
goto err_out;
}
}
powernow_k8_acpi_pst_values(data, newstate);
if (transition_frequency(data, newstate)) {
printk(KERN_ERR PFX "transition frequency failed\n");
ret = 1;
up(&fidvid_sem);
goto err_out;
}
/* Update all the fid/vids of our siblings */
for_each_cpu_mask(i, cpu_core_map[pol->cpu]) {
powernow_data[i]->currvid = data->currvid;
powernow_data[i]->currfid = data->currfid;
}
up(&fidvid_sem);
pol->cur = find_khz_freq_from_fid(data->currfid);
ret = 0;
@ -962,7 +999,7 @@ static int __init powernowk8_cpu_init(struct cpufreq_policy *pol)
*/
if ((num_online_cpus() != 1) || (num_possible_cpus() != 1)) {
printk(KERN_INFO PFX "MP systems not supported by PSB BIOS structure\n");
printk(KERN_ERR PFX "MP systems not supported by PSB BIOS structure\n");
kfree(data);
return -ENODEV;
}
@ -1003,6 +1040,7 @@ static int __init powernowk8_cpu_init(struct cpufreq_policy *pol)
schedule();
pol->governor = CPUFREQ_DEFAULT_GOVERNOR;
pol->cpus = cpu_core_map[pol->cpu];
/* Take a crude guess here.
* That guess was in microseconds, so multiply with 1000 */
@ -1069,7 +1107,7 @@ static unsigned int powernowk8_get (unsigned int cpu)
return 0;
}
preempt_disable();
if (query_current_values_with_pending_wait(data))
goto out;
@ -1127,9 +1165,10 @@ static void __exit powernowk8_exit(void)
cpufreq_unregister_driver(&cpufreq_amd64_driver);
}
MODULE_AUTHOR("Paul Devriendt <paul.devriendt@amd.com>");
MODULE_AUTHOR("Paul Devriendt <paul.devriendt@amd.com> and Mark Langsdorf <mark.langsdorf@amd.com.");
MODULE_DESCRIPTION("AMD Athlon 64 and Opteron processor frequency driver.");
MODULE_LICENSE("GPL");
late_initcall(powernowk8_init);
module_exit(powernowk8_exit);

View File

@ -174,3 +174,18 @@ static int core_voltage_post_transition(struct powernow_k8_data *data, u32 reqvi
static int core_frequency_transition(struct powernow_k8_data *data, u32 reqfid);
static void powernow_k8_acpi_pst_values(struct powernow_k8_data *data, unsigned int index);
#ifndef for_each_cpu_mask
#define for_each_cpu_mask(i,mask) for (i=0;i<1;i++)
#endif
#ifdef CONFIG_SMP
static inline void define_siblings(int cpu, cpumask_t cpu_sharedcore_mask[])
{
}
#else
static inline void define_siblings(int cpu, cpumask_t cpu_sharedcore_mask[])
{
cpu_set(0, cpu_sharedcore_mask[0]);
}
#endif

View File

@ -0,0 +1,186 @@
/*
* sc520_freq.c: cpufreq driver for the AMD Elan sc520
*
* Copyright (C) 2005 Sean Young <sean@mess.org>
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*
* Based on elanfreq.c
*
* 2005-03-30: - initial revision
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/delay.h>
#include <linux/cpufreq.h>
#include <asm/msr.h>
#include <asm/timex.h>
#include <asm/io.h>
#define MMCR_BASE 0xfffef000 /* The default base address */
#define OFFS_CPUCTL 0x2 /* CPU Control Register */
static __u8 __iomem *cpuctl;
#define dprintk(msg...) cpufreq_debug_printk(CPUFREQ_DEBUG_DRIVER, "sc520_freq", msg)
static struct cpufreq_frequency_table sc520_freq_table[] = {
{0x01, 100000},
{0x02, 133000},
{0, CPUFREQ_TABLE_END},
};
static unsigned int sc520_freq_get_cpu_frequency(unsigned int cpu)
{
u8 clockspeed_reg = *cpuctl;
switch (clockspeed_reg & 0x03) {
default:
printk(KERN_ERR "sc520_freq: error: cpuctl register has unexpected value %02x\n", clockspeed_reg);
case 0x01:
return 100000;
case 0x02:
return 133000;
}
}
static void sc520_freq_set_cpu_state (unsigned int state)
{
struct cpufreq_freqs freqs;
u8 clockspeed_reg;
freqs.old = sc520_freq_get_cpu_frequency(0);
freqs.new = sc520_freq_table[state].frequency;
freqs.cpu = 0; /* AMD Elan is UP */
cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
dprintk("attempting to set frequency to %i kHz\n",
sc520_freq_table[state].frequency);
local_irq_disable();
clockspeed_reg = *cpuctl & ~0x03;
*cpuctl = clockspeed_reg | sc520_freq_table[state].index;
local_irq_enable();
cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
};
static int sc520_freq_verify (struct cpufreq_policy *policy)
{
return cpufreq_frequency_table_verify(policy, &sc520_freq_table[0]);
}
static int sc520_freq_target (struct cpufreq_policy *policy,
unsigned int target_freq,
unsigned int relation)
{
unsigned int newstate = 0;
if (cpufreq_frequency_table_target(policy, sc520_freq_table, target_freq, relation, &newstate))
return -EINVAL;
sc520_freq_set_cpu_state(newstate);
return 0;
}
/*
* Module init and exit code
*/
static int sc520_freq_cpu_init(struct cpufreq_policy *policy)
{
struct cpuinfo_x86 *c = cpu_data;
int result;
/* capability check */
if (c->x86_vendor != X86_VENDOR_AMD ||
c->x86 != 4 || c->x86_model != 9)
return -ENODEV;
/* cpuinfo and default policy values */
policy->governor = CPUFREQ_DEFAULT_GOVERNOR;
policy->cpuinfo.transition_latency = 1000000; /* 1ms */
policy->cur = sc520_freq_get_cpu_frequency(0);
result = cpufreq_frequency_table_cpuinfo(policy, sc520_freq_table);
if (result)
return (result);
cpufreq_frequency_table_get_attr(sc520_freq_table, policy->cpu);
return 0;
}
static int sc520_freq_cpu_exit(struct cpufreq_policy *policy)
{
cpufreq_frequency_table_put_attr(policy->cpu);
return 0;
}
static struct freq_attr* sc520_freq_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
NULL,
};
static struct cpufreq_driver sc520_freq_driver = {
.get = sc520_freq_get_cpu_frequency,
.verify = sc520_freq_verify,
.target = sc520_freq_target,
.init = sc520_freq_cpu_init,
.exit = sc520_freq_cpu_exit,
.name = "sc520_freq",
.owner = THIS_MODULE,
.attr = sc520_freq_attr,
};
static int __init sc520_freq_init(void)
{
struct cpuinfo_x86 *c = cpu_data;
/* Test if we have the right hardware */
if(c->x86_vendor != X86_VENDOR_AMD ||
c->x86 != 4 || c->x86_model != 9) {
dprintk("no Elan SC520 processor found!\n");
return -ENODEV;
}
cpuctl = ioremap((unsigned long)(MMCR_BASE + OFFS_CPUCTL), 1);
if(!cpuctl) {
printk(KERN_ERR "sc520_freq: error: failed to remap memory\n");
return -ENOMEM;
}
return cpufreq_register_driver(&sc520_freq_driver);
}
static void __exit sc520_freq_exit(void)
{
cpufreq_unregister_driver(&sc520_freq_driver);
iounmap(cpuctl);
}
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Sean Young <sean@mess.org>");
MODULE_DESCRIPTION("cpufreq driver for AMD's Elan sc520 CPU");
module_init(sc520_freq_init);
module_exit(sc520_freq_exit);

View File

@ -54,6 +54,8 @@ enum {
CPU_DOTHAN_A1,
CPU_DOTHAN_A2,
CPU_DOTHAN_B0,
CPU_MP4HT_D0,
CPU_MP4HT_E0,
};
static const struct cpu_id cpu_ids[] = {
@ -61,6 +63,8 @@ static const struct cpu_id cpu_ids[] = {
[CPU_DOTHAN_A1] = { 6, 13, 1 },
[CPU_DOTHAN_A2] = { 6, 13, 2 },
[CPU_DOTHAN_B0] = { 6, 13, 6 },
[CPU_MP4HT_D0] = {15, 3, 4 },
[CPU_MP4HT_E0] = {15, 4, 1 },
};
#define N_IDS (sizeof(cpu_ids)/sizeof(cpu_ids[0]))
@ -226,6 +230,8 @@ static struct cpu_model models[] =
{ &cpu_ids[CPU_DOTHAN_A1], NULL, 0, NULL },
{ &cpu_ids[CPU_DOTHAN_A2], NULL, 0, NULL },
{ &cpu_ids[CPU_DOTHAN_B0], NULL, 0, NULL },
{ &cpu_ids[CPU_MP4HT_D0], NULL, 0, NULL },
{ &cpu_ids[CPU_MP4HT_E0], NULL, 0, NULL },
{ NULL, }
};

View File

@ -336,7 +336,7 @@ unsigned int speedstep_get_freqs(unsigned int processor,
if (!prev_speed)
return -EIO;
dprintk("previous seped is %u\n", prev_speed);
dprintk("previous speed is %u\n", prev_speed);
local_irq_save(flags);
@ -348,7 +348,7 @@ unsigned int speedstep_get_freqs(unsigned int processor,
goto out;
}
dprintk("low seped is %u\n", *low_speed);
dprintk("low speed is %u\n", *low_speed);
/* switch to high state */
set_state(SPEEDSTEP_HIGH);
@ -358,7 +358,7 @@ unsigned int speedstep_get_freqs(unsigned int processor,
goto out;
}
dprintk("high seped is %u\n", *high_speed);
dprintk("high speed is %u\n", *high_speed);
if (*low_speed == *high_speed) {
ret = -ENODEV;

View File

@ -357,6 +357,9 @@ static int __init speedstep_init(void)
case SPEEDSTEP_PROCESSOR_PIII_C:
case SPEEDSTEP_PROCESSOR_PIII_C_EARLY:
break;
case SPEEDSTEP_PROCESSOR_P4M:
printk(KERN_INFO "speedstep-smi: you're trying to use this cpufreq driver on a Pentium 4-based CPU. Most likely it will not work.\n");
break;
default:
speedstep_processor = 0;
}

View File

@ -118,7 +118,7 @@ struct _cpuid4_info {
};
#define MAX_CACHE_LEAVES 4
static unsigned short __devinitdata num_cache_leaves;
static unsigned short num_cache_leaves;
static int __devinit cpuid4_cache_lookup(int index, struct _cpuid4_info *this_leaf)
{

View File

@ -1502,11 +1502,13 @@ void __init setup_arch(char **cmdline_p)
if (efi_enabled)
efi_map_memmap();
#ifdef CONFIG_ACPI_BOOT
/*
* Parse the ACPI tables for possible boot-time SMP configuration.
*/
acpi_boot_table_init();
acpi_boot_init();
#endif
#ifdef CONFIG_X86_LOCAL_APIC
if (smp_found_config)

View File

@ -1074,8 +1074,10 @@ static void __init smp_boot_cpus(unsigned int max_cpus)
cpu_set(cpu, cpu_sibling_map[cpu]);
}
if (siblings != smp_num_siblings)
if (siblings != smp_num_siblings) {
printk(KERN_WARNING "WARNING: %d siblings found for CPU%d, should be %d\n", siblings, cpu, smp_num_siblings);
smp_num_siblings = siblings;
}
if (c->x86_num_cores > 1) {
for (i = 0; i < NR_CPUS; i++) {

View File

@ -6,6 +6,7 @@
#include <linux/timex.h>
#include <linux/errno.h>
#include <linux/jiffies.h>
#include <linux/module.h>
#include <asm/io.h>
#include <asm/timer.h>
@ -24,7 +25,7 @@
#define CALIBRATE_TIME (5 * 1000020/HZ)
unsigned long __init calibrate_tsc(void)
unsigned long calibrate_tsc(void)
{
mach_prepare_counter();
@ -139,7 +140,7 @@ bad_calibration:
#endif
/* calculate cpu_khz */
void __init init_cpu_khz(void)
void init_cpu_khz(void)
{
if (cpu_has_tsc) {
unsigned long tsc_quotient = calibrate_tsc();
@ -158,3 +159,4 @@ void __init init_cpu_khz(void)
}
}
}

View File

@ -320,6 +320,26 @@ core_initcall(cpufreq_tsc);
static inline void cpufreq_delayed_get(void) { return; }
#endif
int recalibrate_cpu_khz(void)
{
#ifndef CONFIG_SMP
unsigned long cpu_khz_old = cpu_khz;
if (cpu_has_tsc) {
init_cpu_khz();
cpu_data[0].loops_per_jiffy =
cpufreq_scale(cpu_data[0].loops_per_jiffy,
cpu_khz_old,
cpu_khz);
return 0;
} else
return -ENODEV;
#else
return -ENODEV;
#endif
}
EXPORT_SYMBOL(recalibrate_cpu_khz);
static void mark_offset_tsc(void)
{
unsigned long lost,delay;

View File

@ -2427,7 +2427,7 @@ sys32_epoll_wait(int epfd, struct epoll_event32 __user * events, int maxevents,
{
struct epoll_event *events64 = NULL;
mm_segment_t old_fs = get_fs();
int error, numevents, size;
int numevents, size;
int evt_idx;
int do_free_pages = 0;

View File

@ -1182,7 +1182,7 @@ ENTRY(notify_resume_user)
;;
(pNonSys) mov out2=0 // out2==0 => not a syscall
.fframe 16
.spillpsp ar.unat, 16 // (note that offset is relative to psp+0x10!)
.spillsp ar.unat, 16
st8 [sp]=r9,-16 // allocate space for ar.unat and save it
st8 [out1]=loc1,-8 // save ar.pfs, out1=&sigscratch
.body
@ -1208,7 +1208,7 @@ GLOBAL_ENTRY(sys_rt_sigsuspend)
adds out2=8,sp // out2=&sigscratch->ar_pfs
;;
.fframe 16
.spillpsp ar.unat, 16 // (note that offset is relative to psp+0x10!)
.spillsp ar.unat, 16
st8 [sp]=r9,-16 // allocate space for ar.unat and save it
st8 [out2]=loc1,-8 // save ar.pfs, out2=&sigscratch
.body

View File

@ -1103,8 +1103,6 @@ ia64_mca_cpe_int_caller(int cpe_irq, void *arg, struct pt_regs *ptregs)
return IRQ_HANDLED;
}
#endif /* CONFIG_ACPI */
/*
* ia64_mca_cpe_poll
*
@ -1122,6 +1120,8 @@ ia64_mca_cpe_poll (unsigned long dummy)
platform_send_ipi(first_cpu(cpu_online_map), IA64_CPEP_VECTOR, IA64_IPI_DM_INT, 0);
}
#endif /* CONFIG_ACPI */
/*
* C portion of the OS INIT handler
*
@ -1390,8 +1390,7 @@ ia64_mca_init(void)
register_percpu_irq(IA64_MCA_WAKEUP_VECTOR, &mca_wkup_irqaction);
#ifdef CONFIG_ACPI
/* Setup the CPEI/P vector and handler */
cpe_vector = acpi_request_vector(ACPI_INTERRUPT_CPEI);
/* Setup the CPEI/P handler */
register_percpu_irq(IA64_CPEP_VECTOR, &mca_cpep_irqaction);
#endif
@ -1436,6 +1435,7 @@ ia64_mca_late_init(void)
#ifdef CONFIG_ACPI
/* Setup the CPEI/P vector and handler */
cpe_vector = acpi_request_vector(ACPI_INTERRUPT_CPEI);
init_timer(&cpe_poll_timer);
cpe_poll_timer.function = ia64_mca_cpe_poll;

View File

@ -41,7 +41,7 @@
(pKStk) addl r3=THIS_CPU(ia64_mca_data),r3;; \
(pKStk) ld8 r3 = [r3];; \
(pKStk) addl r3=IA64_MCA_CPU_INIT_STACK_OFFSET,r3;; \
(pKStk) addl sp=IA64_STK_OFFSET-IA64_PT_REGS_SIZE,r3; \
(pKStk) addl r1=IA64_STK_OFFSET-IA64_PT_REGS_SIZE,r3; \
(pUStk) mov ar.rsc=0; /* set enforced lazy mode, pl 0, little-endian, loadrs=0 */ \
(pUStk) addl r22=IA64_RBS_OFFSET,r1; /* compute base of register backing store */ \
;; \
@ -50,7 +50,6 @@
(pUStk) mov r23=ar.bspstore; /* save ar.bspstore */ \
(pUStk) dep r22=-1,r22,61,3; /* compute kernel virtual addr of RBS */ \
;; \
(pKStk) addl r1=-IA64_PT_REGS_SIZE,r1; /* if in kernel mode, use sp (r12) */ \
(pUStk) mov ar.bspstore=r22; /* switch to kernel RBS */ \
;; \
(pUStk) mov r18=ar.bsp; \

View File

@ -11,7 +11,7 @@
* Version Perfmon-2.x is a rewrite of perfmon-1.x
* by Stephane Eranian, Hewlett Packard Co.
*
* Copyright (C) 1999-2003, 2005 Hewlett Packard Co
* Copyright (C) 1999-2005 Hewlett Packard Co
* Stephane Eranian <eranian@hpl.hp.com>
* David Mosberger-Tang <davidm@hpl.hp.com>
*
@ -497,6 +497,9 @@ typedef struct {
static pfm_stats_t pfm_stats[NR_CPUS];
static pfm_session_t pfm_sessions; /* global sessions information */
static spinlock_t pfm_alt_install_check = SPIN_LOCK_UNLOCKED;
static pfm_intr_handler_desc_t *pfm_alt_intr_handler;
static struct proc_dir_entry *perfmon_dir;
static pfm_uuid_t pfm_null_uuid = {0,};
@ -606,6 +609,7 @@ DEFINE_PER_CPU(unsigned long, pfm_syst_info);
DEFINE_PER_CPU(struct task_struct *, pmu_owner);
DEFINE_PER_CPU(pfm_context_t *, pmu_ctx);
DEFINE_PER_CPU(unsigned long, pmu_activation_number);
EXPORT_PER_CPU_SYMBOL_GPL(pfm_syst_info);
/* forward declaration */
@ -1325,7 +1329,7 @@ pfm_reserve_session(struct task_struct *task, int is_syswide, unsigned int cpu)
error_conflict:
DPRINT(("system wide not possible, conflicting session [%d] on CPU%d\n",
pfm_sessions.pfs_sys_session[cpu]->pid,
smp_processor_id()));
cpu));
abort:
UNLOCK_PFS(flags);
@ -5555,26 +5559,32 @@ pfm_interrupt_handler(int irq, void *arg, struct pt_regs *regs)
int ret;
this_cpu = get_cpu();
min = pfm_stats[this_cpu].pfm_ovfl_intr_cycles_min;
max = pfm_stats[this_cpu].pfm_ovfl_intr_cycles_max;
if (likely(!pfm_alt_intr_handler)) {
min = pfm_stats[this_cpu].pfm_ovfl_intr_cycles_min;
max = pfm_stats[this_cpu].pfm_ovfl_intr_cycles_max;
start_cycles = ia64_get_itc();
start_cycles = ia64_get_itc();
ret = pfm_do_interrupt_handler(irq, arg, regs);
ret = pfm_do_interrupt_handler(irq, arg, regs);
total_cycles = ia64_get_itc();
total_cycles = ia64_get_itc();
/*
* don't measure spurious interrupts
*/
if (likely(ret == 0)) {
total_cycles -= start_cycles;
/*
* don't measure spurious interrupts
*/
if (likely(ret == 0)) {
total_cycles -= start_cycles;
if (total_cycles < min) pfm_stats[this_cpu].pfm_ovfl_intr_cycles_min = total_cycles;
if (total_cycles > max) pfm_stats[this_cpu].pfm_ovfl_intr_cycles_max = total_cycles;
if (total_cycles < min) pfm_stats[this_cpu].pfm_ovfl_intr_cycles_min = total_cycles;
if (total_cycles > max) pfm_stats[this_cpu].pfm_ovfl_intr_cycles_max = total_cycles;
pfm_stats[this_cpu].pfm_ovfl_intr_cycles += total_cycles;
pfm_stats[this_cpu].pfm_ovfl_intr_cycles += total_cycles;
}
}
else {
(*pfm_alt_intr_handler->handler)(irq, arg, regs);
}
put_cpu_no_resched();
return IRQ_HANDLED;
}
@ -6425,6 +6435,141 @@ static struct irqaction perfmon_irqaction = {
.name = "perfmon"
};
static void
pfm_alt_save_pmu_state(void *data)
{
struct pt_regs *regs;
regs = ia64_task_regs(current);
DPRINT(("called\n"));
/*
* should not be necessary but
* let's take not risk
*/
pfm_clear_psr_up();
pfm_clear_psr_pp();
ia64_psr(regs)->pp = 0;
/*
* This call is required
* May cause a spurious interrupt on some processors
*/
pfm_freeze_pmu();
ia64_srlz_d();
}
void
pfm_alt_restore_pmu_state(void *data)
{
struct pt_regs *regs;
regs = ia64_task_regs(current);
DPRINT(("called\n"));
/*
* put PMU back in state expected
* by perfmon
*/
pfm_clear_psr_up();
pfm_clear_psr_pp();
ia64_psr(regs)->pp = 0;
/*
* perfmon runs with PMU unfrozen at all times
*/
pfm_unfreeze_pmu();
ia64_srlz_d();
}
int
pfm_install_alt_pmu_interrupt(pfm_intr_handler_desc_t *hdl)
{
int ret, i;
int reserve_cpu;
/* some sanity checks */
if (hdl == NULL || hdl->handler == NULL) return -EINVAL;
/* do the easy test first */
if (pfm_alt_intr_handler) return -EBUSY;
/* one at a time in the install or remove, just fail the others */
if (!spin_trylock(&pfm_alt_install_check)) {
return -EBUSY;
}
/* reserve our session */
for_each_online_cpu(reserve_cpu) {
ret = pfm_reserve_session(NULL, 1, reserve_cpu);
if (ret) goto cleanup_reserve;
}
/* save the current system wide pmu states */
ret = on_each_cpu(pfm_alt_save_pmu_state, NULL, 0, 1);
if (ret) {
DPRINT(("on_each_cpu() failed: %d\n", ret));
goto cleanup_reserve;
}
/* officially change to the alternate interrupt handler */
pfm_alt_intr_handler = hdl;
spin_unlock(&pfm_alt_install_check);
return 0;
cleanup_reserve:
for_each_online_cpu(i) {
/* don't unreserve more than we reserved */
if (i >= reserve_cpu) break;
pfm_unreserve_session(NULL, 1, i);
}
spin_unlock(&pfm_alt_install_check);
return ret;
}
EXPORT_SYMBOL_GPL(pfm_install_alt_pmu_interrupt);
int
pfm_remove_alt_pmu_interrupt(pfm_intr_handler_desc_t *hdl)
{
int i;
int ret;
if (hdl == NULL) return -EINVAL;
/* cannot remove someone else's handler! */
if (pfm_alt_intr_handler != hdl) return -EINVAL;
/* one at a time in the install or remove, just fail the others */
if (!spin_trylock(&pfm_alt_install_check)) {
return -EBUSY;
}
pfm_alt_intr_handler = NULL;
ret = on_each_cpu(pfm_alt_restore_pmu_state, NULL, 0, 1);
if (ret) {
DPRINT(("on_each_cpu() failed: %d\n", ret));
}
for_each_online_cpu(i) {
pfm_unreserve_session(NULL, 1, i);
}
spin_unlock(&pfm_alt_install_check);
return 0;
}
EXPORT_SYMBOL_GPL(pfm_remove_alt_pmu_interrupt);
/*
* perfmon initialization routine, called from the initcall() table
*/

View File

@ -692,16 +692,30 @@ convert_to_non_syscall (struct task_struct *child, struct pt_regs *pt,
unsigned long cfm)
{
struct unw_frame_info info, prev_info;
unsigned long ip, pr;
unsigned long ip, sp, pr;
unw_init_from_blocked_task(&info, child);
while (1) {
prev_info = info;
if (unw_unwind(&info) < 0)
return;
if (unw_get_rp(&info, &ip) < 0)
unw_get_sp(&info, &sp);
if ((long)((unsigned long)child + IA64_STK_OFFSET - sp)
< IA64_PT_REGS_SIZE) {
dprintk("ptrace.%s: ran off the top of the kernel "
"stack\n", __FUNCTION__);
return;
if (ip < FIXADDR_USER_END)
}
if (unw_get_pr (&prev_info, &pr) < 0) {
unw_get_rp(&prev_info, &ip);
dprintk("ptrace.%s: failed to read "
"predicate register (ip=0x%lx)\n",
__FUNCTION__, ip);
return;
}
if (unw_is_intr_frame(&info)
&& (pr & (1UL << PRED_USER_STACK)))
break;
}

View File

@ -624,7 +624,7 @@ static struct {
__u16 thread_id;
__u16 proc_fixed_addr;
__u8 valid;
}mt_info[NR_CPUS] __devinit;
} mt_info[NR_CPUS] __devinitdata;
#ifdef CONFIG_HOTPLUG_CPU
static inline void

View File

@ -182,13 +182,6 @@ do_mmap2 (unsigned long addr, unsigned long len, int prot, int flags, int fd, un
}
}
/*
* A zero mmap always succeeds in Linux, independent of whether or not the
* remaining arguments are valid.
*/
if (len == 0)
goto out;
/* Careful about overflows.. */
len = PAGE_ALIGN(len);
if (!len || len > TASK_SIZE) {

View File

@ -271,6 +271,8 @@ void __init sn_setup(char **cmdline_p)
int major = sn_sal_rev_major(), minor = sn_sal_rev_minor();
extern void sn_cpu_init(void);
ia64_sn_plat_set_error_handling_features();
/*
* If the generic code has enabled vga console support - lets
* get rid of it again. This is a kludge for the fact that ACPI

View File

@ -1143,12 +1143,12 @@ config PCI_QSPAN
config PCI_8260
bool
depends on PCI && 8260 && !8272
depends on PCI && 8260
default y
config 8260_PCI9
bool " Enable workaround for MPC826x erratum PCI 9"
depends on PCI_8260
depends on PCI_8260 && !ADS8272
default y
choice

View File

@ -22,7 +22,8 @@ targets += uImage
$(obj)/uImage: $(obj)/vmlinux.gz
$(Q)rm -f $@
$(call if_changed,uimage)
@echo ' Image: $@' $(if $(wildcard $@),'is ready','not made')
@echo -n ' Image: $@ '
@if [ -f $@ ]; then echo 'is ready' ; else echo 'not made'; fi
# Files generated that shall be removed upon make clean
clean-files := sImage vmapus vmlinux* miboot* zImage* uImage

View File

@ -1,7 +1,7 @@
#
# Automatically generated make config: don't edit
# Linux kernel version: 2.6.11-rc1
# Thu Jan 20 01:25:35 2005
# Linux kernel version: 2.6.12-rc4
# Tue May 17 11:56:01 2005
#
CONFIG_MMU=y
CONFIG_GENERIC_HARDIRQS=y
@ -11,6 +11,7 @@ CONFIG_HAVE_DEC_LOCK=y
CONFIG_PPC=y
CONFIG_PPC32=y
CONFIG_GENERIC_NVRAM=y
CONFIG_SCHED_NO_NO_OMIT_FRAME_POINTER=y
#
# Code maturity level options
@ -18,6 +19,7 @@ CONFIG_GENERIC_NVRAM=y
CONFIG_EXPERIMENTAL=y
CONFIG_CLEAN_COMPILE=y
CONFIG_BROKEN_ON_SMP=y
CONFIG_INIT_ENV_ARG_LIMIT=32
#
# General setup
@ -29,12 +31,14 @@ CONFIG_SYSVIPC=y
# CONFIG_BSD_PROCESS_ACCT is not set
CONFIG_SYSCTL=y
# CONFIG_AUDIT is not set
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_HOTPLUG is not set
CONFIG_KOBJECT_UEVENT=y
# CONFIG_IKCONFIG is not set
CONFIG_EMBEDDED=y
# CONFIG_KALLSYMS is not set
CONFIG_PRINTK=y
CONFIG_BUG=y
CONFIG_BASE_FULL=y
CONFIG_FUTEX=y
# CONFIG_EPOLL is not set
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
@ -44,6 +48,7 @@ CONFIG_CC_ALIGN_LABELS=0
CONFIG_CC_ALIGN_LOOPS=0
CONFIG_CC_ALIGN_JUMPS=0
# CONFIG_TINY_SHMEM is not set
CONFIG_BASE_SMALL=0
#
# Loadable module support
@ -62,10 +67,12 @@ CONFIG_CC_ALIGN_JUMPS=0
CONFIG_E500=y
CONFIG_BOOKE=y
CONFIG_FSL_BOOKE=y
# CONFIG_PHYS_64BIT is not set
CONFIG_SPE=y
CONFIG_MATH_EMULATION=y
# CONFIG_CPU_FREQ is not set
CONFIG_PPC_GEN550=y
# CONFIG_PM is not set
CONFIG_85xx=y
CONFIG_PPC_INDIRECT_PCI_BE=y
@ -76,6 +83,7 @@ CONFIG_PPC_INDIRECT_PCI_BE=y
CONFIG_MPC8555_CDS=y
# CONFIG_MPC8560_ADS is not set
# CONFIG_SBC8560 is not set
# CONFIG_STX_GP3 is not set
CONFIG_MPC8555=y
CONFIG_85xx_PCI2=y
@ -90,6 +98,7 @@ CONFIG_CPM2=y
CONFIG_BINFMT_ELF=y
# CONFIG_BINFMT_MISC is not set
# CONFIG_CMDLINE_BOOL is not set
CONFIG_ISA_DMA_API=y
#
# Bus options
@ -104,10 +113,6 @@ CONFIG_PCI_NAMES=y
#
# CONFIG_PCCARD is not set
#
# PC-card bridges
#
#
# Advanced setup
#
@ -180,7 +185,59 @@ CONFIG_IOSCHED_CFQ=y
#
# ATA/ATAPI/MFM/RLL support
#
# CONFIG_IDE is not set
CONFIG_IDE=y
CONFIG_BLK_DEV_IDE=y
#
# Please see Documentation/ide.txt for help/info on IDE drives
#
# CONFIG_BLK_DEV_IDE_SATA is not set
CONFIG_BLK_DEV_IDEDISK=y
# CONFIG_IDEDISK_MULTI_MODE is not set
# CONFIG_BLK_DEV_IDECD is not set
# CONFIG_BLK_DEV_IDETAPE is not set
# CONFIG_BLK_DEV_IDEFLOPPY is not set
# CONFIG_IDE_TASK_IOCTL is not set
#
# IDE chipset support/bugfixes
#
CONFIG_IDE_GENERIC=y
CONFIG_BLK_DEV_IDEPCI=y
CONFIG_IDEPCI_SHARE_IRQ=y
# CONFIG_BLK_DEV_OFFBOARD is not set
CONFIG_BLK_DEV_GENERIC=y
# CONFIG_BLK_DEV_OPTI621 is not set
# CONFIG_BLK_DEV_SL82C105 is not set
CONFIG_BLK_DEV_IDEDMA_PCI=y
# CONFIG_BLK_DEV_IDEDMA_FORCED is not set
CONFIG_IDEDMA_PCI_AUTO=y
# CONFIG_IDEDMA_ONLYDISK is not set
# CONFIG_BLK_DEV_AEC62XX is not set
# CONFIG_BLK_DEV_ALI15X3 is not set
# CONFIG_BLK_DEV_AMD74XX is not set
# CONFIG_BLK_DEV_CMD64X is not set
# CONFIG_BLK_DEV_TRIFLEX is not set
# CONFIG_BLK_DEV_CY82C693 is not set
# CONFIG_BLK_DEV_CS5520 is not set
# CONFIG_BLK_DEV_CS5530 is not set
# CONFIG_BLK_DEV_HPT34X is not set
# CONFIG_BLK_DEV_HPT366 is not set
# CONFIG_BLK_DEV_SC1200 is not set
# CONFIG_BLK_DEV_PIIX is not set
# CONFIG_BLK_DEV_NS87415 is not set
# CONFIG_BLK_DEV_PDC202XX_OLD is not set
# CONFIG_BLK_DEV_PDC202XX_NEW is not set
# CONFIG_BLK_DEV_SVWKS is not set
# CONFIG_BLK_DEV_SIIMAGE is not set
# CONFIG_BLK_DEV_SLC90E66 is not set
# CONFIG_BLK_DEV_TRM290 is not set
CONFIG_BLK_DEV_VIA82CXXX=y
# CONFIG_IDE_ARM is not set
CONFIG_BLK_DEV_IDEDMA=y
# CONFIG_IDEDMA_IVB is not set
CONFIG_IDEDMA_AUTO=y
# CONFIG_BLK_DEV_HD is not set
#
# SCSI device support
@ -220,7 +277,6 @@ CONFIG_NET=y
#
CONFIG_PACKET=y
# CONFIG_PACKET_MMAP is not set
# CONFIG_NETLINK_DEV is not set
CONFIG_UNIX=y
# CONFIG_NET_KEY is not set
CONFIG_INET=y
@ -369,14 +425,6 @@ CONFIG_INPUT=y
# CONFIG_INPUT_EVDEV is not set
# CONFIG_INPUT_EVBUG is not set
#
# Input I/O drivers
#
# CONFIG_GAMEPORT is not set
CONFIG_SOUND_GAMEPORT=y
# CONFIG_SERIO is not set
# CONFIG_SERIO_I8042 is not set
#
# Input Device Drivers
#
@ -386,6 +434,13 @@ CONFIG_SOUND_GAMEPORT=y
# CONFIG_INPUT_TOUCHSCREEN is not set
# CONFIG_INPUT_MISC is not set
#
# Hardware I/O ports
#
# CONFIG_SERIO is not set
# CONFIG_GAMEPORT is not set
CONFIG_SOUND_GAMEPORT=y
#
# Character devices
#
@ -406,6 +461,7 @@ CONFIG_SERIAL_8250_NR_UARTS=4
CONFIG_SERIAL_CORE=y
CONFIG_SERIAL_CORE_CONSOLE=y
# CONFIG_SERIAL_CPM is not set
# CONFIG_SERIAL_JSM is not set
CONFIG_UNIX98_PTYS=y
CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=256
@ -433,6 +489,11 @@ CONFIG_GEN_RTC=y
# CONFIG_DRM is not set
# CONFIG_RAW_DRIVER is not set
#
# TPM devices
#
# CONFIG_TCG_TPM is not set
#
# I2C support
#
@ -456,11 +517,11 @@ CONFIG_I2C_CHARDEV=y
# CONFIG_I2C_AMD8111 is not set
# CONFIG_I2C_I801 is not set
# CONFIG_I2C_I810 is not set
# CONFIG_I2C_PIIX4 is not set
# CONFIG_I2C_ISA is not set
CONFIG_I2C_MPC=y
# CONFIG_I2C_NFORCE2 is not set
# CONFIG_I2C_PARPORT_LIGHT is not set
# CONFIG_I2C_PIIX4 is not set
# CONFIG_I2C_PROSAVAGE is not set
# CONFIG_I2C_SAVAGE4 is not set
# CONFIG_SCx200_ACB is not set
@ -483,7 +544,9 @@ CONFIG_I2C_MPC=y
# CONFIG_SENSORS_ASB100 is not set
# CONFIG_SENSORS_DS1621 is not set
# CONFIG_SENSORS_FSCHER is not set
# CONFIG_SENSORS_FSCPOS is not set
# CONFIG_SENSORS_GL518SM is not set
# CONFIG_SENSORS_GL520SM is not set
# CONFIG_SENSORS_IT87 is not set
# CONFIG_SENSORS_LM63 is not set
# CONFIG_SENSORS_LM75 is not set
@ -494,9 +557,11 @@ CONFIG_I2C_MPC=y
# CONFIG_SENSORS_LM85 is not set
# CONFIG_SENSORS_LM87 is not set
# CONFIG_SENSORS_LM90 is not set
# CONFIG_SENSORS_LM92 is not set
# CONFIG_SENSORS_MAX1619 is not set
# CONFIG_SENSORS_PC87360 is not set
# CONFIG_SENSORS_SMSC47B397 is not set
# CONFIG_SENSORS_SIS5595 is not set
# CONFIG_SENSORS_SMSC47M1 is not set
# CONFIG_SENSORS_VIA686A is not set
# CONFIG_SENSORS_W83781D is not set
@ -506,10 +571,12 @@ CONFIG_I2C_MPC=y
#
# Other I2C Chip support
#
# CONFIG_SENSORS_DS1337 is not set
# CONFIG_SENSORS_EEPROM is not set
# CONFIG_SENSORS_PCF8574 is not set
# CONFIG_SENSORS_PCF8591 is not set
# CONFIG_SENSORS_RTC8564 is not set
# CONFIG_SENSORS_M41T00 is not set
# CONFIG_I2C_DEBUG_CORE is not set
# CONFIG_I2C_DEBUG_ALGO is not set
# CONFIG_I2C_DEBUG_BUS is not set
@ -538,7 +605,6 @@ CONFIG_I2C_MPC=y
# Graphics support
#
# CONFIG_FB is not set
# CONFIG_BACKLIGHT_LCD_SUPPORT is not set
#
# Sound
@ -548,13 +614,9 @@ CONFIG_I2C_MPC=y
#
# USB support
#
# CONFIG_USB is not set
CONFIG_USB_ARCH_HAS_HCD=y
CONFIG_USB_ARCH_HAS_OHCI=y
#
# NOTE: USB_STORAGE enables SCSI, and 'SCSI disk support' may also be needed; see USB_STORAGE Help for more information
#
# CONFIG_USB is not set
#
# USB Gadget Support
@ -585,6 +647,10 @@ CONFIG_JBD=y
CONFIG_FS_MBCACHE=y
# CONFIG_REISERFS_FS is not set
# CONFIG_JFS_FS is not set
#
# XFS support
#
# CONFIG_XFS_FS is not set
# CONFIG_MINIX_FS is not set
# CONFIG_ROMFS_FS is not set
@ -646,7 +712,6 @@ CONFIG_NFS_FS=y
# CONFIG_NFSD is not set
CONFIG_ROOT_NFS=y
CONFIG_LOCKD=y
# CONFIG_EXPORTFS is not set
CONFIG_SUNRPC=y
# CONFIG_RPCSEC_GSS_KRB5 is not set
# CONFIG_RPCSEC_GSS_SPKM3 is not set
@ -698,7 +763,9 @@ CONFIG_CRC32=y
#
# Kernel hacking
#
# CONFIG_PRINTK_TIME is not set
# CONFIG_DEBUG_KERNEL is not set
CONFIG_LOG_BUF_SHIFT=14
# CONFIG_KGDB_CONSOLE is not set
# CONFIG_SERIAL_TEXT_DEBUG is not set

View File

@ -232,7 +232,8 @@ skpinv: addi r6,r6,1 /* Increment */
tlbwe
/* 7. Jump to KERNELBASE mapping */
li r7,0
lis r7,MSR_KERNEL@h
ori r7,r7,MSR_KERNEL@l
bl 1f /* Find our address */
1: mflr r9
rlwimi r6,r9,0,20,31
@ -293,6 +294,18 @@ skpinv: addi r6,r6,1 /* Increment */
mtspr SPRN_HID0, r2
#endif
#if !defined(CONFIG_BDI_SWITCH)
/*
* The Abatron BDI JTAG debugger does not tolerate others
* mucking with the debug registers.
*/
lis r2,DBCR0_IDM@h
mtspr SPRN_DBCR0,r2
/* clear any residual debug events */
li r2,-1
mtspr SPRN_DBSR,r2
#endif
/*
* This is where the main kernel code starts.
*/

View File

@ -408,12 +408,7 @@ static int emulate_string_inst(struct pt_regs *regs, u32 instword)
/* Early out if we are an invalid form of lswx */
if ((instword & INST_STRING_MASK) == INST_LSWX)
if ((rA >= rT) || (NB_RB >= rT) || (rT == rA) || (rT == NB_RB))
return -EINVAL;
/* Early out if we are an invalid form of lswi */
if ((instword & INST_STRING_MASK) == INST_LSWI)
if ((rA >= rT) || (rT == rA))
if ((rT == rA) || (rT == NB_RB))
return -EINVAL;
EA = (rA == 0) ? 0 : regs->gpr[rA];

View File

@ -127,7 +127,6 @@ mpc834x_sys_map_io(void)
{
/* we steal the lowest ioremap addr for virt space */
io_block_mapping(VIRT_IMMRBAR, immrbar, 1024*1024, _PAGE_IO);
io_block_mapping(BCSR_VIRT_ADDR, BCSR_PHYS_ADDR, BCSR_SIZE, _PAGE_IO);
}
int

View File

@ -26,9 +26,14 @@
#define VIRT_IMMRBAR ((uint)0xfe000000)
#define BCSR_PHYS_ADDR ((uint)0xf8000000)
#define BCSR_VIRT_ADDR ((uint)0xfe100000)
#define BCSR_SIZE ((uint)(32 * 1024))
#define BCSR_MISC_REG2_OFF 0x07
#define BCSR_MISC_REG2_PORESET 0x01
#define BCSR_MISC_REG3_OFF 0x08
#define BCSR_MISC_REG3_CNFLOCK 0x80
#ifdef CONFIG_PCI
/* PCI interrupt controller */
#define PIRQA MPC83xx_IRQ_IRQ4

View File

@ -210,6 +210,9 @@ platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
#if defined(CONFIG_SERIAL_8250) && defined(CONFIG_SERIAL_TEXT_DEBUG)
ppc_md.progress = gen550_progress;
#endif /* CONFIG_SERIAL_8250 && CONFIG_SERIAL_TEXT_DEBUG */
#if defined(CONFIG_SERIAL_8250) && defined(CONFIG_KGDB)
ppc_md.early_serial_map = mpc85xx_early_serial_map;
#endif /* CONFIG_SERIAL_8250 && CONFIG_KGDB */
if (ppc_md.progress)
ppc_md.progress("mpc8540ads_init(): exit", 0);

View File

@ -44,6 +44,7 @@
#include <asm/machdep.h>
#include <asm/prom.h>
#include <asm/open_pic.h>
#include <asm/i8259.h>
#include <asm/bootinfo.h>
#include <asm/pci-bridge.h>
#include <asm/mpc85xx.h>
@ -181,6 +182,7 @@ void __init
mpc85xx_cds_init_IRQ(void)
{
bd_t *binfo = (bd_t *) __res;
int i;
/* Determine the Physical Address of the OpenPIC regs */
phys_addr_t OpenPIC_PAddr = binfo->bi_immr_base + MPC85xx_OPENPIC_OFFSET;
@ -198,6 +200,15 @@ mpc85xx_cds_init_IRQ(void)
*/
openpic_init(MPC85xx_OPENPIC_IRQ_OFFSET);
#ifdef CONFIG_PCI
openpic_hookup_cascade(PIRQ0A, "82c59 cascade", i8259_irq);
for (i = 0; i < NUM_8259_INTERRUPTS; i++)
irq_desc[i].handler = &i8259_pic;
i8259_init(0);
#endif
#ifdef CONFIG_CPM2
/* Setup CPM2 PIC */
cpm2_init_IRQ();
@ -231,7 +242,7 @@ mpc85xx_map_irq(struct pci_dev *dev, unsigned char idsel, unsigned char pin)
* interrupt on slot */
{
{ 0, 1, 2, 3 }, /* 16 - PMC */
{ 3, 0, 0, 0 }, /* 17 P2P (Tsi320) */
{ 0, 1, 2, 3 }, /* 17 P2P (Tsi320) */
{ 0, 1, 2, 3 }, /* 18 - Slot 1 */
{ 1, 2, 3, 0 }, /* 19 - Slot 2 */
{ 2, 3, 0, 1 }, /* 20 - Slot 3 */
@ -280,13 +291,135 @@ mpc85xx_exclude_device(u_char bus, u_char devfn)
return PCIBIOS_DEVICE_NOT_FOUND;
#endif
/* We explicitly do not go past the Tundra 320 Bridge */
if (bus == 1)
if ((bus == 1) && (PCI_SLOT(devfn) == ARCADIA_2ND_BRIDGE_IDSEL))
return PCIBIOS_DEVICE_NOT_FOUND;
if ((bus == 0) && (PCI_SLOT(devfn) == ARCADIA_2ND_BRIDGE_IDSEL))
return PCIBIOS_DEVICE_NOT_FOUND;
else
return PCIBIOS_SUCCESSFUL;
}
void __init
mpc85xx_cds_enable_via(struct pci_controller *hose)
{
u32 pci_class;
u16 vid, did;
early_read_config_dword(hose, 0, 0x88, PCI_CLASS_REVISION, &pci_class);
if ((pci_class >> 16) != PCI_CLASS_BRIDGE_PCI)
return;
/* Configure P2P so that we can reach bus 1 */
early_write_config_byte(hose, 0, 0x88, PCI_PRIMARY_BUS, 0);
early_write_config_byte(hose, 0, 0x88, PCI_SECONDARY_BUS, 1);
early_write_config_byte(hose, 0, 0x88, PCI_SUBORDINATE_BUS, 0xff);
early_read_config_word(hose, 1, 0x10, PCI_VENDOR_ID, &vid);
early_read_config_word(hose, 1, 0x10, PCI_DEVICE_ID, &did);
if ((vid != PCI_VENDOR_ID_VIA) ||
(did != PCI_DEVICE_ID_VIA_82C686))
return;
/* Enable USB and IDE functions */
early_write_config_byte(hose, 1, 0x10, 0x48, 0x08);
}
void __init
mpc85xx_cds_fixup_via(struct pci_controller *hose)
{
u32 pci_class;
u16 vid, did;
early_read_config_dword(hose, 0, 0x88, PCI_CLASS_REVISION, &pci_class);
if ((pci_class >> 16) != PCI_CLASS_BRIDGE_PCI)
return;
/*
* Force the backplane P2P bridge to have a window
* open from 0x00000000-0x00001fff in PCI I/O space.
* This allows legacy I/O (i8259, etc) on the VIA
* southbridge to be accessed.
*/
early_write_config_byte(hose, 0, 0x88, PCI_IO_BASE, 0x00);
early_write_config_word(hose, 0, 0x88, PCI_IO_BASE_UPPER16, 0x0000);
early_write_config_byte(hose, 0, 0x88, PCI_IO_LIMIT, 0x10);
early_write_config_word(hose, 0, 0x88, PCI_IO_LIMIT_UPPER16, 0x0000);
early_read_config_word(hose, 1, 0x10, PCI_VENDOR_ID, &vid);
early_read_config_word(hose, 1, 0x10, PCI_DEVICE_ID, &did);
if ((vid != PCI_VENDOR_ID_VIA) ||
(did != PCI_DEVICE_ID_VIA_82C686))
return;
/*
* Since the P2P window was forced to cover the fixed
* legacy I/O addresses, it is necessary to manually
* place the base addresses for the IDE and USB functions
* within this window.
*/
/* Function 1, IDE */
early_write_config_dword(hose, 1, 0x11, PCI_BASE_ADDRESS_0, 0x1ff8);
early_write_config_dword(hose, 1, 0x11, PCI_BASE_ADDRESS_1, 0x1ff4);
early_write_config_dword(hose, 1, 0x11, PCI_BASE_ADDRESS_2, 0x1fe8);
early_write_config_dword(hose, 1, 0x11, PCI_BASE_ADDRESS_3, 0x1fe4);
early_write_config_dword(hose, 1, 0x11, PCI_BASE_ADDRESS_4, 0x1fd0);
/* Function 2, USB ports 0-1 */
early_write_config_dword(hose, 1, 0x12, PCI_BASE_ADDRESS_4, 0x1fa0);
/* Function 3, USB ports 2-3 */
early_write_config_dword(hose, 1, 0x13, PCI_BASE_ADDRESS_4, 0x1f80);
/* Function 5, Power Management */
early_write_config_dword(hose, 1, 0x15, PCI_BASE_ADDRESS_0, 0x1e00);
early_write_config_dword(hose, 1, 0x15, PCI_BASE_ADDRESS_1, 0x1dfc);
early_write_config_dword(hose, 1, 0x15, PCI_BASE_ADDRESS_2, 0x1df8);
/* Function 6, AC97 Interface */
early_write_config_dword(hose, 1, 0x16, PCI_BASE_ADDRESS_0, 0x1c00);
}
void __init
mpc85xx_cds_pcibios_fixup(void)
{
struct pci_dev *dev = NULL;
u_char c;
if ((dev = pci_find_device(PCI_VENDOR_ID_VIA,
PCI_DEVICE_ID_VIA_82C586_1, NULL))) {
/*
* U-Boot does not set the enable bits
* for the IDE device. Force them on here.
*/
pci_read_config_byte(dev, 0x40, &c);
c |= 0x03; /* IDE: Chip Enable Bits */
pci_write_config_byte(dev, 0x40, c);
/*
* Since only primary interface works, force the
* IDE function to standard primary IDE interrupt
* w/ 8259 offset
*/
dev->irq = 14;
pci_write_config_byte(dev, PCI_INTERRUPT_LINE, dev->irq);
}
/*
* Force legacy USB interrupt routing
*/
if ((dev = pci_find_device(PCI_VENDOR_ID_VIA,
PCI_DEVICE_ID_VIA_82C586_2, NULL))) {
dev->irq = 10;
pci_write_config_byte(dev, PCI_INTERRUPT_LINE, 10);
}
if ((dev = pci_find_device(PCI_VENDOR_ID_VIA,
PCI_DEVICE_ID_VIA_82C586_2, dev))) {
dev->irq = 11;
pci_write_config_byte(dev, PCI_INTERRUPT_LINE, 11);
}
}
#endif /* CONFIG_PCI */
TODC_ALLOC();
@ -328,6 +461,9 @@ mpc85xx_cds_setup_arch(void)
loops_per_jiffy = freq / HZ;
#ifdef CONFIG_PCI
/* VIA IDE configuration */
ppc_md.pcibios_fixup = mpc85xx_cds_pcibios_fixup;
/* setup PCI host bridges */
mpc85xx_setup_hose();
#endif
@ -459,6 +595,9 @@ platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
#if defined(CONFIG_SERIAL_8250) && defined(CONFIG_SERIAL_TEXT_DEBUG)
ppc_md.progress = gen550_progress;
#endif /* CONFIG_SERIAL_8250 && CONFIG_SERIAL_TEXT_DEBUG */
#if defined(CONFIG_SERIAL_8250) && defined(CONFIG_KGDB)
ppc_md.early_serial_map = mpc85xx_early_serial_map;
#endif /* CONFIG_SERIAL_8250 && CONFIG_KGDB */
if (ppc_md.progress)
ppc_md.progress("mpc85xx_cds_init(): exit", 0);

View File

@ -77,4 +77,7 @@
#define MPC85XX_PCI2_IO_SIZE 0x01000000
#define NR_8259_INTS 16
#define CPM_IRQ_OFFSET NR_8259_INTS
#endif /* __MACH_MPC85XX_CDS_H__ */

View File

@ -221,6 +221,9 @@ platform_init(unsigned long r3, unsigned long r4, unsigned long r5,
#if defined(CONFIG_SERIAL_8250) && defined(CONFIG_SERIAL_TEXT_DEBUG)
ppc_md.progress = gen550_progress;
#endif /* CONFIG_SERIAL_8250 && CONFIG_SERIAL_TEXT_DEBUG */
#if defined(CONFIG_SERIAL_8250) && defined(CONFIG_KGDB)
ppc_md.early_serial_map = sbc8560_early_serial_map;
#endif /* CONFIG_SERIAL_8250 && CONFIG_KGDB */
if (ppc_md.progress)
ppc_md.progress("sbc8560_init(): exit", 0);

View File

@ -85,14 +85,11 @@ static int no_schedule;
static int has_cpu_l2lve;
#define PMAC_CPU_LOW_SPEED 1
#define PMAC_CPU_HIGH_SPEED 0
/* There are only two frequency states for each processor. Values
* are in kHz for the time being.
*/
#define CPUFREQ_HIGH PMAC_CPU_HIGH_SPEED
#define CPUFREQ_LOW PMAC_CPU_LOW_SPEED
#define CPUFREQ_HIGH 0
#define CPUFREQ_LOW 1
static struct cpufreq_frequency_table pmac_cpu_freqs[] = {
{CPUFREQ_HIGH, 0},
@ -100,6 +97,11 @@ static struct cpufreq_frequency_table pmac_cpu_freqs[] = {
{0, CPUFREQ_TABLE_END},
};
static struct freq_attr* pmac_cpu_freqs_attr[] = {
&cpufreq_freq_attr_scaling_available_freqs,
NULL,
};
static inline void local_delay(unsigned long ms)
{
if (no_schedule)
@ -269,6 +271,8 @@ static int __pmac pmu_set_cpu_speed(int low_speed)
#ifdef DEBUG_FREQ
printk(KERN_DEBUG "HID1, before: %x\n", mfspr(SPRN_HID1));
#endif
pmu_suspend();
/* Disable all interrupt sources on openpic */
pic_prio = openpic_get_priority();
openpic_set_priority(0xf);
@ -343,6 +347,8 @@ static int __pmac pmu_set_cpu_speed(int low_speed)
debug_calc_bogomips();
#endif
pmu_resume();
preempt_enable();
return 0;
@ -355,7 +361,7 @@ static int __pmac do_set_cpu_speed(int speed_mode, int notify)
static unsigned long prev_l3cr;
freqs.old = cur_freq;
freqs.new = (speed_mode == PMAC_CPU_HIGH_SPEED) ? hi_freq : low_freq;
freqs.new = (speed_mode == CPUFREQ_HIGH) ? hi_freq : low_freq;
freqs.cpu = smp_processor_id();
if (freqs.old == freqs.new)
@ -363,7 +369,7 @@ static int __pmac do_set_cpu_speed(int speed_mode, int notify)
if (notify)
cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
if (speed_mode == PMAC_CPU_LOW_SPEED &&
if (speed_mode == CPUFREQ_LOW &&
cpu_has_feature(CPU_FTR_L3CR)) {
l3cr = _get_L3CR();
if (l3cr & L3CR_L3E) {
@ -371,8 +377,8 @@ static int __pmac do_set_cpu_speed(int speed_mode, int notify)
_set_L3CR(0);
}
}
set_speed_proc(speed_mode == PMAC_CPU_LOW_SPEED);
if (speed_mode == PMAC_CPU_HIGH_SPEED &&
set_speed_proc(speed_mode == CPUFREQ_LOW);
if (speed_mode == CPUFREQ_HIGH &&
cpu_has_feature(CPU_FTR_L3CR)) {
l3cr = _get_L3CR();
if ((prev_l3cr & L3CR_L3E) && l3cr != prev_l3cr)
@ -380,7 +386,7 @@ static int __pmac do_set_cpu_speed(int speed_mode, int notify)
}
if (notify)
cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
cur_freq = (speed_mode == PMAC_CPU_HIGH_SPEED) ? hi_freq : low_freq;
cur_freq = (speed_mode == CPUFREQ_HIGH) ? hi_freq : low_freq;
return 0;
}
@ -423,7 +429,8 @@ static int __pmac pmac_cpufreq_cpu_init(struct cpufreq_policy *policy)
policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
policy->cur = cur_freq;
return cpufreq_frequency_table_cpuinfo(policy, &pmac_cpu_freqs[0]);
cpufreq_frequency_table_get_attr(pmac_cpu_freqs, policy->cpu);
return cpufreq_frequency_table_cpuinfo(policy, pmac_cpu_freqs);
}
static u32 __pmac read_gpio(struct device_node *np)
@ -457,7 +464,7 @@ static int __pmac pmac_cpufreq_suspend(struct cpufreq_policy *policy, u32 state)
no_schedule = 1;
sleep_freq = cur_freq;
if (cur_freq == low_freq)
do_set_cpu_speed(PMAC_CPU_HIGH_SPEED, 0);
do_set_cpu_speed(CPUFREQ_HIGH, 0);
return 0;
}
@ -473,8 +480,8 @@ static int __pmac pmac_cpufreq_resume(struct cpufreq_policy *policy)
* is that we force a switch to whatever it was, which is
* probably high speed due to our suspend() routine
*/
do_set_cpu_speed(sleep_freq == low_freq ? PMAC_CPU_LOW_SPEED
: PMAC_CPU_HIGH_SPEED, 0);
do_set_cpu_speed(sleep_freq == low_freq ?
CPUFREQ_LOW : CPUFREQ_HIGH, 0);
no_schedule = 0;
return 0;
@ -488,6 +495,7 @@ static struct cpufreq_driver pmac_cpufreq_driver = {
.suspend = pmac_cpufreq_suspend,
.resume = pmac_cpufreq_resume,
.flags = CPUFREQ_PM_NO_WARN,
.attr = pmac_cpu_freqs_attr,
.name = "powermac",
.owner = THIS_MODULE,
};

View File

@ -49,10 +49,10 @@
/* PCI interrupt controller */
#define PCI_INT_STAT_REG 0xF8200000
#define PCI_INT_MASK_REG 0xF8200004
#define PIRQA (NR_SIU_INTS + 0)
#define PIRQB (NR_SIU_INTS + 1)
#define PIRQC (NR_SIU_INTS + 2)
#define PIRQD (NR_SIU_INTS + 3)
#define PIRQA (NR_CPM_INTS + 0)
#define PIRQB (NR_CPM_INTS + 1)
#define PIRQC (NR_CPM_INTS + 2)
#define PIRQD (NR_CPM_INTS + 3)
/*
* PCI memory map definitions for MPC8266ADS-PCI.
@ -68,28 +68,23 @@
* 0x00000000-0x1FFFFFFF 0x00000000-0x1FFFFFFF MPC8266 local memory
*/
/* window for a PCI master to access MPC8266 memory */
#define PCI_SLV_MEM_LOCAL 0x00000000 /* Local base */
#define PCI_SLV_MEM_BUS 0x00000000 /* PCI base */
/* All the other PCI memory map definitions reside at syslib/m82xx_pci.h
Here we should redefine what is unique for this board */
#define M82xx_PCI_SLAVE_MEM_LOCAL 0x00000000 /* Local base */
#define M82xx_PCI_SLAVE_MEM_BUS 0x00000000 /* PCI base */
#define M82xx_PCI_SLAVE_MEM_SIZE 0x10000000 /* 256 Mb */
/* window for the processor to access PCI memory with prefetching */
#define PCI_MSTR_MEM_LOCAL 0x80000000 /* Local base */
#define PCI_MSTR_MEM_BUS 0x80000000 /* PCI base */
#define PCI_MSTR_MEM_SIZE 0x20000000 /* 512MB */
#define M82xx_PCI_SLAVE_SEC_WND_SIZE ~(0x40000000 - 1U) /* 2 x 512Mb */
#define M82xx_PCI_SLAVE_SEC_WND_BASE 0x80000000 /* PCI Memory base */
/* window for the processor to access PCI memory without prefetching */
#define PCI_MSTR_MEMIO_LOCAL 0xA0000000 /* Local base */
#define PCI_MSTR_MEMIO_BUS 0xA0000000 /* PCI base */
#define PCI_MSTR_MEMIO_SIZE 0x20000000 /* 512MB */
#if defined(CONFIG_ADS8272)
#define PCI_INT_TO_SIU SIU_INT_IRQ2
#elif defined(CONFIG_PQ2FADS)
#define PCI_INT_TO_SIU SIU_INT_IRQ6
#else
#warning PCI Bridge will be without interrupts support
#endif
/* window for the processor to access PCI I/O */
#define PCI_MSTR_IO_LOCAL 0xF4000000 /* Local base */
#define PCI_MSTR_IO_BUS 0x00000000 /* PCI base */
#define PCI_MSTR_IO_SIZE 0x04000000 /* 64MB */
#define _IO_BASE PCI_MSTR_IO_LOCAL
#define _ISA_MEM_BASE PCI_MSTR_MEMIO_LOCAL
#define PCI_DRAM_OFFSET PCI_SLV_MEM_BUS
#endif /* CONFIG_PCI */
#endif /* __MACH_ADS8260_DEFS */

View File

@ -81,7 +81,7 @@ obj-$(CONFIG_SBC82xx) += todc_time.o
obj-$(CONFIG_SPRUCE) += cpc700_pic.o indirect_pci.o pci_auto.o \
todc_time.o
obj-$(CONFIG_8260) += m8260_setup.o
obj-$(CONFIG_PCI_8260) += m8260_pci.o indirect_pci.o
obj-$(CONFIG_PCI_8260) += m82xx_pci.o indirect_pci.o pci_auto.o
obj-$(CONFIG_8260_PCI9) += m8260_pci_erratum9.o
obj-$(CONFIG_CPM2) += cpm2_common.o cpm2_pic.o
ifeq ($(CONFIG_PPC_GEN550),y)
@ -97,7 +97,7 @@ obj-$(CONFIG_MPC10X_OPENPIC) += open_pic.o
obj-$(CONFIG_40x) += dcr.o
obj-$(CONFIG_BOOKE) += dcr.o
obj-$(CONFIG_85xx) += open_pic.o ppc85xx_common.o ppc85xx_setup.o \
ppc_sys.o mpc85xx_sys.o \
ppc_sys.o i8259.o mpc85xx_sys.o \
mpc85xx_devices.o
ifeq ($(CONFIG_85xx),y)
obj-$(CONFIG_PCI) += indirect_pci.o pci_auto.o

View File

@ -1,193 +0,0 @@
/*
* (C) Copyright 2003
* Wolfgang Denk, DENX Software Engineering, wd@denx.de.
*
* (C) Copyright 2004 Red Hat, Inc.
*
* See file CREDITS for list of people who contributed to this
* project.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation; either version 2 of
* the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston,
* MA 02111-1307 USA
*/
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/pci.h>
#include <linux/slab.h>
#include <linux/delay.h>
#include <asm/byteorder.h>
#include <asm/io.h>
#include <asm/irq.h>
#include <asm/uaccess.h>
#include <asm/machdep.h>
#include <asm/pci-bridge.h>
#include <asm/immap_cpm2.h>
#include <asm/mpc8260.h>
#include "m8260_pci.h"
/* PCI bus configuration registers.
*/
static void __init m8260_setup_pci(struct pci_controller *hose)
{
volatile cpm2_map_t *immap = cpm2_immr;
unsigned long pocmr;
u16 tempShort;
#ifndef CONFIG_ATC /* already done in U-Boot */
/*
* Setting required to enable IRQ1-IRQ7 (SIUMCR [DPPC]),
* and local bus for PCI (SIUMCR [LBPC]).
*/
immap->im_siu_conf.siu_82xx.sc_siumcr = 0x00640000;
#endif
/* Make PCI lowest priority */
/* Each 4 bits is a device bus request and the MS 4bits
is highest priority */
/* Bus 4bit value
--- ----------
CPM high 0b0000
CPM middle 0b0001
CPM low 0b0010
PCI request 0b0011
Reserved 0b0100
Reserved 0b0101
Internal Core 0b0110
External Master 1 0b0111
External Master 2 0b1000
External Master 3 0b1001
The rest are reserved */
immap->im_siu_conf.siu_82xx.sc_ppc_alrh = 0x61207893;
/* Park bus on core while modifying PCI Bus accesses */
immap->im_siu_conf.siu_82xx.sc_ppc_acr = 0x6;
/*
* Set up master window that allows the CPU to access PCI space. This
* window is set up using the first SIU PCIBR registers.
*/
immap->im_memctl.memc_pcimsk0 = MPC826x_PCI_MASK;
immap->im_memctl.memc_pcibr0 = MPC826x_PCI_BASE | PCIBR_ENABLE;
/* Disable machine check on no response or target abort */
immap->im_pci.pci_emr = cpu_to_le32(0x1fe7);
/* Release PCI RST (by default the PCI RST signal is held low) */
immap->im_pci.pci_gcr = cpu_to_le32(PCIGCR_PCI_BUS_EN);
/* give it some time */
mdelay(1);
/*
* Set up master window that allows the CPU to access PCI Memory (prefetch)
* space. This window is set up using the first set of Outbound ATU registers.
*/
immap->im_pci.pci_potar0 = cpu_to_le32(MPC826x_PCI_LOWER_MEM >> 12);
immap->im_pci.pci_pobar0 = cpu_to_le32((MPC826x_PCI_LOWER_MEM - MPC826x_PCI_MEM_OFFSET) >> 12);
pocmr = ((MPC826x_PCI_UPPER_MEM - MPC826x_PCI_LOWER_MEM) >> 12) ^ 0xfffff;
immap->im_pci.pci_pocmr0 = cpu_to_le32(pocmr | POCMR_ENABLE | POCMR_PREFETCH_EN);
/*
* Set up master window that allows the CPU to access PCI Memory (non-prefetch)
* space. This window is set up using the second set of Outbound ATU registers.
*/
immap->im_pci.pci_potar1 = cpu_to_le32(MPC826x_PCI_LOWER_MMIO >> 12);
immap->im_pci.pci_pobar1 = cpu_to_le32((MPC826x_PCI_LOWER_MMIO - MPC826x_PCI_MMIO_OFFSET) >> 12);
pocmr = ((MPC826x_PCI_UPPER_MMIO - MPC826x_PCI_LOWER_MMIO) >> 12) ^ 0xfffff;
immap->im_pci.pci_pocmr1 = cpu_to_le32(pocmr | POCMR_ENABLE);
/*
* Set up master window that allows the CPU to access PCI IO space. This window
* is set up using the third set of Outbound ATU registers.
*/
immap->im_pci.pci_potar2 = cpu_to_le32(MPC826x_PCI_IO_BASE >> 12);
immap->im_pci.pci_pobar2 = cpu_to_le32(MPC826x_PCI_LOWER_IO >> 12);
pocmr = ((MPC826x_PCI_UPPER_IO - MPC826x_PCI_LOWER_IO) >> 12) ^ 0xfffff;
immap->im_pci.pci_pocmr2 = cpu_to_le32(pocmr | POCMR_ENABLE | POCMR_PCI_IO);
/*
* Set up slave window that allows PCI masters to access MPC826x local memory.
* This window is set up using the first set of Inbound ATU registers
*/
immap->im_pci.pci_pitar0 = cpu_to_le32(MPC826x_PCI_SLAVE_MEM_LOCAL >> 12);
immap->im_pci.pci_pibar0 = cpu_to_le32(MPC826x_PCI_SLAVE_MEM_BUS >> 12);
pocmr = ((MPC826x_PCI_SLAVE_MEM_SIZE-1) >> 12) ^ 0xfffff;
immap->im_pci.pci_picmr0 = cpu_to_le32(pocmr | PICMR_ENABLE | PICMR_PREFETCH_EN);
/* See above for description - puts PCI request as highest priority */
immap->im_siu_conf.siu_82xx.sc_ppc_alrh = 0x03124567;
/* Park the bus on the PCI */
immap->im_siu_conf.siu_82xx.sc_ppc_acr = PPC_ACR_BUS_PARK_PCI;
/* Host mode - specify the bridge as a host-PCI bridge */
early_write_config_word(hose, 0, 0, PCI_CLASS_DEVICE, PCI_CLASS_BRIDGE_HOST);
/* Enable the host bridge to be a master on the PCI bus, and to act as a PCI memory target */
early_read_config_word(hose, 0, 0, PCI_COMMAND, &tempShort);
early_write_config_word(hose, 0, 0, PCI_COMMAND,
tempShort | PCI_COMMAND_MASTER | PCI_COMMAND_MEMORY);
}
void __init m8260_find_bridges(void)
{
extern int pci_assign_all_busses;
struct pci_controller * hose;
pci_assign_all_busses = 1;
hose = pcibios_alloc_controller();
if (!hose)
return;
ppc_md.pci_swizzle = common_swizzle;
hose->first_busno = 0;
hose->bus_offset = 0;
hose->last_busno = 0xff;
setup_m8260_indirect_pci(hose,
(unsigned long)&cpm2_immr->im_pci.pci_cfg_addr,
(unsigned long)&cpm2_immr->im_pci.pci_cfg_data);
m8260_setup_pci(hose);
hose->pci_mem_offset = MPC826x_PCI_MEM_OFFSET;
hose->io_base_virt = ioremap(MPC826x_PCI_IO_BASE,
MPC826x_PCI_IO_SIZE);
isa_io_base = (unsigned long) hose->io_base_virt;
/* setup resources */
pci_init_resource(&hose->mem_resources[0],
MPC826x_PCI_LOWER_MEM,
MPC826x_PCI_UPPER_MEM,
IORESOURCE_MEM|IORESOURCE_PREFETCH, "PCI prefetchable memory");
pci_init_resource(&hose->mem_resources[1],
MPC826x_PCI_LOWER_MMIO,
MPC826x_PCI_UPPER_MMIO,
IORESOURCE_MEM, "PCI memory");
pci_init_resource(&hose->io_resource,
MPC826x_PCI_LOWER_IO,
MPC826x_PCI_UPPER_IO,
IORESOURCE_IO, "PCI I/O");
}

View File

@ -1,76 +0,0 @@
#ifndef _PPC_KERNEL_M8260_PCI_H
#define _PPC_KERNEL_M8260_PCI_H
#include <asm/m8260_pci.h>
/*
* Local->PCI map (from CPU) controlled by
* MPC826x master window
*
* 0x80000000 - 0xBFFFFFFF Total CPU2PCI space PCIBR0
*
* 0x80000000 - 0x9FFFFFFF PCI Mem with prefetch (Outbound ATU #1)
* 0xA0000000 - 0xAFFFFFFF PCI Mem w/o prefetch (Outbound ATU #2)
* 0xB0000000 - 0xB0FFFFFF 32-bit PCI IO (Outbound ATU #3)
*
* PCI->Local map (from PCI)
* MPC826x slave window controlled by
*
* 0x00000000 - 0x07FFFFFF MPC826x local memory (Inbound ATU #1)
*/
/*
* Slave window that allows PCI masters to access MPC826x local memory.
* This window is set up using the first set of Inbound ATU registers
*/
#ifndef MPC826x_PCI_SLAVE_MEM_LOCAL
#define MPC826x_PCI_SLAVE_MEM_LOCAL (((struct bd_info *)__res)->bi_memstart)
#define MPC826x_PCI_SLAVE_MEM_BUS (((struct bd_info *)__res)->bi_memstart)
#define MPC826x_PCI_SLAVE_MEM_SIZE (((struct bd_info *)__res)->bi_memsize)
#endif
/*
* This is the window that allows the CPU to access PCI address space.
* It will be setup with the SIU PCIBR0 register. All three PCI master
* windows, which allow the CPU to access PCI prefetch, non prefetch,
* and IO space (see below), must all fit within this window.
*/
#ifndef MPC826x_PCI_BASE
#define MPC826x_PCI_BASE 0x80000000
#define MPC826x_PCI_MASK 0xc0000000
#endif
#ifndef MPC826x_PCI_LOWER_MEM
#define MPC826x_PCI_LOWER_MEM 0x80000000
#define MPC826x_PCI_UPPER_MEM 0x9fffffff
#define MPC826x_PCI_MEM_OFFSET 0x00000000
#endif
#ifndef MPC826x_PCI_LOWER_MMIO
#define MPC826x_PCI_LOWER_MMIO 0xa0000000
#define MPC826x_PCI_UPPER_MMIO 0xafffffff
#define MPC826x_PCI_MMIO_OFFSET 0x00000000
#endif
#ifndef MPC826x_PCI_LOWER_IO
#define MPC826x_PCI_LOWER_IO 0x00000000
#define MPC826x_PCI_UPPER_IO 0x00ffffff
#define MPC826x_PCI_IO_BASE 0xb0000000
#define MPC826x_PCI_IO_SIZE 0x01000000
#endif
#ifndef _IO_BASE
#define _IO_BASE isa_io_base
#endif
#ifdef CONFIG_8260_PCI9
struct pci_controller;
extern void setup_m8260_indirect_pci(struct pci_controller* hose,
u32 cfg_addr, u32 cfg_data);
#else
#define setup_m8260_indirect_pci setup_indirect_pci
#endif
#endif /* _PPC_KERNEL_M8260_PCI_H */

View File

@ -31,7 +31,7 @@
#include <asm/immap_cpm2.h>
#include <asm/cpm2.h>
#include "m8260_pci.h"
#include "m82xx_pci.h"
#ifdef CONFIG_8260_PCI9
/*#include <asm/mpc8260_pci9.h>*/ /* included in asm/io.h */
@ -248,11 +248,11 @@ EXPORT_SYMBOL(idma_pci9_read_le);
static inline int is_pci_mem(unsigned long addr)
{
if (addr >= MPC826x_PCI_LOWER_MMIO &&
addr <= MPC826x_PCI_UPPER_MMIO)
if (addr >= M82xx_PCI_LOWER_MMIO &&
addr <= M82xx_PCI_UPPER_MMIO)
return 1;
if (addr >= MPC826x_PCI_LOWER_MEM &&
addr <= MPC826x_PCI_UPPER_MEM)
if (addr >= M82xx_PCI_LOWER_MEM &&
addr <= M82xx_PCI_UPPER_MEM)
return 1;
return 0;
}

View File

@ -34,7 +34,8 @@
unsigned char __res[sizeof(bd_t)];
extern void cpm2_reset(void);
extern void m8260_find_bridges(void);
extern void pq2_find_bridges(void);
extern void pq2pci_init_irq(void);
extern void idma_pci9_init(void);
/* Place-holder for board-specific init */
@ -56,7 +57,7 @@ m8260_setup_arch(void)
idma_pci9_init();
#endif
#ifdef CONFIG_PCI_8260
m8260_find_bridges();
pq2_find_bridges();
#endif
#ifdef CONFIG_BLK_DEV_INITRD
if (initrd_start)
@ -173,6 +174,12 @@ m8260_init_IRQ(void)
* in case the boot rom changed something on us.
*/
cpm2_immr->im_intctl.ic_siprr = 0x05309770;
#if defined(CONFIG_PCI) && (defined(CONFIG_ADS8272) || defined(CONFIG_PQ2FADS))
/* Initialize stuff for the 82xx CPLD IC and install demux */
pq2pci_init_irq();
#endif
}
/*

arch/ppc/syslib/m82xx_pci.c (new file, 383 lines)
View File

@ -0,0 +1,383 @@
/*
*
* (C) Copyright 2003
* Wolfgang Denk, DENX Software Engineering, wd@denx.de.
*
* (C) Copyright 2004 Red Hat, Inc.
*
* 2005 (c) MontaVista Software, Inc.
* Vitaly Bordug <vbordug@ru.mvista.com>
*
* See file CREDITS for list of people who contributed to this
* project.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation; either version 2 of
* the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston,
* MA 02111-1307 USA
*/
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/pci.h>
#include <linux/slab.h>
#include <linux/delay.h>
#include <linux/irq.h>
#include <linux/interrupt.h>
#include <asm/byteorder.h>
#include <asm/io.h>
#include <asm/irq.h>
#include <asm/uaccess.h>
#include <asm/machdep.h>
#include <asm/pci-bridge.h>
#include <asm/immap_cpm2.h>
#include <asm/mpc8260.h>
#include <asm/cpm2.h>
#include "m82xx_pci.h"
/*
* Interrupt routing
*/
static inline int
pq2pci_map_irq(struct pci_dev *dev, unsigned char idsel, unsigned char pin)
{
static char pci_irq_table[][4] =
/*
* PCI IDSEL/INTPIN->INTLINE
* A B C D
*/
{
{ PIRQA, PIRQB, PIRQC, PIRQD }, /* IDSEL 22 - PCI slot 0 */
{ PIRQD, PIRQA, PIRQB, PIRQC }, /* IDSEL 23 - PCI slot 1 */
{ PIRQC, PIRQD, PIRQA, PIRQB }, /* IDSEL 24 - PCI slot 2 */
};
const long min_idsel = 22, max_idsel = 24, irqs_per_slot = 4;
return PCI_IRQ_TABLE_LOOKUP;
}
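For illustration only: PCI_IRQ_TABLE_LOOKUP is assumed here to expand to a bounds-checked index of pci_irq_table by (idsel - min_idsel) and (pin - 1). That assumption, worked through as a self-contained userspace program (the PIRQ values are made up):
/* Hedged illustration of the lookup the macro above is assumed to perform. */
#include <stdio.h>

#define PIRQA 64 /* illustrative stand-ins for NR_CPM_INTS + 0..3 */
#define PIRQB 65
#define PIRQC 66
#define PIRQD 67

static const int pci_irq_table[3][4] = {
	{ PIRQA, PIRQB, PIRQC, PIRQD }, /* IDSEL 22 - PCI slot 0 */
	{ PIRQD, PIRQA, PIRQB, PIRQC }, /* IDSEL 23 - PCI slot 1 */
	{ PIRQC, PIRQD, PIRQA, PIRQB }, /* IDSEL 24 - PCI slot 2 */
};

int main(void)
{
	const int min_idsel = 22, max_idsel = 24, irqs_per_slot = 4;
	int idsel = 23, pin = 1; /* slot 1, INTA */

	if (idsel >= min_idsel && idsel <= max_idsel && pin >= 1 && pin <= irqs_per_slot)
		printf("irq %d\n", pci_irq_table[idsel - min_idsel][pin - 1]); /* prints 67 */
	return 0;
}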
static void
pq2pci_mask_irq(unsigned int irq)
{
int bit = irq - NR_CPM_INTS;
*(volatile unsigned long *) PCI_INT_MASK_REG |= (1 << (31 - bit));
return;
}
static void
pq2pci_unmask_irq(unsigned int irq)
{
int bit = irq - NR_CPM_INTS;
*(volatile unsigned long *) PCI_INT_MASK_REG &= ~(1 << (31 - bit));
return;
}
static void
pq2pci_mask_and_ack(unsigned int irq)
{
int bit = irq - NR_CPM_INTS;
*(volatile unsigned long *) PCI_INT_MASK_REG |= (1 << (31 - bit));
return;
}
static void
pq2pci_end_irq(unsigned int irq)
{
int bit = irq - NR_CPM_INTS;
*(volatile unsigned long *) PCI_INT_MASK_REG &= ~(1 << (31 - bit));
return;
}
struct hw_interrupt_type pq2pci_ic = {
"PQ2 PCI",
NULL,
NULL,
pq2pci_unmask_irq,
pq2pci_mask_irq,
pq2pci_mask_and_ack,
pq2pci_end_irq,
0
};
static irqreturn_t
pq2pci_irq_demux(int irq, void *dev_id, struct pt_regs *regs)
{
unsigned long stat, mask, pend;
int bit;
for(;;) {
stat = *(volatile unsigned long *) PCI_INT_STAT_REG;
mask = *(volatile unsigned long *) PCI_INT_MASK_REG;
pend = stat & ~mask & 0xf0000000;
if (!pend)
break;
for (bit = 0; pend != 0; ++bit, pend <<= 1) {
if (pend & 0x80000000)
__do_IRQ(NR_CPM_INTS + bit, regs);
}
}
return IRQ_HANDLED;
}
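To see how the demux loop above maps pending status bits to IRQ numbers, here is a self-contained userspace walk-through; the register value and the NR_CPM_INTS base are illustrative, not taken from the patch:
/* Toy model of the demux loop: the top nibble of PCI_INT_STAT_REG encodes
 * PIRQA..PIRQD, and each pending bit is dispatched as NR_CPM_INTS + bit. */
#include <stdio.h>

int main(void)
{
	unsigned int pend = 0xa0000000; /* PIRQA and PIRQC pending (made up) */
	int bit, nr_cpm_ints = 64; /* illustrative base */

	for (bit = 0; pend != 0; ++bit, pend <<= 1)
		if (pend & 0x80000000)
			printf("dispatch irq %d\n", nr_cpm_ints + bit); /* prints 64, then 66 */
	return 0;
}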
static struct irqaction pq2pci_irqaction = {
.handler = pq2pci_irq_demux,
.flags = SA_INTERRUPT,
.mask = CPU_MASK_NONE,
.name = "PQ2 PCI cascade",
};
void
pq2pci_init_irq(void)
{
int irq;
volatile cpm2_map_t *immap = cpm2_immr;
#if defined CONFIG_ADS8272
/* configure chip select for PCI interrupt controller */
immap->im_memctl.memc_br3 = PCI_INT_STAT_REG | 0x00001801;
immap->im_memctl.memc_or3 = 0xffff8010;
#elif defined CONFIG_PQ2FADS
immap->im_memctl.memc_br8 = PCI_INT_STAT_REG | 0x00001801;
immap->im_memctl.memc_or8 = 0xffff8010;
#endif
for (irq = NR_CPM_INTS; irq < NR_CPM_INTS + 4; irq++)
irq_desc[irq].handler = &pq2pci_ic;
/* make PCI IRQ level sensitive */
immap->im_intctl.ic_siexr &=
~(1 << (14 - (PCI_INT_TO_SIU - SIU_INT_IRQ1)));
/* mask all PCI interrupts */
*(volatile unsigned long *) PCI_INT_MASK_REG |= 0xfff00000;
/* install the demultiplexer for the PCI cascade interrupt */
setup_irq(PCI_INT_TO_SIU, &pq2pci_irqaction);
return;
}
static int
pq2pci_exclude_device(u_char bus, u_char devfn)
{
return PCIBIOS_SUCCESSFUL;
}
/* PCI bus configuration registers.
*/
static void
pq2ads_setup_pci(struct pci_controller *hose)
{
__u32 val;
volatile cpm2_map_t *immap = cpm2_immr;
bd_t* binfo = (bd_t*) __res;
u32 sccr = immap->im_clkrst.car_sccr;
uint pci_div,freq,time;
/* PCI int lowest prio */
/* Each 4 bits is a device bus request and the MS 4bits
is highest priority */
/* Bus 4bit value
--- ----------
CPM high 0b0000
CPM middle 0b0001
CPM low 0b0010
PCI request 0b0011
Reserved 0b0100
Reserved 0b0101
Internal Core 0b0110
External Master 1 0b0111
External Master 2 0b1000
External Master 3 0b1001
The rest are reserved
*/
immap->im_siu_conf.siu_82xx.sc_ppc_alrh = 0x61207893;
/* park bus on core */
immap->im_siu_conf.siu_82xx.sc_ppc_acr = PPC_ACR_BUS_PARK_CORE;
/*
* Set up master windows that allow the CPU to access PCI space. These
* windows are set up using the two SIU PCIBR registers.
*/
immap->im_memctl.memc_pcimsk0 = M82xx_PCI_PRIM_WND_SIZE;
immap->im_memctl.memc_pcibr0 = M82xx_PCI_PRIM_WND_BASE | PCIBR_ENABLE;
#ifdef M82xx_PCI_SEC_WND_SIZE
immap->im_memctl.memc_pcimsk1 = M82xx_PCI_SEC_WND_SIZE;
immap->im_memctl.memc_pcibr1 = M82xx_PCI_SEC_WND_BASE | PCIBR_ENABLE;
#endif
#if defined CONFIG_ADS8272
immap->im_siu_conf.siu_82xx.sc_siumcr =
(immap->im_siu_conf.siu_82xx.sc_siumcr &
~(SIUMCR_BBD | SIUMCR_ESE | SIUMCR_PBSE |
SIUMCR_CDIS | SIUMCR_DPPC11 | SIUMCR_L2CPC11 |
SIUMCR_LBPC11 | SIUMCR_APPC11 |
SIUMCR_CS10PC11 | SIUMCR_BCTLC11 | SIUMCR_MMR11)) |
SIUMCR_DPPC11 | SIUMCR_L2CPC01 | SIUMCR_LBPC00 |
SIUMCR_APPC10 | SIUMCR_CS10PC00 |
SIUMCR_BCTLC00 | SIUMCR_MMR11 ;
#elif defined CONFIG_PQ2FADS
/*
* Setting required to enable IRQ1-IRQ7 (SIUMCR [DPPC]),
* and local bus for PCI (SIUMCR [LBPC]).
*/
immap->im_siu_conf.siu_82xx.sc_siumcr = (immap->im_siu_conf.siu_82xx.sc_siumcr &
~(SIUMCR_L2PC11 | SIUMCR_LBPC11 | SIUMCR_CS10PC11 | SIUMCR_APPC11)) |
SIUMCR_BBD | SIUMCR_LBPC01 | SIUMCR_DPPC11 | SIUMCR_APPC10;
#endif
/* Enable PCI */
immap->im_pci.pci_gcr = cpu_to_le32(PCIGCR_PCI_BUS_EN);
pci_div = ( (sccr & SCCR_PCI_MODCK) ? 2 : 1) *
( ( (sccr & SCCR_PCIDF_MSK) >> SCCR_PCIDF_SHIFT) + 1);
freq = (uint)((2*binfo->bi_cpmfreq)/(pci_div));
time = (int)666666/freq;
/* Per the PCI Local Bus spec, some devices need to wait this long after
RST deassertion: 0.508s for a 66MHz bus, and twice that for 33MHz. */
printk("%s: The PCI bus is %d MHz.\nWaiting %s after deasserting RST...\n",
__FILE__, freq, (time == 1) ? "0.5 seconds" : "1 second");
{
int i;
for(i=0;i<(500*time);i++)
udelay(1000);
}
/* setup ATU registers */
immap->im_pci.pci_pocmr0 = cpu_to_le32(POCMR_ENABLE | POCMR_PCI_IO |
((~(M82xx_PCI_IO_SIZE - 1U)) >> POTA_ADDR_SHIFT));
immap->im_pci.pci_potar0 = cpu_to_le32(M82xx_PCI_LOWER_IO >> POTA_ADDR_SHIFT);
immap->im_pci.pci_pobar0 = cpu_to_le32(M82xx_PCI_IO_BASE >> POTA_ADDR_SHIFT);
/* Set-up non-prefetchable window */
immap->im_pci.pci_pocmr1 = cpu_to_le32(POCMR_ENABLE | ((~(M82xx_PCI_MMIO_SIZE-1U)) >> POTA_ADDR_SHIFT));
immap->im_pci.pci_potar1 = cpu_to_le32(M82xx_PCI_LOWER_MMIO >> POTA_ADDR_SHIFT);
immap->im_pci.pci_pobar1 = cpu_to_le32((M82xx_PCI_LOWER_MMIO - M82xx_PCI_MMIO_OFFSET) >> POTA_ADDR_SHIFT);
/* Set-up prefetchable window */
immap->im_pci.pci_pocmr2 = cpu_to_le32(POCMR_ENABLE |POCMR_PREFETCH_EN |
(~(M82xx_PCI_MEM_SIZE-1U) >> POTA_ADDR_SHIFT));
immap->im_pci.pci_potar2 = cpu_to_le32(M82xx_PCI_LOWER_MEM >> POTA_ADDR_SHIFT);
immap->im_pci.pci_pobar2 = cpu_to_le32((M82xx_PCI_LOWER_MEM - M82xx_PCI_MEM_OFFSET) >> POTA_ADDR_SHIFT);
/* Inbound transactions from PCI memory space */
immap->im_pci.pci_picmr0 = cpu_to_le32(PICMR_ENABLE | PICMR_PREFETCH_EN |
((~(M82xx_PCI_SLAVE_MEM_SIZE-1U)) >> PITA_ADDR_SHIFT));
immap->im_pci.pci_pibar0 = cpu_to_le32(M82xx_PCI_SLAVE_MEM_BUS >> PITA_ADDR_SHIFT);
immap->im_pci.pci_pitar0 = cpu_to_le32(M82xx_PCI_SLAVE_MEM_LOCAL>> PITA_ADDR_SHIFT);
#if defined CONFIG_ADS8272
/* PCI int highest prio */
immap->im_siu_conf.siu_82xx.sc_ppc_alrh = 0x01236745;
#elif defined CONFIG_PQ2FADS
immap->im_siu_conf.siu_82xx.sc_ppc_alrh = 0x03124567;
#endif
/* park bus on PCI */
immap->im_siu_conf.siu_82xx.sc_ppc_acr = PPC_ACR_BUS_PARK_PCI;
/* Enable bus mastering and inbound memory transactions */
early_read_config_dword(hose, hose->first_busno, 0, PCI_COMMAND, &val);
val &= 0xffff0000;
val |= PCI_COMMAND_MEMORY|PCI_COMMAND_MASTER;
early_write_config_dword(hose, hose->first_busno, 0, PCI_COMMAND, val);
}
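The outbound ATU fields programmed above all use the ~(size - 1) >> 12 encoding (POTA_ADDR_SHIFT is 12). A small standalone check of that arithmetic for the 32 MB I/O window:
/* Worked example of the POCMR mask encoding used in pq2ads_setup_pci(). */
#include <stdio.h>

int main(void)
{
	unsigned int size = 0x02000000; /* M82xx_PCI_IO_SIZE, 32 MB */
	unsigned int pocmr = (~(size - 1U)) >> 12; /* POTA_ADDR_SHIFT == 12 */

	printf("POCMR address-mask field: 0x%05x\n", pocmr); /* prints 0xfe000 */
	return 0;
}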
void __init pq2_find_bridges(void)
{
extern int pci_assign_all_busses;
struct pci_controller * hose;
int host_bridge;
pci_assign_all_busses = 1;
hose = pcibios_alloc_controller();
if (!hose)
return;
ppc_md.pci_swizzle = common_swizzle;
hose->first_busno = 0;
hose->bus_offset = 0;
hose->last_busno = 0xff;
#ifdef CONFIG_ADS8272
hose->set_cfg_type = 1;
#endif
setup_m8260_indirect_pci(hose,
(unsigned long)&cpm2_immr->im_pci.pci_cfg_addr,
(unsigned long)&cpm2_immr->im_pci.pci_cfg_data);
/* Make sure it is a supported bridge */
early_read_config_dword(hose,
0,
PCI_DEVFN(0,0),
PCI_VENDOR_ID,
&host_bridge);
switch (host_bridge) {
case PCI_DEVICE_ID_MPC8265:
break;
case PCI_DEVICE_ID_MPC8272:
break;
default:
printk("Attempting to use unrecognized host bridge ID"
" 0x%08x.\n", host_bridge);
break;
}
pq2ads_setup_pci(hose);
hose->io_space.start = M82xx_PCI_LOWER_IO;
hose->io_space.end = M82xx_PCI_UPPER_IO;
hose->mem_space.start = M82xx_PCI_LOWER_MEM;
hose->mem_space.end = M82xx_PCI_UPPER_MMIO;
hose->pci_mem_offset = M82xx_PCI_MEM_OFFSET;
isa_io_base =
(unsigned long) ioremap(M82xx_PCI_IO_BASE,
M82xx_PCI_IO_SIZE);
hose->io_base_virt = (void *) isa_io_base;
/* setup resources */
pci_init_resource(&hose->mem_resources[0],
M82xx_PCI_LOWER_MEM,
M82xx_PCI_UPPER_MEM,
IORESOURCE_MEM|IORESOURCE_PREFETCH, "PCI prefetchable memory");
pci_init_resource(&hose->mem_resources[1],
M82xx_PCI_LOWER_MMIO,
M82xx_PCI_UPPER_MMIO,
IORESOURCE_MEM, "PCI memory");
pci_init_resource(&hose->io_resource,
M82xx_PCI_LOWER_IO,
M82xx_PCI_UPPER_IO,
IORESOURCE_IO | 1, "PCI I/O");
ppc_md.pci_exclude_device = pq2pci_exclude_device;
hose->last_busno = pciauto_bus_scan(hose, hose->first_busno);
ppc_md.pci_map_irq = pq2pci_map_irq;
ppc_md.pcibios_fixup = NULL;
ppc_md.pcibios_fixup_bus = NULL;
}

View File

@ -0,0 +1,92 @@
#ifndef _PPC_KERNEL_M82XX_PCI_H
#define _PPC_KERNEL_M82XX_PCI_H
#include <asm/m8260_pci.h>
/*
* Local->PCI map (from CPU) controlled by
* MPC826x master window
*
* 0xF6000000 - 0xF7FFFFFF IO space
* 0x80000000 - 0xBFFFFFFF CPU2PCI memory space PCIBR0
*
* 0x80000000 - 0x9FFFFFFF PCI Mem with prefetch (Outbound ATU #1)
* 0xA0000000 - 0xBFFFFFFF PCI Mem w/o prefetch (Outbound ATU #2)
* 0xF6000000 - 0xF7FFFFFF 32-bit PCI IO (Outbound ATU #3)
*
* PCI->Local map (from PCI)
* MPC826x slave window controlled by
*
* 0x00000000 - 0x07FFFFFF MPC826x local memory (Inbound ATU #1)
*/
/*
* Slave window that allows PCI masters to access MPC826x local memory.
* This window is set up using the first set of Inbound ATU registers
*/
#ifndef M82xx_PCI_SLAVE_MEM_LOCAL
#define M82xx_PCI_SLAVE_MEM_LOCAL (((struct bd_info *)__res)->bi_memstart)
#define M82xx_PCI_SLAVE_MEM_BUS (((struct bd_info *)__res)->bi_memstart)
#define M82xx_PCI_SLAVE_MEM_SIZE (((struct bd_info *)__res)->bi_memsize)
#endif
/*
* This is the window that allows the CPU to access PCI address space.
* It will be setup with the SIU PCIBR0 register. All three PCI master
* windows, which allow the CPU to access PCI prefetch, non prefetch,
* and IO space (see below), must all fit within this window.
*/
#ifndef M82xx_PCI_LOWER_MEM
#define M82xx_PCI_LOWER_MEM 0x80000000
#define M82xx_PCI_UPPER_MEM 0x9fffffff
#define M82xx_PCI_MEM_OFFSET 0x00000000
#define M82xx_PCI_MEM_SIZE 0x20000000
#endif
#ifndef M82xx_PCI_LOWER_MMIO
#define M82xx_PCI_LOWER_MMIO 0xa0000000
#define M82xx_PCI_UPPER_MMIO 0xafffffff
#define M82xx_PCI_MMIO_OFFSET 0x00000000
#define M82xx_PCI_MMIO_SIZE 0x20000000
#endif
#ifndef M82xx_PCI_LOWER_IO
#define M82xx_PCI_LOWER_IO 0x00000000
#define M82xx_PCI_UPPER_IO 0x01ffffff
#define M82xx_PCI_IO_BASE 0xf6000000
#define M82xx_PCI_IO_SIZE 0x02000000
#endif
#ifndef M82xx_PCI_PRIM_WND_SIZE
#define M82xx_PCI_PRIM_WND_SIZE ~(M82xx_PCI_IO_SIZE - 1U)
#define M82xx_PCI_PRIM_WND_BASE (M82xx_PCI_IO_BASE)
#endif
#ifndef M82xx_PCI_SEC_WND_SIZE
#define M82xx_PCI_SEC_WND_SIZE ~(M82xx_PCI_MEM_SIZE + M82xx_PCI_MMIO_SIZE - 1U)
#define M82xx_PCI_SEC_WND_BASE (M82xx_PCI_LOWER_MEM)
#endif
#ifndef POTA_ADDR_SHIFT
#define POTA_ADDR_SHIFT 12
#endif
#ifndef PITA_ADDR_SHIFT
#define PITA_ADDR_SHIFT 12
#endif
#ifndef _IO_BASE
#define _IO_BASE isa_io_base
#endif
#ifdef CONFIG_8260_PCI9
struct pci_controller;
extern void setup_m8260_indirect_pci(struct pci_controller* hose,
u32 cfg_addr, u32 cfg_data);
#else
#define setup_m8260_indirect_pci setup_indirect_pci
#endif
#endif /* _PPC_KERNEL_M82XX_PCI_H */

View File

@ -275,7 +275,7 @@ static void __init openpic_enable_sie(void)
}
#endif
#if defined(CONFIG_EPIC_SERIAL_MODE) || defined(CONFIG_PM)
#if defined(CONFIG_EPIC_SERIAL_MODE)
static void openpic_reset(void)
{
openpic_setfield(&OpenPIC->Global.Global_Configuration0,
@ -993,8 +993,6 @@ int openpic_resume(struct sys_device *sysdev)
return 0;
}
openpic_reset();
/* OpenPIC sometimes seem to need some time to be fully back up... */
do {
openpic_set_spurious(OPENPIC_VEC_SPURIOUS);

View File

@ -29,6 +29,7 @@
#include <asm/mmu.h>
#include <asm/ppc_sys.h>
#include <asm/kgdb.h>
#include <asm/delay.h>
#include <syslib/ppc83xx_setup.h>
@ -117,7 +118,34 @@ mpc83xx_early_serial_map(void)
void
mpc83xx_restart(char *cmd)
{
volatile unsigned char __iomem *reg;
unsigned char tmp;
reg = ioremap(BCSR_PHYS_ADDR, BCSR_SIZE);
local_irq_disable();
/*
* Unlock the BCSR bits so a PRST will update the contents.
* Otherwise the reset asserts but doesn't clear.
*/
tmp = in_8(reg + BCSR_MISC_REG3_OFF);
tmp |= BCSR_MISC_REG3_CNFLOCK; /* low true, high false */
out_8(reg + BCSR_MISC_REG3_OFF, tmp);
/*
* Trigger a reset via a low->high transition of the
* PORESET bit.
*/
tmp = in_8(reg + BCSR_MISC_REG2_OFF);
tmp &= ~BCSR_MISC_REG2_PORESET;
out_8(reg + BCSR_MISC_REG2_OFF, tmp);
udelay(1);
tmp |= BCSR_MISC_REG2_PORESET;
out_8(reg + BCSR_MISC_REG2_OFF, tmp);
for(;;);
}

View File

@ -132,6 +132,12 @@ mpc85xx_halt(void)
}
#ifdef CONFIG_PCI
#if defined(CONFIG_MPC8555_CDS)
extern void mpc85xx_cds_enable_via(struct pci_controller *hose);
extern void mpc85xx_cds_fixup_via(struct pci_controller *hose);
#endif
static void __init
mpc85xx_setup_pci1(struct pci_controller *hose)
{
@ -302,8 +308,18 @@ mpc85xx_setup_hose(void)
ppc_md.pci_exclude_device = mpc85xx_exclude_device;
#if defined(CONFIG_MPC8555_CDS)
/* Pre pciauto_bus_scan VIA init */
mpc85xx_cds_enable_via(hose_a);
#endif
hose_a->last_busno = pciauto_bus_scan(hose_a, hose_a->first_busno);
#if defined(CONFIG_MPC8555_CDS)
/* Post pciauto_bus_scan VIA fixup */
mpc85xx_cds_fixup_via(hose_a);
#endif
#ifdef CONFIG_85xx_PCI2
hose_b = pcibios_alloc_controller();

View File

@ -626,8 +626,18 @@ inspect_node(phandle node, struct device_node *dad,
l = call_prom("package-to-path", 3, 1, node,
mem_start, mem_end - mem_start);
if (l >= 0) {
char *p, *ep;
np->full_name = PTRUNRELOC((char *) mem_start);
*(char *)(mem_start + l) = 0;
/* Fixup an Apple bug where they have bogus \0 chars in the
* middle of the path in some properties
*/
for (p = (char *)mem_start, ep = p + l; p < ep; p++)
if ((*p) == '\0') {
memmove(p, p+1, ep - p);
ep--;
}
mem_start = ALIGNUL(mem_start + l + 1);
}
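A self-contained userspace illustration of the compaction loop above (the sample path is made up); it shows how an embedded '\0' is squeezed out of a length-l buffer in place:
/* Standalone model of the bogus-'\0' fixup. */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char buf[] = { '/', 'u', '3', '\0', '@', '0', '\0' };
	int l = 6; /* path length, excluding the final terminator */
	char *p, *ep;

	for (p = buf, ep = buf + l; p < ep; p++)
		if (*p == '\0') {
			memmove(p, p + 1, ep - p);
			ep--; l--;
		}
	printf("%.*s (len %d)\n", l, buf, l); /* prints "/u3@0 (len 5)" */
	return 0;
}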

View File

@ -47,14 +47,6 @@ static void remove_node_proc_entries(struct device_node *np)
remove_proc_entry(pp->name, np->pde);
pp = pp->next;
}
/* Assuming that symlinks have the same parent directory as
* np->pde.
*/
if (np->name_link)
remove_proc_entry(np->name_link->name, parent->pde);
if (np->addr_link)
remove_proc_entry(np->addr_link->name, parent->pde);
if (np->pde)
remove_proc_entry(np->pde->name, parent->pde);
}

View File

@ -211,13 +211,23 @@ struct {
*/
#define ADDR(x) (u32) ((unsigned long)(x) - offset)
/*
* Error results ... some OF calls will return "-1" on error, some
* will return 0, some will return either. To simplify, here are
* macros to use with any ihandle or phandle return value to check if
* it is valid
*/
#define PROM_ERROR (-1u)
#define PHANDLE_VALID(p) ((p) != 0 && (p) != PROM_ERROR)
#define IHANDLE_VALID(i) ((i) != 0 && (i) != PROM_ERROR)
/* This is the one and *ONLY* place where we actually call open
* firmware from, since we need to make sure we're running in 32b
* mode when we do. We switch back to 64b mode upon return.
*/
#define PROM_ERROR (-1)
static int __init call_prom(const char *service, int nargs, int nret, ...)
{
int i;
@ -587,14 +597,13 @@ static void __init prom_send_capabilities(void)
{
unsigned long offset = reloc_offset();
ihandle elfloader;
int ret;
elfloader = call_prom("open", 1, 1, ADDR("/packages/elf-loader"));
if (elfloader == 0) {
prom_printf("couldn't open /packages/elf-loader\n");
return;
}
ret = call_prom("call-method", 3, 1, ADDR("process-elf-header"),
call_prom("call-method", 3, 1, ADDR("process-elf-header"),
elfloader, ADDR(&fake_elf));
call_prom("close", 1, 0, elfloader);
}
@ -646,7 +655,7 @@ static unsigned long __init alloc_up(unsigned long size, unsigned long align)
base = _ALIGN_UP(base + 0x100000, align)) {
prom_debug(" trying: 0x%x\n\r", base);
addr = (unsigned long)prom_claim(base, size, 0);
if ((int)addr != PROM_ERROR)
if (addr != PROM_ERROR)
break;
addr = 0;
if (align == 0)
@ -708,7 +717,7 @@ static unsigned long __init alloc_down(unsigned long size, unsigned long align,
for(; base > RELOC(alloc_bottom); base = _ALIGN_DOWN(base - 0x100000, align)) {
prom_debug(" trying: 0x%x\n\r", base);
addr = (unsigned long)prom_claim(base, size, 0);
if ((int)addr != PROM_ERROR)
if (addr != PROM_ERROR)
break;
addr = 0;
}
@ -902,18 +911,19 @@ static void __init prom_instantiate_rtas(void)
{
unsigned long offset = reloc_offset();
struct prom_t *_prom = PTRRELOC(&prom);
phandle prom_rtas, rtas_node;
phandle rtas_node;
ihandle rtas_inst;
u32 base, entry = 0;
u32 size = 0;
prom_debug("prom_instantiate_rtas: start...\n");
prom_rtas = call_prom("finddevice", 1, 1, ADDR("/rtas"));
prom_debug("prom_rtas: %x\n", prom_rtas);
if (prom_rtas == (phandle) -1)
rtas_node = call_prom("finddevice", 1, 1, ADDR("/rtas"));
prom_debug("rtas_node: %x\n", rtas_node);
if (!PHANDLE_VALID(rtas_node))
return;
prom_getprop(prom_rtas, "rtas-size", &size, sizeof(size));
prom_getprop(rtas_node, "rtas-size", &size, sizeof(size));
if (size == 0)
return;
@ -922,14 +932,18 @@ static void __init prom_instantiate_rtas(void)
prom_printf("RTAS allocation failed !\n");
return;
}
prom_printf("instantiating rtas at 0x%x", base);
rtas_node = call_prom("open", 1, 1, ADDR("/rtas"));
prom_printf("...");
rtas_inst = call_prom("open", 1, 1, ADDR("/rtas"));
if (!IHANDLE_VALID(rtas_inst)) {
prom_printf("opening rtas package failed");
return;
}
prom_printf("instantiating rtas at 0x%x ...", base);
if (call_prom("call-method", 3, 2,
ADDR("instantiate-rtas"),
rtas_node, base) != PROM_ERROR) {
rtas_inst, base) != PROM_ERROR) {
entry = (long)_prom->args.rets[1];
}
if (entry == 0) {
@ -940,8 +954,8 @@ static void __init prom_instantiate_rtas(void)
reserve_mem(base, size);
prom_setprop(prom_rtas, "linux,rtas-base", &base, sizeof(base));
prom_setprop(prom_rtas, "linux,rtas-entry", &entry, sizeof(entry));
prom_setprop(rtas_node, "linux,rtas-base", &base, sizeof(base));
prom_setprop(rtas_node, "linux,rtas-entry", &entry, sizeof(entry));
prom_debug("rtas base = 0x%x\n", base);
prom_debug("rtas entry = 0x%x\n", entry);
@ -1062,7 +1076,7 @@ static void __init prom_initialize_tce_table(void)
prom_printf("opening PHB %s", path);
phb_node = call_prom("open", 1, 1, path);
if ( (long)phb_node <= 0)
if (phb_node == 0)
prom_printf("... failed\n");
else
prom_printf("... done\n");
@ -1279,12 +1293,12 @@ static void __init prom_init_client_services(unsigned long pp)
/* get a handle for the stdout device */
_prom->chosen = call_prom("finddevice", 1, 1, ADDR("/chosen"));
if ((long)_prom->chosen <= 0)
if (!PHANDLE_VALID(_prom->chosen))
prom_panic("cannot find chosen"); /* msg won't be printed :( */
/* get device tree root */
_prom->root = call_prom("finddevice", 1, 1, ADDR("/"));
if ((long)_prom->root <= 0)
if (!PHANDLE_VALID(_prom->root))
prom_panic("cannot find device tree root"); /* msg won't be printed :( */
}
@ -1356,9 +1370,8 @@ static int __init prom_find_machine_type(void)
}
/* Default to pSeries. We need to know if we are running LPAR */
rtas = call_prom("finddevice", 1, 1, ADDR("/rtas"));
if (rtas != (phandle) -1) {
unsigned long x;
x = prom_getproplen(rtas, "ibm,hypertas-functions");
if (PHANDLE_VALID(rtas)) {
int x = prom_getproplen(rtas, "ibm,hypertas-functions");
if (x != PROM_ERROR) {
prom_printf("Hypertas detected, assuming LPAR !\n");
return PLATFORM_PSERIES_LPAR;
@ -1426,12 +1439,13 @@ static void __init prom_check_displays(void)
* leave some room at the end of the path for appending extra
* arguments
*/
if (call_prom("package-to-path", 3, 1, node, path, PROM_SCRATCH_SIZE-10) < 0)
if (call_prom("package-to-path", 3, 1, node, path,
PROM_SCRATCH_SIZE-10) == PROM_ERROR)
continue;
prom_printf("found display : %s, opening ... ", path);
ih = call_prom("open", 1, 1, path);
if (ih == (ihandle)0 || ih == (ihandle)-1) {
if (ih == 0) {
prom_printf("failed\n");
continue;
}
@ -1514,6 +1528,12 @@ static unsigned long __init dt_find_string(char *str)
return 0;
}
/*
* The Open Firmware 1275 specification states properties must be 31 bytes or
* less, however not all firmwares obey this. Make it 64 bytes to be safe.
*/
#define MAX_PROPERTY_NAME 64
static void __init scan_dt_build_strings(phandle node, unsigned long *mem_start,
unsigned long *mem_end)
{
@ -1527,10 +1547,12 @@ static void __init scan_dt_build_strings(phandle node, unsigned long *mem_start,
/* get and store all property names */
prev_name = RELOC("");
for (;;) {
/* 32 is max len of name including nul. */
namep = make_room(mem_start, mem_end, 32, 1);
if (call_prom("nextprop", 3, 1, node, prev_name, namep) <= 0) {
int rc;
/* 64 is max len of name including nul. */
namep = make_room(mem_start, mem_end, MAX_PROPERTY_NAME, 1);
rc = call_prom("nextprop", 3, 1, node, prev_name, namep);
if (rc != 1) {
/* No more nodes: unwind alloc */
*mem_start = (unsigned long)namep;
break;
@ -1555,18 +1577,12 @@ static void __init scan_dt_build_strings(phandle node, unsigned long *mem_start,
}
}
/*
* The Open Firmware 1275 specification states properties must be 31 bytes or
* less, however not all firmwares obey this. Make it 64 bytes to be safe.
*/
#define MAX_PROPERTY_NAME 64
static void __init scan_dt_build_struct(phandle node, unsigned long *mem_start,
unsigned long *mem_end)
{
int l, align;
phandle child;
char *namep, *prev_name, *sstart;
char *namep, *prev_name, *sstart, *p, *ep;
unsigned long soff;
unsigned char *valp;
unsigned long offset = reloc_offset();
@ -1588,6 +1604,14 @@ static void __init scan_dt_build_struct(phandle node, unsigned long *mem_start,
call_prom("package-to-path", 3, 1, node, namep, l);
}
namep[l] = '\0';
/* Fixup an Apple bug where they have bogus \0 chars in the
* middle of the path in some properties
*/
for (p = namep, ep = namep + l; p < ep; p++)
if (*p == '\0') {
memmove(p, p+1, ep - p);
ep--; l--;
}
*mem_start = _ALIGN(((unsigned long) namep) + strlen(namep) + 1, 4);
}
@ -1599,7 +1623,10 @@ static void __init scan_dt_build_struct(phandle node, unsigned long *mem_start,
prev_name = RELOC("");
sstart = (char *)RELOC(dt_string_start);
for (;;) {
if (call_prom("nextprop", 3, 1, node, prev_name, pname) <= 0)
int rc;
rc = call_prom("nextprop", 3, 1, node, prev_name, pname);
if (rc != 1)
break;
/* find string offset */
@ -1615,7 +1642,7 @@ static void __init scan_dt_build_struct(phandle node, unsigned long *mem_start,
l = call_prom("getproplen", 2, 1, node, pname);
/* sanity checks */
if (l < 0)
if (l == PROM_ERROR)
continue;
if (l > MAX_PROPERTY_LENGTH) {
prom_printf("WARNING: ignoring large property ");
@ -1763,17 +1790,18 @@ static void __init fixup_device_tree(void)
/* Some G5s have a missing interrupt definition, fix it up here */
u3 = call_prom("finddevice", 1, 1, ADDR("/u3@0,f8000000"));
if ((long)u3 <= 0)
if (!PHANDLE_VALID(u3))
return;
i2c = call_prom("finddevice", 1, 1, ADDR("/u3@0,f8000000/i2c@f8001000"));
if ((long)i2c <= 0)
if (!PHANDLE_VALID(i2c))
return;
mpic = call_prom("finddevice", 1, 1, ADDR("/u3@0,f8000000/mpic@f8040000"));
if ((long)mpic <= 0)
if (!PHANDLE_VALID(mpic))
return;
/* check if proper rev of u3 */
if (prom_getprop(u3, "device-rev", &u3_rev, sizeof(u3_rev)) <= 0)
if (prom_getprop(u3, "device-rev", &u3_rev, sizeof(u3_rev))
== PROM_ERROR)
return;
if (u3_rev != 0x35)
return;
@ -1880,6 +1908,12 @@ unsigned long __init prom_init(unsigned long r3, unsigned long r4, unsigned long
prom_setprop(_prom->chosen, "linux,platform",
&getprop_rval, sizeof(getprop_rval));
/*
* On pSeries, inform the firmware about our capabilities
*/
if (RELOC(of_platform) & PLATFORM_PSERIES)
prom_send_capabilities();
/*
* On pSeries, copy the CPU hold code
*/

View File

@ -325,9 +325,7 @@ int timer_interrupt(struct pt_regs * regs)
irq_enter();
#ifndef CONFIG_PPC_ISERIES
profile_tick(CPU_PROFILING, regs);
#endif
lpaca->lppaca.int_dword.fields.decr_int = 0;

View File

@ -196,6 +196,34 @@ static iopte_t *alloc_consistent_cluster(struct pci_iommu *iommu, unsigned long
return NULL;
}
static int iommu_alloc_ctx(struct pci_iommu *iommu)
{
int lowest = iommu->ctx_lowest_free;
int sz = IOMMU_NUM_CTXS - lowest;
int n = find_next_zero_bit(iommu->ctx_bitmap, sz, lowest);
if (unlikely(n == sz)) {
n = find_next_zero_bit(iommu->ctx_bitmap, lowest, 1);
if (unlikely(n == lowest)) {
printk(KERN_WARNING "IOMMU: Ran out of contexts.\n");
n = 0;
}
}
if (n)
__set_bit(n, iommu->ctx_bitmap);
return n;
}
static inline void iommu_free_ctx(struct pci_iommu *iommu, int ctx)
{
if (likely(ctx)) {
__clear_bit(ctx, iommu->ctx_bitmap);
if (ctx < iommu->ctx_lowest_free)
iommu->ctx_lowest_free = ctx;
}
}
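A toy userspace model of the allocation policy implemented above: scan the bitmap from ctx_lowest_free to the end, wrap around to slot 1, and reserve slot 0 to mean "no context" (the bitmap contents are made up):
/* Toy model of iommu_alloc_ctx()'s search order. */
#include <stdio.h>

#define NUM_CTXS 8

static int toy_alloc_ctx(const int busy[NUM_CTXS], int lowest_free)
{
	int n;

	for (n = lowest_free; n < NUM_CTXS; n++) /* primary scan */
		if (!busy[n])
			return n;
	for (n = 1; n < lowest_free; n++) /* wrapped scan, skipping slot 0 */
		if (!busy[n])
			return n;
	return 0; /* out of contexts */
}

int main(void)
{
	int busy[NUM_CTXS] = { 1, 1, 1, 0, 1, 0, 1, 1 };

	printf("first free ctx: %d\n", toy_alloc_ctx(busy, 4)); /* prints 5 */
	printf("after wrapping: %d\n", toy_alloc_ctx(busy, 6)); /* prints 3 */
	return 0;
}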
/* Allocate and map kernel buffer of size SIZE using consistent mode
* DMA for PCI device PDEV. Return non-NULL cpu-side address if
* successful and set *DMA_ADDRP to the PCI side dma address.
@ -236,7 +264,7 @@ void *pci_alloc_consistent(struct pci_dev *pdev, size_t size, dma_addr_t *dma_ad
npages = size >> IO_PAGE_SHIFT;
ctx = 0;
if (iommu->iommu_ctxflush)
ctx = iommu->iommu_cur_ctx++;
ctx = iommu_alloc_ctx(iommu);
first_page = __pa(first_page);
while (npages--) {
iopte_val(*iopte) = (IOPTE_CONSISTENT(ctx) |
@ -317,6 +345,8 @@ void pci_free_consistent(struct pci_dev *pdev, size_t size, void *cpu, dma_addr_
}
}
iommu_free_ctx(iommu, ctx);
spin_unlock_irqrestore(&iommu->lock, flags);
order = get_order(size);
@ -360,7 +390,7 @@ dma_addr_t pci_map_single(struct pci_dev *pdev, void *ptr, size_t sz, int direct
base_paddr = __pa(oaddr & IO_PAGE_MASK);
ctx = 0;
if (iommu->iommu_ctxflush)
ctx = iommu->iommu_cur_ctx++;
ctx = iommu_alloc_ctx(iommu);
if (strbuf->strbuf_enabled)
iopte_protection = IOPTE_STREAMING(ctx);
else
@ -380,39 +410,53 @@ bad:
return PCI_DMA_ERROR_CODE;
}
static void pci_strbuf_flush(struct pci_strbuf *strbuf, struct pci_iommu *iommu, u32 vaddr, unsigned long ctx, unsigned long npages)
static void pci_strbuf_flush(struct pci_strbuf *strbuf, struct pci_iommu *iommu, u32 vaddr, unsigned long ctx, unsigned long npages, int direction)
{
int limit;
PCI_STC_FLUSHFLAG_INIT(strbuf);
if (strbuf->strbuf_ctxflush &&
iommu->iommu_ctxflush) {
unsigned long matchreg, flushreg;
u64 val;
flushreg = strbuf->strbuf_ctxflush;
matchreg = PCI_STC_CTXMATCH_ADDR(strbuf, ctx);
limit = 100000;
pci_iommu_write(flushreg, ctx);
for(;;) {
if (((long)pci_iommu_read(matchreg)) >= 0L)
break;
limit--;
if (!limit)
break;
udelay(1);
val = pci_iommu_read(matchreg);
val &= 0xffff;
if (!val)
goto do_flush_sync;
while (val) {
if (val & 0x1)
pci_iommu_write(flushreg, ctx);
val >>= 1;
}
if (!limit)
val = pci_iommu_read(matchreg);
if (unlikely(val)) {
printk(KERN_WARNING "pci_strbuf_flush: ctx flush "
"timeout vaddr[%08x] ctx[%lx]\n",
vaddr, ctx);
"timeout matchreg[%lx] ctx[%lx]\n",
val, ctx);
goto do_page_flush;
}
} else {
unsigned long i;
do_page_flush:
for (i = 0; i < npages; i++, vaddr += IO_PAGE_SIZE)
pci_iommu_write(strbuf->strbuf_pflush, vaddr);
}
do_flush_sync:
/* If the device could not have possibly put dirty data into
* the streaming cache, no flush-flag synchronization needs
* to be performed.
*/
if (direction == PCI_DMA_TODEVICE)
return;
PCI_STC_FLUSHFLAG_INIT(strbuf);
pci_iommu_write(strbuf->strbuf_fsync, strbuf->strbuf_flushflag_pa);
(void) pci_iommu_read(iommu->write_complete_reg);
@ -466,7 +510,7 @@ void pci_unmap_single(struct pci_dev *pdev, dma_addr_t bus_addr, size_t sz, int
/* Step 1: Kick data out of streaming buffers if necessary. */
if (strbuf->strbuf_enabled)
pci_strbuf_flush(strbuf, iommu, bus_addr, ctx, npages);
pci_strbuf_flush(strbuf, iommu, bus_addr, ctx, npages, direction);
/* Step 2: Clear out first TSB entry. */
iopte_make_dummy(iommu, base);
@ -474,6 +518,8 @@ void pci_unmap_single(struct pci_dev *pdev, dma_addr_t bus_addr, size_t sz, int
free_streaming_cluster(iommu, bus_addr - iommu->page_table_map_base,
npages, ctx);
iommu_free_ctx(iommu, ctx);
spin_unlock_irqrestore(&iommu->lock, flags);
}
@ -613,7 +659,7 @@ int pci_map_sg(struct pci_dev *pdev, struct scatterlist *sglist, int nelems, int
/* Step 4: Choose a context if necessary. */
ctx = 0;
if (iommu->iommu_ctxflush)
ctx = iommu->iommu_cur_ctx++;
ctx = iommu_alloc_ctx(iommu);
/* Step 5: Create the mappings. */
if (strbuf->strbuf_enabled)
@ -678,7 +724,7 @@ void pci_unmap_sg(struct pci_dev *pdev, struct scatterlist *sglist, int nelems,
/* Step 1: Kick data out of streaming buffers if necessary. */
if (strbuf->strbuf_enabled)
pci_strbuf_flush(strbuf, iommu, bus_addr, ctx, npages);
pci_strbuf_flush(strbuf, iommu, bus_addr, ctx, npages, direction);
/* Step 2: Clear out first TSB entry. */
iopte_make_dummy(iommu, base);
@ -686,6 +732,8 @@ void pci_unmap_sg(struct pci_dev *pdev, struct scatterlist *sglist, int nelems,
free_streaming_cluster(iommu, bus_addr - iommu->page_table_map_base,
npages, ctx);
iommu_free_ctx(iommu, ctx);
spin_unlock_irqrestore(&iommu->lock, flags);
}
@ -724,7 +772,7 @@ void pci_dma_sync_single_for_cpu(struct pci_dev *pdev, dma_addr_t bus_addr, size
}
/* Step 2: Kick data out of streaming buffers. */
pci_strbuf_flush(strbuf, iommu, bus_addr, ctx, npages);
pci_strbuf_flush(strbuf, iommu, bus_addr, ctx, npages, direction);
spin_unlock_irqrestore(&iommu->lock, flags);
}
@ -768,7 +816,7 @@ void pci_dma_sync_sg_for_cpu(struct pci_dev *pdev, struct scatterlist *sglist, i
i--;
npages = (IO_PAGE_ALIGN(sglist[i].dma_address + sglist[i].dma_length)
- bus_addr) >> IO_PAGE_SHIFT;
pci_strbuf_flush(strbuf, iommu, bus_addr, ctx, npages);
pci_strbuf_flush(strbuf, iommu, bus_addr, ctx, npages, direction);
spin_unlock_irqrestore(&iommu->lock, flags);
}

View File

@ -1212,7 +1212,7 @@ static void __init psycho_iommu_init(struct pci_controller_info *p)
/* Setup initial software IOMMU state. */
spin_lock_init(&iommu->lock);
iommu->iommu_cur_ctx = 0;
iommu->ctx_lowest_free = 1;
/* Register addresses. */
iommu->iommu_control = p->pbm_A.controller_regs + PSYCHO_IOMMU_CONTROL;

View File

@ -1265,7 +1265,7 @@ static void __init sabre_iommu_init(struct pci_controller_info *p,
/* Setup initial software IOMMU state. */
spin_lock_init(&iommu->lock);
iommu->iommu_cur_ctx = 0;
iommu->ctx_lowest_free = 1;
/* Register addresses. */
iommu->iommu_control = p->pbm_A.controller_regs + SABRE_IOMMU_CONTROL;

View File

@ -1753,7 +1753,7 @@ static void schizo_pbm_iommu_init(struct pci_pbm_info *pbm)
/* Setup initial software IOMMU state. */
spin_lock_init(&iommu->lock);
iommu->iommu_cur_ctx = 0;
iommu->ctx_lowest_free = 1;
/* Register addresses, SCHIZO has iommu ctx flushing. */
iommu->iommu_control = pbm->pbm_regs + SCHIZO_IOMMU_CONTROL;

View File

@ -117,17 +117,25 @@ static void iommu_flush(struct sbus_iommu *iommu, u32 base, unsigned long npages
#define STRBUF_TAG_VALID 0x02UL
static void sbus_strbuf_flush(struct sbus_iommu *iommu, u32 base, unsigned long npages)
static void sbus_strbuf_flush(struct sbus_iommu *iommu, u32 base, unsigned long npages, int direction)
{
unsigned long n;
int limit;
iommu->strbuf_flushflag = 0UL;
n = npages;
while (n--)
upa_writeq(base + (n << IO_PAGE_SHIFT),
iommu->strbuf_regs + STRBUF_PFLUSH);
/* If the device could not have possibly put dirty data into
* the streaming cache, no flush-flag synchronization needs
* to be performed.
*/
if (direction == SBUS_DMA_TODEVICE)
return;
iommu->strbuf_flushflag = 0UL;
/* Whoopee cushion! */
upa_writeq(__pa(&iommu->strbuf_flushflag),
iommu->strbuf_regs + STRBUF_FSYNC);
@ -421,7 +429,7 @@ void sbus_unmap_single(struct sbus_dev *sdev, dma_addr_t dma_addr, size_t size,
spin_lock_irqsave(&iommu->lock, flags);
free_streaming_cluster(iommu, dma_base, size >> IO_PAGE_SHIFT);
sbus_strbuf_flush(iommu, dma_base, size >> IO_PAGE_SHIFT);
sbus_strbuf_flush(iommu, dma_base, size >> IO_PAGE_SHIFT, direction);
spin_unlock_irqrestore(&iommu->lock, flags);
}
@ -584,7 +592,7 @@ void sbus_unmap_sg(struct sbus_dev *sdev, struct scatterlist *sg, int nents, int
iommu = sdev->bus->iommu;
spin_lock_irqsave(&iommu->lock, flags);
free_streaming_cluster(iommu, dvma_base, size >> IO_PAGE_SHIFT);
sbus_strbuf_flush(iommu, dvma_base, size >> IO_PAGE_SHIFT);
sbus_strbuf_flush(iommu, dvma_base, size >> IO_PAGE_SHIFT, direction);
spin_unlock_irqrestore(&iommu->lock, flags);
}
@ -596,7 +604,7 @@ void sbus_dma_sync_single_for_cpu(struct sbus_dev *sdev, dma_addr_t base, size_t
size = (IO_PAGE_ALIGN(base + size) - (base & IO_PAGE_MASK));
spin_lock_irqsave(&iommu->lock, flags);
sbus_strbuf_flush(iommu, base & IO_PAGE_MASK, size >> IO_PAGE_SHIFT);
sbus_strbuf_flush(iommu, base & IO_PAGE_MASK, size >> IO_PAGE_SHIFT, direction);
spin_unlock_irqrestore(&iommu->lock, flags);
}
@ -620,7 +628,7 @@ void sbus_dma_sync_sg_for_cpu(struct sbus_dev *sdev, struct scatterlist *sg, int
size = IO_PAGE_ALIGN(sg[i].dma_address + sg[i].dma_length) - base;
spin_lock_irqsave(&iommu->lock, flags);
sbus_strbuf_flush(iommu, base, size >> IO_PAGE_SHIFT);
sbus_strbuf_flush(iommu, base, size >> IO_PAGE_SHIFT, direction);
spin_unlock_irqrestore(&iommu->lock, flags);
}

View File

@ -2,10 +2,6 @@ menu "Kernel hacking"
source "lib/Kconfig.debug"
config FRAME_POINTER
bool
default y if DEBUG_INFO
config PT_PROXY
bool "Enable ptrace proxy"
depends on XTERM_CHAN && DEBUG_INFO && MODE_TT

View File

@ -1,5 +1,10 @@
/* Much of this ripped from hw_random.c */
/* Copyright (C) 2005 Jeff Dike <jdike@addtoit.com> */
/* Much of this ripped from drivers/char/hw_random.c, see there for other
* copyright.
*
* This software may be used and distributed according to the terms
* of the GNU General Public License, incorporated herein by reference.
*/
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/miscdevice.h>
@ -12,8 +17,6 @@
*/
#define RNG_VERSION "1.0.0"
#define RNG_MODULE_NAME "random"
#define RNG_DRIVER_NAME RNG_MODULE_NAME " virtual driver " RNG_VERSION
#define PFX RNG_MODULE_NAME ": "
#define RNG_MISCDEV_MINOR 183 /* official */
@ -98,7 +101,7 @@ static int __init rng_init (void)
err = misc_register (&rng_miscdev);
if (err) {
printk (KERN_ERR PFX "misc device register failed\n");
printk (KERN_ERR RNG_MODULE_NAME ": misc device register failed\n");
goto err_out_cleanup_hw;
}
@ -120,3 +123,6 @@ static void __exit rng_cleanup (void)
module_init (rng_init);
module_exit (rng_cleanup);
MODULE_DESCRIPTION("UML Host Random Number Generator (RNG) driver");
MODULE_LICENSE("GPL");

View File

@ -22,7 +22,6 @@
#include "init.h"
#include "irq_user.h"
#include "mconsole_kern.h"
#include "2_5compat.h"
static int ssl_version = 1;

View File

@ -28,7 +28,6 @@
#include "irq_user.h"
#include "mconsole_kern.h"
#include "init.h"
#include "2_5compat.h"
#define MAX_TTYS (16)

View File

@ -49,7 +49,6 @@
#include "irq_user.h"
#include "irq_kern.h"
#include "ubd_user.h"
#include "2_5compat.h"
#include "os.h"
#include "mem.h"
#include "mem_kern.h"
@ -440,9 +439,9 @@ static int udb_setup(char *str)
__setup("udb", udb_setup);
__uml_help(udb_setup,
"udb\n"
" This option is here solely to catch ubd -> udb typos, which can be\n\n"
" to impossible to catch visually unless you specifically look for\n\n"
" them. The only result of any option starting with 'udb' is an error\n\n"
" This option is here solely to catch ubd -> udb typos, which can be\n"
" to impossible to catch visually unless you specifically look for\n"
" them. The only result of any option starting with 'udb' is an error\n"
" in the boot output.\n\n"
);

View File

@ -1,24 +0,0 @@
/*
* Copyright (C) 2001 Jeff Dike (jdike@karaya.com)
* Licensed under the GPL
*/
#ifndef __2_5_COMPAT_H__
#define __2_5_COMPAT_H__
#define INIT_HARDSECT(arr, maj, sizes)
#define SET_PRI(task) do ; while(0)
#endif
/*
* Overrides for Emacs so that we follow Linus's tabbing style.
* Emacs will notice this stuff at the end of the file and automatically
* adjust the settings for this buffer only. This must remain at the end
* of the file.
* ---------------------------------------------------------------------------
* Local variables:
* c-file-style: "linux"
* End:
*/

View File

@ -1,6 +1,7 @@
#ifndef __UM_SYSRQ_H
#define __UM_SYSRQ_H
extern void show_trace(unsigned long *stack);
struct task_struct;
extern void show_trace(struct task_struct* task, unsigned long *stack);
#endif

View File

@ -16,7 +16,6 @@
#include "kern.h"
#include "irq_user.h"
#include "tlb.h"
#include "2_5compat.h"
#include "os.h"
#include "time_user.h"
#include "choose-mode.h"

View File

@ -1,59 +0,0 @@
/*
* Copyright (C) 2000, 2001, 2002 Jeff Dike (jdike@karaya.com)
* Licensed under the GPL
*/
#include "linux/init.h"
#include "linux/bootmem.h"
#include "linux/initrd.h"
#include "asm/types.h"
#include "user_util.h"
#include "kern_util.h"
#include "initrd.h"
#include "init.h"
#include "os.h"
/* Changed by uml_initrd_setup, which is a setup */
static char *initrd __initdata = NULL;
static int __init read_initrd(void)
{
void *area;
long long size;
int err;
if(initrd == NULL) return 0;
err = os_file_size(initrd, &size);
if(err) return 0;
area = alloc_bootmem(size);
if(area == NULL) return 0;
if(load_initrd(initrd, area, size) == -1) return 0;
initrd_start = (unsigned long) area;
initrd_end = initrd_start + size;
return 0;
}
__uml_postsetup(read_initrd);
static int __init uml_initrd_setup(char *line, int *add)
{
initrd = line;
return 0;
}
__uml_setup("initrd=", uml_initrd_setup,
"initrd=<initrd image>\n"
" This is used to boot UML from an initrd image. The argument is the\n"
" name of the file containing the image.\n\n"
);
/*
* Overrides for Emacs so that we follow Linus's tabbing style.
* Emacs will notice this stuff at the end of the file and automatically
* adjust the settings for this buffer only. This must remain at the end
* of the file.
* ---------------------------------------------------------------------------
* Local variables:
* c-file-style: "linux"
* End:
*/

View File

@ -1,46 +0,0 @@
/*
* Copyright (C) 2000, 2001 Jeff Dike (jdike@karaya.com)
* Licensed under the GPL
*/
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <errno.h>
#include "user_util.h"
#include "kern_util.h"
#include "user.h"
#include "initrd.h"
#include "os.h"
int load_initrd(char *filename, void *buf, int size)
{
int fd, n;
fd = os_open_file(filename, of_read(OPENFLAGS()), 0);
if(fd < 0){
printk("Opening '%s' failed - err = %d\n", filename, -fd);
return(-1);
}
n = os_read_file(fd, buf, size);
if(n != size){
printk("Read of %d bytes from '%s' failed, err = %d\n", size,
filename, -n);
return(-1);
}
os_close_file(fd);
return(0);
}
/*
* Overrides for Emacs so that we follow Linus's tabbing style.
* Emacs will notice this stuff at the end of the file and automatically
* adjust the settings for this buffer only. This must remain at the end
* of the file.
* ---------------------------------------------------------------------------
* Local variables:
* c-file-style: "linux"
* End:
*/

View File

@ -71,7 +71,7 @@ static __init void do_uml_initcalls(void)
static void last_ditch_exit(int sig)
{
CHOOSE_MODE(kmalloc_ok = 0, (void) 0);
kmalloc_ok = 0;
signal(SIGINT, SIG_DFL);
signal(SIGTERM, SIG_DFL);
signal(SIGHUP, SIG_DFL);
@ -87,7 +87,7 @@ int main(int argc, char **argv, char **envp)
{
char **new_argv;
sigset_t mask;
int ret, i;
int ret, i, err;
/* Enable all signals except SIGIO - in some environments, we can
* enter with some signals blocked
@ -160,27 +160,29 @@ int main(int argc, char **argv, char **envp)
*/
change_sig(SIGPROF, 0);
/* This signal stuff used to be in the reboot case. However,
* sometimes a SIGVTALRM can come in when we're halting (reproducibly
* when writing out gcov information, presumably because that takes
* some time) and cause a segfault.
*/
/* stop timers and set SIG*ALRM to be ignored */
disable_timer();
/* disable SIGIO for the fds and set SIGIO to be ignored */
err = deactivate_all_fds();
if(err)
printf("deactivate_all_fds failed, errno = %d\n", -err);
/* Let any pending signals fire now. This ensures
* that they won't be delivered after the exec, when
* they are definitely not expected.
*/
unblock_signals();
/* Reboot */
if(ret){
int err;
printf("\n");
/* stop timers and set SIG*ALRM to be ignored */
disable_timer();
/* disable SIGIO for the fds and set SIGIO to be ignored */
err = deactivate_all_fds();
if(err)
printf("deactivate_all_fds failed, errno = %d\n",
-err);
/* Let any pending signals fire now. This ensures
* that they won't be delivered after the exec, when
* they are definitely not expected.
*/
unblock_signals();
execvp(new_argv[0], new_argv);
perror("Failed to exec kernel");
ret = 1;

View File

@ -43,7 +43,6 @@
#include "tlb.h"
#include "frame_kern.h"
#include "sigcontext.h"
#include "2_5compat.h"
#include "os.h"
#include "mode.h"
#include "mode_kern.h"
@ -55,18 +54,6 @@
*/
struct cpu_task cpu_tasks[NR_CPUS] = { [0 ... NR_CPUS - 1] = { -1, NULL } };
struct task_struct *get_task(int pid, int require)
{
struct task_struct *ret;
read_lock(&tasklist_lock);
ret = find_task_by_pid(pid);
read_unlock(&tasklist_lock);
if(require && (ret == NULL)) panic("get_task couldn't find a task\n");
return(ret);
}
int external_pid(void *t)
{
struct task_struct *task = t ? t : current;
@ -189,7 +176,6 @@ void default_idle(void)
while(1){
/* endless idle loop with no priority at all */
SET_PRI(current);
/*
* although we are an idle CPU, we do not want to
@ -212,11 +198,6 @@ int page_size(void)
return(PAGE_SIZE);
}
unsigned long page_mask(void)
{
return(PAGE_MASK);
}
void *um_virt_to_phys(struct task_struct *task, unsigned long addr,
pte_t *pte_out)
{
@ -349,11 +330,6 @@ char *uml_strdup(char *string)
return(new);
}
void *get_init_task(void)
{
return(&init_thread_union.thread_info.task);
}
int copy_to_user_proc(void __user *to, void *from, int size)
{
return(copy_to_user(to, from, size));
@ -480,15 +456,3 @@ unsigned long arch_align_stack(unsigned long sp)
return sp & ~0xf;
}
#endif
/*
* Overrides for Emacs so that we follow Linus's tabbing style.
* Emacs will notice this stuff at the end of the file and automatically
* adjust the settings for this buffer only. This must remain at the end
* of the file.
* ---------------------------------------------------------------------------
* Local variables:
* c-file-style: "linux"
* End:
*/

View File

@ -322,11 +322,9 @@ void syscall_trace(union uml_pt_regs *regs, int entryexit)
UPT_SYSCALL_ARG2(regs),
UPT_SYSCALL_ARG3(regs),
UPT_SYSCALL_ARG4(regs));
else {
int res = UPT_SYSCALL_RET(regs);
audit_syscall_exit(current, AUDITSC_RESULT(res),
res);
}
else audit_syscall_exit(current,
AUDITSC_RESULT(UPT_SYSCALL_RET(regs)),
UPT_SYSCALL_RET(regs));
}
/* Fake a debug trap */
@ -356,14 +354,3 @@ void syscall_trace(union uml_pt_regs *regs, int entryexit)
current->exit_code = 0;
}
}
/*
* Overrides for Emacs so that we follow Linus's tabbing style.
* Emacs will notice this stuff at the end of the file and automatically
* adjust the settings for this buffer only. This must remain at the end
* of the file.
* ---------------------------------------------------------------------------
* Local variables:
* c-file-style: "linux"
* End:
*/

View File

@ -3,6 +3,7 @@
* Licensed under the GPL
*/
#include "linux/config.h"
#include "linux/sched.h"
#include "linux/kernel.h"
#include "linux/module.h"
@ -12,14 +13,14 @@
#include "sysrq.h"
#include "user_util.h"
void show_trace(unsigned long * stack)
/* Catch non-i386 SUBARCH's. */
#if !defined(CONFIG_UML_X86) || defined(CONFIG_64BIT)
void show_trace(struct task_struct *task, unsigned long * stack)
{
/* XXX: Copy the CONFIG_FRAME_POINTER stack-walking backtrace from
* arch/i386/kernel/traps.c, and then move this to sys-i386/sysrq.c.*/
unsigned long addr;
if (!stack) {
stack = (unsigned long*) &stack;
stack = (unsigned long*) &stack;
WARN_ON(1);
}
@ -35,6 +36,7 @@ void show_trace(unsigned long * stack)
}
printk("\n");
}
#endif
/*
* stack dumps generator - this is used by arch-independent code.
@ -44,7 +46,7 @@ void dump_stack(void)
{
unsigned long stack;
show_trace(&stack);
show_trace(current, &stack);
}
EXPORT_SYMBOL(dump_stack);
@ -59,7 +61,11 @@ void show_stack(struct task_struct *task, unsigned long *esp)
int i;
if (esp == NULL) {
if (task != current) {
if (task != current && task != NULL) {
/* XXX: Isn't this bogus? I.e. isn't this the
* *userspace* stack of this task? If not, use this
* even when task == current (as in i386).
*/
esp = (unsigned long *) KSTK_ESP(task);
/* Which one? No actual difference - just coding style.*/
//esp = (unsigned long *) PT_REGS_IP(&task->thread.regs);
@ -77,5 +83,6 @@ void show_stack(struct task_struct *task, unsigned long *esp)
printk("%08lx ", *stack++);
}
show_trace(esp);
printk("Call Trace: \n");
show_trace(current, esp);
}


@ -23,7 +23,6 @@
#include "kern.h"
#include "chan_kern.h"
#include "mconsole_kern.h"
#include "2_5compat.h"
#include "mem.h"
#include "mem_kern.h"


@ -32,10 +32,6 @@ void *switch_to_tt(void *prev, void *next, void *last)
unsigned long flags;
int err, vtalrm, alrm, prof, cpu;
char c;
/* jailing and SMP are incompatible, so this doesn't need to be
* made per-cpu
*/
static int reading;
from = prev;
to = next;
@ -59,13 +55,11 @@ void *switch_to_tt(void *prev, void *next, void *last)
c = 0;
set_current(to);
reading = 0;
err = os_write_file(to->thread.mode.tt.switch_pipe[1], &c, sizeof(c));
if(err != sizeof(c))
panic("write of switch_pipe failed, err = %d", -err);
reading = 1;
if(from->thread.mode.tt.switch_pipe[0] == -1)
if(from->thread.mode.tt.switch_pipe[0] == -1)
os_kill_process(os_getpid(), 0);
err = os_read_file(from->thread.mode.tt.switch_pipe[0], &c, sizeof(c));


@ -111,12 +111,6 @@ struct seq_operations cpuinfo_op = {
.show = show_cpuinfo,
};
pte_t * __bad_pagetable(void)
{
panic("Someone should implement __bad_pagetable");
return(NULL);
}
/* Set in linux_main */
unsigned long host_task_size;
unsigned long task_size;


@ -3,12 +3,15 @@
* Licensed under the GPL
*/
#include "linux/config.h"
#include "linux/kernel.h"
#include "linux/smp.h"
#include "linux/sched.h"
#include "linux/kallsyms.h"
#include "asm/ptrace.h"
#include "sysrq.h"
/* This is declared by <linux/sched.h> */
void show_regs(struct pt_regs *regs)
{
printk("\n");
@ -31,5 +34,80 @@ void show_regs(struct pt_regs *regs)
0xffff & PT_REGS_DS(regs),
0xffff & PT_REGS_ES(regs));
show_trace((unsigned long *) &regs);
show_trace(NULL, (unsigned long *) &regs);
}
/* Copied from i386. */
static inline int valid_stack_ptr(struct thread_info *tinfo, void *p)
{
return p > (void *)tinfo &&
p < (void *)tinfo + THREAD_SIZE - 3;
}
/* Adapted from i386 (we also print the address we read from). */
static inline unsigned long print_context_stack(struct thread_info *tinfo,
unsigned long *stack, unsigned long ebp)
{
unsigned long addr;
#ifdef CONFIG_FRAME_POINTER
while (valid_stack_ptr(tinfo, (void *)ebp)) {
addr = *(unsigned long *)(ebp + 4);
printk("%08lx: [<%08lx>]", ebp + 4, addr);
print_symbol(" %s", addr);
printk("\n");
ebp = *(unsigned long *)ebp;
}
#else
while (valid_stack_ptr(tinfo, stack)) {
addr = *stack;
if (__kernel_text_address(addr)) {
printk("%08lx: [<%08lx>]", (unsigned long) stack, addr);
print_symbol(" %s", addr);
printk("\n");
}
stack++;
}
#endif
return ebp;
}
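/*
* Illustrative note on the CONFIG_FRAME_POINTER walk above: it relies on
* the standard i386 frame layout, where *ebp is the caller's saved ebp
* (the next link in the chain) and *(ebp + 4) is the return address into
* the caller. The walk stops as soon as ebp leaves the task's
* THREAD_SIZE-aligned stack, which is what valid_stack_ptr() checks.
*/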
void show_trace(struct task_struct* task, unsigned long * stack)
{
unsigned long ebp;
struct thread_info *context;
/* Turn this into BUG_ON if possible. */
if (!stack) {
stack = (unsigned long*) &stack;
printk("show_trace: got NULL stack, implicit assumption task == current");
WARN_ON(1);
}
if (!task)
task = current;
if (task != current) {
//ebp = (unsigned long) KSTK_EBP(task);
/* Which one? No actual difference - just coding style.*/
ebp = (unsigned long) PT_REGS_EBP(&task->thread.regs);
} else {
asm ("movl %%ebp, %0" : "=r" (ebp) : );
}
context = (struct thread_info *)
((unsigned long)stack & (~(THREAD_SIZE - 1)));
print_context_stack(context, stack, ebp);
/*while (((long) stack & (THREAD_SIZE-1)) != 0) {
addr = *stack;
if (__kernel_text_address(addr)) {
printk("%08lx: [<%08lx>]", (unsigned long) stack, addr);
print_symbol(" %s", addr);
printk("\n");
}
stack++;
}*/
printk("\n");
}


@ -27,17 +27,5 @@ void show_regs(struct pt_regs_subarch *regs)
0xffff & regs->xds, 0xffff & regs->xes);
#endif
show_trace(&regs->gpr[1]);
show_trace(current, &regs->gpr[1]);
}
/*
* Overrides for Emacs so that we follow Linus's tabbing style.
* Emacs will notice this stuff at the end of the file and automatically
* adjust the settings for this buffer only. This must remain at the end
* of the file.
* ---------------------------------------------------------------------------
* Local variables:
* c-file-style: "linux"
* End:
*/


@ -133,23 +133,27 @@ static long arch_prctl_tt(int code, unsigned long addr)
#ifdef CONFIG_MODE_SKAS
/* XXX: Must also call arch_prctl in the host, beside saving the segment bases! */
static long arch_prctl_skas(int code, unsigned long addr)
{
long ret = 0;
switch(code){
case ARCH_SET_GS:
current->thread.regs.regs.skas.regs[GS_BASE / sizeof(unsigned long)] = addr;
break;
case ARCH_SET_FS:
current->thread.regs.regs.skas.regs[FS_BASE / sizeof(unsigned long)] = addr;
break;
case ARCH_SET_GS:
current->thread.regs.regs.skas.regs[GS_BASE / sizeof(unsigned long)] = addr;
break;
case ARCH_GET_FS:
ret = put_user(current->thread.regs.regs.skas.regs[GS / sizeof(unsigned long)], &addr);
ret = put_user(current->thread.regs.regs.skas.
regs[FS_BASE / sizeof(unsigned long)],
(unsigned long __user *)addr);
break;
case ARCH_GET_GS:
ret = put_user(current->thread.regs.regs.skas.regs[FS / sizeof(unsigned \
long)], &addr);
ret = put_user(current->thread.regs.regs.skas.
regs[GS_BASE / sizeof(unsigned long)],
(unsigned long __user *)addr);
break;
default:
ret = -EINVAL;


@ -36,14 +36,5 @@ void __show_regs(struct pt_regs * regs)
void show_regs(struct pt_regs *regs)
{
__show_regs(regs);
show_trace((unsigned long *) &regs);
show_trace(current, (unsigned long *) &regs);
}
/* Emacs will notice this stuff at the end of the file and automatically
* adjust the settings for this buffer only. This must remain at the end
* of the file.
* ---------------------------------------------------------------------------
* Local variables:
* c-file-style: "linux"
* End:
*/


@ -305,6 +305,7 @@ config HPET_TIMER
config X86_PM_TIMER
bool "PM timer"
depends on ACPI
default y
help
Support the ACPI PM timer for time keeping. This is slow,


@ -37,6 +37,7 @@
#include <asm/desc.h>
#include <asm/proto.h>
#include <asm/mach_apic.h>
#include <asm/acpi.h>
#define __apicdebuginit __init


@ -30,6 +30,7 @@
#include <asm/pgalloc.h>
#include <asm/io_apic.h>
#include <asm/proto.h>
#include <asm/acpi.h>
/* Have we found an MP table */
int smp_found_config;


@ -28,6 +28,7 @@
#include <asm/uaccess.h>
#include <asm/i387.h>
#include <asm/proto.h>
#include <asm/ia32_unistd.h>
/* #define DEBUG_SIG 1 */


@ -27,7 +27,9 @@
#include <linux/bcd.h>
#include <linux/kallsyms.h>
#include <linux/acpi.h>
#ifdef CONFIG_ACPI
#include <acpi/achware.h> /* for PM timer frequency */
#endif
#include <asm/8253pit.h>
#include <asm/pgtable.h>
#include <asm/vsyscall.h>


@ -488,6 +488,20 @@ static int viocd_packet(struct cdrom_device_info *cdi,
& (CDC_DVD_RAM | CDC_RAM)) != 0;
}
break;
case GPCMD_GET_CONFIGURATION:
if (cgc->cmd[3] == CDF_RWRT) {
struct rwrt_feature_desc *rfd = (struct rwrt_feature_desc *)(cgc->buffer + sizeof(struct feature_header));
if ((buflen >=
(sizeof(struct feature_header) + sizeof(*rfd))) &&
(cdi->ops->capability & ~cdi->mask
& (CDC_DVD_RAM | CDC_RAM))) {
rfd->feature_code = cpu_to_be16(CDF_RWRT);
rfd->curr = 1;
ret = 0;
}
}
break;
default:
if (cgc->sense) {
/* indicate Unknown code */


@ -46,6 +46,10 @@ config CPU_FREQ_STAT_DETAILS
This will show the detailed CPU frequency transition table in the sysfs
file system.
# Note that it is not currently possible to set the other governors (such as ondemand)
# as the default, since if they fail to initialise, cpufreq will be
# left in an undefined state.
choice
prompt "Default CPUFreq governor"
default CPU_FREQ_DEFAULT_GOV_USERSPACE if CPU_FREQ_SA1100 || CPU_FREQ_SA1110
@ -115,4 +119,24 @@ config CPU_FREQ_GOV_ONDEMAND
If in doubt, say N.
config CPU_FREQ_GOV_CONSERVATIVE
tristate "'conservative' cpufreq governor"
depends on CPU_FREQ
help
'conservative' - this governor is similar to 'ondemand' both in its
source code and in its purpose; the difference is that it is optimised
for better suitability in a battery-powered environment. The frequency
is increased and decreased gradually rather than jumping to 100% when
more speed is required.
If you have a desktop machine then you should really consider the
'ondemand' governor instead. However, if you are using a laptop, a PDA,
or even an AMD64-based computer (because of the unacceptable
step-by-step latency between the minimum and maximum frequency
transitions in the CPU), you will probably want to use this governor.
For details, take a look at linux/Documentation/cpu-freq.
If in doubt, say N.
endif # CPU_FREQ
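As a usage sketch only (the sysfs path is an assumption about the usual cpufreq layout, not something taken from this diff): once the option above is enabled and the module is loaded, the governor is selected per CPU by writing its name to that CPU's scaling_governor attribute, for example from a small C program:
#include <stdio.h>
/* Minimal sketch: ask cpufreq to run the 'conservative' governor on cpu0.
* The sysfs path below is assumed; adjust it to where sysfs is mounted. */
int main(void)
{
FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor", "w");
if (f == NULL) {
perror("scaling_governor");
return 1;
}
fputs("conservative\n", f);
return fclose(f) ? 1 : 0;
}
If the name written is not an available governor, the write simply fails, so the sketch does not need to check scaling_available_governors first.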


@ -8,6 +8,7 @@ obj-$(CONFIG_CPU_FREQ_GOV_PERFORMANCE) += cpufreq_performance.o
obj-$(CONFIG_CPU_FREQ_GOV_POWERSAVE) += cpufreq_powersave.o
obj-$(CONFIG_CPU_FREQ_GOV_USERSPACE) += cpufreq_userspace.o
obj-$(CONFIG_CPU_FREQ_GOV_ONDEMAND) += cpufreq_ondemand.o
obj-$(CONFIG_CPU_FREQ_GOV_CONSERVATIVE) += cpufreq_conservative.o
# CPUfreq cross-arch helpers
obj-$(CONFIG_CPU_FREQ_TABLE) += freq_table.o


@ -258,7 +258,7 @@ void cpufreq_notify_transition(struct cpufreq_freqs *freqs, unsigned int state)
(likely(cpufreq_cpu_data[freqs->cpu]->cur)) &&
(unlikely(freqs->old != cpufreq_cpu_data[freqs->cpu]->cur)))
{
printk(KERN_WARNING "Warning: CPU frequency is %u, "
dprintk(KERN_WARNING "Warning: CPU frequency is %u, "
"cpufreq assumed %u kHz.\n", freqs->old, cpufreq_cpu_data[freqs->cpu]->cur);
freqs->old = cpufreq_cpu_data[freqs->cpu]->cur;
}
@ -814,7 +814,7 @@ static void cpufreq_out_of_sync(unsigned int cpu, unsigned int old_freq, unsigne
{
struct cpufreq_freqs freqs;
printk(KERN_WARNING "Warning: CPU frequency out of sync: cpufreq and timing "
dprintk(KERN_WARNING "Warning: CPU frequency out of sync: cpufreq and timing "
"core thinks of %u, is %u kHz.\n", old_freq, new_freq);
freqs.cpu = cpu;
@ -923,7 +923,7 @@ static int cpufreq_suspend(struct sys_device * sysdev, u32 state)
struct cpufreq_freqs freqs;
if (!(cpufreq_driver->flags & CPUFREQ_PM_NO_WARN))
printk(KERN_DEBUG "Warning: CPU frequency is %u, "
dprintk(KERN_DEBUG "Warning: CPU frequency is %u, "
"cpufreq assumed %u kHz.\n",
cur_freq, cpu_policy->cur);
@ -1004,7 +1004,7 @@ static int cpufreq_resume(struct sys_device * sysdev)
struct cpufreq_freqs freqs;
if (!(cpufreq_driver->flags & CPUFREQ_PM_NO_WARN))
printk(KERN_WARNING "Warning: CPU frequency"
dprintk(KERN_WARNING "Warning: CPU frequency"
"is %u, cpufreq assumed %u kHz.\n",
cur_freq, cpu_policy->cur);


@ -0,0 +1,586 @@
/*
* drivers/cpufreq/cpufreq_conservative.c
*
* Copyright (C) 2001 Russell King
* (C) 2003 Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>.
* Jun Nakajima <jun.nakajima@intel.com>
* (C) 2004 Alexander Clouter <alex-kernel@digriz.org.uk>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/smp.h>
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/ctype.h>
#include <linux/cpufreq.h>
#include <linux/sysctl.h>
#include <linux/types.h>
#include <linux/fs.h>
#include <linux/sysfs.h>
#include <linux/sched.h>
#include <linux/kmod.h>
#include <linux/workqueue.h>
#include <linux/jiffies.h>
#include <linux/kernel_stat.h>
#include <linux/percpu.h>
/*
* dbs is used in this file as shorthand for demand-based switching.
* It helps to keep variable names smaller and simpler.
*/
#define DEF_FREQUENCY_UP_THRESHOLD (80)
#define MIN_FREQUENCY_UP_THRESHOLD (0)
#define MAX_FREQUENCY_UP_THRESHOLD (100)
#define DEF_FREQUENCY_DOWN_THRESHOLD (20)
#define MIN_FREQUENCY_DOWN_THRESHOLD (0)
#define MAX_FREQUENCY_DOWN_THRESHOLD (100)
/*
* The polling frequency of this governor depends on the capability of
* the processor. The default polling interval is derived from the
* transition latency of the processor (see
* DEF_SAMPLING_RATE_LATENCY_MULTIPLIER below). The governor will work on
* any processor with a transition latency <= 10 ms, using an appropriate
* sampling rate.
* For CPUs with a transition latency > 10 ms (mostly drivers with
* CPUFREQ_ETERNAL), this governor will not work.
* All times here are in us (microseconds).
*/
static unsigned int def_sampling_rate;
#define MIN_SAMPLING_RATE (def_sampling_rate / 2)
#define MAX_SAMPLING_RATE (500 * def_sampling_rate)
#define DEF_SAMPLING_RATE_LATENCY_MULTIPLIER (100000)
#define DEF_SAMPLING_DOWN_FACTOR (5)
#define TRANSITION_LATENCY_LIMIT (10 * 1000)
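/*
* Worked example with the defaults above (the latency value is purely
* illustrative): for a CPU whose cpuinfo.transition_latency is 50,000 ns,
* the GOV_START path below computes
*
* def_sampling_rate = (50000 / 1000) * DEF_SAMPLING_RATE_LATENCY_MULTIPLIER
* = 50 * 100000 = 5,000,000 us
*
* Writes to the sampling_rate tunable outside the range
* [MIN_SAMPLING_RATE, MAX_SAMPLING_RATE], i.e. [def_sampling_rate / 2,
* 500 * def_sampling_rate], are rejected, and policies whose transition
* latency exceeds TRANSITION_LATENCY_LIMIT * 1000 ns (10 ms) are refused
* with -EINVAL at GOV_START time.
*/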
static void do_dbs_timer(void *data);
struct cpu_dbs_info_s {
struct cpufreq_policy *cur_policy;
unsigned int prev_cpu_idle_up;
unsigned int prev_cpu_idle_down;
unsigned int enable;
};
static DEFINE_PER_CPU(struct cpu_dbs_info_s, cpu_dbs_info);
static unsigned int dbs_enable; /* number of CPUs using this policy */
static DECLARE_MUTEX (dbs_sem);
static DECLARE_WORK (dbs_work, do_dbs_timer, NULL);
struct dbs_tuners {
unsigned int sampling_rate;
unsigned int sampling_down_factor;
unsigned int up_threshold;
unsigned int down_threshold;
unsigned int ignore_nice;
unsigned int freq_step;
};
static struct dbs_tuners dbs_tuners_ins = {
.up_threshold = DEF_FREQUENCY_UP_THRESHOLD,
.down_threshold = DEF_FREQUENCY_DOWN_THRESHOLD,
.sampling_down_factor = DEF_SAMPLING_DOWN_FACTOR,
};
static inline unsigned int get_cpu_idle_time(unsigned int cpu)
{
return kstat_cpu(cpu).cpustat.idle +
kstat_cpu(cpu).cpustat.iowait +
( !dbs_tuners_ins.ignore_nice ?
kstat_cpu(cpu).cpustat.nice :
0);
}
/************************** sysfs interface ************************/
static ssize_t show_sampling_rate_max(struct cpufreq_policy *policy, char *buf)
{
return sprintf (buf, "%u\n", MAX_SAMPLING_RATE);
}
static ssize_t show_sampling_rate_min(struct cpufreq_policy *policy, char *buf)
{
return sprintf (buf, "%u\n", MIN_SAMPLING_RATE);
}
#define define_one_ro(_name) \
static struct freq_attr _name = \
__ATTR(_name, 0444, show_##_name, NULL)
define_one_ro(sampling_rate_max);
define_one_ro(sampling_rate_min);
/* cpufreq_conservative Governor Tunables */
#define show_one(file_name, object) \
static ssize_t show_##file_name \
(struct cpufreq_policy *unused, char *buf) \
{ \
return sprintf(buf, "%u\n", dbs_tuners_ins.object); \
}
show_one(sampling_rate, sampling_rate);
show_one(sampling_down_factor, sampling_down_factor);
show_one(up_threshold, up_threshold);
show_one(down_threshold, down_threshold);
show_one(ignore_nice, ignore_nice);
show_one(freq_step, freq_step);
static ssize_t store_sampling_down_factor(struct cpufreq_policy *unused,
const char *buf, size_t count)
{
unsigned int input;
int ret;
ret = sscanf (buf, "%u", &input);
if (ret != 1 )
return -EINVAL;
down(&dbs_sem);
dbs_tuners_ins.sampling_down_factor = input;
up(&dbs_sem);
return count;
}
static ssize_t store_sampling_rate(struct cpufreq_policy *unused,
const char *buf, size_t count)
{
unsigned int input;
int ret;
ret = sscanf (buf, "%u", &input);
down(&dbs_sem);
if (ret != 1 || input > MAX_SAMPLING_RATE || input < MIN_SAMPLING_RATE) {
up(&dbs_sem);
return -EINVAL;
}
dbs_tuners_ins.sampling_rate = input;
up(&dbs_sem);
return count;
}
static ssize_t store_up_threshold(struct cpufreq_policy *unused,
const char *buf, size_t count)
{
unsigned int input;
int ret;
ret = sscanf (buf, "%u", &input);
down(&dbs_sem);
if (ret != 1 || input > MAX_FREQUENCY_UP_THRESHOLD ||
input < MIN_FREQUENCY_UP_THRESHOLD ||
input <= dbs_tuners_ins.down_threshold) {
up(&dbs_sem);
return -EINVAL;
}
dbs_tuners_ins.up_threshold = input;
up(&dbs_sem);
return count;
}
static ssize_t store_down_threshold(struct cpufreq_policy *unused,
const char *buf, size_t count)
{
unsigned int input;
int ret;
ret = sscanf (buf, "%u", &input);
down(&dbs_sem);
if (ret != 1 || input > MAX_FREQUENCY_DOWN_THRESHOLD ||
input < MIN_FREQUENCY_DOWN_THRESHOLD ||
input >= dbs_tuners_ins.up_threshold) {
up(&dbs_sem);
return -EINVAL;
}
dbs_tuners_ins.down_threshold = input;
up(&dbs_sem);
return count;
}
static ssize_t store_ignore_nice(struct cpufreq_policy *policy,
const char *buf, size_t count)
{
unsigned int input;
int ret;
unsigned int j;
ret = sscanf (buf, "%u", &input);
if ( ret != 1 )
return -EINVAL;
if ( input > 1 )
input = 1;
down(&dbs_sem);
if ( input == dbs_tuners_ins.ignore_nice ) { /* nothing to do */
up(&dbs_sem);
return count;
}
dbs_tuners_ins.ignore_nice = input;
/* we need to re-evaluate prev_cpu_idle_up and prev_cpu_idle_down */
for_each_online_cpu(j) {
struct cpu_dbs_info_s *j_dbs_info;
j_dbs_info = &per_cpu(cpu_dbs_info, j);
j_dbs_info->prev_cpu_idle_up = get_cpu_idle_time(j);
j_dbs_info->prev_cpu_idle_down = j_dbs_info->prev_cpu_idle_up;
}
up(&dbs_sem);
return count;
}
static ssize_t store_freq_step(struct cpufreq_policy *policy,
const char *buf, size_t count)
{
unsigned int input;
int ret;
ret = sscanf (buf, "%u", &input);
if ( ret != 1 )
return -EINVAL;
if ( input > 100 )
input = 100;
/* no need to test here if freq_step is zero as the user might actually
* want this, they would be crazy though :) */
down(&dbs_sem);
dbs_tuners_ins.freq_step = input;
up(&dbs_sem);
return count;
}
#define define_one_rw(_name) \
static struct freq_attr _name = \
__ATTR(_name, 0644, show_##_name, store_##_name)
define_one_rw(sampling_rate);
define_one_rw(sampling_down_factor);
define_one_rw(up_threshold);
define_one_rw(down_threshold);
define_one_rw(ignore_nice);
define_one_rw(freq_step);
static struct attribute * dbs_attributes[] = {
&sampling_rate_max.attr,
&sampling_rate_min.attr,
&sampling_rate.attr,
&sampling_down_factor.attr,
&up_threshold.attr,
&down_threshold.attr,
&ignore_nice.attr,
&freq_step.attr,
NULL
};
static struct attribute_group dbs_attr_group = {
.attrs = dbs_attributes,
.name = "conservative",
};
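/*
* Since the group is named "conservative" and is registered on the policy
* kobject in the GOV_START path below, these tunables are expected to
* appear under <sysfs>/devices/system/cpu/cpuX/cpufreq/conservative/ once
* the governor is active (illustrative path; it depends on where sysfs is
* mounted).
*/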
/************************** sysfs end ************************/
static void dbs_check_cpu(int cpu)
{
unsigned int idle_ticks, up_idle_ticks, down_idle_ticks;
unsigned int freq_step;
unsigned int freq_down_sampling_rate;
static int down_skip[NR_CPUS];
static int requested_freq[NR_CPUS];
static unsigned short init_flag = 0;
struct cpu_dbs_info_s *this_dbs_info;
struct cpu_dbs_info_s *dbs_info;
struct cpufreq_policy *policy;
unsigned int j;
this_dbs_info = &per_cpu(cpu_dbs_info, cpu);
if (!this_dbs_info->enable)
return;
policy = this_dbs_info->cur_policy;
if ( init_flag == 0 ) {
for ( /* NULL */; init_flag < NR_CPUS; init_flag++ ) {
dbs_info = &per_cpu(cpu_dbs_info, init_flag);
/* index by the CPU being initialised, not by the caller */
requested_freq[init_flag] = dbs_info->cur_policy->cur;
}
init_flag = 1;
}
/*
* The default safe range is 20% to 80%
* Every sampling_rate, we check
* - If current idle time is less than 20%, then we try to
* increase frequency
* Every sampling_rate*sampling_down_factor, we check
* - If current idle time is more than 80%, then we try to
* decrease frequency
*
* Any frequency increase takes it to the maximum frequency.
* Frequency reduction happens at minimum steps of
* 5% (default) of max_frequency
*/
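/*
* Numerical sketch of the check below (values are illustrative only):
* with up_threshold = 80, sampling_rate = 1,000,000 us and HZ = 100,
* up_idle_ticks = (100 - 80) * usecs_to_jiffies(1000000) = 20 * 100 = 2000.
* A CPU that accumulated 15 jiffies of idle time since the last sample
* gives idle_ticks * 100 = 1500 < 2000, so the increase path is taken.
*/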
/* Check for frequency increase */
idle_ticks = UINT_MAX;
for_each_cpu_mask(j, policy->cpus) {
unsigned int tmp_idle_ticks, total_idle_ticks;
struct cpu_dbs_info_s *j_dbs_info;
j_dbs_info = &per_cpu(cpu_dbs_info, j);
/* Check for frequency increase */
total_idle_ticks = get_cpu_idle_time(j);
tmp_idle_ticks = total_idle_ticks -
j_dbs_info->prev_cpu_idle_up;
j_dbs_info->prev_cpu_idle_up = total_idle_ticks;
if (tmp_idle_ticks < idle_ticks)
idle_ticks = tmp_idle_ticks;
}
/* Scale idle ticks by 100 and compare with up and down ticks */
idle_ticks *= 100;
up_idle_ticks = (100 - dbs_tuners_ins.up_threshold) *
usecs_to_jiffies(dbs_tuners_ins.sampling_rate);
if (idle_ticks < up_idle_ticks) {
down_skip[cpu] = 0;
for_each_cpu_mask(j, policy->cpus) {
struct cpu_dbs_info_s *j_dbs_info;
j_dbs_info = &per_cpu(cpu_dbs_info, j);
j_dbs_info->prev_cpu_idle_down =
j_dbs_info->prev_cpu_idle_up;
}
/* if we are already at full speed then break out early */
if (requested_freq[cpu] == policy->max)
return;
freq_step = (dbs_tuners_ins.freq_step * policy->max) / 100;
/* max freq cannot be less than 100. But who knows.... */
if (unlikely(freq_step == 0))
freq_step = 5;
requested_freq[cpu] += freq_step;
if (requested_freq[cpu] > policy->max)
requested_freq[cpu] = policy->max;
__cpufreq_driver_target(policy, requested_freq[cpu],
CPUFREQ_RELATION_H);
return;
}
/* Check for frequency decrease */
down_skip[cpu]++;
if (down_skip[cpu] < dbs_tuners_ins.sampling_down_factor)
return;
idle_ticks = UINT_MAX;
for_each_cpu_mask(j, policy->cpus) {
unsigned int tmp_idle_ticks, total_idle_ticks;
struct cpu_dbs_info_s *j_dbs_info;
j_dbs_info = &per_cpu(cpu_dbs_info, j);
total_idle_ticks = j_dbs_info->prev_cpu_idle_up;
tmp_idle_ticks = total_idle_ticks -
j_dbs_info->prev_cpu_idle_down;
j_dbs_info->prev_cpu_idle_down = total_idle_ticks;
if (tmp_idle_ticks < idle_ticks)
idle_ticks = tmp_idle_ticks;
}
/* Scale idle ticks by 100 and compare with up and down ticks */
idle_ticks *= 100;
down_skip[cpu] = 0;
freq_down_sampling_rate = dbs_tuners_ins.sampling_rate *
dbs_tuners_ins.sampling_down_factor;
down_idle_ticks = (100 - dbs_tuners_ins.down_threshold) *
usecs_to_jiffies(freq_down_sampling_rate);
if (idle_ticks > down_idle_ticks) {
/* if we are already at the lowest speed then break out early
* or if we 'cannot' reduce the speed as the user might want
* freq_step to be zero */
if (requested_freq[cpu] == policy->min
|| dbs_tuners_ins.freq_step == 0)
return;
freq_step = (dbs_tuners_ins.freq_step * policy->max) / 100;
/* max freq cannot be less than 100. But who knows.... */
if (unlikely(freq_step == 0))
freq_step = 5;
requested_freq[cpu] -= freq_step;
if (requested_freq[cpu] < policy->min)
requested_freq[cpu] = policy->min;
__cpufreq_driver_target(policy,
requested_freq[cpu],
CPUFREQ_RELATION_H);
return;
}
}
static void do_dbs_timer(void *data)
{
int i;
down(&dbs_sem);
for_each_online_cpu(i)
dbs_check_cpu(i);
schedule_delayed_work(&dbs_work,
usecs_to_jiffies(dbs_tuners_ins.sampling_rate));
up(&dbs_sem);
}
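/*
* Note on the timer scheme above: each run of do_dbs_timer() re-arms
* itself via schedule_delayed_work(), so the work keeps firing every
* sampling_rate microseconds until dbs_timer_exit() cancels it; the
* module exit path additionally calls flush_scheduled_work() to make
* sure no instance is still running.
*/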
static inline void dbs_timer_init(void)
{
INIT_WORK(&dbs_work, do_dbs_timer, NULL);
schedule_delayed_work(&dbs_work,
usecs_to_jiffies(dbs_tuners_ins.sampling_rate));
return;
}
static inline void dbs_timer_exit(void)
{
cancel_delayed_work(&dbs_work);
return;
}
static int cpufreq_governor_dbs(struct cpufreq_policy *policy,
unsigned int event)
{
unsigned int cpu = policy->cpu;
struct cpu_dbs_info_s *this_dbs_info;
unsigned int j;
this_dbs_info = &per_cpu(cpu_dbs_info, cpu);
switch (event) {
case CPUFREQ_GOV_START:
if ((!cpu_online(cpu)) ||
(!policy->cur))
return -EINVAL;
if (policy->cpuinfo.transition_latency >
(TRANSITION_LATENCY_LIMIT * 1000))
return -EINVAL;
if (this_dbs_info->enable) /* Already enabled */
break;
down(&dbs_sem);
for_each_cpu_mask(j, policy->cpus) {
struct cpu_dbs_info_s *j_dbs_info;
j_dbs_info = &per_cpu(cpu_dbs_info, j);
j_dbs_info->cur_policy = policy;
j_dbs_info->prev_cpu_idle_up = get_cpu_idle_time(j);
j_dbs_info->prev_cpu_idle_down
= j_dbs_info->prev_cpu_idle_up;
}
this_dbs_info->enable = 1;
sysfs_create_group(&policy->kobj, &dbs_attr_group);
dbs_enable++;
/*
* Start the scheduled timer work when this governor
* is used for the first time.
*/
if (dbs_enable == 1) {
unsigned int latency;
/* policy latency is in nS. Convert it to uS first */
latency = policy->cpuinfo.transition_latency;
if (latency < 1000)
latency = 1000;
def_sampling_rate = (latency / 1000) *
DEF_SAMPLING_RATE_LATENCY_MULTIPLIER;
dbs_tuners_ins.sampling_rate = def_sampling_rate;
dbs_tuners_ins.ignore_nice = 0;
dbs_tuners_ins.freq_step = 5;
dbs_timer_init();
}
up(&dbs_sem);
break;
case CPUFREQ_GOV_STOP:
down(&dbs_sem);
this_dbs_info->enable = 0;
sysfs_remove_group(&policy->kobj, &dbs_attr_group);
dbs_enable--;
/*
* Stop the scheduled timer work when the last CPU using
* this governor is stopped.
*/
if (dbs_enable == 0)
dbs_timer_exit();
up(&dbs_sem);
break;
case CPUFREQ_GOV_LIMITS:
down(&dbs_sem);
if (policy->max < this_dbs_info->cur_policy->cur)
__cpufreq_driver_target(
this_dbs_info->cur_policy,
policy->max, CPUFREQ_RELATION_H);
else if (policy->min > this_dbs_info->cur_policy->cur)
__cpufreq_driver_target(
this_dbs_info->cur_policy,
policy->min, CPUFREQ_RELATION_L);
up(&dbs_sem);
break;
}
return 0;
}
static struct cpufreq_governor cpufreq_gov_dbs = {
.name = "conservative",
.governor = cpufreq_governor_dbs,
.owner = THIS_MODULE,
};
static int __init cpufreq_gov_dbs_init(void)
{
return cpufreq_register_governor(&cpufreq_gov_dbs);
}
static void __exit cpufreq_gov_dbs_exit(void)
{
/* Make sure that the scheduled work is indeed not running */
flush_scheduled_work();
cpufreq_unregister_governor(&cpufreq_gov_dbs);
}
MODULE_AUTHOR ("Alexander Clouter <alex-kernel@digriz.org.uk>");
MODULE_DESCRIPTION ("'cpufreq_conservative' - A dynamic cpufreq governor for "
"Low Latency Frequency Transition capable processors "
"optimised for use in a battery environment");
MODULE_LICENSE ("GPL");
module_init(cpufreq_gov_dbs_init);
module_exit(cpufreq_gov_dbs_exit);


@ -34,13 +34,9 @@
*/
#define DEF_FREQUENCY_UP_THRESHOLD (80)
#define MIN_FREQUENCY_UP_THRESHOLD (0)
#define MIN_FREQUENCY_UP_THRESHOLD (11)
#define MAX_FREQUENCY_UP_THRESHOLD (100)
#define DEF_FREQUENCY_DOWN_THRESHOLD (20)
#define MIN_FREQUENCY_DOWN_THRESHOLD (0)
#define MAX_FREQUENCY_DOWN_THRESHOLD (100)
/*
* The polling frequency of this governor depends on the capability of
* the processor. Default polling frequency is 1000 times the transition
@ -55,9 +51,9 @@ static unsigned int def_sampling_rate;
#define MIN_SAMPLING_RATE (def_sampling_rate / 2)
#define MAX_SAMPLING_RATE (500 * def_sampling_rate)
#define DEF_SAMPLING_RATE_LATENCY_MULTIPLIER (1000)
#define DEF_SAMPLING_DOWN_FACTOR (10)
#define DEF_SAMPLING_DOWN_FACTOR (1)
#define MAX_SAMPLING_DOWN_FACTOR (10)
#define TRANSITION_LATENCY_LIMIT (10 * 1000)
#define sampling_rate_in_HZ(x) (((x * HZ) < (1000 * 1000))?1:((x * HZ) / (1000 * 1000)))
static void do_dbs_timer(void *data);
@ -78,15 +74,23 @@ struct dbs_tuners {
unsigned int sampling_rate;
unsigned int sampling_down_factor;
unsigned int up_threshold;
unsigned int down_threshold;
unsigned int ignore_nice;
};
static struct dbs_tuners dbs_tuners_ins = {
.up_threshold = DEF_FREQUENCY_UP_THRESHOLD,
.down_threshold = DEF_FREQUENCY_DOWN_THRESHOLD,
.sampling_down_factor = DEF_SAMPLING_DOWN_FACTOR,
};
static inline unsigned int get_cpu_idle_time(unsigned int cpu)
{
return kstat_cpu(cpu).cpustat.idle +
kstat_cpu(cpu).cpustat.iowait +
( !dbs_tuners_ins.ignore_nice ?
kstat_cpu(cpu).cpustat.nice :
0);
}
/************************** sysfs interface ************************/
static ssize_t show_sampling_rate_max(struct cpufreq_policy *policy, char *buf)
{
@ -115,7 +119,7 @@ static ssize_t show_##file_name \
show_one(sampling_rate, sampling_rate);
show_one(sampling_down_factor, sampling_down_factor);
show_one(up_threshold, up_threshold);
show_one(down_threshold, down_threshold);
show_one(ignore_nice, ignore_nice);
static ssize_t store_sampling_down_factor(struct cpufreq_policy *unused,
const char *buf, size_t count)
@ -126,6 +130,9 @@ static ssize_t store_sampling_down_factor(struct cpufreq_policy *unused,
if (ret != 1 )
return -EINVAL;
if (input > MAX_SAMPLING_DOWN_FACTOR || input < 1)
return -EINVAL;
down(&dbs_sem);
dbs_tuners_ins.sampling_down_factor = input;
up(&dbs_sem);
@ -161,8 +168,7 @@ static ssize_t store_up_threshold(struct cpufreq_policy *unused,
down(&dbs_sem);
if (ret != 1 || input > MAX_FREQUENCY_UP_THRESHOLD ||
input < MIN_FREQUENCY_UP_THRESHOLD ||
input <= dbs_tuners_ins.down_threshold) {
input < MIN_FREQUENCY_UP_THRESHOLD) {
up(&dbs_sem);
return -EINVAL;
}
@ -173,22 +179,35 @@ static ssize_t store_up_threshold(struct cpufreq_policy *unused,
return count;
}
static ssize_t store_down_threshold(struct cpufreq_policy *unused,
static ssize_t store_ignore_nice(struct cpufreq_policy *policy,
const char *buf, size_t count)
{
unsigned int input;
int ret;
unsigned int j;
ret = sscanf (buf, "%u", &input);
down(&dbs_sem);
if (ret != 1 || input > MAX_FREQUENCY_DOWN_THRESHOLD ||
input < MIN_FREQUENCY_DOWN_THRESHOLD ||
input >= dbs_tuners_ins.up_threshold) {
up(&dbs_sem);
if ( ret != 1 )
return -EINVAL;
}
dbs_tuners_ins.down_threshold = input;
if ( input > 1 )
input = 1;
down(&dbs_sem);
if ( input == dbs_tuners_ins.ignore_nice ) { /* nothing to do */
up(&dbs_sem);
return count;
}
dbs_tuners_ins.ignore_nice = input;
/* we need to re-evaluate prev_cpu_idle_up and prev_cpu_idle_down */
for_each_online_cpu(j) {
struct cpu_dbs_info_s *j_dbs_info;
j_dbs_info = &per_cpu(cpu_dbs_info, j);
j_dbs_info->prev_cpu_idle_up = get_cpu_idle_time(j);
j_dbs_info->prev_cpu_idle_down = j_dbs_info->prev_cpu_idle_up;
}
up(&dbs_sem);
return count;
@ -201,7 +220,7 @@ __ATTR(_name, 0644, show_##_name, store_##_name)
define_one_rw(sampling_rate);
define_one_rw(sampling_down_factor);
define_one_rw(up_threshold);
define_one_rw(down_threshold);
define_one_rw(ignore_nice);
static struct attribute * dbs_attributes[] = {
&sampling_rate_max.attr,
@ -209,7 +228,7 @@ static struct attribute * dbs_attributes[] = {
&sampling_rate.attr,
&sampling_down_factor.attr,
&up_threshold.attr,
&down_threshold.attr,
&ignore_nice.attr,
NULL
};
@ -222,9 +241,8 @@ static struct attribute_group dbs_attr_group = {
static void dbs_check_cpu(int cpu)
{
unsigned int idle_ticks, up_idle_ticks, down_idle_ticks;
unsigned int total_idle_ticks;
unsigned int freq_down_step;
unsigned int idle_ticks, up_idle_ticks, total_ticks;
unsigned int freq_next;
unsigned int freq_down_sampling_rate;
static int down_skip[NR_CPUS];
struct cpu_dbs_info_s *this_dbs_info;
@ -238,38 +256,25 @@ static void dbs_check_cpu(int cpu)
policy = this_dbs_info->cur_policy;
/*
* The default safe range is 20% to 80%
* Every sampling_rate, we check
* - If current idle time is less than 20%, then we try to
* increase frequency
* Every sampling_rate*sampling_down_factor, we check
* - If current idle time is more than 80%, then we try to
* decrease frequency
* Every sampling_rate, we check if the current idle time is less
* than 20% (default); if it is, we try to increase the frequency.
* Every sampling_rate*sampling_down_factor, we look for the lowest
* frequency which can sustain the load while keeping idle time over
* 30%. If such a frequency exists, we try to decrease to this frequency.
*
* Any frequency increase takes it to the maximum frequency.
* Frequency reduction happens at minimum steps of
* 5% of max_frequency
* 5% (default) of current frequency
*/
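/*
* Illustrative numbers for the decrease path below: if 60% of the ticks
* in the down sampling window were busy and up_threshold is 80, then
* freq_next = (60 * policy->cur) / (80 - 10), roughly 86% of the current
* frequency, and the new target is only issued because it is at or below
* 95% of policy->cur.
*/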
/* Check for frequency increase */
total_idle_ticks = kstat_cpu(cpu).cpustat.idle +
kstat_cpu(cpu).cpustat.iowait;
idle_ticks = total_idle_ticks -
this_dbs_info->prev_cpu_idle_up;
this_dbs_info->prev_cpu_idle_up = total_idle_ticks;
idle_ticks = UINT_MAX;
for_each_cpu_mask(j, policy->cpus) {
unsigned int tmp_idle_ticks;
unsigned int tmp_idle_ticks, total_idle_ticks;
struct cpu_dbs_info_s *j_dbs_info;
if (j == cpu)
continue;
j_dbs_info = &per_cpu(cpu_dbs_info, j);
/* Check for frequency increase */
total_idle_ticks = kstat_cpu(j).cpustat.idle +
kstat_cpu(j).cpustat.iowait;
total_idle_ticks = get_cpu_idle_time(j);
tmp_idle_ticks = total_idle_ticks -
j_dbs_info->prev_cpu_idle_up;
j_dbs_info->prev_cpu_idle_up = total_idle_ticks;
@ -281,13 +286,23 @@ static void dbs_check_cpu(int cpu)
/* Scale idle ticks by 100 and compare with up and down ticks */
idle_ticks *= 100;
up_idle_ticks = (100 - dbs_tuners_ins.up_threshold) *
sampling_rate_in_HZ(dbs_tuners_ins.sampling_rate);
usecs_to_jiffies(dbs_tuners_ins.sampling_rate);
if (idle_ticks < up_idle_ticks) {
down_skip[cpu] = 0;
for_each_cpu_mask(j, policy->cpus) {
struct cpu_dbs_info_s *j_dbs_info;
j_dbs_info = &per_cpu(cpu_dbs_info, j);
j_dbs_info->prev_cpu_idle_down =
j_dbs_info->prev_cpu_idle_up;
}
/* if we are already at full speed then break out early */
if (policy->cur == policy->max)
return;
__cpufreq_driver_target(policy, policy->max,
CPUFREQ_RELATION_H);
down_skip[cpu] = 0;
this_dbs_info->prev_cpu_idle_down = total_idle_ticks;
return;
}
@ -296,23 +311,14 @@ static void dbs_check_cpu(int cpu)
if (down_skip[cpu] < dbs_tuners_ins.sampling_down_factor)
return;
total_idle_ticks = kstat_cpu(cpu).cpustat.idle +
kstat_cpu(cpu).cpustat.iowait;
idle_ticks = total_idle_ticks -
this_dbs_info->prev_cpu_idle_down;
this_dbs_info->prev_cpu_idle_down = total_idle_ticks;
idle_ticks = UINT_MAX;
for_each_cpu_mask(j, policy->cpus) {
unsigned int tmp_idle_ticks;
unsigned int tmp_idle_ticks, total_idle_ticks;
struct cpu_dbs_info_s *j_dbs_info;
if (j == cpu)
continue;
j_dbs_info = &per_cpu(cpu_dbs_info, j);
/* Check for frequency increase */
total_idle_ticks = kstat_cpu(j).cpustat.idle +
kstat_cpu(j).cpustat.iowait;
/* Check for frequency decrease */
total_idle_ticks = j_dbs_info->prev_cpu_idle_up;
tmp_idle_ticks = total_idle_ticks -
j_dbs_info->prev_cpu_idle_down;
j_dbs_info->prev_cpu_idle_down = total_idle_ticks;
@ -321,38 +327,37 @@ static void dbs_check_cpu(int cpu)
idle_ticks = tmp_idle_ticks;
}
/* Scale idle ticks by 100 and compare with up and down ticks */
idle_ticks *= 100;
down_skip[cpu] = 0;
/* if we cannot reduce the frequency anymore, break out early */
if (policy->cur == policy->min)
return;
/* Compute how many ticks there are between two measurements */
freq_down_sampling_rate = dbs_tuners_ins.sampling_rate *
dbs_tuners_ins.sampling_down_factor;
down_idle_ticks = (100 - dbs_tuners_ins.down_threshold) *
sampling_rate_in_HZ(freq_down_sampling_rate);
total_ticks = usecs_to_jiffies(freq_down_sampling_rate);
if (idle_ticks > down_idle_ticks ) {
freq_down_step = (5 * policy->max) / 100;
/*
* The optimal frequency is the lowest frequency that can support the
* current CPU usage without triggering the up policy. To be safe, we
* aim 10 percentage points under the up threshold.
*/
freq_next = ((total_ticks - idle_ticks) * 100) / total_ticks;
freq_next = (freq_next * policy->cur) /
(dbs_tuners_ins.up_threshold - 10);
/* max freq cannot be less than 100. But who knows.... */
if (unlikely(freq_down_step == 0))
freq_down_step = 5;
__cpufreq_driver_target(policy,
policy->cur - freq_down_step,
CPUFREQ_RELATION_H);
return;
}
if (freq_next <= ((policy->cur * 95) / 100))
__cpufreq_driver_target(policy, freq_next, CPUFREQ_RELATION_L);
}
static void do_dbs_timer(void *data)
{
int i;
down(&dbs_sem);
for (i = 0; i < NR_CPUS; i++)
if (cpu_online(i))
dbs_check_cpu(i);
for_each_online_cpu(i)
dbs_check_cpu(i);
schedule_delayed_work(&dbs_work,
sampling_rate_in_HZ(dbs_tuners_ins.sampling_rate));
usecs_to_jiffies(dbs_tuners_ins.sampling_rate));
up(&dbs_sem);
}
@ -360,7 +365,7 @@ static inline void dbs_timer_init(void)
{
INIT_WORK(&dbs_work, do_dbs_timer, NULL);
schedule_delayed_work(&dbs_work,
sampling_rate_in_HZ(dbs_tuners_ins.sampling_rate));
usecs_to_jiffies(dbs_tuners_ins.sampling_rate));
return;
}
@ -397,12 +402,9 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy,
j_dbs_info = &per_cpu(cpu_dbs_info, j);
j_dbs_info->cur_policy = policy;
j_dbs_info->prev_cpu_idle_up =
kstat_cpu(j).cpustat.idle +
kstat_cpu(j).cpustat.iowait;
j_dbs_info->prev_cpu_idle_down =
kstat_cpu(j).cpustat.idle +
kstat_cpu(j).cpustat.iowait;
j_dbs_info->prev_cpu_idle_up = get_cpu_idle_time(j);
j_dbs_info->prev_cpu_idle_down
= j_dbs_info->prev_cpu_idle_up;
}
this_dbs_info->enable = 1;
sysfs_create_group(&policy->kobj, &dbs_attr_group);
@ -422,6 +424,7 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy,
def_sampling_rate = (latency / 1000) *
DEF_SAMPLING_RATE_LATENCY_MULTIPLIER;
dbs_tuners_ins.sampling_rate = def_sampling_rate;
dbs_tuners_ins.ignore_nice = 0;
dbs_timer_init();
}
@ -461,12 +464,11 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy,
return 0;
}
struct cpufreq_governor cpufreq_gov_dbs = {
static struct cpufreq_governor cpufreq_gov_dbs = {
.name = "ondemand",
.governor = cpufreq_governor_dbs,
.owner = THIS_MODULE,
};
EXPORT_SYMBOL(cpufreq_gov_dbs);
static int __init cpufreq_gov_dbs_init(void)
{


@ -19,6 +19,7 @@
#include <linux/percpu.h>
#include <linux/kobject.h>
#include <linux/spinlock.h>
#include <asm/cputime.h>
static spinlock_t cpufreq_stats_lock;
@ -29,20 +30,14 @@ static struct freq_attr _attr_##_name = {\
.show = _show,\
};
static unsigned long
delta_time(unsigned long old, unsigned long new)
{
return (old > new) ? (old - new): (new + ~old + 1);
}
struct cpufreq_stats {
unsigned int cpu;
unsigned int total_trans;
unsigned long long last_time;
unsigned long long last_time;
unsigned int max_state;
unsigned int state_num;
unsigned int last_index;
unsigned long long *time_in_state;
cputime64_t *time_in_state;
unsigned int *freq_table;
#ifdef CONFIG_CPU_FREQ_STAT_DETAILS
unsigned int *trans_table;
@ -60,12 +55,16 @@ static int
cpufreq_stats_update (unsigned int cpu)
{
struct cpufreq_stats *stat;
unsigned long long cur_time;
cur_time = get_jiffies_64();
spin_lock(&cpufreq_stats_lock);
stat = cpufreq_stats_table[cpu];
if (stat->time_in_state)
stat->time_in_state[stat->last_index] +=
delta_time(stat->last_time, jiffies);
stat->last_time = jiffies;
stat->time_in_state[stat->last_index] =
cputime64_add(stat->time_in_state[stat->last_index],
cputime_sub(cur_time, stat->last_time));
stat->last_time = cur_time;
spin_unlock(&cpufreq_stats_lock);
return 0;
}
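/*
* Descriptive note: with the change above, the timestamps are taken with
* get_jiffies_64() and the per-state totals are accumulated as cputime64_t
* via cputime64_add()/cputime_sub(); show_time_in_state() below converts
* each accumulated value with cputime64_to_clock_t() before printing it.
*/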
@ -90,8 +89,8 @@ show_time_in_state(struct cpufreq_policy *policy, char *buf)
return 0;
cpufreq_stats_update(stat->cpu);
for (i = 0; i < stat->state_num; i++) {
len += sprintf(buf + len, "%u %llu\n",
stat->freq_table[i], stat->time_in_state[i]);
len += sprintf(buf + len, "%u %llu\n", stat->freq_table[i],
(unsigned long long)cputime64_to_clock_t(stat->time_in_state[i]));
}
return len;
}
@ -107,16 +106,30 @@ show_trans_table(struct cpufreq_policy *policy, char *buf)
if(!stat)
return 0;
cpufreq_stats_update(stat->cpu);
len += snprintf(buf + len, PAGE_SIZE - len, " From : To\n");
len += snprintf(buf + len, PAGE_SIZE - len, " : ");
for (i = 0; i < stat->state_num; i++) {
if (len >= PAGE_SIZE)
break;
len += snprintf(buf + len, PAGE_SIZE - len, "%9u:\t",
len += snprintf(buf + len, PAGE_SIZE - len, "%9u ",
stat->freq_table[i]);
}
if (len >= PAGE_SIZE)
return len;
len += snprintf(buf + len, PAGE_SIZE - len, "\n");
for (i = 0; i < stat->state_num; i++) {
if (len >= PAGE_SIZE)
break;
len += snprintf(buf + len, PAGE_SIZE - len, "%9u: ",
stat->freq_table[i]);
for (j = 0; j < stat->state_num; j++) {
if (len >= PAGE_SIZE)
break;
len += snprintf(buf + len, PAGE_SIZE - len, "%u\t",
len += snprintf(buf + len, PAGE_SIZE - len, "%9u ",
stat->trans_table[i*stat->max_state+j]);
}
len += snprintf(buf + len, PAGE_SIZE - len, "\n");
@ -197,7 +210,7 @@ cpufreq_stats_create_table (struct cpufreq_policy *policy,
count++;
}
alloc_size = count * sizeof(int) + count * sizeof(long long);
alloc_size = count * sizeof(int) + count * sizeof(cputime64_t);
#ifdef CONFIG_CPU_FREQ_STAT_DETAILS
alloc_size += count * count * sizeof(int);
@ -224,7 +237,7 @@ cpufreq_stats_create_table (struct cpufreq_policy *policy,
}
stat->state_num = j;
spin_lock(&cpufreq_stats_lock);
stat->last_time = jiffies;
stat->last_time = get_jiffies_64();
stat->last_index = freq_table_get_index(stat, policy->cur);
spin_unlock(&cpufreq_stats_lock);
cpufreq_cpu_put(data);


@ -11,6 +11,7 @@
* published by the Free Software Foundation.
*/
#include <linux/config.h>
#include <linux/acpi.h>
#include <linux/console.h>
#include <linux/efi.h>


@ -2,6 +2,7 @@
* i2c-ali1563.c - i2c driver for the ALi 1563 Southbridge
*
* Copyright (C) 2004 Patrick Mochel
* 2005 Rudolf Marek <r.marek@sh.cvut.cz>
*
* The 1563 southbridge is deceptively similar to the 1533, with a
* few notable exceptions. One of those happens to be the fact they
@ -57,10 +58,11 @@
#define HST_CNTL2_BLOCK 0x05
#define HST_CNTL2_SIZEMASK 0x38
static unsigned short ali1563_smba;
static int ali1563_transaction(struct i2c_adapter * a)
static int ali1563_transaction(struct i2c_adapter * a, int size)
{
u32 data;
int timeout;
@ -73,7 +75,7 @@ static int ali1563_transaction(struct i2c_adapter * a)
data = inb_p(SMB_HST_STS);
if (data & HST_STS_BAD) {
dev_warn(&a->dev,"ali1563: Trying to reset busy device\n");
dev_err(&a->dev, "ali1563: Trying to reset busy device\n");
outb_p(data | HST_STS_BAD,SMB_HST_STS);
data = inb_p(SMB_HST_STS);
if (data & HST_STS_BAD)
@ -94,19 +96,31 @@ static int ali1563_transaction(struct i2c_adapter * a)
if (timeout && !(data & HST_STS_BAD))
return 0;
dev_warn(&a->dev, "SMBus Error: %s%s%s%s%s\n",
timeout ? "Timeout " : "",
data & HST_STS_FAIL ? "Transaction Failed " : "",
data & HST_STS_BUSERR ? "No response or Bus Collision " : "",
data & HST_STS_DEVERR ? "Device Error " : "",
!(data & HST_STS_DONE) ? "Transaction Never Finished " : "");
if (!(data & HST_STS_DONE))
if (!timeout) {
dev_err(&a->dev, "Timeout - Trying to KILL transaction!\n");
/* Issue 'kill' to host controller */
outb_p(HST_CNTL2_KILL,SMB_HST_CNTL2);
else
/* Issue timeout to reset all devices on bus */
data = inb_p(SMB_HST_STS);
}
/* device error - no response, ignore the autodetection case */
if ((data & HST_STS_DEVERR) && (size != HST_CNTL2_QUICK)) {
dev_err(&a->dev, "Device error!\n");
}
/* bus collision */
if (data & HST_STS_BUSERR) {
dev_err(&a->dev, "Bus collision!\n");
/* Issue timeout, hoping it helps */
outb_p(HST_CNTL1_TIMEOUT,SMB_HST_CNTL1);
}
if (data & HST_STS_FAIL) {
dev_err(&a->dev, "Cleaning fail after KILL!\n");
outb_p(0x0,SMB_HST_CNTL2);
}
return -1;
}
@ -149,7 +163,7 @@ static int ali1563_block_start(struct i2c_adapter * a)
if (timeout && !(data & HST_STS_BAD))
return 0;
dev_warn(&a->dev, "SMBus Error: %s%s%s%s%s\n",
dev_err(&a->dev, "SMBus Error: %s%s%s%s%s\n",
timeout ? "Timeout " : "",
data & HST_STS_FAIL ? "Transaction Failed " : "",
data & HST_STS_BUSERR ? "No response or Bus Collision " : "",
@ -242,13 +256,15 @@ static s32 ali1563_access(struct i2c_adapter * a, u16 addr,
}
outb_p(((addr & 0x7f) << 1) | (rw & 0x01), SMB_HST_ADD);
outb_p(inb_p(SMB_HST_CNTL2) | (size << 3), SMB_HST_CNTL2);
outb_p((inb_p(SMB_HST_CNTL2) & ~HST_CNTL2_SIZEMASK) | (size << 3), SMB_HST_CNTL2);
/* Write the command register */
switch(size) {
case HST_CNTL2_BYTE:
if (rw== I2C_SMBUS_WRITE)
outb_p(cmd, SMB_HST_CMD);
/* Beware it uses DAT0 register and not CMD! */
outb_p(cmd, SMB_HST_DAT0);
break;
case HST_CNTL2_BYTE_DATA:
outb_p(cmd, SMB_HST_CMD);
@ -268,7 +284,7 @@ static s32 ali1563_access(struct i2c_adapter * a, u16 addr,
goto Done;
}
if ((error = ali1563_transaction(a)))
if ((error = ali1563_transaction(a, size)))
goto Done;
if ((rw == I2C_SMBUS_WRITE) || (size == HST_CNTL2_QUICK))


@ -1936,7 +1936,7 @@ static ide_startstop_t cdrom_do_block_pc(ide_drive_t *drive, struct request *rq)
* NOTE! The "len" and "addr" checks should possibly have
* separate masks.
*/
if ((rq->data_len & mask) || (addr & mask))
if ((rq->data_len & 15) || (addr & mask))
info->dma = 0;
}


@ -72,6 +72,7 @@ static struct amd_ide_chip {
{ PCI_DEVICE_ID_NVIDIA_NFORCE3S_SATA2, 0x50, AMD_UDMA_133 },
{ PCI_DEVICE_ID_NVIDIA_NFORCE_CK804_IDE, 0x50, AMD_UDMA_133 },
{ PCI_DEVICE_ID_NVIDIA_NFORCE_MCP04_IDE, 0x50, AMD_UDMA_133 },
{ PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_IDE, 0x50, AMD_UDMA_133 },
{ 0 }
};
@ -487,6 +488,7 @@ static ide_pci_device_t amd74xx_chipsets[] __devinitdata = {
/* 12 */ DECLARE_NV_DEV("NFORCE3-250-SATA2"),
/* 13 */ DECLARE_NV_DEV("NFORCE-CK804"),
/* 14 */ DECLARE_NV_DEV("NFORCE-MCP04"),
/* 15 */ DECLARE_NV_DEV("NFORCE-MCP51"),
};
static int __devinit amd74xx_probe(struct pci_dev *dev, const struct pci_device_id *id)
@ -521,6 +523,7 @@ static struct pci_device_id amd74xx_pci_tbl[] = {
#endif
{ PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_CK804_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 13 },
{ PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP04_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 14 },
{ PCI_VENDOR_ID_NVIDIA, PCI_DEVICE_ID_NVIDIA_NFORCE_MCP51_IDE, PCI_ANY_ID, PCI_ANY_ID, 0, 0, 15 },
{ 0, },
};
MODULE_DEVICE_TABLE(pci, amd74xx_pci_tbl);


@ -68,23 +68,3 @@ config GAMEPORT_CS461X
depends on PCI
endif
# Yes, SOUND_GAMEPORT looks a bit odd. Yes, it ends up being turned on
# in every .config. Please don't touch it. It is here to handle an
# unusual dependency between GAMEPORT and sound drivers.
#
# Some sound drivers call gameport functions. If GAMEPORT is
# not selected, empty stubs are provided for the functions and all is
# well.
# If GAMEPORT is built in, everything is fine.
# If GAMEPORT is a module, however, it would need to be loaded for the
# sound driver to be able to link properly. Therefore, the sound
# driver must be a module as well in that case. Since there's no way
# to express that directly in Kconfig, we use SOUND_GAMEPORT to
# express it. SOUND_GAMEPORT boils down to "if GAMEPORT is 'm',
# anything that depends on SOUND_GAMEPORT must be 'm' as well. if
# GAMEPORT is 'y' or 'n', it can be anything".
config SOUND_GAMEPORT
tristate
default m if GAMEPORT=m
default y


@ -422,7 +422,7 @@ static struct input_handle *joydev_connect(struct input_handler *handler, struct
joydev->nkey++;
}
for (i = 0; i < BTN_JOYSTICK - BTN_MISC + 1; i++)
for (i = 0; i < BTN_JOYSTICK - BTN_MISC; i++)
if (test_bit(i + BTN_MISC, dev->keybit)) {
joydev->keymap[i] = joydev->nkey;
joydev->keypam[joydev->nkey] = i + BTN_MISC;


@ -171,9 +171,9 @@ static struct {
unsigned char set2;
} atkbd_scroll_keys[] = {
{ ATKBD_SCR_1, 0xc5 },
{ ATKBD_SCR_2, 0xa9 },
{ ATKBD_SCR_4, 0xb6 },
{ ATKBD_SCR_8, 0xa7 },
{ ATKBD_SCR_2, 0x9d },
{ ATKBD_SCR_4, 0xa4 },
{ ATKBD_SCR_8, 0x9b },
{ ATKBD_SCR_CLICK, 0xe0 },
{ ATKBD_SCR_LEFT, 0xcb },
{ ATKBD_SCR_RIGHT, 0xd2 },


@ -518,13 +518,16 @@ static int psmouse_probe(struct psmouse *psmouse)
/*
* First, we check if it's a mouse. It should send 0x00 or 0x03
* in case of an IntelliMouse in 4-byte mode or 0x04 for IM Explorer.
* Sunrex K8561 IR Keyboard/Mouse reports 0xff on second and subsequent
* ID queries, probably due to a firmware bug.
*/
param[0] = 0xa5;
if (ps2_command(ps2dev, param, PSMOUSE_CMD_GETID))
return -1;
if (param[0] != 0x00 && param[0] != 0x03 && param[0] != 0x04)
if (param[0] != 0x00 && param[0] != 0x03 &&
param[0] != 0x04 && param[0] != 0xff)
return -1;
/*
@ -972,7 +975,7 @@ static int psmouse_set_maxproto(const char *val, struct kernel_param *kp)
return -EINVAL;
if (!strncmp(val, "any", 3)) {
*((unsigned int *)kp->arg) = -1UL;
*((unsigned int *)kp->arg) = -1U;
return 0;
}
