Merge git://git.infradead.org/mtd-2.6

* git://git.infradead.org/mtd-2.6: (69 commits)
  Revert "[MTD] m25p80.c code cleanup"
  [MTD] [NAND] GPIO driver depends on ARM... for now.
  [MTD] [NAND] sh_flctl: fix compile error
  [MTD] [NOR] AT49BV6416 has swapped erase regions
  [MTD] [NAND] GPIO NAND flash driver
  [MTD] cmdlineparts documentation change - explain where mtd-id comes from
  [MTD] cfi_cmdset_0002.c: Add Macronix CFI V1.0 TopBottom detection
  [MTD] [NAND] Fix compilation warnings in drivers/mtd/nand/cs553x_nand.c
  [JFFS2] Write buffer offset adjustment for NOR-ECC (Sibley) flash
  [MTD] mtdoops: Fix a bug where block may not be erased
  [MTD] mtdoops: Add a magic number to logged kernel oops
  [MTD] mtdoops: Fix an off by one error
  [JFFS2] Correct parameter names of jffs2_compress() in comments
  [MTD] [NAND] sh_flctl: add support for Renesas SuperH FLCTL
  [MTD] [NAND] Bug on atmel_nand HW ECC : OOB info not correctly written
  [MTD] [MAPS] Remove unused variable after ROM API cleanup.
  [MTD] m25p80.c extended jedec support (v2)
  [MTD] remove unused mtd parameter in of_mtd_parse_partitions()
  [MTD] [NAND] remove dead Kconfig associated with !CONFIG_PPC_MERGE
  [MTD] [NAND] driver extension to support NAND on TQM85xx modules
  ...
Linus Torvalds 2008-10-20 09:03:12 -07:00
commit 2be508d847
69 changed files with 5439 additions and 1505 deletions

View File

@ -0,0 +1,714 @@
Introduction
============
Having looked at the Linux mtd/nand driver and more specifically at nand_ecc.c
I felt there was room for optimisation. I bashed the code for a few hours,
performing tricks like table lookup, removing superfluous code, etc.
After that the speed was increased by 35-40%.
Still I was not too happy as I felt there was additional room for improvement.
Bad! I was hooked.
I decided to annotate my steps in this file. Perhaps it is useful to someone,
or someone may learn something from it.
The problem
===========
NAND flash (at least SLC flash) typically has sectors of 256 bytes.
However NAND flash is not extremely reliable so some error detection
(and sometimes correction) is needed.
This is done by means of a Hamming code. I'll try to explain it in
layman's terms (and apologies to all the pros in the field in case I do
not use the right terminology; my coding theory class was almost 30
years ago, and I must admit it was not one of my favourites).
As I said before the ecc calculation is performed on sectors of 256
bytes. This is done by calculating several parity bits over the rows and
columns. The parity used is even parity, which means that the parity bit is 1
if the number of 1 bits in the data over which the parity is calculated is odd,
and 0 if that number is even. So the total number of 1 bits in the data over
which the parity is calculated, plus the parity bit itself, is always even.
(See Wikipedia if you can't follow this.)
Parity is often calculated by means of an exclusive or operation,
sometimes also referred to as xor. In C the operator for xor is ^.
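As a small illustration (mine, not part of the driver code), the parity bit of
a single byte can be computed by xor-folding the byte onto itself:

/* fold a byte with xor so that bit 0 ends up holding the parity of
   the 8 data bits: 1 for an odd number of 1 bits, 0 for an even number */
static unsigned char byte_parity(unsigned char b)
{
	b ^= b >> 4;	/* fold upper nibble onto lower nibble */
	b ^= b >> 2;	/* fold 4 bits onto 2 bits */
	b ^= b >> 1;	/* fold 2 bits onto 1 bit */
	return b & 1;
}

The same folding idea shows up again in attempt 9 below, just with left shifts.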
Back to ecc.
Let's give a small figure:
byte 0: bit7 bit6 bit5 bit4 bit3 bit2 bit1 bit0 rp0 rp2 rp4 ... rp14
byte 1: bit7 bit6 bit5 bit4 bit3 bit2 bit1 bit0 rp1 rp2 rp4 ... rp14
byte 2: bit7 bit6 bit5 bit4 bit3 bit2 bit1 bit0 rp0 rp3 rp4 ... rp14
byte 3: bit7 bit6 bit5 bit4 bit3 bit2 bit1 bit0 rp1 rp3 rp4 ... rp14
byte 4: bit7 bit6 bit5 bit4 bit3 bit2 bit1 bit0 rp0 rp2 rp5 ... rp14
....
byte 254: bit7 bit6 bit5 bit4 bit3 bit2 bit1 bit0 rp0 rp3 rp5 ... rp15
byte 255: bit7 bit6 bit5 bit4 bit3 bit2 bit1 bit0 rp1 rp3 rp5 ... rp15
          cp1  cp0  cp1  cp0  cp1  cp0  cp1  cp0
          cp3  cp3  cp2  cp2  cp3  cp3  cp2  cp2
          cp5  cp5  cp5  cp5  cp4  cp4  cp4  cp4
This figure represents a sector of 256 bytes.
cp is my abbreviation for column parity, rp for row parity.
Let's start to explain column parity.
cp0 is the parity that belongs to all bit0, bit2, bit4, bit6.
so the sum of all bit0, bit2, bit4 and bit6 values + cp0 itself is even.
Similarly cp1 is the sum of all bit1, bit3, bit5 and bit7.
cp2 is the parity over bit0, bit1, bit4 and bit5
cp3 is the parity over bit2, bit3, bit6 and bit7.
cp4 is the parity over bit0, bit1, bit2 and bit3.
cp5 is the parity over bit4, bit5, bit6 and bit7.
Note that each of cp0 .. cp5 is exactly one bit.
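Expressed as C-like pseudocode (reusing the byte_parity() helper sketched
earlier; again purely as an illustration), the six column parities of a
single byte b are:

cp0 = byte_parity(b & 0x55);	/* bits 0, 2, 4, 6 */
cp1 = byte_parity(b & 0xaa);	/* bits 1, 3, 5, 7 */
cp2 = byte_parity(b & 0x33);	/* bits 0, 1, 4, 5 */
cp3 = byte_parity(b & 0xcc);	/* bits 2, 3, 6, 7 */
cp4 = byte_parity(b & 0x0f);	/* bits 0, 1, 2, 3 */
cp5 = byte_parity(b & 0xf0);	/* bits 4, 5, 6, 7 */

The same masks return later in the code[2] generation of the various attempts.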
Row parity actually works almost the same.
rp0 is the parity of all even bytes (0, 2, 4, 6, ... 252, 254)
rp1 is the parity of all odd bytes (1, 3, 5, 7, ..., 253, 255)
rp2 is the parity of all bytes 0, 1, 4, 5, 8, 9, ...
(so handle two bytes, then skip 2 bytes).
rp3 covers the half rp2 does not cover (bytes 2, 3, 6, 7, 10, 11, ...)
for rp4 the rule is cover 4 bytes, skip 4 bytes, cover 4 bytes, skip 4 etc.
so rp4 calculates parity over bytes 0, 1, 2, 3, 8, 9, 10, 11, 16, ...)
and rp5 covers the other half, so bytes 4, 5, 6, 7, 12, 13, 14, 15, 20, ..
The story now becomes quite boring. I guess you get the idea.
rp6 covers 8 bytes then skips 8 etc
rp7 skips 8 bytes then covers 8 etc
rp8 covers 16 bytes then skips 16 etc
rp9 skips 16 bytes then covers 16 etc
rp10 covers 32 bytes then skips 32 etc
rp11 skips 32 bytes then covers 32 etc
rp12 covers 64 bytes then skips 64 etc
rp13 skips 64 bytes then covers 64 etc
rp14 covers 128 bytes then skips 128
rp15 skips 128 bytes then covers 128
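Put differently: for k = 0..7, rp(2k) covers the bytes whose index has bit k
cleared and rp(2k+1) covers the bytes whose index has bit k set. A compact
sketch of that rule (illustration only, not how the driver code is structured):

unsigned char rp[16];	/* rp[0] .. rp[15], byte-wise xor accumulators */

void add_byte_to_row_parities(unsigned int i, unsigned char data)
{
	int k;

	for (k = 0; k < 8; k++) {
		if (i & (1 << k))
			rp[2 * k + 1] ^= data;	/* rp1, rp3, ..., rp15 */
		else
			rp[2 * k] ^= data;	/* rp0, rp2, ..., rp14 */
	}
}

Each accumulator still holds a whole byte here; the single row parity bit is
extracted from it at the end, as the attempts below do with the parity[]
lookup table.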
In the end the parity bits are grouped together in three bytes as
follows:
ECC Bit 7 Bit 6 Bit 5 Bit 4 Bit 3 Bit 2 Bit 1 Bit 0
ECC 0 rp07 rp06 rp05 rp04 rp03 rp02 rp01 rp00
ECC 1 rp15 rp14 rp13 rp12 rp11 rp10 rp09 rp08
ECC 2 cp5 cp4 cp3 cp2 cp1 cp0 1 1
I noticed after writing this that ST application note AN1823
(http://www.st.com/stonline/books/pdf/docs/10123.pdf) gives a much
nicer picture (but they use "line parity" as the term where I use "row parity").
Oh well, I'm graphically challenged, so suffer with me for a moment :-)
And I could not reuse the ST picture anyway for copyright reasons.
Attempt 0
=========
Implementing the parity calculation is pretty simple.
In C pseudocode:
for (i = 0; i < 256; i++)
{
if (i & 0x01)
rp1 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp1;
else
rp0 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp0;
if (i & 0x02)
rp3 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp3;
else
rp2 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp2;
if (i & 0x04)
rp5 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp5;
else
rp4 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp4;
if (i & 0x08)
rp7 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp7;
else
rp6 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp6;
if (i & 0x10)
rp9 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp9;
else
rp8 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp8;
if (i & 0x20)
rp11 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp11;
else
rp10 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp10;
if (i & 0x40)
rp13 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp13;
else
rp12 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp12;
if (i & 0x80)
rp15 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp15;
else
rp14 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ bit3 ^ bit2 ^ bit1 ^ bit0 ^ rp14;
cp0 = bit6 ^ bit4 ^ bit2 ^ bit0 ^ cp0;
cp1 = bit7 ^ bit5 ^ bit3 ^ bit1 ^ cp1;
cp2 = bit5 ^ bit4 ^ bit1 ^ bit0 ^ cp2;
cp3 = bit7 ^ bit6 ^ bit3 ^ bit2 ^ cp3;
cp4 = bit3 ^ bit2 ^ bit1 ^ bit0 ^ cp4;
cp5 = bit7 ^ bit6 ^ bit5 ^ bit4 ^ cp5;
}
Analysis 0
==========
C does have bitwise operators but not really operators to do the above
efficiently (and most hardware has no such instructions either).
Therefore without implementing this it was clear that the code above was
not going to bring me a Nobel prize :-)
Fortunately the exclusive or operation is commutative, so we can combine
the values in any order. So instead of calculating all the bits
individually, let us try to rearrange things.
For the column parity this is easy. We can just xor the bytes and in the
end filter out the relevant bits. This is pretty nice as it moves all
cp calculation out of the loop.
Similarly we can first xor the bytes for the various rows.
This leads to:
Attempt 1
=========
const char parity[256] = {
0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0,
1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0,
1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0,
0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0,
1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0,
0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0,
1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0,
1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0
};
void ecc1(const unsigned char *buf, unsigned char *code)
{
int i;
const unsigned char *bp = buf;
unsigned char cur;
unsigned char rp0, rp1, rp2, rp3, rp4, rp5, rp6, rp7;
unsigned char rp8, rp9, rp10, rp11, rp12, rp13, rp14, rp15;
unsigned char par;
par = 0;
rp0 = 0; rp1 = 0; rp2 = 0; rp3 = 0;
rp4 = 0; rp5 = 0; rp6 = 0; rp7 = 0;
rp8 = 0; rp9 = 0; rp10 = 0; rp11 = 0;
rp12 = 0; rp13 = 0; rp14 = 0; rp15 = 0;
for (i = 0; i < 256; i++)
{
cur = *bp++;
par ^= cur;
if (i & 0x01) rp1 ^= cur; else rp0 ^= cur;
if (i & 0x02) rp3 ^= cur; else rp2 ^= cur;
if (i & 0x04) rp5 ^= cur; else rp4 ^= cur;
if (i & 0x08) rp7 ^= cur; else rp6 ^= cur;
if (i & 0x10) rp9 ^= cur; else rp8 ^= cur;
if (i & 0x20) rp11 ^= cur; else rp10 ^= cur;
if (i & 0x40) rp13 ^= cur; else rp12 ^= cur;
if (i & 0x80) rp15 ^= cur; else rp14 ^= cur;
}
code[0] =
(parity[rp7] << 7) |
(parity[rp6] << 6) |
(parity[rp5] << 5) |
(parity[rp4] << 4) |
(parity[rp3] << 3) |
(parity[rp2] << 2) |
(parity[rp1] << 1) |
(parity[rp0]);
code[1] =
(parity[rp15] << 7) |
(parity[rp14] << 6) |
(parity[rp13] << 5) |
(parity[rp12] << 4) |
(parity[rp11] << 3) |
(parity[rp10] << 2) |
(parity[rp9] << 1) |
(parity[rp8]);
code[2] =
(parity[par & 0xf0] << 7) |
(parity[par & 0x0f] << 6) |
(parity[par & 0xcc] << 5) |
(parity[par & 0x33] << 4) |
(parity[par & 0xaa] << 3) |
(parity[par & 0x55] << 2);
code[0] = ~code[0];
code[1] = ~code[1];
code[2] = ~code[2];
}
Still pretty straightforward. The last three invert statements are there to
give a checksum of 0xff 0xff 0xff for an empty flash. In an empty flash
all data is 0xff, so the checksum then matches.
I also introduced the parity lookup. I expected this to be the fastest
way to calculate the parity, but I will investigate alternatives later
on.
Analysis 1
==========
The code works, but is not terribly efficient. On my system it took
almost 4 times as much time as the linux driver code. But hey, if it was
*that* easy this would have been done long before.
No pain, no gain.
Fortunately there is plenty of room for improvement.
In step 1 we moved from bit-wise calculation to byte-wise calculation.
However in C we can also use the unsigned long data type and virtually
every modern microprocessor supports 32 bit operations, so why not try
to write our code in such a way that we process data in 32 bit chunks.
Of course this means some modification as the row parity is byte by
byte. A quick analysis:
for the column parity we use the par variable. When extending to 32 bits
we can in the end easily calculate rp0 and rp1 from it
(because par now consists of 4 bytes, contributing to rp1, rp0, rp1, rp0
respectively).
also rp2 and rp3 can be easily retrieved from par as rp2 covers the
first two bytes and rp3 the last two bytes of each group of four.
Note that of course now the loop is executed only 64 times (256/4).
And note that care must be taken wrt byte ordering. The way bytes are
ordered in a long is machine dependent, and might affect us.
Anyway, if there is an issue: this code is developed on x86 (to be
precise: a DELL PC with a D920 Intel CPU)
And of course the performance might depend on alignment, but I expect
that the I/O buffers in the nand driver are aligned properly (and
otherwise that should be fixed to get maximum performance).
Let's give it a try...
Attempt 2
=========
extern const char parity[256];
void ecc2(const unsigned char *buf, unsigned char *code)
{
int i;
const unsigned long *bp = (const unsigned long *)buf;
unsigned long cur;
unsigned long rp0, rp1, rp2, rp3, rp4, rp5, rp6, rp7;
unsigned long rp8, rp9, rp10, rp11, rp12, rp13, rp14, rp15;
unsigned long par;
par = 0;
rp0 = 0; rp1 = 0; rp2 = 0; rp3 = 0;
rp4 = 0; rp5 = 0; rp6 = 0; rp7 = 0;
rp8 = 0; rp9 = 0; rp10 = 0; rp11 = 0;
rp12 = 0; rp13 = 0; rp14 = 0; rp15 = 0;
for (i = 0; i < 64; i++)
{
cur = *bp++;
par ^= cur;
if (i & 0x01) rp5 ^= cur; else rp4 ^= cur;
if (i & 0x02) rp7 ^= cur; else rp6 ^= cur;
if (i & 0x04) rp9 ^= cur; else rp8 ^= cur;
if (i & 0x08) rp11 ^= cur; else rp10 ^= cur;
if (i & 0x10) rp13 ^= cur; else rp12 ^= cur;
if (i & 0x20) rp15 ^= cur; else rp14 ^= cur;
}
/*
we need to adapt the code generation for the fact that rp vars are now
long; also the column parity calculation needs to be changed.
we'll bring rp4 to 15 back to single byte entities by shifting and
xoring
*/
rp4 ^= (rp4 >> 16); rp4 ^= (rp4 >> 8); rp4 &= 0xff;
rp5 ^= (rp5 >> 16); rp5 ^= (rp5 >> 8); rp5 &= 0xff;
rp6 ^= (rp6 >> 16); rp6 ^= (rp6 >> 8); rp6 &= 0xff;
rp7 ^= (rp7 >> 16); rp7 ^= (rp7 >> 8); rp7 &= 0xff;
rp8 ^= (rp8 >> 16); rp8 ^= (rp8 >> 8); rp8 &= 0xff;
rp9 ^= (rp9 >> 16); rp9 ^= (rp9 >> 8); rp9 &= 0xff;
rp10 ^= (rp10 >> 16); rp10 ^= (rp10 >> 8); rp10 &= 0xff;
rp11 ^= (rp11 >> 16); rp11 ^= (rp11 >> 8); rp11 &= 0xff;
rp12 ^= (rp12 >> 16); rp12 ^= (rp12 >> 8); rp12 &= 0xff;
rp13 ^= (rp13 >> 16); rp13 ^= (rp13 >> 8); rp13 &= 0xff;
rp14 ^= (rp14 >> 16); rp14 ^= (rp14 >> 8); rp14 &= 0xff;
rp15 ^= (rp15 >> 16); rp15 ^= (rp15 >> 8); rp15 &= 0xff;
rp3 = (par >> 16); rp3 ^= (rp3 >> 8); rp3 &= 0xff;
rp2 = par & 0xffff; rp2 ^= (rp2 >> 8); rp2 &= 0xff;
par ^= (par >> 16);
rp1 = (par >> 8); rp1 &= 0xff;
rp0 = (par & 0xff);
par ^= (par >> 8); par &= 0xff;
code[0] =
(parity[rp7] << 7) |
(parity[rp6] << 6) |
(parity[rp5] << 5) |
(parity[rp4] << 4) |
(parity[rp3] << 3) |
(parity[rp2] << 2) |
(parity[rp1] << 1) |
(parity[rp0]);
code[1] =
(parity[rp15] << 7) |
(parity[rp14] << 6) |
(parity[rp13] << 5) |
(parity[rp12] << 4) |
(parity[rp11] << 3) |
(parity[rp10] << 2) |
(parity[rp9] << 1) |
(parity[rp8]);
code[2] =
(parity[par & 0xf0] << 7) |
(parity[par & 0x0f] << 6) |
(parity[par & 0xcc] << 5) |
(parity[par & 0x33] << 4) |
(parity[par & 0xaa] << 3) |
(parity[par & 0x55] << 2);
code[0] = ~code[0];
code[1] = ~code[1];
code[2] = ~code[2];
}
The parity array is not shown any more. Note also that for these
examples I kinda deviated from my regular programming style by allowing
multiple statements on a line, not using { } in then and else blocks
with only a single statement, and by using operators like ^=.
Analysis 2
==========
The code (of course) works, and hurray: we are a little bit faster than
the linux driver code (about 15%). But wait, don't cheer too quickly.
There is more to be gained.
If we look at e.g. rp14 and rp15 we see that we either xor our data with
rp14 or with rp15. However we also have par which goes over all data.
This means there is no need to calculate rp14 as it can be calculated from
rp15 through rp14 = par ^ rp15;
(or if desired we can avoid calculating rp15 and calculate it from
rp14). That is why some places refer to inverse parity.
Of course the same thing holds for rp4/5, rp6/7, rp8/9, rp10/11 and rp12/13.
Effectively this means we can eliminate the else clause from the if
statements. Also we can optimise the calculation in the end a little bit
by going from long to byte first. Actually we can even avoid the table
lookups.
Attempt 3
=========
I replaced:
if (i & 0x01) rp5 ^= cur; else rp4 ^= cur;
if (i & 0x02) rp7 ^= cur; else rp6 ^= cur;
if (i & 0x04) rp9 ^= cur; else rp8 ^= cur;
if (i & 0x08) rp11 ^= cur; else rp10 ^= cur;
if (i & 0x10) rp13 ^= cur; else rp12 ^= cur;
if (i & 0x20) rp15 ^= cur; else rp14 ^= cur;
with
if (i & 0x01) rp5 ^= cur;
if (i & 0x02) rp7 ^= cur;
if (i & 0x04) rp9 ^= cur;
if (i & 0x08) rp11 ^= cur;
if (i & 0x10) rp13 ^= cur;
if (i & 0x20) rp15 ^= cur;
and outside the loop added:
rp4 = par ^ rp5;
rp6 = par ^ rp7;
rp8 = par ^ rp9;
rp10 = par ^ rp11;
rp12 = par ^ rp13;
rp14 = par ^ rp15;
And after that the code takes about 30% more time, although the number of
statements is reduced. This is also reflected in the assembly code.
Analysis 3
==========
Very weird. Guess it has to do with caching or instruction parallelism
or so. I also tried on an eeePC (Celeron, clocked at 900 MHz). Interesting
observation was that this one is only about 30% slower (according to time)
executing the code than my 3 GHz D920 processor.
Well, it was expected not to be easy so maybe instead move to a
different track: let's move back to the code from attempt2 and do some
loop unrolling. This will eliminate a few if statements. I'll try
different amounts of unrolling to see what works best.
Attempt 4
=========
Unrolled the loop 1, 2, 3 and 4 times.
For 4 the code starts with:
for (i = 0; i < 4; i++)
{
cur = *bp++;
par ^= cur;
rp4 ^= cur;
rp6 ^= cur;
rp8 ^= cur;
rp10 ^= cur;
if (i & 0x1) rp13 ^= cur; else rp12 ^= cur;
if (i & 0x2) rp15 ^= cur; else rp14 ^= cur;
cur = *bp++;
par ^= cur;
rp5 ^= cur;
rp6 ^= cur;
...
Analysis 4
==========
Unrolling once gains about 15%
Unrolling twice keeps the gain at about 15%
Unrolling three times gives a gain of 30% compared to attempt 2.
Unrolling four times gives a marginal improvement compared to unrolling
three times.
I decided to proceed with a four time unrolled loop anyway. It was my gut
feeling that in the next steps I would obtain additional gain from it.
The next step was triggered by the fact that par contains the xor of all
bytes and rp4 and rp5 each contain the xor of half of the bytes.
So in effect par = rp4 ^ rp5. But as xor is commutative we can also say
that rp5 = par ^ rp4. So no need to keep both rp4 and rp5 around. We can
eliminate rp5 (or rp4, but I already foresaw another optimisation).
The same holds for rp6/7, rp8/9, rp10/11, rp12/13 and rp14/15.
Attempt 5
=========
Effectively all odd-numbered rp assignments in the loop were removed,
so the if statements no longer need an else clause.
Of course after the loop we need to correct things by adding code like:
rp5 = par ^ rp4;
Also the initial assignments (rp5 = 0; etc) could be removed.
Along the line I also removed the initialisation of rp0/1/2/3.
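Only the first of those corrections is shown above; the full set presumably
looks like this (my reconstruction):

rp5 = par ^ rp4;
rp7 = par ^ rp6;
rp9 = par ^ rp8;
rp11 = par ^ rp10;
rp13 = par ^ rp12;
rp15 = par ^ rp14;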
Analysis 5
==========
Measurements showed this was a good move. The run-time roughly halved
compared with attempt 4 with 4 times unrolled, and we only require 1/3rd
of the processor time compared to the current code in the linux kernel.
However, I still thought there was more to be gained. I didn't like all the if
statements. Why not keep a running parity and only keep the final if
statements? Time for yet another version!
Attempt 6
=========
The code within the for loop was changed to:
for (i = 0; i < 4; i++)
{
cur = *bp++; tmppar = cur; rp4 ^= cur;
cur = *bp++; tmppar ^= cur; rp6 ^= tmppar;
cur = *bp++; tmppar ^= cur; rp4 ^= cur;
cur = *bp++; tmppar ^= cur; rp8 ^= tmppar;
cur = *bp++; tmppar ^= cur; rp4 ^= cur; rp6 ^= cur;
cur = *bp++; tmppar ^= cur; rp6 ^= cur;
cur = *bp++; tmppar ^= cur; rp4 ^= cur;
cur = *bp++; tmppar ^= cur; rp10 ^= tmppar;
cur = *bp++; tmppar ^= cur; rp4 ^= cur; rp6 ^= cur; rp8 ^= cur;
cur = *bp++; tmppar ^= cur; rp6 ^= cur; rp8 ^= cur;
cur = *bp++; tmppar ^= cur; rp4 ^= cur; rp8 ^= cur;
cur = *bp++; tmppar ^= cur; rp8 ^= cur;
cur = *bp++; tmppar ^= cur; rp4 ^= cur; rp6 ^= cur;
cur = *bp++; tmppar ^= cur; rp6 ^= cur;
cur = *bp++; tmppar ^= cur; rp4 ^= cur;
cur = *bp++; tmppar ^= cur;
par ^= tmppar;
if ((i & 0x1) == 0) rp12 ^= tmppar;
if ((i & 0x2) == 0) rp14 ^= tmppar;
}
As you can see tmppar is used to accumulate the parity within a for
iteration. In the last 3 statements it is added to par and, if needed,
to rp12 and rp14.
While making the changes I also found that I could exploit the fact that
tmppar contains the running parity for this iteration. So instead of having:
rp4 ^= cur; rp6 ^= cur;
I removed the rp6 ^= cur; statement and did rp6 ^= tmppar; in the next
statement. A similar change was done for rp8 and rp10
Analysis 6
==========
Measuring this code again showed big gain. When executing the original
linux code 1 million times, this took about 1 second on my system.
(using time to measure the performance). After this iteration I was back
to 0.075 sec. Actually I had to decide to start measuring over 10
million iterations in order not to lose too much accuracy. This one
definitely seemed to be the jackpot!
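For reference, the timings quoted here were obtained by running a candidate
function many times in a small user-space program and measuring the whole
thing with the shell's time command. A sketch of such a driver (mine, not the
author's actual harness; ecc6 is just a placeholder name for the function
under test):

#include <string.h>

void ecc6(const unsigned char *buf, unsigned char *code);	/* function under test */

int main(void)
{
	unsigned char buf[256];
	unsigned char code[3];
	long i;

	memset(buf, 0xa5, sizeof(buf));		/* arbitrary test pattern */
	for (i = 0; i < 10000000; i++)		/* 10 million iterations */
		ecc6(buf, code);
	return 0;
}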
There is a little bit more room for improvement though. There are three
places with statements:
rp4 ^= cur; rp6 ^= cur;
It seems more efficient to also maintain a variable rp4_6 in the
loop; this eliminates 3 statements per loop. Of course after the loop we
need to correct by adding:
rp4 ^= rp4_6;
rp6 ^= rp4_6;
Furthermore there are 4 sequential assignments to rp8. This can be
encoded slightly more efficiently by saving tmppar before those 4 lines
and later doing rp8 = rp8 ^ tmppar ^ notrp8;
(where notrp8 is the value of rp8 before those 4 lines).
Again a use of the commutative property of xor.
Time for a new test!
Attempt 7
=========
The new code now looks like:
for (i = 0; i < 4; i++)
{
cur = *bp++; tmppar = cur; rp4 ^= cur;
cur = *bp++; tmppar ^= cur; rp6 ^= tmppar;
cur = *bp++; tmppar ^= cur; rp4 ^= cur;
cur = *bp++; tmppar ^= cur; rp8 ^= tmppar;
cur = *bp++; tmppar ^= cur; rp4_6 ^= cur;
cur = *bp++; tmppar ^= cur; rp6 ^= cur;
cur = *bp++; tmppar ^= cur; rp4 ^= cur;
cur = *bp++; tmppar ^= cur; rp10 ^= tmppar;
notrp8 = tmppar;
cur = *bp++; tmppar ^= cur; rp4_6 ^= cur;
cur = *bp++; tmppar ^= cur; rp6 ^= cur;
cur = *bp++; tmppar ^= cur; rp4 ^= cur;
cur = *bp++; tmppar ^= cur;
rp8 = rp8 ^ tmppar ^ notrp8;
cur = *bp++; tmppar ^= cur; rp4_6 ^= cur;
cur = *bp++; tmppar ^= cur; rp6 ^= cur;
cur = *bp++; tmppar ^= cur; rp4 ^= cur;
cur = *bp++; tmppar ^= cur;
par ^= tmppar;
if ((i & 0x1) == 0) rp12 ^= tmppar;
if ((i & 0x2) == 0) rp14 ^= tmppar;
}
rp4 ^= rp4_6;
rp6 ^= rp4_6;
Not a big change, but every penny counts :-)
Analysis 7
==========
Actually this made things worse. Not very much, but I don't want to move
into the wrong direction. Maybe something to investigate later. Could
have to do with caching again.
Guess that is all there is to be gained within the loop. Maybe unrolling one
more time will help. I'll keep the optimisations from 7 for now.
Attempt 8
=========
Unrolled the loop one more time.
Analysis 8
==========
This makes things worse. Let's stick with attempt 6 and continue from there.
Although it seems that the code within the loop cannot be optimised
further, there is still room to optimise the generation of the ecc codes.
We can simply calculate the total parity. If this is 0 then rp4 = rp5
etc. If the parity is 1, then rp4 = !rp5;
But if rp4 = rp5 we do not need rp5 etc. We can just write the even bits
in the result byte and then do something like
code[0] |= (code[0] << 1);
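and, assuming par has already been folded down to a single byte as in attempt
2, the odd-parity case could be handled along the same lines (my sketch, not
necessarily what was actually tried):

/* even rp bits (rp0, rp2, rp4, rp6) already sit at the even bit
   positions of code[0]; derive the odd ones from them */
if (parity[par] == 0)
	code[0] |= (code[0] << 1);		/* rp1 = rp0, rp3 = rp2, ... */
else
	code[0] |= (~code[0] << 1) & 0xaa;	/* rp1 = !rp0, rp3 = !rp2, ... */

(and similarly for code[1]).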
Let's test this.
Attempt 9
=========
Changed the code but again this slightly degraded performance. Tried all
kinds of other things, like having dedicated parity arrays to avoid the
shift after parity[rp7] << 7. No gain.
Changed the lookup using the parity array into shift operators, e.g.
replaced parity[rp7] << 7 with:
rp7 ^= (rp7 << 4);
rp7 ^= (rp7 << 2);
rp7 ^= (rp7 << 1);
rp7 &= 0x80;
No gain.
The only change that gave a marginal gain was inverting the parity bits
during generation, so we can remove the last three invert statements.
Ah well, pity this does not deliver more. Then again 10 million
iterations using the linux driver code takes between 13 and 13.5
seconds, whereas my code now takes about 0.73 seconds for those 10
million iterations. So basically I've improved the performance by a
factor 18 on my system. Not that bad. Of course on different hardware
you will get different results. No warranties!
But of course there is no such thing as a free lunch. The code size almost
tripled (from 562 bytes to 1434 bytes). Then again, it is not that much.
Correcting errors
=================
For correcting errors I again used the ST application note as a starter,
but I also peeked at the existing code.
The algorithm itself is pretty straightforward. Just xor the given and
the calculated ecc. If the result is all zeroes there is no error. If exactly
11 bits are 1 we have a single correctable bit error. If exactly 1 bit is 1,
the error is in the given ecc code itself.
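A sketch of that logic, written against the ecc layout from the introduction
(my illustration only, using a naive bit counter rather than the table
lookups mentioned below):

static int countbits(unsigned int x)
{
	int n = 0;

	while (x) {
		n += x & 1;
		x >>= 1;
	}
	return n;
}

/* returns 0 if no error, 1 if a single bit error was corrected (or the
   error was in the ecc itself), -1 if uncorrectable */
int correct_data(unsigned char *buf, const unsigned char *read_ecc,
		 const unsigned char *calc_ecc)
{
	unsigned char s0 = read_ecc[0] ^ calc_ecc[0];	/* rp7 .. rp0  */
	unsigned char s1 = read_ecc[1] ^ calc_ecc[1];	/* rp15 .. rp8 */
	unsigned char s2 = read_ecc[2] ^ calc_ecc[2];	/* cp5 .. cp0  */
	int nr_bits = countbits((s0 << 16) | (s1 << 8) | s2);
	unsigned int byte, bit;

	if ((s0 | s1 | s2) == 0)
		return 0;				/* no error */
	if (nr_bits == 11) {
		/* single bit error: the odd rp/cp bits of the syndrome
		   form the byte address and bit address of the flipped bit */
		byte = (((s1 >> 7) & 1) << 7) |		/* rp15 */
		       (((s1 >> 5) & 1) << 6) |		/* rp13 */
		       (((s1 >> 3) & 1) << 5) |		/* rp11 */
		       (((s1 >> 1) & 1) << 4) |		/* rp9  */
		       (((s0 >> 7) & 1) << 3) |		/* rp7  */
		       (((s0 >> 5) & 1) << 2) |		/* rp5  */
		       (((s0 >> 3) & 1) << 1) |		/* rp3  */
		        ((s0 >> 1) & 1);		/* rp1  */
		bit  = (((s2 >> 7) & 1) << 2) |		/* cp5  */
		       (((s2 >> 5) & 1) << 1) |		/* cp3  */
		        ((s2 >> 3) & 1);		/* cp1  */
		buf[byte] ^= 1 << bit;
		return 1;
	}
	if (nr_bits == 1)
		return 1;		/* the error is in the ecc bytes */
	return -1;			/* uncorrectable */
}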
It proved to be fastest to do some table lookups. Performance gain
introduced by this is about a factor 2 on my system when a repair had to
be done, and 1% or so if no repair had to be done.
Code size increased from 330 bytes to 686 bytes for this function.
(gcc 4.2, -O3)
Conclusion
==========
The gain when calculating the ecc is tremendous. On my development hardware
a speedup of a factor of 18 for ecc calculation was achieved. In a test on an
embedded system with a MIPS core a factor of 7 was obtained.
On a test with a Linksys NSLU2 (ARMv5TE processor) the speedup was a factor
5 (big endian mode, gcc 4.1.2, -O3)
For correction not much gain could be obtained (as bitflips are rare). Then
again there are also far fewer cycles spent there.
It seems there is not much more gain possible in this, at least when
programmed in C. Of course it might be possible to squeeze something more
out of it with an assembler program, but due to pipeline behaviour etc.
this is very tricky (at least for Intel hardware).
Author: Frans Meulenbroeks
Copyright (C) 2008 Koninklijke Philips Electronics NV.

View File

@ -4,6 +4,43 @@
#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>
struct pxa3xx_nand_timing {
unsigned int tCH; /* Enable signal hold time */
unsigned int tCS; /* Enable signal setup time */
unsigned int tWH; /* ND_nWE high duration */
unsigned int tWP; /* ND_nWE pulse time */
unsigned int tRH; /* ND_nRE high duration */
unsigned int tRP; /* ND_nRE pulse width */
unsigned int tR; /* ND_nWE high to ND_nRE low for read */
unsigned int tWHR; /* ND_nWE high to ND_nRE low for status read */
unsigned int tAR; /* ND_ALE low to ND_nRE low delay */
};
struct pxa3xx_nand_cmdset {
uint16_t read1;
uint16_t read2;
uint16_t program;
uint16_t read_status;
uint16_t read_id;
uint16_t erase;
uint16_t reset;
uint16_t lock;
uint16_t unlock;
uint16_t lock_status;
};
struct pxa3xx_nand_flash {
const struct pxa3xx_nand_timing *timing; /* NAND Flash timing */
const struct pxa3xx_nand_cmdset *cmdset;
uint32_t page_per_block;/* Pages per block (PG_PER_BLK) */
uint32_t page_size; /* Page size in bytes (PAGE_SZ) */
uint32_t flash_width; /* Width of Flash memory (DWIDTH_M) */
uint32_t dfc_width; /* Width of flash controller(DWIDTH_C) */
uint32_t num_blocks; /* Number of physical blocks in Flash */
uint32_t chip_id;
};
struct pxa3xx_nand_platform_data {
/* the data flash bus is shared between the Static Memory
@ -12,8 +49,11 @@ struct pxa3xx_nand_platform_data {
*/
int enable_arbiter;
struct mtd_partition *parts;
unsigned int nr_parts;
const struct mtd_partition *parts;
unsigned int nr_parts;
const struct pxa3xx_nand_flash * flash;
size_t num_flash;
};
extern void pxa3xx_set_nand_info(struct pxa3xx_nand_platform_data *info);

View File

@ -0,0 +1,27 @@
/*
* Copyright 2004-2007 Freescale Semiconductor, Inc. All Rights Reserved.
* Copyright 2008 Sascha Hauer, kernel@pengutronix.de
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
* MA 02110-1301, USA.
*/
#ifndef __ASM_ARCH_NAND_H
#define __ASM_ARCH_NAND_H
struct mxc_nand_platform_data {
int width; /* data bus width in bytes */
int hw_ecc; /* 0 if suppress hardware ECC */
};
#endif /* __ASM_ARCH_NAND_H */

View File

@ -16,6 +16,10 @@ struct omap_onenand_platform_data {
int gpio_irq;
struct mtd_partition *parts;
int nr_parts;
int (*onenand_setup)(void __iomem *);
int (*onenand_setup)(void __iomem *, int freq);
int dma_channel;
};
int omap2_onenand_rephase(void);
#define ONENAND_MAX_PARTITIONS 8

View File

@ -172,6 +172,11 @@ config MTD_CHAR
memory chips, and also use ioctl() to obtain information about
the device, or to erase parts of it.
config HAVE_MTD_OTP
bool
help
Enable access to OTP regions using MTD_CHAR.
config MTD_BLKDEVS
tristate "Common interface to block layer for MTD 'translation layers'"
depends on BLOCK

View File

@ -6,6 +6,7 @@ menu "RAM/ROM/Flash chip drivers"
config MTD_CFI
tristate "Detect flash chips by Common Flash Interface (CFI) probe"
select MTD_GEN_PROBE
select MTD_CFI_UTIL
help
The Common Flash Interface specification was developed by Intel,
AMD and other flash manufactures that provides a universal method
@ -154,6 +155,7 @@ config MTD_CFI_I8
config MTD_OTP
bool "Protection Registers aka one-time programmable (OTP) bits"
depends on MTD_CFI_ADV_OPTIONS
select HAVE_MTD_OTP
default n
help
This enables support for reading, writing and locking so called
@ -187,7 +189,7 @@ config MTD_CFI_INTELEXT
StrataFlash and other parts.
config MTD_CFI_AMDSTD
tristate "Support for AMD/Fujitsu flash chips"
tristate "Support for AMD/Fujitsu/Spansion flash chips"
depends on MTD_GEN_PROBE
select MTD_CFI_UTIL
help

View File

@ -478,6 +478,28 @@ struct mtd_info *cfi_cmdset_0001(struct map_info *map, int primary)
else
cfi->chips[i].erase_time = 2000000;
if (cfi->cfiq->WordWriteTimeoutTyp &&
cfi->cfiq->WordWriteTimeoutMax)
cfi->chips[i].word_write_time_max =
1<<(cfi->cfiq->WordWriteTimeoutTyp +
cfi->cfiq->WordWriteTimeoutMax);
else
cfi->chips[i].word_write_time_max = 50000 * 8;
if (cfi->cfiq->BufWriteTimeoutTyp &&
cfi->cfiq->BufWriteTimeoutMax)
cfi->chips[i].buffer_write_time_max =
1<<(cfi->cfiq->BufWriteTimeoutTyp +
cfi->cfiq->BufWriteTimeoutMax);
if (cfi->cfiq->BlockEraseTimeoutTyp &&
cfi->cfiq->BlockEraseTimeoutMax)
cfi->chips[i].erase_time_max =
1000<<(cfi->cfiq->BlockEraseTimeoutTyp +
cfi->cfiq->BlockEraseTimeoutMax);
else
cfi->chips[i].erase_time_max = 2000000 * 8;
cfi->chips[i].ref_point_counter = 0;
init_waitqueue_head(&(cfi->chips[i].wq));
}
@ -703,6 +725,10 @@ static int chip_ready (struct map_info *map, struct flchip *chip, unsigned long
struct cfi_pri_intelext *cfip = cfi->cmdset_priv;
unsigned long timeo = jiffies + HZ;
/* Prevent setting state FL_SYNCING for chip in suspended state. */
if (mode == FL_SYNCING && chip->oldstate != FL_READY)
goto sleep;
switch (chip->state) {
case FL_STATUS:
@ -808,8 +834,9 @@ static int get_chip(struct map_info *map, struct flchip *chip, unsigned long adr
DECLARE_WAITQUEUE(wait, current);
retry:
if (chip->priv && (mode == FL_WRITING || mode == FL_ERASING
|| mode == FL_OTP_WRITE || mode == FL_SHUTDOWN)) {
if (chip->priv &&
(mode == FL_WRITING || mode == FL_ERASING || mode == FL_OTP_WRITE
|| mode == FL_SHUTDOWN) && chip->state != FL_SYNCING) {
/*
* OK. We have possibility for contention on the write/erase
* operations which are global to the real chip and not per
@ -859,6 +886,14 @@ static int get_chip(struct map_info *map, struct flchip *chip, unsigned long adr
return ret;
}
spin_lock(&shared->lock);
/* We should not own chip if it is already
* in FL_SYNCING state. Put contender and retry. */
if (chip->state == FL_SYNCING) {
put_chip(map, contender, contender->start);
spin_unlock(contender->mutex);
goto retry;
}
spin_unlock(contender->mutex);
}
@ -1012,7 +1047,7 @@ static void __xipram xip_enable(struct map_info *map, struct flchip *chip,
static int __xipram xip_wait_for_operation(
struct map_info *map, struct flchip *chip,
unsigned long adr, unsigned int chip_op_time )
unsigned long adr, unsigned int chip_op_time_max)
{
struct cfi_private *cfi = map->fldrv_priv;
struct cfi_pri_intelext *cfip = cfi->cmdset_priv;
@ -1021,7 +1056,7 @@ static int __xipram xip_wait_for_operation(
flstate_t oldstate, newstate;
start = xip_currtime();
usec = chip_op_time * 8;
usec = chip_op_time_max;
if (usec == 0)
usec = 500000;
done = 0;
@ -1131,8 +1166,8 @@ static int __xipram xip_wait_for_operation(
#define XIP_INVAL_CACHED_RANGE(map, from, size) \
INVALIDATE_CACHED_RANGE(map, from, size)
#define INVAL_CACHE_AND_WAIT(map, chip, cmd_adr, inval_adr, inval_len, usec) \
xip_wait_for_operation(map, chip, cmd_adr, usec)
#define INVAL_CACHE_AND_WAIT(map, chip, cmd_adr, inval_adr, inval_len, usec, usec_max) \
xip_wait_for_operation(map, chip, cmd_adr, usec_max)
#else
@ -1144,7 +1179,7 @@ static int __xipram xip_wait_for_operation(
static int inval_cache_and_wait_for_operation(
struct map_info *map, struct flchip *chip,
unsigned long cmd_adr, unsigned long inval_adr, int inval_len,
unsigned int chip_op_time)
unsigned int chip_op_time, unsigned int chip_op_time_max)
{
struct cfi_private *cfi = map->fldrv_priv;
map_word status, status_OK = CMD(0x80);
@ -1156,8 +1191,7 @@ static int inval_cache_and_wait_for_operation(
INVALIDATE_CACHED_RANGE(map, inval_adr, inval_len);
spin_lock(chip->mutex);
/* set our timeout to 8 times the expected delay */
timeo = chip_op_time * 8;
timeo = chip_op_time_max;
if (!timeo)
timeo = 500000;
reset_timeo = timeo;
@ -1217,8 +1251,8 @@ static int inval_cache_and_wait_for_operation(
#endif
#define WAIT_TIMEOUT(map, chip, adr, udelay) \
INVAL_CACHE_AND_WAIT(map, chip, adr, 0, 0, udelay);
#define WAIT_TIMEOUT(map, chip, adr, udelay, udelay_max) \
INVAL_CACHE_AND_WAIT(map, chip, adr, 0, 0, udelay, udelay_max);
static int do_point_onechip (struct map_info *map, struct flchip *chip, loff_t adr, size_t len)
@ -1452,7 +1486,8 @@ static int __xipram do_write_oneword(struct map_info *map, struct flchip *chip,
ret = INVAL_CACHE_AND_WAIT(map, chip, adr,
adr, map_bankwidth(map),
chip->word_write_time);
chip->word_write_time,
chip->word_write_time_max);
if (ret) {
xip_enable(map, chip, adr);
printk(KERN_ERR "%s: word write error (status timeout)\n", map->name);
@ -1623,7 +1658,7 @@ static int __xipram do_write_buffer(struct map_info *map, struct flchip *chip,
chip->state = FL_WRITING_TO_BUFFER;
map_write(map, write_cmd, cmd_adr);
ret = WAIT_TIMEOUT(map, chip, cmd_adr, 0);
ret = WAIT_TIMEOUT(map, chip, cmd_adr, 0, 0);
if (ret) {
/* Argh. Not ready for write to buffer */
map_word Xstatus = map_read(map, cmd_adr);
@ -1640,7 +1675,7 @@ static int __xipram do_write_buffer(struct map_info *map, struct flchip *chip,
/* Figure out the number of words to write */
word_gap = (-adr & (map_bankwidth(map)-1));
words = (len - word_gap + map_bankwidth(map) - 1) / map_bankwidth(map);
words = DIV_ROUND_UP(len - word_gap, map_bankwidth(map));
if (!word_gap) {
words--;
} else {
@ -1692,7 +1727,8 @@ static int __xipram do_write_buffer(struct map_info *map, struct flchip *chip,
ret = INVAL_CACHE_AND_WAIT(map, chip, cmd_adr,
initial_adr, initial_len,
chip->buffer_write_time);
chip->buffer_write_time,
chip->buffer_write_time_max);
if (ret) {
map_write(map, CMD(0x70), cmd_adr);
chip->state = FL_STATUS;
@ -1827,7 +1863,8 @@ static int __xipram do_erase_oneblock(struct map_info *map, struct flchip *chip,
ret = INVAL_CACHE_AND_WAIT(map, chip, adr,
adr, len,
chip->erase_time);
chip->erase_time,
chip->erase_time_max);
if (ret) {
map_write(map, CMD(0x70), adr);
chip->state = FL_STATUS;
@ -2006,7 +2043,7 @@ static int __xipram do_xxlock_oneblock(struct map_info *map, struct flchip *chip
*/
udelay = (!extp || !(extp->FeatureSupport & (1 << 5))) ? 1000000/HZ : 0;
ret = WAIT_TIMEOUT(map, chip, adr, udelay);
ret = WAIT_TIMEOUT(map, chip, adr, udelay, udelay * 100);
if (ret) {
map_write(map, CMD(0x70), adr);
chip->state = FL_STATUS;

View File

@ -13,6 +13,8 @@
* XIP support hooks by Vitaly Wool (based on code for Intel flash
* by Nicolas Pitre)
*
* 25/09/2008 Christopher Moore: TopBottom fixup for many Macronix with CFI V1.0
*
* Occasionally maintained by Thayne Harbaugh tharbaugh at lnxi dot com
*
* This code is GPL
@ -43,6 +45,7 @@
#define MANUFACTURER_AMD 0x0001
#define MANUFACTURER_ATMEL 0x001F
#define MANUFACTURER_MACRONIX 0x00C2
#define MANUFACTURER_SST 0x00BF
#define SST49LF004B 0x0060
#define SST49LF040B 0x0050
@ -144,12 +147,44 @@ static void fixup_amd_bootblock(struct mtd_info *mtd, void* param)
if (((major << 8) | minor) < 0x3131) {
/* CFI version 1.0 => don't trust bootloc */
DEBUG(MTD_DEBUG_LEVEL1,
"%s: JEDEC Vendor ID is 0x%02X Device ID is 0x%02X\n",
map->name, cfi->mfr, cfi->id);
/* AFAICS all 29LV400 with a bottom boot block have a device ID
* of 0x22BA in 16-bit mode and 0xBA in 8-bit mode.
* These were badly detected as they have the 0x80 bit set
* so treat them as a special case.
*/
if (((cfi->id == 0xBA) || (cfi->id == 0x22BA)) &&
/* Macronix added CFI to their 2nd generation
* MX29LV400C B/T but AFAICS no other 29LV400 (AMD,
* Fujitsu, Spansion, EON, ESI and older Macronix)
* has CFI.
*
* Therefore also check the manufacturer.
* This reduces the risk of false detection due to
* the 8-bit device ID.
*/
(cfi->mfr == MANUFACTURER_MACRONIX)) {
DEBUG(MTD_DEBUG_LEVEL1,
"%s: Macronix MX29LV400C with bottom boot block"
" detected\n", map->name);
extp->TopBottom = 2; /* bottom boot */
} else
if (cfi->id & 0x80) {
printk(KERN_WARNING "%s: JEDEC Device ID is 0x%02X. Assuming broken CFI table.\n", map->name, cfi->id);
extp->TopBottom = 3; /* top boot */
} else {
extp->TopBottom = 2; /* bottom boot */
}
DEBUG(MTD_DEBUG_LEVEL1,
"%s: AMD CFI PRI V%c.%c has no boot block field;"
" deduced %s from Device ID\n", map->name, major, minor,
extp->TopBottom == 2 ? "bottom" : "top");
}
}
#endif
@ -178,10 +213,18 @@ static void fixup_convert_atmel_pri(struct mtd_info *mtd, void *param)
if (atmel_pri.Features & 0x02)
extp->EraseSuspend = 2;
if (atmel_pri.BottomBoot)
extp->TopBottom = 2;
else
extp->TopBottom = 3;
/* Some chips got it backwards... */
if (cfi->id == AT49BV6416) {
if (atmel_pri.BottomBoot)
extp->TopBottom = 3;
else
extp->TopBottom = 2;
} else {
if (atmel_pri.BottomBoot)
extp->TopBottom = 2;
else
extp->TopBottom = 3;
}
/* burst write mode not supported */
cfi->cfiq->BufWriteTimeoutTyp = 0;
@ -243,6 +286,7 @@ static struct cfi_fixup cfi_fixup_table[] = {
{ CFI_MFR_ATMEL, CFI_ID_ANY, fixup_convert_atmel_pri, NULL },
#ifdef AMD_BOOTLOC_BUG
{ CFI_MFR_AMD, CFI_ID_ANY, fixup_amd_bootblock, NULL },
{ MANUFACTURER_MACRONIX, CFI_ID_ANY, fixup_amd_bootblock, NULL },
#endif
{ CFI_MFR_AMD, 0x0050, fixup_use_secsi, NULL, },
{ CFI_MFR_AMD, 0x0053, fixup_use_secsi, NULL, },

View File

@ -44,17 +44,14 @@ do { \
#define xip_enable(base, map, cfi) \
do { \
cfi_send_gen_cmd(0xF0, 0, base, map, cfi, cfi->device_type, NULL); \
cfi_send_gen_cmd(0xFF, 0, base, map, cfi, cfi->device_type, NULL); \
cfi_qry_mode_off(base, map, cfi); \
xip_allowed(base, map); \
} while (0)
#define xip_disable_qry(base, map, cfi) \
do { \
xip_disable(); \
cfi_send_gen_cmd(0xF0, 0, base, map, cfi, cfi->device_type, NULL); \
cfi_send_gen_cmd(0xFF, 0, base, map, cfi, cfi->device_type, NULL); \
cfi_send_gen_cmd(0x98, 0x55, base, map, cfi, cfi->device_type, NULL); \
cfi_qry_mode_on(base, map, cfi); \
} while (0)
#else
@ -70,32 +67,6 @@ do { \
in: interleave,type,mode
ret: table index, <0 for error
*/
static int __xipram qry_present(struct map_info *map, __u32 base,
struct cfi_private *cfi)
{
int osf = cfi->interleave * cfi->device_type; // scale factor
map_word val[3];
map_word qry[3];
qry[0] = cfi_build_cmd('Q', map, cfi);
qry[1] = cfi_build_cmd('R', map, cfi);
qry[2] = cfi_build_cmd('Y', map, cfi);
val[0] = map_read(map, base + osf*0x10);
val[1] = map_read(map, base + osf*0x11);
val[2] = map_read(map, base + osf*0x12);
if (!map_word_equal(map, qry[0], val[0]))
return 0;
if (!map_word_equal(map, qry[1], val[1]))
return 0;
if (!map_word_equal(map, qry[2], val[2]))
return 0;
return 1; // "QRY" found
}
static int __xipram cfi_probe_chip(struct map_info *map, __u32 base,
unsigned long *chip_map, struct cfi_private *cfi)
@ -116,11 +87,7 @@ static int __xipram cfi_probe_chip(struct map_info *map, __u32 base,
}
xip_disable();
cfi_send_gen_cmd(0xF0, 0, base, map, cfi, cfi->device_type, NULL);
cfi_send_gen_cmd(0xFF, 0, base, map, cfi, cfi->device_type, NULL);
cfi_send_gen_cmd(0x98, 0x55, base, map, cfi, cfi->device_type, NULL);
if (!qry_present(map,base,cfi)) {
if (!cfi_qry_mode_on(base, map, cfi)) {
xip_enable(base, map, cfi);
return 0;
}
@ -141,14 +108,13 @@ static int __xipram cfi_probe_chip(struct map_info *map, __u32 base,
start = i << cfi->chipshift;
/* This chip should be in read mode if it's one
we've already touched. */
if (qry_present(map, start, cfi)) {
if (cfi_qry_present(map, start, cfi)) {
/* Eep. This chip also had the QRY marker.
* Is it an alias for the new one? */
cfi_send_gen_cmd(0xF0, 0, start, map, cfi, cfi->device_type, NULL);
cfi_send_gen_cmd(0xFF, 0, start, map, cfi, cfi->device_type, NULL);
cfi_qry_mode_off(start, map, cfi);
/* If the QRY marker goes away, it's an alias */
if (!qry_present(map, start, cfi)) {
if (!cfi_qry_present(map, start, cfi)) {
xip_allowed(base, map);
printk(KERN_DEBUG "%s: Found an alias at 0x%x for the chip at 0x%lx\n",
map->name, base, start);
@ -158,10 +124,9 @@ static int __xipram cfi_probe_chip(struct map_info *map, __u32 base,
* unfortunate. Stick the new chip in read mode
* too and if it's the same, assume it's an alias. */
/* FIXME: Use other modes to do a proper check */
cfi_send_gen_cmd(0xF0, 0, base, map, cfi, cfi->device_type, NULL);
cfi_send_gen_cmd(0xFF, 0, start, map, cfi, cfi->device_type, NULL);
cfi_qry_mode_off(base, map, cfi);
if (qry_present(map, base, cfi)) {
if (cfi_qry_present(map, base, cfi)) {
xip_allowed(base, map);
printk(KERN_DEBUG "%s: Found an alias at 0x%x for the chip at 0x%lx\n",
map->name, base, start);
@ -176,8 +141,7 @@ static int __xipram cfi_probe_chip(struct map_info *map, __u32 base,
cfi->numchips++;
/* Put it back into Read Mode */
cfi_send_gen_cmd(0xF0, 0, base, map, cfi, cfi->device_type, NULL);
cfi_send_gen_cmd(0xFF, 0, base, map, cfi, cfi->device_type, NULL);
cfi_qry_mode_off(base, map, cfi);
xip_allowed(base, map);
printk(KERN_INFO "%s: Found %d x%d devices at 0x%x in %d-bit bank\n",
@ -237,9 +201,7 @@ static int __xipram cfi_chip_setup(struct map_info *map,
cfi_read_query(map, base + 0xf * ofs_factor);
/* Put it back into Read Mode */
cfi_send_gen_cmd(0xF0, 0, base, map, cfi, cfi->device_type, NULL);
/* ... even if it's an Intel chip */
cfi_send_gen_cmd(0xFF, 0, base, map, cfi, cfi->device_type, NULL);
cfi_qry_mode_off(base, map, cfi);
xip_allowed(base, map);
/* Do any necessary byteswapping */

View File

@ -24,6 +24,66 @@
#include <linux/mtd/cfi.h>
#include <linux/mtd/compatmac.h>
int __xipram cfi_qry_present(struct map_info *map, __u32 base,
struct cfi_private *cfi)
{
int osf = cfi->interleave * cfi->device_type; /* scale factor */
map_word val[3];
map_word qry[3];
qry[0] = cfi_build_cmd('Q', map, cfi);
qry[1] = cfi_build_cmd('R', map, cfi);
qry[2] = cfi_build_cmd('Y', map, cfi);
val[0] = map_read(map, base + osf*0x10);
val[1] = map_read(map, base + osf*0x11);
val[2] = map_read(map, base + osf*0x12);
if (!map_word_equal(map, qry[0], val[0]))
return 0;
if (!map_word_equal(map, qry[1], val[1]))
return 0;
if (!map_word_equal(map, qry[2], val[2]))
return 0;
return 1; /* "QRY" found */
}
EXPORT_SYMBOL_GPL(cfi_qry_present);
int __xipram cfi_qry_mode_on(uint32_t base, struct map_info *map,
struct cfi_private *cfi)
{
cfi_send_gen_cmd(0xF0, 0, base, map, cfi, cfi->device_type, NULL);
cfi_send_gen_cmd(0x98, 0x55, base, map, cfi, cfi->device_type, NULL);
if (cfi_qry_present(map, base, cfi))
return 1;
/* QRY not found probably we deal with some odd CFI chips */
/* Some revisions of some old Intel chips? */
cfi_send_gen_cmd(0xF0, 0, base, map, cfi, cfi->device_type, NULL);
cfi_send_gen_cmd(0xFF, 0, base, map, cfi, cfi->device_type, NULL);
cfi_send_gen_cmd(0x98, 0x55, base, map, cfi, cfi->device_type, NULL);
if (cfi_qry_present(map, base, cfi))
return 1;
/* ST M29DW chips */
cfi_send_gen_cmd(0xF0, 0, base, map, cfi, cfi->device_type, NULL);
cfi_send_gen_cmd(0x98, 0x555, base, map, cfi, cfi->device_type, NULL);
if (cfi_qry_present(map, base, cfi))
return 1;
/* QRY not found */
return 0;
}
EXPORT_SYMBOL_GPL(cfi_qry_mode_on);
void __xipram cfi_qry_mode_off(uint32_t base, struct map_info *map,
struct cfi_private *cfi)
{
cfi_send_gen_cmd(0xF0, 0, base, map, cfi, cfi->device_type, NULL);
cfi_send_gen_cmd(0xFF, 0, base, map, cfi, cfi->device_type, NULL);
}
EXPORT_SYMBOL_GPL(cfi_qry_mode_off);
struct cfi_extquery *
__xipram cfi_read_pri(struct map_info *map, __u16 adr, __u16 size, const char* name)
{
@ -48,8 +108,7 @@ __xipram cfi_read_pri(struct map_info *map, __u16 adr, __u16 size, const char* n
#endif
/* Switch it into Query Mode */
cfi_send_gen_cmd(0x98, 0x55, base, map, cfi, cfi->device_type, NULL);
cfi_qry_mode_on(base, map, cfi);
/* Read in the Extended Query Table */
for (i=0; i<size; i++) {
((unsigned char *)extp)[i] =
@ -57,8 +116,7 @@ __xipram cfi_read_pri(struct map_info *map, __u16 adr, __u16 size, const char* n
}
/* Make sure it returns to read mode */
cfi_send_gen_cmd(0xf0, 0, base, map, cfi, cfi->device_type, NULL);
cfi_send_gen_cmd(0xff, 0, base, map, cfi, cfi->device_type, NULL);
cfi_qry_mode_off(base, map, cfi);
#ifdef CONFIG_MTD_XIP
(void) map_read(map, base);

View File

@ -111,7 +111,7 @@ static struct cfi_private *genprobe_ident_chips(struct map_info *map, struct chi
max_chips = 1;
}
mapsize = sizeof(long) * ( (max_chips + BITS_PER_LONG-1) / BITS_PER_LONG );
mapsize = sizeof(long) * DIV_ROUND_UP(max_chips, BITS_PER_LONG);
chip_map = kzalloc(mapsize, GFP_KERNEL);
if (!chip_map) {
printk(KERN_WARNING "%s: kmalloc failed for CFI chip map\n", map->name);

View File

@ -7,6 +7,7 @@
*
* mtdparts=<mtddef>[;<mtddef]
* <mtddef> := <mtd-id>:<partdef>[,<partdef>]
* where <mtd-id> is the name from the "cat /proc/mtd" command
* <partdef> := <size>[@offset][<name>][ro][lk]
* <mtd-id> := unique name used in mapping driver/device (mtd->name)
* <size> := standard linux memsize OR "-" to denote all remaining space

View File

@ -59,6 +59,27 @@ config MTD_DATAFLASH
Sometimes DataFlash chips are packaged inside MMC-format
cards; at this writing, the MMC stack won't handle those.
config MTD_DATAFLASH_WRITE_VERIFY
bool "Verify DataFlash page writes"
depends on MTD_DATAFLASH
help
This adds an extra check when data is written to the flash.
It may help if you are verifying chip setup (timings etc) on
your board. There is a rare possibility that even though the
device thinks the write was successful, a bit could have been
flipped accidentally due to device wear or something else.
config MTD_DATAFLASH_OTP
bool "DataFlash OTP support (Security Register)"
depends on MTD_DATAFLASH
select HAVE_MTD_OTP
help
Newer DataFlash chips (revisions C and D) support 128 bytes of
one-time-programmable (OTP) data. The first half may be written
(once) with up to 64 bytes of data, such as a serial number or
other key product data. The second half is programmed with a
unique-to-each-chip bit pattern at the factory.
config MTD_M25P80
tristate "Support most SPI Flash chips (AT26DF, M25P, W25X, ...)"
depends on SPI_MASTER && EXPERIMENTAL

View File

@ -39,6 +39,7 @@
#define OPCODE_PP 0x02 /* Page program (up to 256 bytes) */
#define OPCODE_BE_4K 0x20 /* Erase 4KiB block */
#define OPCODE_BE_32K 0x52 /* Erase 32KiB block */
#define OPCODE_BE 0xc7 /* Erase whole flash block */
#define OPCODE_SE 0xd8 /* Sector erase (usually 64KiB) */
#define OPCODE_RDID 0x9f /* Read JEDEC ID */
@ -161,6 +162,31 @@ static int wait_till_ready(struct m25p *flash)
return 1;
}
/*
* Erase the whole flash memory
*
* Returns 0 if successful, non-zero otherwise.
*/
static int erase_block(struct m25p *flash)
{
DEBUG(MTD_DEBUG_LEVEL3, "%s: %s %dKiB\n",
flash->spi->dev.bus_id, __func__,
flash->mtd.size / 1024);
/* Wait until finished previous write command. */
if (wait_till_ready(flash))
return 1;
/* Send write enable, then erase commands. */
write_enable(flash);
/* Set up command buffer. */
flash->command[0] = OPCODE_BE;
spi_write(flash->spi, flash->command, 1);
return 0;
}
/*
* Erase one sector of flash memory at offset ``offset'' which is any
@ -229,15 +255,21 @@ static int m25p80_erase(struct mtd_info *mtd, struct erase_info *instr)
*/
/* now erase those sectors */
while (len) {
if (erase_sector(flash, addr)) {
instr->state = MTD_ERASE_FAILED;
mutex_unlock(&flash->lock);
return -EIO;
}
if (len == flash->mtd.size && erase_block(flash)) {
instr->state = MTD_ERASE_FAILED;
mutex_unlock(&flash->lock);
return -EIO;
} else {
while (len) {
if (erase_sector(flash, addr)) {
instr->state = MTD_ERASE_FAILED;
mutex_unlock(&flash->lock);
return -EIO;
}
addr += mtd->erasesize;
len -= mtd->erasesize;
addr += mtd->erasesize;
len -= mtd->erasesize;
}
}
mutex_unlock(&flash->lock);
@ -437,6 +469,7 @@ struct flash_info {
* then a two byte device id.
*/
u32 jedec_id;
u16 ext_id;
/* The size listed here is what works with OPCODE_SE, which isn't
* necessarily called a "sector" by the vendor.
@ -456,72 +489,75 @@ struct flash_info {
static struct flash_info __devinitdata m25p_data [] = {
/* Atmel -- some are (confusingly) marketed as "DataFlash" */
{ "at25fs010", 0x1f6601, 32 * 1024, 4, SECT_4K, },
{ "at25fs040", 0x1f6604, 64 * 1024, 8, SECT_4K, },
{ "at25fs010", 0x1f6601, 0, 32 * 1024, 4, SECT_4K, },
{ "at25fs040", 0x1f6604, 0, 64 * 1024, 8, SECT_4K, },
{ "at25df041a", 0x1f4401, 64 * 1024, 8, SECT_4K, },
{ "at25df641", 0x1f4800, 64 * 1024, 128, SECT_4K, },
{ "at25df041a", 0x1f4401, 0, 64 * 1024, 8, SECT_4K, },
{ "at25df641", 0x1f4800, 0, 64 * 1024, 128, SECT_4K, },
{ "at26f004", 0x1f0400, 64 * 1024, 8, SECT_4K, },
{ "at26df081a", 0x1f4501, 64 * 1024, 16, SECT_4K, },
{ "at26df161a", 0x1f4601, 64 * 1024, 32, SECT_4K, },
{ "at26df321", 0x1f4701, 64 * 1024, 64, SECT_4K, },
{ "at26f004", 0x1f0400, 0, 64 * 1024, 8, SECT_4K, },
{ "at26df081a", 0x1f4501, 0, 64 * 1024, 16, SECT_4K, },
{ "at26df161a", 0x1f4601, 0, 64 * 1024, 32, SECT_4K, },
{ "at26df321", 0x1f4701, 0, 64 * 1024, 64, SECT_4K, },
/* Spansion -- single (large) sector size only, at least
* for the chips listed here (without boot sectors).
*/
{ "s25sl004a", 0x010212, 64 * 1024, 8, },
{ "s25sl008a", 0x010213, 64 * 1024, 16, },
{ "s25sl016a", 0x010214, 64 * 1024, 32, },
{ "s25sl032a", 0x010215, 64 * 1024, 64, },
{ "s25sl064a", 0x010216, 64 * 1024, 128, },
{ "s25sl004a", 0x010212, 0, 64 * 1024, 8, },
{ "s25sl008a", 0x010213, 0, 64 * 1024, 16, },
{ "s25sl016a", 0x010214, 0, 64 * 1024, 32, },
{ "s25sl032a", 0x010215, 0, 64 * 1024, 64, },
{ "s25sl064a", 0x010216, 0, 64 * 1024, 128, },
{ "s25sl12800", 0x012018, 0x0300, 256 * 1024, 64, },
{ "s25sl12801", 0x012018, 0x0301, 64 * 1024, 256, },
/* SST -- large erase sizes are "overlays", "sectors" are 4K */
{ "sst25vf040b", 0xbf258d, 64 * 1024, 8, SECT_4K, },
{ "sst25vf080b", 0xbf258e, 64 * 1024, 16, SECT_4K, },
{ "sst25vf016b", 0xbf2541, 64 * 1024, 32, SECT_4K, },
{ "sst25vf032b", 0xbf254a, 64 * 1024, 64, SECT_4K, },
{ "sst25vf040b", 0xbf258d, 0, 64 * 1024, 8, SECT_4K, },
{ "sst25vf080b", 0xbf258e, 0, 64 * 1024, 16, SECT_4K, },
{ "sst25vf016b", 0xbf2541, 0, 64 * 1024, 32, SECT_4K, },
{ "sst25vf032b", 0xbf254a, 0, 64 * 1024, 64, SECT_4K, },
/* ST Microelectronics -- newer production may have feature updates */
{ "m25p05", 0x202010, 32 * 1024, 2, },
{ "m25p10", 0x202011, 32 * 1024, 4, },
{ "m25p20", 0x202012, 64 * 1024, 4, },
{ "m25p40", 0x202013, 64 * 1024, 8, },
{ "m25p80", 0, 64 * 1024, 16, },
{ "m25p16", 0x202015, 64 * 1024, 32, },
{ "m25p32", 0x202016, 64 * 1024, 64, },
{ "m25p64", 0x202017, 64 * 1024, 128, },
{ "m25p128", 0x202018, 256 * 1024, 64, },
{ "m25p05", 0x202010, 0, 32 * 1024, 2, },
{ "m25p10", 0x202011, 0, 32 * 1024, 4, },
{ "m25p20", 0x202012, 0, 64 * 1024, 4, },
{ "m25p40", 0x202013, 0, 64 * 1024, 8, },
{ "m25p80", 0, 0, 64 * 1024, 16, },
{ "m25p16", 0x202015, 0, 64 * 1024, 32, },
{ "m25p32", 0x202016, 0, 64 * 1024, 64, },
{ "m25p64", 0x202017, 0, 64 * 1024, 128, },
{ "m25p128", 0x202018, 0, 256 * 1024, 64, },
{ "m45pe80", 0x204014, 64 * 1024, 16, },
{ "m45pe16", 0x204015, 64 * 1024, 32, },
{ "m45pe80", 0x204014, 0, 64 * 1024, 16, },
{ "m45pe16", 0x204015, 0, 64 * 1024, 32, },
{ "m25pe80", 0x208014, 64 * 1024, 16, },
{ "m25pe16", 0x208015, 64 * 1024, 32, SECT_4K, },
{ "m25pe80", 0x208014, 0, 64 * 1024, 16, },
{ "m25pe16", 0x208015, 0, 64 * 1024, 32, SECT_4K, },
/* Winbond -- w25x "blocks" are 64K, "sectors" are 4KiB */
{ "w25x10", 0xef3011, 64 * 1024, 2, SECT_4K, },
{ "w25x20", 0xef3012, 64 * 1024, 4, SECT_4K, },
{ "w25x40", 0xef3013, 64 * 1024, 8, SECT_4K, },
{ "w25x80", 0xef3014, 64 * 1024, 16, SECT_4K, },
{ "w25x16", 0xef3015, 64 * 1024, 32, SECT_4K, },
{ "w25x32", 0xef3016, 64 * 1024, 64, SECT_4K, },
{ "w25x64", 0xef3017, 64 * 1024, 128, SECT_4K, },
{ "w25x10", 0xef3011, 0, 64 * 1024, 2, SECT_4K, },
{ "w25x20", 0xef3012, 0, 64 * 1024, 4, SECT_4K, },
{ "w25x40", 0xef3013, 0, 64 * 1024, 8, SECT_4K, },
{ "w25x80", 0xef3014, 0, 64 * 1024, 16, SECT_4K, },
{ "w25x16", 0xef3015, 0, 64 * 1024, 32, SECT_4K, },
{ "w25x32", 0xef3016, 0, 64 * 1024, 64, SECT_4K, },
{ "w25x64", 0xef3017, 0, 64 * 1024, 128, SECT_4K, },
};
static struct flash_info *__devinit jedec_probe(struct spi_device *spi)
{
int tmp;
u8 code = OPCODE_RDID;
u8 id[3];
u8 id[5];
u32 jedec;
u16 ext_jedec;
struct flash_info *info;
/* JEDEC also defines an optional "extended device information"
* string for after vendor-specific data, after the three bytes
* we use here. Supporting some chips might require using it.
*/
tmp = spi_write_then_read(spi, &code, 1, id, 3);
tmp = spi_write_then_read(spi, &code, 1, id, 5);
if (tmp < 0) {
DEBUG(MTD_DEBUG_LEVEL0, "%s: error %d reading JEDEC ID\n",
spi->dev.bus_id, tmp);
@ -533,10 +569,14 @@ static struct flash_info *__devinit jedec_probe(struct spi_device *spi)
jedec = jedec << 8;
jedec |= id[2];
ext_jedec = id[3] << 8 | id[4];
for (tmp = 0, info = m25p_data;
tmp < ARRAY_SIZE(m25p_data);
tmp++, info++) {
if (info->jedec_id == jedec)
if (ext_jedec != 0 && info->ext_id != ext_jedec)
continue;
return info;
}
dev_err(&spi->dev, "unrecognized JEDEC id %06x\n", jedec);

View File

@ -30,12 +30,10 @@
* doesn't (yet) use these for any kind of i/o overlap or prefetching.
*
* Sometimes DataFlash is packaged in MMC-format cards, although the
* MMC stack can't use SPI (yet), or distinguish between MMC and DataFlash
* MMC stack can't (yet?) distinguish between MMC and DataFlash
* protocols during enumeration.
*/
#define CONFIG_DATAFLASH_WRITE_VERIFY
/* reads can bypass the buffers */
#define OP_READ_CONTINUOUS 0xE8
#define OP_READ_PAGE 0xD2
@ -80,7 +78,8 @@
*/
#define OP_READ_ID 0x9F
#define OP_READ_SECURITY 0x77
#define OP_WRITE_SECURITY 0x9A /* OTP bits */
#define OP_WRITE_SECURITY_REVC 0x9A
#define OP_WRITE_SECURITY 0x9B /* revision D */
struct dataflash {
@ -402,7 +401,7 @@ static int dataflash_write(struct mtd_info *mtd, loff_t to, size_t len,
(void) dataflash_waitready(priv->spi);
#ifdef CONFIG_DATAFLASH_WRITE_VERIFY
#ifdef CONFIG_MTD_DATAFLASH_VERIFY_WRITE
/* (3) Compare to Buffer1 */
addr = pageaddr << priv->page_offset;
@ -431,7 +430,7 @@ static int dataflash_write(struct mtd_info *mtd, loff_t to, size_t len,
} else
status = 0;
#endif /* CONFIG_DATAFLASH_WRITE_VERIFY */
#endif /* CONFIG_MTD_DATAFLASH_VERIFY_WRITE */
remaining = remaining - writelen;
pageaddr++;
@ -451,16 +450,192 @@ static int dataflash_write(struct mtd_info *mtd, loff_t to, size_t len,
/* ......................................................................... */
#ifdef CONFIG_MTD_DATAFLASH_OTP
static int dataflash_get_otp_info(struct mtd_info *mtd,
struct otp_info *info, size_t len)
{
/* Report both blocks as identical: bytes 0..64, locked.
* Unless the user block changed from all-ones, we can't
* tell whether it's still writable; so we assume it isn't.
*/
info->start = 0;
info->length = 64;
info->locked = 1;
return sizeof(*info);
}
static ssize_t otp_read(struct spi_device *spi, unsigned base,
uint8_t *buf, loff_t off, size_t len)
{
struct spi_message m;
size_t l;
uint8_t *scratch;
struct spi_transfer t;
int status;
if (off > 64)
return -EINVAL;
if ((off + len) > 64)
len = 64 - off;
if (len == 0)
return len;
spi_message_init(&m);
l = 4 + base + off + len;
scratch = kzalloc(l, GFP_KERNEL);
if (!scratch)
return -ENOMEM;
/* OUT: OP_READ_SECURITY, 3 don't-care bytes, zeroes
* IN: ignore 4 bytes, data bytes 0..N (max 127)
*/
scratch[0] = OP_READ_SECURITY;
memset(&t, 0, sizeof t);
t.tx_buf = scratch;
t.rx_buf = scratch;
t.len = l;
spi_message_add_tail(&t, &m);
dataflash_waitready(spi);
status = spi_sync(spi, &m);
if (status >= 0) {
memcpy(buf, scratch + 4 + base + off, len);
status = len;
}
kfree(scratch);
return status;
}
static int dataflash_read_fact_otp(struct mtd_info *mtd,
loff_t from, size_t len, size_t *retlen, u_char *buf)
{
struct dataflash *priv = (struct dataflash *)mtd->priv;
int status;
/* 64 bytes, from 0..63 ... start at 64 on-chip */
mutex_lock(&priv->lock);
status = otp_read(priv->spi, 64, buf, from, len);
mutex_unlock(&priv->lock);
if (status < 0)
return status;
*retlen = status;
return 0;
}
static int dataflash_read_user_otp(struct mtd_info *mtd,
loff_t from, size_t len, size_t *retlen, u_char *buf)
{
struct dataflash *priv = (struct dataflash *)mtd->priv;
int status;
/* 64 bytes, from 0..63 ... start at 0 on-chip */
mutex_lock(&priv->lock);
status = otp_read(priv->spi, 0, buf, from, len);
mutex_unlock(&priv->lock);
if (status < 0)
return status;
*retlen = status;
return 0;
}
static int dataflash_write_user_otp(struct mtd_info *mtd,
loff_t from, size_t len, size_t *retlen, u_char *buf)
{
struct spi_message m;
const size_t l = 4 + 64;
uint8_t *scratch;
struct spi_transfer t;
struct dataflash *priv = (struct dataflash *)mtd->priv;
int status;
if (len > 64)
return -EINVAL;
/* Strictly speaking, we *could* truncate the write ... but
* let's not do that for the only write that's ever possible.
*/
if ((from + len) > 64)
return -EINVAL;
/* OUT: OP_WRITE_SECURITY, 3 zeroes, 64 data-or-zero bytes
* IN: ignore all
*/
scratch = kzalloc(l, GFP_KERNEL);
if (!scratch)
return -ENOMEM;
scratch[0] = OP_WRITE_SECURITY;
memcpy(scratch + 4 + from, buf, len);
spi_message_init(&m);
memset(&t, 0, sizeof t);
t.tx_buf = scratch;
t.len = l;
spi_message_add_tail(&t, &m);
/* Write the OTP bits, if they've not yet been written.
* This modifies SRAM buffer1.
*/
mutex_lock(&priv->lock);
dataflash_waitready(priv->spi);
status = spi_sync(priv->spi, &m);
mutex_unlock(&priv->lock);
kfree(scratch);
if (status >= 0) {
status = 0;
*retlen = len;
}
return status;
}
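For orientation, a hedged sketch (not driver code) of the 68-byte frame that dataflash_write_user_otp() shifts out: the opcode, three zero bytes, then 64 data-or-zero bytes. The name op_write_security below stands in for OP_WRITE_SECURITY.

#include <stdint.h>
#include <string.h>

/* Illustrative frame builder matching the scratch buffer built above;
 * unwritten positions stay zero, caller ensures off + len <= 64. */
static size_t build_otp_write_frame(uint8_t frame[4 + 64],
				    uint8_t op_write_security,
				    const uint8_t *data, size_t off, size_t len)
{
	memset(frame, 0, 4 + 64);
	frame[0] = op_write_security;
	memcpy(frame + 4 + off, data, len);
	return 4 + 64;			/* total bytes to shift out */
}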
static char *otp_setup(struct mtd_info *device, char revision)
{
device->get_fact_prot_info = dataflash_get_otp_info;
device->read_fact_prot_reg = dataflash_read_fact_otp;
device->get_user_prot_info = dataflash_get_otp_info;
device->read_user_prot_reg = dataflash_read_user_otp;
/* rev c parts (at45db321c and at45db1281 only!) use a
* different write procedure; not (yet?) implemented.
*/
if (revision > 'c')
device->write_user_prot_reg = dataflash_write_user_otp;
return ", OTP";
}
#else
static char *otp_setup(struct mtd_info *device, char revision)
{
return " (OTP)";
}
#endif
/* ......................................................................... */
/*
* Register DataFlash device with MTD subsystem.
*/
static int __devinit
add_dataflash(struct spi_device *spi, char *name,
int nr_pages, int pagesize, int pageoffset)
add_dataflash_otp(struct spi_device *spi, char *name,
int nr_pages, int pagesize, int pageoffset, char revision)
{
struct dataflash *priv;
struct mtd_info *device;
struct flash_platform_data *pdata = spi->dev.platform_data;
char *otp_tag = "";
priv = kzalloc(sizeof *priv, GFP_KERNEL);
if (!priv)
@ -489,8 +664,12 @@ add_dataflash(struct spi_device *spi, char *name,
device->write = dataflash_write;
device->priv = priv;
dev_info(&spi->dev, "%s (%d KBytes) pagesize %d bytes\n",
name, DIV_ROUND_UP(device->size, 1024), pagesize);
if (revision >= 'c')
otp_tag = otp_setup(device, revision);
dev_info(&spi->dev, "%s (%d KBytes) pagesize %d bytes%s\n",
name, DIV_ROUND_UP(device->size, 1024),
pagesize, otp_tag);
dev_set_drvdata(&spi->dev, priv);
if (mtd_has_partitions()) {
@ -519,6 +698,14 @@ add_dataflash(struct spi_device *spi, char *name,
return add_mtd_device(device) == 1 ? -ENODEV : 0;
}
static inline int __devinit
add_dataflash(struct spi_device *spi, char *name,
int nr_pages, int pagesize, int pageoffset)
{
return add_dataflash_otp(spi, name, nr_pages, pagesize,
pageoffset, 0);
}
struct flash_info {
char *name;
@ -664,13 +851,16 @@ static int __devinit dataflash_probe(struct spi_device *spi)
* Try to detect dataflash by JEDEC ID.
* If it succeeds we know we have either a C or D part.
* D will support power of 2 pagesize option.
* Both support the security register, though with different
* write procedures.
*/
info = jedec_probe(spi);
if (IS_ERR(info))
return PTR_ERR(info);
if (info != NULL)
return add_dataflash(spi, info->name, info->nr_pages,
info->pagesize, info->pageoffset);
return add_dataflash_otp(spi, info->name, info->nr_pages,
info->pagesize, info->pageoffset,
(info->flags & SUP_POW2PS) ? 'd' : 'c');
/*
 * Older chips support only legacy commands, identifying
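As a usage note, a hedged board-file sketch of how a DataFlash chip is typically handed to this driver; the bus number, chip select, clock rate and partition layout below are illustrative placeholders, not taken from any real board.

#include <linux/spi/spi.h>
#include <linux/spi/flash.h>
#include <linux/mtd/partitions.h>

static struct mtd_partition board_df_parts[] = {
	{ .name = "bootstrap", .offset = 0, .size = 0x10000 },
	{ .name = "data", .offset = MTDPART_OFS_NXTBLK, .size = MTDPART_SIZ_FULL },
};

static struct flash_platform_data board_df_data = {
	.name		= "dataflash",
	.parts		= board_df_parts,
	.nr_parts	= ARRAY_SIZE(board_df_parts),
};

static struct spi_board_info board_spi_devices[] __initdata = {
	{
		.modalias	= "mtd_dataflash",	/* binds to this driver */
		.platform_data	= &board_df_data,
		.max_speed_hz	= 15 * 1000 * 1000,
		.bus_num	= 0,
		.chip_select	= 1,
	},
};

/* registered from board init code with:
 *	spi_register_board_info(board_spi_devices, ARRAY_SIZE(board_spi_devices));
 */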


@ -388,6 +388,10 @@ static u16 INFTL_foldchain(struct INFTLrecord *inftl, unsigned thisVUC, unsigned
if (thisEUN == targetEUN)
break;
/* Unlink the last block from the chain. */
inftl->PUtable[prevEUN] = BLOCK_NIL;
/* Now try to erase it. */
if (INFTL_formatblock(inftl, thisEUN) < 0) {
/*
* Could not erase : mark block as reserved.
@ -396,7 +400,6 @@ static u16 INFTL_foldchain(struct INFTLrecord *inftl, unsigned thisVUC, unsigned
} else {
/* Correctly erased : mark it as free */
inftl->PUtable[thisEUN] = BLOCK_FREE;
inftl->PUtable[prevEUN] = BLOCK_NIL;
inftl->numfreeEUNs++;
}
}


@ -332,30 +332,6 @@ config MTD_CFI_FLAGADM
Mapping for the Flaga digital module. If you don't have one, ignore
this setting.
config MTD_WALNUT
tristate "Flash device mapped on IBM 405GP Walnut"
depends on MTD_JEDECPROBE && WALNUT && !PPC_MERGE
help
This enables access routines for the flash chips on the IBM 405GP
Walnut board. If you have one of these boards and would like to
use the flash chips on it, say 'Y'.
config MTD_EBONY
tristate "Flash devices mapped on IBM 440GP Ebony"
depends on MTD_JEDECPROBE && EBONY && !PPC_MERGE
help
This enables access routines for the flash chips on the IBM 440GP
Ebony board. If you have one of these boards and would like to
use the flash chips on it, say 'Y'.
config MTD_OCOTEA
tristate "Flash devices mapped on IBM 440GX Ocotea"
depends on MTD_CFI && OCOTEA && !PPC_MERGE
help
This enables access routines for the flash chips on the IBM 440GX
Ocotea board. If you have one of these boards and would like to
use the flash chips on it, say 'Y'.
config MTD_REDWOOD
tristate "CFI Flash devices mapped on IBM Redwood"
depends on MTD_CFI && ( REDWOOD_4 || REDWOOD_5 || REDWOOD_6 )
@ -458,13 +434,6 @@ config MTD_CEIVA
PhotoMax Digital Picture Frame.
If you have such a device, say 'Y'.
config MTD_NOR_TOTO
tristate "NOR Flash device on TOTO board"
depends on ARCH_OMAP && OMAP_TOTO
help
This enables access to the NOR flash on the Texas Instruments
TOTO board.
config MTD_H720X
tristate "Hynix evaluation board mappings"
depends on MTD_CFI && ( ARCH_H7201 || ARCH_H7202 )
@ -522,7 +491,7 @@ config MTD_BFIN_ASYNC
config MTD_UCLINUX
tristate "Generic uClinux RAM/ROM filesystem support"
depends on MTD_PARTITIONS && !MMU
depends on MTD_PARTITIONS && MTD_RAM && !MMU
help
Map driver to support image based filesystems for uClinux.


@ -50,12 +50,8 @@ obj-$(CONFIG_MTD_REDWOOD) += redwood.o
obj-$(CONFIG_MTD_UCLINUX) += uclinux.o
obj-$(CONFIG_MTD_NETtel) += nettel.o
obj-$(CONFIG_MTD_SCB2_FLASH) += scb2_flash.o
obj-$(CONFIG_MTD_EBONY) += ebony.o
obj-$(CONFIG_MTD_OCOTEA) += ocotea.o
obj-$(CONFIG_MTD_WALNUT) += walnut.o
obj-$(CONFIG_MTD_H720X) += h720x-flash.o
obj-$(CONFIG_MTD_SBC8240) += sbc8240.o
obj-$(CONFIG_MTD_NOR_TOTO) += omap-toto-flash.o
obj-$(CONFIG_MTD_IXP4XX) += ixp4xx.o
obj-$(CONFIG_MTD_IXP2000) += ixp2000.o
obj-$(CONFIG_MTD_WRSBC8260) += wr_sbc82xx_flash.o


@ -1,163 +0,0 @@
/*
* Mapping for Ebony user flash
*
* Matt Porter <mporter@kernel.crashing.org>
*
* Copyright 2002-2004 MontaVista Software Inc.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2 of the License, or (at your
* option) any later version.
*/
#include <linux/module.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/map.h>
#include <linux/mtd/partitions.h>
#include <asm/io.h>
#include <asm/ibm44x.h>
#include <platforms/4xx/ebony.h>
static struct mtd_info *flash;
static struct map_info ebony_small_map = {
.name = "Ebony small flash",
.size = EBONY_SMALL_FLASH_SIZE,
.bankwidth = 1,
};
static struct map_info ebony_large_map = {
.name = "Ebony large flash",
.size = EBONY_LARGE_FLASH_SIZE,
.bankwidth = 1,
};
static struct mtd_partition ebony_small_partitions[] = {
{
.name = "OpenBIOS",
.offset = 0x0,
.size = 0x80000,
}
};
static struct mtd_partition ebony_large_partitions[] = {
{
.name = "fs",
.offset = 0,
.size = 0x380000,
},
{
.name = "firmware",
.offset = 0x380000,
.size = 0x80000,
}
};
int __init init_ebony(void)
{
u8 fpga0_reg;
u8 __iomem *fpga0_adr;
unsigned long long small_flash_base, large_flash_base;
fpga0_adr = ioremap64(EBONY_FPGA_ADDR, 16);
if (!fpga0_adr)
return -ENOMEM;
fpga0_reg = readb(fpga0_adr);
iounmap(fpga0_adr);
if (EBONY_BOOT_SMALL_FLASH(fpga0_reg) &&
!EBONY_FLASH_SEL(fpga0_reg))
small_flash_base = EBONY_SMALL_FLASH_HIGH2;
else if (EBONY_BOOT_SMALL_FLASH(fpga0_reg) &&
EBONY_FLASH_SEL(fpga0_reg))
small_flash_base = EBONY_SMALL_FLASH_HIGH1;
else if (!EBONY_BOOT_SMALL_FLASH(fpga0_reg) &&
!EBONY_FLASH_SEL(fpga0_reg))
small_flash_base = EBONY_SMALL_FLASH_LOW2;
else
small_flash_base = EBONY_SMALL_FLASH_LOW1;
if (EBONY_BOOT_SMALL_FLASH(fpga0_reg) &&
!EBONY_ONBRD_FLASH_EN(fpga0_reg))
large_flash_base = EBONY_LARGE_FLASH_LOW;
else
large_flash_base = EBONY_LARGE_FLASH_HIGH;
ebony_small_map.phys = small_flash_base;
ebony_small_map.virt = ioremap64(small_flash_base,
ebony_small_map.size);
if (!ebony_small_map.virt) {
printk("Failed to ioremap flash\n");
return -EIO;
}
simple_map_init(&ebony_small_map);
flash = do_map_probe("jedec_probe", &ebony_small_map);
if (flash) {
flash->owner = THIS_MODULE;
add_mtd_partitions(flash, ebony_small_partitions,
ARRAY_SIZE(ebony_small_partitions));
} else {
printk("map probe failed for flash\n");
iounmap(ebony_small_map.virt);
return -ENXIO;
}
ebony_large_map.phys = large_flash_base;
ebony_large_map.virt = ioremap64(large_flash_base,
ebony_large_map.size);
if (!ebony_large_map.virt) {
printk("Failed to ioremap flash\n");
iounmap(ebony_small_map.virt);
return -EIO;
}
simple_map_init(&ebony_large_map);
flash = do_map_probe("jedec_probe", &ebony_large_map);
if (flash) {
flash->owner = THIS_MODULE;
add_mtd_partitions(flash, ebony_large_partitions,
ARRAY_SIZE(ebony_large_partitions));
} else {
printk("map probe failed for flash\n");
iounmap(ebony_small_map.virt);
iounmap(ebony_large_map.virt);
return -ENXIO;
}
return 0;
}
static void __exit cleanup_ebony(void)
{
if (flash) {
del_mtd_partitions(flash);
map_destroy(flash);
}
if (ebony_small_map.virt) {
iounmap(ebony_small_map.virt);
ebony_small_map.virt = NULL;
}
if (ebony_large_map.virt) {
iounmap(ebony_large_map.virt);
ebony_large_map.virt = NULL;
}
}
module_init(init_ebony);
module_exit(cleanup_ebony);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Matt Porter <mporter@kernel.crashing.org>");
MODULE_DESCRIPTION("MTD map and partitions for IBM 440GP Ebony boards");


@ -1,154 +0,0 @@
/*
* Mapping for Ocotea user flash
*
* Matt Porter <mporter@kernel.crashing.org>
*
* Copyright 2002-2004 MontaVista Software Inc.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2 of the License, or (at your
* option) any later version.
*/
#include <linux/module.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/map.h>
#include <linux/mtd/partitions.h>
#include <asm/io.h>
#include <asm/ibm44x.h>
#include <platforms/4xx/ocotea.h>
static struct mtd_info *flash;
static struct map_info ocotea_small_map = {
.name = "Ocotea small flash",
.size = OCOTEA_SMALL_FLASH_SIZE,
.buswidth = 1,
};
static struct map_info ocotea_large_map = {
.name = "Ocotea large flash",
.size = OCOTEA_LARGE_FLASH_SIZE,
.buswidth = 1,
};
static struct mtd_partition ocotea_small_partitions[] = {
{
.name = "pibs",
.offset = 0x0,
.size = 0x100000,
}
};
static struct mtd_partition ocotea_large_partitions[] = {
{
.name = "fs",
.offset = 0,
.size = 0x300000,
},
{
.name = "firmware",
.offset = 0x300000,
.size = 0x100000,
}
};
int __init init_ocotea(void)
{
u8 fpga0_reg;
u8 *fpga0_adr;
unsigned long long small_flash_base, large_flash_base;
fpga0_adr = ioremap64(OCOTEA_FPGA_ADDR, 16);
if (!fpga0_adr)
return -ENOMEM;
fpga0_reg = readb((unsigned long)fpga0_adr);
iounmap(fpga0_adr);
if (OCOTEA_BOOT_LARGE_FLASH(fpga0_reg)) {
small_flash_base = OCOTEA_SMALL_FLASH_HIGH;
large_flash_base = OCOTEA_LARGE_FLASH_LOW;
}
else {
small_flash_base = OCOTEA_SMALL_FLASH_LOW;
large_flash_base = OCOTEA_LARGE_FLASH_HIGH;
}
ocotea_small_map.phys = small_flash_base;
ocotea_small_map.virt = ioremap64(small_flash_base,
ocotea_small_map.size);
if (!ocotea_small_map.virt) {
printk("Failed to ioremap flash\n");
return -EIO;
}
simple_map_init(&ocotea_small_map);
flash = do_map_probe("map_rom", &ocotea_small_map);
if (flash) {
flash->owner = THIS_MODULE;
add_mtd_partitions(flash, ocotea_small_partitions,
ARRAY_SIZE(ocotea_small_partitions));
} else {
printk("map probe failed for flash\n");
iounmap(ocotea_small_map.virt);
return -ENXIO;
}
ocotea_large_map.phys = large_flash_base;
ocotea_large_map.virt = ioremap64(large_flash_base,
ocotea_large_map.size);
if (!ocotea_large_map.virt) {
printk("Failed to ioremap flash\n");
iounmap(ocotea_small_map.virt);
return -EIO;
}
simple_map_init(&ocotea_large_map);
flash = do_map_probe("cfi_probe", &ocotea_large_map);
if (flash) {
flash->owner = THIS_MODULE;
add_mtd_partitions(flash, ocotea_large_partitions,
ARRAY_SIZE(ocotea_large_partitions));
} else {
printk("map probe failed for flash\n");
iounmap(ocotea_small_map.virt);
iounmap(ocotea_large_map.virt);
return -ENXIO;
}
return 0;
}
static void __exit cleanup_ocotea(void)
{
if (flash) {
del_mtd_partitions(flash);
map_destroy(flash);
}
if (ocotea_small_map.virt) {
iounmap((void *)ocotea_small_map.virt);
ocotea_small_map.virt = 0;
}
if (ocotea_large_map.virt) {
iounmap((void *)ocotea_large_map.virt);
ocotea_large_map.virt = 0;
}
}
module_init(init_ocotea);
module_exit(cleanup_ocotea);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Matt Porter <mporter@kernel.crashing.org>");
MODULE_DESCRIPTION("MTD map and partitions for IBM 440GX Ocotea boards");


@ -1,133 +0,0 @@
/*
* NOR Flash memory access on TI Toto board
*
* jzhang@ti.com (C) 2003 Texas Instruments.
*
* (C) 2002 MontVista Software, Inc.
*/
#include <linux/module.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/errno.h>
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/map.h>
#include <linux/mtd/partitions.h>
#include <asm/hardware.h>
#include <asm/io.h>
#ifndef CONFIG_ARCH_OMAP
#error This is for OMAP architecture only
#endif
//these lines need be moved to a hardware header file
#define OMAP_TOTO_FLASH_BASE 0xd8000000
#define OMAP_TOTO_FLASH_SIZE 0x80000
static struct map_info omap_toto_map_flash = {
.name = "OMAP Toto flash",
.bankwidth = 2,
.virt = (void __iomem *)OMAP_TOTO_FLASH_BASE,
};
static struct mtd_partition toto_flash_partitions[] = {
{
.name = "BootLoader",
.size = 0x00040000, /* hopefully u-boot will stay 128k + 128*/
.offset = 0,
.mask_flags = MTD_WRITEABLE, /* force read-only */
}, {
.name = "ReservedSpace",
.size = 0x00030000,
.offset = MTDPART_OFS_APPEND,
//mask_flags: MTD_WRITEABLE, /* force read-only */
}, {
.name = "EnvArea", /* bottom 64KiB for env vars */
.size = MTDPART_SIZ_FULL,
.offset = MTDPART_OFS_APPEND,
}
};
static struct mtd_partition *parsed_parts;
static struct mtd_info *flash_mtd;
static int __init init_flash (void)
{
struct mtd_partition *parts;
int nb_parts = 0;
int parsed_nr_parts = 0;
const char *part_type;
/*
* Static partition definition selection
*/
part_type = "static";
parts = toto_flash_partitions;
nb_parts = ARRAY_SIZE(toto_flash_partitions);
omap_toto_map_flash.size = OMAP_TOTO_FLASH_SIZE;
omap_toto_map_flash.phys = virt_to_phys(OMAP_TOTO_FLASH_BASE);
simple_map_init(&omap_toto_map_flash);
/*
* Now let's probe for the actual flash. Do it here since
* specific machine settings might have been set above.
*/
printk(KERN_NOTICE "OMAP toto flash: probing %d-bit flash bus\n",
omap_toto_map_flash.bankwidth*8);
flash_mtd = do_map_probe("jedec_probe", &omap_toto_map_flash);
if (!flash_mtd)
return -ENXIO;
if (parsed_nr_parts > 0) {
parts = parsed_parts;
nb_parts = parsed_nr_parts;
}
if (nb_parts == 0) {
printk(KERN_NOTICE "OMAP toto flash: no partition info available,"
"registering whole flash at once\n");
if (add_mtd_device(flash_mtd)){
return -ENXIO;
}
} else {
printk(KERN_NOTICE "Using %s partition definition\n",
part_type);
return add_mtd_partitions(flash_mtd, parts, nb_parts);
}
return 0;
}
int __init omap_toto_mtd_init(void)
{
int status;
if (status = init_flash()) {
printk(KERN_ERR "OMAP Toto Flash: unable to init map for toto flash\n");
}
return status;
}
static void __exit omap_toto_mtd_cleanup(void)
{
if (flash_mtd) {
del_mtd_partitions(flash_mtd);
map_destroy(flash_mtd);
kfree(parsed_parts);
}
}
module_init(omap_toto_mtd_init);
module_exit(omap_toto_mtd_cleanup);
MODULE_AUTHOR("Jian Zhang");
MODULE_DESCRIPTION("OMAP Toto board map driver");
MODULE_LICENSE("GPL");


@ -203,15 +203,8 @@ intel_dc21285_init(struct pci_dev *dev, struct map_pci_info *map)
* not enabled, should we be allocating a new resource for it
* or simply enabling it?
*/
if (!(pci_resource_flags(dev, PCI_ROM_RESOURCE) &
IORESOURCE_ROM_ENABLE)) {
u32 val;
pci_resource_flags(dev, PCI_ROM_RESOURCE) |= IORESOURCE_ROM_ENABLE;
pci_read_config_dword(dev, PCI_ROM_ADDRESS, &val);
val |= PCI_ROM_ADDRESS_ENABLE;
pci_write_config_dword(dev, PCI_ROM_ADDRESS, val);
printk("%s: enabling expansion ROM\n", pci_name(dev));
}
pci_enable_rom(dev);
printk("%s: enabling expansion ROM\n", pci_name(dev));
}
if (!len || !base)
@ -232,18 +225,13 @@ intel_dc21285_init(struct pci_dev *dev, struct map_pci_info *map)
static void
intel_dc21285_exit(struct pci_dev *dev, struct map_pci_info *map)
{
u32 val;
if (map->base)
iounmap(map->base);
/*
* We need to undo the PCI BAR2/PCI ROM BAR address alteration.
*/
pci_resource_flags(dev, PCI_ROM_RESOURCE) &= ~IORESOURCE_ROM_ENABLE;
pci_read_config_dword(dev, PCI_ROM_ADDRESS, &val);
val &= ~PCI_ROM_ADDRESS_ENABLE;
pci_write_config_dword(dev, PCI_ROM_ADDRESS, val);
pci_disable_rom(dev);
}
static unsigned long


@ -230,8 +230,7 @@ static int __devinit of_flash_probe(struct of_device *dev,
#ifdef CONFIG_MTD_OF_PARTS
if (err == 0) {
err = of_mtd_parse_partitions(&dev->dev, info->mtd,
dp, &info->parts);
err = of_mtd_parse_partitions(&dev->dev, dp, &info->parts);
if (err < 0)
return err;
}


@ -1,122 +0,0 @@
/*
* Mapping for Walnut flash
* (used ebony.c as a "framework")
*
* Heikki Lindholm <holindho@infradead.org>
*
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
* Free Software Foundation; either version 2 of the License, or (at your
* option) any later version.
*/
#include <linux/module.h>
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/map.h>
#include <linux/mtd/partitions.h>
#include <asm/io.h>
#include <asm/ibm4xx.h>
#include <platforms/4xx/walnut.h>
/* these should be in platforms/4xx/walnut.h ? */
#define WALNUT_FLASH_ONBD_N(x) (x & 0x02)
#define WALNUT_FLASH_SRAM_SEL(x) (x & 0x01)
#define WALNUT_FLASH_LOW 0xFFF00000
#define WALNUT_FLASH_HIGH 0xFFF80000
#define WALNUT_FLASH_SIZE 0x80000
static struct mtd_info *flash;
static struct map_info walnut_map = {
.name = "Walnut flash",
.size = WALNUT_FLASH_SIZE,
.bankwidth = 1,
};
/* Actually, OpenBIOS is the last 128 KiB of the flash - better
* partitioning could be made */
static struct mtd_partition walnut_partitions[] = {
{
.name = "OpenBIOS",
.offset = 0x0,
.size = WALNUT_FLASH_SIZE,
/*.mask_flags = MTD_WRITEABLE, */ /* force read-only */
}
};
int __init init_walnut(void)
{
u8 fpga_brds1;
void *fpga_brds1_adr;
void *fpga_status_adr;
unsigned long flash_base;
/* this should already be mapped (platform/4xx/walnut.c) */
fpga_status_adr = ioremap(WALNUT_FPGA_BASE, 8);
if (!fpga_status_adr)
return -ENOMEM;
fpga_brds1_adr = fpga_status_adr+5;
fpga_brds1 = readb(fpga_brds1_adr);
/* iounmap(fpga_status_adr); */
if (WALNUT_FLASH_ONBD_N(fpga_brds1)) {
printk("The on-board flash is disabled (U79 sw 5)!");
iounmap(fpga_status_adr);
return -EIO;
}
if (WALNUT_FLASH_SRAM_SEL(fpga_brds1))
flash_base = WALNUT_FLASH_LOW;
else
flash_base = WALNUT_FLASH_HIGH;
walnut_map.phys = flash_base;
walnut_map.virt =
(void __iomem *)ioremap(flash_base, walnut_map.size);
if (!walnut_map.virt) {
printk("Failed to ioremap flash.\n");
iounmap(fpga_status_adr);
return -EIO;
}
simple_map_init(&walnut_map);
flash = do_map_probe("jedec_probe", &walnut_map);
if (flash) {
flash->owner = THIS_MODULE;
add_mtd_partitions(flash, walnut_partitions,
ARRAY_SIZE(walnut_partitions));
} else {
printk("map probe failed for flash\n");
iounmap(fpga_status_adr);
return -ENXIO;
}
iounmap(fpga_status_adr);
return 0;
}
static void __exit cleanup_walnut(void)
{
if (flash) {
del_mtd_partitions(flash);
map_destroy(flash);
}
if (walnut_map.virt) {
iounmap((void *)walnut_map.virt);
walnut_map.virt = 0;
}
}
module_init(init_walnut);
module_exit(cleanup_walnut);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Heikki Lindholm <holindho@infradead.org>");
MODULE_DESCRIPTION("MTD map and partitions for IBM 405GP Walnut boards");


@ -348,7 +348,7 @@ static void mtdchar_erase_callback (struct erase_info *instr)
wake_up((wait_queue_head_t *)instr->priv);
}
#if defined(CONFIG_MTD_OTP) || defined(CONFIG_MTD_ONENAND_OTP)
#ifdef CONFIG_HAVE_MTD_OTP
static int otp_select_filemode(struct mtd_file_info *mfi, int mode)
{
struct mtd_info *mtd = mfi->mtd;
@ -665,7 +665,7 @@ static int mtd_ioctl(struct inode *inode, struct file *file,
break;
}
#if defined(CONFIG_MTD_OTP) || defined(CONFIG_MTD_ONENAND_OTP)
#ifdef CONFIG_HAVE_MTD_OTP
case OTPSELECT:
{
int mode;


@ -444,7 +444,7 @@ static int concat_erase(struct mtd_info *mtd, struct erase_info *instr)
return -EINVAL;
}
instr->fail_addr = 0xffffffff;
instr->fail_addr = MTD_FAIL_ADDR_UNKNOWN;
/* make a local copy of instr to avoid modifying the caller's struct */
erase = kmalloc(sizeof (struct erase_info), GFP_KERNEL);
@ -493,7 +493,7 @@ static int concat_erase(struct mtd_info *mtd, struct erase_info *instr)
/* sanity check: should never happen since
* block alignment has been checked above */
BUG_ON(err == -EINVAL);
if (erase->fail_addr != 0xffffffff)
if (erase->fail_addr != MTD_FAIL_ADDR_UNKNOWN)
instr->fail_addr = erase->fail_addr + offset;
break;
}


@ -33,6 +33,7 @@
#include <linux/interrupt.h>
#include <linux/mtd/mtd.h>
#define MTDOOPS_KERNMSG_MAGIC 0x5d005d00
#define OOPS_PAGE_SIZE 4096
static struct mtdoops_context {
@ -99,7 +100,7 @@ static void mtdoops_inc_counter(struct mtdoops_context *cxt)
int ret;
cxt->nextpage++;
if (cxt->nextpage > cxt->oops_pages)
if (cxt->nextpage >= cxt->oops_pages)
cxt->nextpage = 0;
cxt->nextcount++;
if (cxt->nextcount == 0xffffffff)
@ -141,7 +142,7 @@ static void mtdoops_workfunc_erase(struct work_struct *work)
mod = (cxt->nextpage * OOPS_PAGE_SIZE) % mtd->erasesize;
if (mod != 0) {
cxt->nextpage = cxt->nextpage + ((mtd->erasesize - mod) / OOPS_PAGE_SIZE);
if (cxt->nextpage > cxt->oops_pages)
if (cxt->nextpage >= cxt->oops_pages)
cxt->nextpage = 0;
}
@ -158,7 +159,7 @@ badblock:
cxt->nextpage * OOPS_PAGE_SIZE);
i++;
cxt->nextpage = cxt->nextpage + (mtd->erasesize / OOPS_PAGE_SIZE);
if (cxt->nextpage > cxt->oops_pages)
if (cxt->nextpage >= cxt->oops_pages)
cxt->nextpage = 0;
if (i == (cxt->oops_pages / (mtd->erasesize / OOPS_PAGE_SIZE))) {
printk(KERN_ERR "mtdoops: All blocks bad!\n");
@ -224,40 +225,40 @@ static void find_next_position(struct mtdoops_context *cxt)
{
struct mtd_info *mtd = cxt->mtd;
int ret, page, maxpos = 0;
u32 count, maxcount = 0xffffffff;
u32 count[2], maxcount = 0xffffffff;
size_t retlen;
for (page = 0; page < cxt->oops_pages; page++) {
ret = mtd->read(mtd, page * OOPS_PAGE_SIZE, 4, &retlen, (u_char *) &count);
if ((retlen != 4) || ((ret < 0) && (ret != -EUCLEAN))) {
printk(KERN_ERR "mtdoops: Read failure at %d (%td of 4 read)"
ret = mtd->read(mtd, page * OOPS_PAGE_SIZE, 8, &retlen, (u_char *) &count[0]);
if ((retlen != 8) || ((ret < 0) && (ret != -EUCLEAN))) {
printk(KERN_ERR "mtdoops: Read failure at %d (%td of 8 read)"
", err %d.\n", page * OOPS_PAGE_SIZE, retlen, ret);
continue;
}
if (count == 0xffffffff)
if (count[1] != MTDOOPS_KERNMSG_MAGIC)
continue;
if (count[0] == 0xffffffff)
continue;
if (maxcount == 0xffffffff) {
maxcount = count;
maxcount = count[0];
maxpos = page;
} else if ((count < 0x40000000) && (maxcount > 0xc0000000)) {
maxcount = count;
} else if ((count[0] < 0x40000000) && (maxcount > 0xc0000000)) {
maxcount = count[0];
maxpos = page;
} else if ((count > maxcount) && (count < 0xc0000000)) {
maxcount = count;
} else if ((count[0] > maxcount) && (count[0] < 0xc0000000)) {
maxcount = count[0];
maxpos = page;
} else if ((count > maxcount) && (count > 0xc0000000)
} else if ((count[0] > maxcount) && (count[0] > 0xc0000000)
&& (maxcount > 0x80000000)) {
maxcount = count;
maxcount = count[0];
maxpos = page;
}
}
if (maxcount == 0xffffffff) {
cxt->nextpage = 0;
cxt->nextcount = 1;
cxt->ready = 1;
printk(KERN_DEBUG "mtdoops: Ready %d, %d (first init)\n",
cxt->nextpage, cxt->nextcount);
schedule_work(&cxt->work_erase);
return;
}
@ -358,8 +359,9 @@ mtdoops_console_write(struct console *co, const char *s, unsigned int count)
if (cxt->writecount == 0) {
u32 *stamp = cxt->oops_buf;
*stamp = cxt->nextcount;
cxt->writecount = 4;
*stamp++ = cxt->nextcount;
*stamp = MTDOOPS_KERNMSG_MAGIC;
cxt->writecount = 8;
}
if ((count + cxt->writecount) > OOPS_PAGE_SIZE)
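Taken together, the mtdoops changes above mean each saved oops page now begins with an 8-byte header: the sequence counter followed by the magic word, which find_next_position() uses to skip stale or foreign data. A stand-alone sketch of that layout (struct and helper names are illustrative):

#include <stdint.h>

#define MTDOOPS_KERNMSG_MAGIC 0x5d005d00

struct mtdoops_hdr {			/* written at the start of each oops page */
	uint32_t count;			/* record sequence number */
	uint32_t magic;			/* MTDOOPS_KERNMSG_MAGIC */
};

/* A page holds a valid record when the magic matches and the counter
 * is not the erased-flash value 0xffffffff. */
static int mtdoops_page_is_used(const struct mtdoops_hdr *hdr)
{
	return hdr->magic == MTDOOPS_KERNMSG_MAGIC && hdr->count != 0xffffffff;
}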


@ -214,7 +214,7 @@ static int part_erase(struct mtd_info *mtd, struct erase_info *instr)
instr->addr += part->offset;
ret = part->master->erase(part->master, instr);
if (ret) {
if (instr->fail_addr != 0xffffffff)
if (instr->fail_addr != MTD_FAIL_ADDR_UNKNOWN)
instr->fail_addr -= part->offset;
instr->addr -= part->offset;
}
@ -226,7 +226,7 @@ void mtd_erase_callback(struct erase_info *instr)
if (instr->mtd->erase == part_erase) {
struct mtd_part *part = PART(instr->mtd);
if (instr->fail_addr != 0xffffffff)
if (instr->fail_addr != MTD_FAIL_ADDR_UNKNOWN)
instr->fail_addr -= part->offset;
instr->addr -= part->offset;
}


@ -56,6 +56,12 @@ config MTD_NAND_H1900
help
This enables the driver for the iPAQ h1900 flash.
config MTD_NAND_GPIO
tristate "GPIO NAND Flash driver"
depends on GENERIC_GPIO && ARM
help
This enables a GPIO based NAND flash driver.
config MTD_NAND_SPIA
tristate "NAND Flash device on SPIA board"
depends on ARCH_P720T
@ -68,12 +74,6 @@ config MTD_NAND_AMS_DELTA
help
Support for NAND flash on Amstrad E3 (Delta).
config MTD_NAND_TOTO
tristate "NAND Flash device on TOTO board"
depends on ARCH_OMAP && BROKEN
help
Support for NAND flash on Texas Instruments Toto platform.
config MTD_NAND_TS7250
tristate "NAND Flash device on TS-7250 board"
depends on MACH_TS72XX
@ -163,13 +163,6 @@ config MTD_NAND_S3C2410_HWECC
incorrect ECC generation, and if using these, the default of
software ECC is preferable.
config MTD_NAND_NDFC
tristate "NDFC NanD Flash Controller"
depends on 4xx && !PPC_MERGE
select MTD_NAND_ECC_SMC
help
NDFC Nand Flash Controllers are integrated in IBM/AMCC's 4xx SoCs
config MTD_NAND_S3C2410_CLKSTOP
bool "S3C2410 NAND IDLE clock stop"
depends on MTD_NAND_S3C2410
@ -340,6 +333,13 @@ config MTD_NAND_PXA3xx
This enables the driver for the NAND flash device found on
PXA3xx processors
config MTD_NAND_PXA3xx_BUILTIN
bool "Use builtin definitions for some NAND chips (deprecated)"
depends on MTD_NAND_PXA3xx
help
This enables builtin definitions for some NAND chips. This
is deprecated in favor of platform specific data.
config MTD_NAND_CM_X270
tristate "Support for NAND Flash on CM-X270 modules"
depends on MTD_NAND && MACH_ARMCORE
@ -400,10 +400,24 @@ config MTD_NAND_FSL_ELBC
config MTD_NAND_FSL_UPM
tristate "Support for NAND on Freescale UPM"
depends on MTD_NAND && OF_GPIO && (PPC_83xx || PPC_85xx)
depends on MTD_NAND && (PPC_83xx || PPC_85xx)
select FSL_LBC
help
Enables support for NAND Flash chips wired onto Freescale PowerPC
processor localbus with User-Programmable Machine support.
config MTD_NAND_MXC
tristate "MXC NAND support"
depends on ARCH_MX2
help
This enables the driver for the NAND flash controller on the
MXC processors.
config MTD_NAND_SH_FLCTL
tristate "Support for NAND on Renesas SuperH FLCTL"
depends on MTD_NAND && SUPERH && CPU_SUBTYPE_SH7723
help
Several Renesas SuperH CPUs have an FLCTL. This option enables support
for NAND Flash using the FLCTL. This driver supports SH7723.
endif # MTD_NAND


@ -8,7 +8,6 @@ obj-$(CONFIG_MTD_NAND_IDS) += nand_ids.o
obj-$(CONFIG_MTD_NAND_CAFE) += cafe_nand.o
obj-$(CONFIG_MTD_NAND_SPIA) += spia.o
obj-$(CONFIG_MTD_NAND_AMS_DELTA) += ams-delta.o
obj-$(CONFIG_MTD_NAND_TOTO) += toto.o
obj-$(CONFIG_MTD_NAND_AUTCPU12) += autcpu12.o
obj-$(CONFIG_MTD_NAND_EDB7312) += edb7312.o
obj-$(CONFIG_MTD_NAND_AU1550) += au1550nd.o
@ -24,6 +23,7 @@ obj-$(CONFIG_MTD_NAND_NANDSIM) += nandsim.o
obj-$(CONFIG_MTD_NAND_CS553X) += cs553x_nand.o
obj-$(CONFIG_MTD_NAND_NDFC) += ndfc.o
obj-$(CONFIG_MTD_NAND_ATMEL) += atmel_nand.o
obj-$(CONFIG_MTD_NAND_GPIO) += gpio.o
obj-$(CONFIG_MTD_NAND_CM_X270) += cmx270_nand.o
obj-$(CONFIG_MTD_NAND_BASLER_EXCITE) += excite_nandflash.o
obj-$(CONFIG_MTD_NAND_PXA3xx) += pxa3xx_nand.o
@ -34,5 +34,7 @@ obj-$(CONFIG_MTD_NAND_PASEMI) += pasemi_nand.o
obj-$(CONFIG_MTD_NAND_ORION) += orion_nand.o
obj-$(CONFIG_MTD_NAND_FSL_ELBC) += fsl_elbc_nand.o
obj-$(CONFIG_MTD_NAND_FSL_UPM) += fsl_upm.o
obj-$(CONFIG_MTD_NAND_SH_FLCTL) += sh_flctl.o
obj-$(CONFIG_MTD_NAND_MXC) += mxc_nand.o
nand-objs := nand_base.o nand_bbt.o


@ -173,48 +173,6 @@ static void atmel_write_buf16(struct mtd_info *mtd, const u8 *buf, int len)
__raw_writesw(nand_chip->IO_ADDR_W, buf, len / 2);
}
/*
* write oob for small pages
*/
static int atmel_nand_write_oob_512(struct mtd_info *mtd,
struct nand_chip *chip, int page)
{
int chunk = chip->ecc.bytes + chip->ecc.prepad + chip->ecc.postpad;
int eccsize = chip->ecc.size, length = mtd->oobsize;
int len, pos, status = 0;
const uint8_t *bufpoi = chip->oob_poi;
pos = eccsize + chunk;
chip->cmdfunc(mtd, NAND_CMD_SEQIN, pos, page);
len = min_t(int, length, chunk);
chip->write_buf(mtd, bufpoi, len);
bufpoi += len;
length -= len;
if (length > 0)
chip->write_buf(mtd, bufpoi, length);
chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
status = chip->waitfunc(mtd, chip);
return status & NAND_STATUS_FAIL ? -EIO : 0;
}
/*
* read oob for small pages
*/
static int atmel_nand_read_oob_512(struct mtd_info *mtd,
struct nand_chip *chip, int page, int sndcmd)
{
if (sndcmd) {
chip->cmdfunc(mtd, NAND_CMD_READOOB, 0, page);
sndcmd = 0;
}
chip->read_buf(mtd, chip->oob_poi, mtd->oobsize);
return sndcmd;
}
/*
* Calculate HW ECC
*
@ -235,14 +193,14 @@ static int atmel_nand_calculate(struct mtd_info *mtd,
/* get the first 2 ECC bytes */
ecc_value = ecc_readl(host->ecc, PR);
ecc_code[eccpos[0]] = ecc_value & 0xFF;
ecc_code[eccpos[1]] = (ecc_value >> 8) & 0xFF;
ecc_code[0] = ecc_value & 0xFF;
ecc_code[1] = (ecc_value >> 8) & 0xFF;
/* get the last 2 ECC bytes */
ecc_value = ecc_readl(host->ecc, NPR) & ATMEL_ECC_NPARITY;
ecc_code[eccpos[2]] = ecc_value & 0xFF;
ecc_code[eccpos[3]] = (ecc_value >> 8) & 0xFF;
ecc_code[2] = ecc_value & 0xFF;
ecc_code[3] = (ecc_value >> 8) & 0xFF;
return 0;
}
@ -476,14 +434,12 @@ static int __init atmel_nand_probe(struct platform_device *pdev)
res = -EIO;
goto err_ecc_ioremap;
}
nand_chip->ecc.mode = NAND_ECC_HW_SYNDROME;
nand_chip->ecc.mode = NAND_ECC_HW;
nand_chip->ecc.calculate = atmel_nand_calculate;
nand_chip->ecc.correct = atmel_nand_correct;
nand_chip->ecc.hwctl = atmel_nand_hwctl;
nand_chip->ecc.read_page = atmel_nand_read_page;
nand_chip->ecc.bytes = 4;
nand_chip->ecc.prepad = 0;
nand_chip->ecc.postpad = 0;
}
nand_chip->chip_delay = 20; /* 20us command delay time */
@ -514,7 +470,7 @@ static int __init atmel_nand_probe(struct platform_device *pdev)
goto err_scan_ident;
}
if (nand_chip->ecc.mode == NAND_ECC_HW_SYNDROME) {
if (nand_chip->ecc.mode == NAND_ECC_HW) {
/* ECC is calculated for the whole page (1 step) */
nand_chip->ecc.size = mtd->writesize;
@ -522,8 +478,6 @@ static int __init atmel_nand_probe(struct platform_device *pdev)
switch (mtd->writesize) {
case 512:
nand_chip->ecc.layout = &atmel_oobinfo_small;
nand_chip->ecc.read_oob = atmel_nand_read_oob_512;
nand_chip->ecc.write_oob = atmel_nand_write_oob_512;
ecc_writel(host->ecc, MR, ATMEL_ECC_PAGESIZE_528);
break;
case 1024:


@ -289,8 +289,10 @@ static int __init cs553x_init(void)
int i;
uint64_t val;
#ifdef CONFIG_MTD_PARTITIONS
int mtd_parts_nb = 0;
struct mtd_partition *mtd_parts = NULL;
#endif
/* If the CPU isn't a Geode GX or LX, abort */
if (!is_geode())


@ -918,8 +918,7 @@ static int __devinit fsl_elbc_chip_probe(struct fsl_elbc_ctrl *ctrl,
#ifdef CONFIG_MTD_OF_PARTS
if (ret == 0) {
ret = of_mtd_parse_partitions(priv->dev, &priv->mtd,
node, &parts);
ret = of_mtd_parse_partitions(priv->dev, node, &parts);
if (ret < 0)
goto err;
}


@ -13,6 +13,7 @@
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/delay.h>
#include <linux/mtd/nand.h>
#include <linux/mtd/nand_ecc.h>
#include <linux/mtd/partitions.h>
@ -36,8 +37,6 @@ struct fsl_upm_nand {
uint8_t upm_cmd_offset;
void __iomem *io_base;
int rnb_gpio;
const uint32_t *wait_pattern;
const uint32_t *wait_write;
int chip_delay;
};
@ -61,10 +60,11 @@ static void fun_wait_rnb(struct fsl_upm_nand *fun)
if (fun->rnb_gpio >= 0) {
while (--cnt && !fun_chip_ready(&fun->mtd))
cpu_relax();
if (!cnt)
dev_err(fun->dev, "tired waiting for RNB\n");
} else {
ndelay(100);
}
if (!cnt)
dev_err(fun->dev, "tired waiting for RNB\n");
}
static void fun_cmd_ctrl(struct mtd_info *mtd, int cmd, unsigned int ctrl)
@ -89,8 +89,7 @@ static void fun_cmd_ctrl(struct mtd_info *mtd, int cmd, unsigned int ctrl)
fsl_upm_run_pattern(&fun->upm, fun->io_base, cmd);
if (fun->wait_pattern)
fun_wait_rnb(fun);
fun_wait_rnb(fun);
}
static uint8_t fun_read_byte(struct mtd_info *mtd)
@ -116,14 +115,16 @@ static void fun_write_buf(struct mtd_info *mtd, const uint8_t *buf, int len)
for (i = 0; i < len; i++) {
out_8(fun->chip.IO_ADDR_W, buf[i]);
if (fun->wait_write)
fun_wait_rnb(fun);
fun_wait_rnb(fun);
}
}
static int __devinit fun_chip_init(struct fsl_upm_nand *fun)
static int __devinit fun_chip_init(struct fsl_upm_nand *fun,
const struct device_node *upm_np,
const struct resource *io_res)
{
int ret;
struct device_node *flash_np;
#ifdef CONFIG_MTD_PARTITIONS
static const char *part_types[] = { "cmdlinepart", NULL, };
#endif
@ -143,18 +144,37 @@ static int __devinit fun_chip_init(struct fsl_upm_nand *fun)
fun->mtd.priv = &fun->chip;
fun->mtd.owner = THIS_MODULE;
flash_np = of_get_next_child(upm_np, NULL);
if (!flash_np)
return -ENODEV;
fun->mtd.name = kasprintf(GFP_KERNEL, "%x.%s", io_res->start,
flash_np->name);
if (!fun->mtd.name) {
ret = -ENOMEM;
goto err;
}
ret = nand_scan(&fun->mtd, 1);
if (ret)
return ret;
fun->mtd.name = fun->dev->bus_id;
goto err;
#ifdef CONFIG_MTD_PARTITIONS
ret = parse_mtd_partitions(&fun->mtd, part_types, &fun->parts, 0);
if (ret > 0)
return add_mtd_partitions(&fun->mtd, fun->parts, ret);
#ifdef CONFIG_MTD_OF_PARTS
if (ret == 0)
ret = of_mtd_parse_partitions(fun->dev, &fun->mtd,
flash_np, &fun->parts);
#endif
return add_mtd_device(&fun->mtd);
if (ret > 0)
ret = add_mtd_partitions(&fun->mtd, fun->parts, ret);
else
#endif
ret = add_mtd_device(&fun->mtd);
err:
of_node_put(flash_np);
return ret;
}
static int __devinit fun_probe(struct of_device *ofdev,
@ -211,6 +231,12 @@ static int __devinit fun_probe(struct of_device *ofdev,
goto err2;
}
prop = of_get_property(ofdev->node, "chip-delay", NULL);
if (prop)
fun->chip_delay = *prop;
else
fun->chip_delay = 50;
fun->io_base = devm_ioremap_nocache(&ofdev->dev, io_res.start,
io_res.end - io_res.start + 1);
if (!fun->io_base) {
@ -220,17 +246,8 @@ static int __devinit fun_probe(struct of_device *ofdev,
fun->dev = &ofdev->dev;
fun->last_ctrl = NAND_CLE;
fun->wait_pattern = of_get_property(ofdev->node, "fsl,wait-pattern",
NULL);
fun->wait_write = of_get_property(ofdev->node, "fsl,wait-write", NULL);
prop = of_get_property(ofdev->node, "chip-delay", NULL);
if (prop)
fun->chip_delay = *prop;
else
fun->chip_delay = 50;
ret = fun_chip_init(fun);
ret = fun_chip_init(fun, ofdev->node, &io_res);
if (ret)
goto err2;
@ -251,6 +268,7 @@ static int __devexit fun_remove(struct of_device *ofdev)
struct fsl_upm_nand *fun = dev_get_drvdata(&ofdev->dev);
nand_release(&fun->mtd);
kfree(fun->mtd.name);
if (fun->rnb_gpio >= 0)
gpio_free(fun->rnb_gpio);

drivers/mtd/nand/gpio.c (new file, 375 lines)

@ -0,0 +1,375 @@
/*
* drivers/mtd/nand/gpio.c
*
* Updated, and converted to generic GPIO based driver by Russell King.
*
* Written by Ben Dooks <ben@simtec.co.uk>
* Based on 2.4 version by Mark Whittaker
*
* © 2004 Simtec Electronics
*
* Device driver for NAND connected via GPIO
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
*/
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/gpio.h>
#include <linux/io.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/nand.h>
#include <linux/mtd/partitions.h>
#include <linux/mtd/nand-gpio.h>
struct gpiomtd {
void __iomem *io_sync;
struct mtd_info mtd_info;
struct nand_chip nand_chip;
struct gpio_nand_platdata plat;
};
#define gpio_nand_getpriv(x) container_of(x, struct gpiomtd, mtd_info)
#ifdef CONFIG_ARM
/* gpio_nand_dosync()
*
* Make sure the GPIO state changes occur in-order with writes to NAND
* memory region.
* Needed on PXA due to bus-reordering within the SoC itself (see section on
* I/O ordering in PXA manual (section 2.3, p35)
*/
static void gpio_nand_dosync(struct gpiomtd *gpiomtd)
{
unsigned long tmp;
if (gpiomtd->io_sync) {
/*
* Linux memory barriers don't cater for what's required here.
* What's required is what's here - a read from a separate
* region with a dependency on that read.
*/
tmp = readl(gpiomtd->io_sync);
asm volatile("mov %1, %0\n" : "=r" (tmp) : "r" (tmp));
}
}
#else
static inline void gpio_nand_dosync(struct gpiomtd *gpiomtd) {}
#endif
static void gpio_nand_cmd_ctrl(struct mtd_info *mtd, int cmd, unsigned int ctrl)
{
struct gpiomtd *gpiomtd = gpio_nand_getpriv(mtd);
gpio_nand_dosync(gpiomtd);
if (ctrl & NAND_CTRL_CHANGE) {
gpio_set_value(gpiomtd->plat.gpio_nce, !(ctrl & NAND_NCE));
gpio_set_value(gpiomtd->plat.gpio_cle, !!(ctrl & NAND_CLE));
gpio_set_value(gpiomtd->plat.gpio_ale, !!(ctrl & NAND_ALE));
gpio_nand_dosync(gpiomtd);
}
if (cmd == NAND_CMD_NONE)
return;
writeb(cmd, gpiomtd->nand_chip.IO_ADDR_W);
gpio_nand_dosync(gpiomtd);
}
static void gpio_nand_writebuf(struct mtd_info *mtd, const u_char *buf, int len)
{
struct nand_chip *this = mtd->priv;
writesb(this->IO_ADDR_W, buf, len);
}
static void gpio_nand_readbuf(struct mtd_info *mtd, u_char *buf, int len)
{
struct nand_chip *this = mtd->priv;
readsb(this->IO_ADDR_R, buf, len);
}
static int gpio_nand_verifybuf(struct mtd_info *mtd, const u_char *buf, int len)
{
struct nand_chip *this = mtd->priv;
unsigned char read, *p = (unsigned char *) buf;
int i, err = 0;
for (i = 0; i < len; i++) {
read = readb(this->IO_ADDR_R);
if (read != p[i]) {
pr_debug("%s: err at %d (read %04x vs %04x)\n",
__func__, i, read, p[i]);
err = -EFAULT;
}
}
return err;
}
static void gpio_nand_writebuf16(struct mtd_info *mtd, const u_char *buf,
int len)
{
struct nand_chip *this = mtd->priv;
if (IS_ALIGNED((unsigned long)buf, 2)) {
writesw(this->IO_ADDR_W, buf, len>>1);
} else {
int i;
unsigned short *ptr = (unsigned short *)buf;
for (i = 0; i < len; i += 2, ptr++)
writew(*ptr, this->IO_ADDR_W);
}
}
static void gpio_nand_readbuf16(struct mtd_info *mtd, u_char *buf, int len)
{
struct nand_chip *this = mtd->priv;
if (IS_ALIGNED((unsigned long)buf, 2)) {
readsw(this->IO_ADDR_R, buf, len>>1);
} else {
int i;
unsigned short *ptr = (unsigned short *)buf;
for (i = 0; i < len; i += 2, ptr++)
*ptr = readw(this->IO_ADDR_R);
}
}
static int gpio_nand_verifybuf16(struct mtd_info *mtd, const u_char *buf,
int len)
{
struct nand_chip *this = mtd->priv;
unsigned short read, *p = (unsigned short *) buf;
int i, err = 0;
len >>= 1;
for (i = 0; i < len; i++) {
read = readw(this->IO_ADDR_R);
if (read != p[i]) {
pr_debug("%s: err at %d (read %04x vs %04x)\n",
__func__, i, read, p[i]);
err = -EFAULT;
}
}
return err;
}
static int gpio_nand_devready(struct mtd_info *mtd)
{
struct gpiomtd *gpiomtd = gpio_nand_getpriv(mtd);
return gpio_get_value(gpiomtd->plat.gpio_rdy);
}
static int __devexit gpio_nand_remove(struct platform_device *dev)
{
struct gpiomtd *gpiomtd = platform_get_drvdata(dev);
struct resource *res;
nand_release(&gpiomtd->mtd_info);
res = platform_get_resource(dev, IORESOURCE_MEM, 1);
iounmap(gpiomtd->io_sync);
if (res)
release_mem_region(res->start, res->end - res->start + 1);
res = platform_get_resource(dev, IORESOURCE_MEM, 0);
iounmap(gpiomtd->nand_chip.IO_ADDR_R);
release_mem_region(res->start, res->end - res->start + 1);
if (gpio_is_valid(gpiomtd->plat.gpio_nwp))
gpio_set_value(gpiomtd->plat.gpio_nwp, 0);
gpio_set_value(gpiomtd->plat.gpio_nce, 1);
gpio_free(gpiomtd->plat.gpio_cle);
gpio_free(gpiomtd->plat.gpio_ale);
gpio_free(gpiomtd->plat.gpio_nce);
if (gpio_is_valid(gpiomtd->plat.gpio_nwp))
gpio_free(gpiomtd->plat.gpio_nwp);
gpio_free(gpiomtd->plat.gpio_rdy);
kfree(gpiomtd);
return 0;
}
static void __iomem *request_and_remap(struct resource *res, size_t size,
const char *name, int *err)
{
void __iomem *ptr;
if (!request_mem_region(res->start, res->end - res->start + 1, name)) {
*err = -EBUSY;
return NULL;
}
ptr = ioremap(res->start, size);
if (!ptr) {
release_mem_region(res->start, res->end - res->start + 1);
*err = -ENOMEM;
}
return ptr;
}
static int __devinit gpio_nand_probe(struct platform_device *dev)
{
struct gpiomtd *gpiomtd;
struct nand_chip *this;
struct resource *res0, *res1;
int ret;
if (!dev->dev.platform_data)
return -EINVAL;
res0 = platform_get_resource(dev, IORESOURCE_MEM, 0);
if (!res0)
return -EINVAL;
gpiomtd = kzalloc(sizeof(*gpiomtd), GFP_KERNEL);
if (gpiomtd == NULL) {
dev_err(&dev->dev, "failed to create NAND MTD\n");
return -ENOMEM;
}
this = &gpiomtd->nand_chip;
this->IO_ADDR_R = request_and_remap(res0, 2, "NAND", &ret);
if (!this->IO_ADDR_R) {
dev_err(&dev->dev, "unable to map NAND\n");
goto err_map;
}
res1 = platform_get_resource(dev, IORESOURCE_MEM, 1);
if (res1) {
gpiomtd->io_sync = request_and_remap(res1, 4, "NAND sync", &ret);
if (!gpiomtd->io_sync) {
dev_err(&dev->dev, "unable to map sync NAND\n");
goto err_sync;
}
}
memcpy(&gpiomtd->plat, dev->dev.platform_data, sizeof(gpiomtd->plat));
ret = gpio_request(gpiomtd->plat.gpio_nce, "NAND NCE");
if (ret)
goto err_nce;
gpio_direction_output(gpiomtd->plat.gpio_nce, 1);
if (gpio_is_valid(gpiomtd->plat.gpio_nwp)) {
ret = gpio_request(gpiomtd->plat.gpio_nwp, "NAND NWP");
if (ret)
goto err_nwp;
gpio_direction_output(gpiomtd->plat.gpio_nwp, 1);
}
ret = gpio_request(gpiomtd->plat.gpio_ale, "NAND ALE");
if (ret)
goto err_ale;
gpio_direction_output(gpiomtd->plat.gpio_ale, 0);
ret = gpio_request(gpiomtd->plat.gpio_cle, "NAND CLE");
if (ret)
goto err_cle;
gpio_direction_output(gpiomtd->plat.gpio_cle, 0);
ret = gpio_request(gpiomtd->plat.gpio_rdy, "NAND RDY");
if (ret)
goto err_rdy;
gpio_direction_input(gpiomtd->plat.gpio_rdy);
this->IO_ADDR_W = this->IO_ADDR_R;
this->ecc.mode = NAND_ECC_SOFT;
this->options = gpiomtd->plat.options;
this->chip_delay = gpiomtd->plat.chip_delay;
/* install our routines */
this->cmd_ctrl = gpio_nand_cmd_ctrl;
this->dev_ready = gpio_nand_devready;
if (this->options & NAND_BUSWIDTH_16) {
this->read_buf = gpio_nand_readbuf16;
this->write_buf = gpio_nand_writebuf16;
this->verify_buf = gpio_nand_verifybuf16;
} else {
this->read_buf = gpio_nand_readbuf;
this->write_buf = gpio_nand_writebuf;
this->verify_buf = gpio_nand_verifybuf;
}
/* set the mtd private data for the nand driver */
gpiomtd->mtd_info.priv = this;
gpiomtd->mtd_info.owner = THIS_MODULE;
if (nand_scan(&gpiomtd->mtd_info, 1)) {
dev_err(&dev->dev, "no nand chips found?\n");
ret = -ENXIO;
goto err_wp;
}
if (gpiomtd->plat.adjust_parts)
gpiomtd->plat.adjust_parts(&gpiomtd->plat,
gpiomtd->mtd_info.size);
add_mtd_partitions(&gpiomtd->mtd_info, gpiomtd->plat.parts,
gpiomtd->plat.num_parts);
platform_set_drvdata(dev, gpiomtd);
return 0;
err_wp:
if (gpio_is_valid(gpiomtd->plat.gpio_nwp))
gpio_set_value(gpiomtd->plat.gpio_nwp, 0);
gpio_free(gpiomtd->plat.gpio_rdy);
err_rdy:
gpio_free(gpiomtd->plat.gpio_cle);
err_cle:
gpio_free(gpiomtd->plat.gpio_ale);
err_ale:
if (gpio_is_valid(gpiomtd->plat.gpio_nwp))
gpio_free(gpiomtd->plat.gpio_nwp);
err_nwp:
gpio_free(gpiomtd->plat.gpio_nce);
err_nce:
iounmap(gpiomtd->io_sync);
if (res1)
release_mem_region(res1->start, res1->end - res1->start + 1);
err_sync:
iounmap(gpiomtd->nand_chip.IO_ADDR_R);
release_mem_region(res0->start, res0->end - res0->start + 1);
err_map:
kfree(gpiomtd);
return ret;
}
static struct platform_driver gpio_nand_driver = {
.probe = gpio_nand_probe,
.remove = gpio_nand_remove,
.driver = {
.name = "gpio-nand",
},
};
static int __init gpio_nand_init(void)
{
printk(KERN_INFO "GPIO NAND driver, © 2004 Simtec Electronics\n");
return platform_driver_register(&gpio_nand_driver);
}
static void __exit gpio_nand_exit(void)
{
platform_driver_unregister(&gpio_nand_driver);
}
module_init(gpio_nand_init);
module_exit(gpio_nand_exit);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Ben Dooks <ben@simtec.co.uk>");
MODULE_DESCRIPTION("GPIO NAND Driver");

drivers/mtd/nand/mxc_nand.c (new file, 1077 lines; diff too large to show)

@ -801,9 +801,9 @@ static int nand_read_page_swecc(struct mtd_info *mtd, struct nand_chip *chip,
* nand_read_subpage - [REPLACABLE] software ecc based sub-page read function
* @mtd: mtd info structure
* @chip: nand chip info structure
* @dataofs offset of requested data within the page
* @readlen data length
* @buf: buffer to store read data
* @data_offs: offset of requested data within the page
* @readlen: data length
* @bufpoi: buffer to store read data
*/
static int nand_read_subpage(struct mtd_info *mtd, struct nand_chip *chip, uint32_t data_offs, uint32_t readlen, uint8_t *bufpoi)
{
@ -2042,7 +2042,7 @@ int nand_erase_nand(struct mtd_info *mtd, struct erase_info *instr,
return -EINVAL;
}
instr->fail_addr = 0xffffffff;
instr->fail_addr = MTD_FAIL_ADDR_UNKNOWN;
/* Grab the lock and see if the device is available */
nand_get_device(chip, mtd, FL_ERASING);
@ -2318,6 +2318,12 @@ static struct nand_flash_dev *nand_get_flash_type(struct mtd_info *mtd,
/* Select the device */
chip->select_chip(mtd, 0);
/*
* Reset the chip, required by some chips (e.g. Micron MT29FxGxxxxx)
* after power-up
*/
chip->cmdfunc(mtd, NAND_CMD_RESET, -1, -1);
/* Send the command for reading device ID */
chip->cmdfunc(mtd, NAND_CMD_READID, 0x00, -1);
@ -2488,6 +2494,8 @@ int nand_scan_ident(struct mtd_info *mtd, int maxchips)
/* Check for a chip array */
for (i = 1; i < maxchips; i++) {
chip->select_chip(mtd, i);
/* See comment in nand_get_flash_type for reset */
chip->cmdfunc(mtd, NAND_CMD_RESET, -1, -1);
/* Send the command for reading device ID */
chip->cmdfunc(mtd, NAND_CMD_READID, 0x00, -1);
/* Read manufacturer and device IDs */


@ -1,13 +1,18 @@
/*
* This file contains an ECC algorithm from Toshiba that detects and
* corrects 1 bit errors in a 256 byte block of data.
* This file contains an ECC algorithm that detects and corrects 1 bit
* errors in a 256 byte block of data.
*
* drivers/mtd/nand/nand_ecc.c
*
* Copyright (C) 2000-2004 Steven J. Hill (sjhill@realitydiluted.com)
* Toshiba America Electronics Components, Inc.
* Copyright © 2008 Koninklijke Philips Electronics NV.
* Author: Frans Meulenbroeks
*
* Copyright (C) 2006 Thomas Gleixner <tglx@linutronix.de>
* Completely replaces the previous ECC implementation which was written by:
* Steven J. Hill (sjhill@realitydiluted.com)
* Thomas Gleixner (tglx@linutronix.de)
*
* Information on how this algorithm works and how it was developed
* can be found in Documentation/mtd/nand_ecc.txt
*
* This file is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
@ -23,174 +28,475 @@
* with this file; if not, write to the Free Software Foundation, Inc.,
* 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA.
*
* As a special exception, if other files instantiate templates or use
* macros or inline functions from these files, or you compile these
* files and link them with other works to produce a work based on these
* files, these files do not by themselves cause the resulting work to be
* covered by the GNU General Public License. However the source code for
* these files must still be made available in accordance with section (3)
* of the GNU General Public License.
*
* This exception does not invalidate any other reasons why a work based on
* this file might be covered by the GNU General Public License.
*/
/*
* The STANDALONE macro is useful when running the code outside the kernel
* e.g. when running the code in a testbed or a benchmark program.
* When STANDALONE is used, the module related macros are commented out
* as well as the linux include files.
* Instead a private definition of mtd_info is given to satisfy the compiler
* (the code does not use mtd_info, so the code does not care)
*/
#ifndef STANDALONE
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/nand.h>
#include <linux/mtd/nand_ecc.h>
#include <asm/byteorder.h>
#else
#include <stdint.h>
struct mtd_info;
#define EXPORT_SYMBOL(x) /* x */
#define MODULE_LICENSE(x) /* x */
#define MODULE_AUTHOR(x) /* x */
#define MODULE_DESCRIPTION(x) /* x */
#define printk printf
#define KERN_ERR ""
#endif
/*
* Pre-calculated 256-way 1 byte column parity
* invparity is a 256 byte table that contains the odd parity
* for each byte. So if the number of bits in a byte is even,
* the array element is 1, and when the number of bits is odd
 * the array element is 0.
*/
static const u_char nand_ecc_precalc_table[] = {
0x00, 0x55, 0x56, 0x03, 0x59, 0x0c, 0x0f, 0x5a, 0x5a, 0x0f, 0x0c, 0x59, 0x03, 0x56, 0x55, 0x00,
0x65, 0x30, 0x33, 0x66, 0x3c, 0x69, 0x6a, 0x3f, 0x3f, 0x6a, 0x69, 0x3c, 0x66, 0x33, 0x30, 0x65,
0x66, 0x33, 0x30, 0x65, 0x3f, 0x6a, 0x69, 0x3c, 0x3c, 0x69, 0x6a, 0x3f, 0x65, 0x30, 0x33, 0x66,
0x03, 0x56, 0x55, 0x00, 0x5a, 0x0f, 0x0c, 0x59, 0x59, 0x0c, 0x0f, 0x5a, 0x00, 0x55, 0x56, 0x03,
0x69, 0x3c, 0x3f, 0x6a, 0x30, 0x65, 0x66, 0x33, 0x33, 0x66, 0x65, 0x30, 0x6a, 0x3f, 0x3c, 0x69,
0x0c, 0x59, 0x5a, 0x0f, 0x55, 0x00, 0x03, 0x56, 0x56, 0x03, 0x00, 0x55, 0x0f, 0x5a, 0x59, 0x0c,
0x0f, 0x5a, 0x59, 0x0c, 0x56, 0x03, 0x00, 0x55, 0x55, 0x00, 0x03, 0x56, 0x0c, 0x59, 0x5a, 0x0f,
0x6a, 0x3f, 0x3c, 0x69, 0x33, 0x66, 0x65, 0x30, 0x30, 0x65, 0x66, 0x33, 0x69, 0x3c, 0x3f, 0x6a,
0x6a, 0x3f, 0x3c, 0x69, 0x33, 0x66, 0x65, 0x30, 0x30, 0x65, 0x66, 0x33, 0x69, 0x3c, 0x3f, 0x6a,
0x0f, 0x5a, 0x59, 0x0c, 0x56, 0x03, 0x00, 0x55, 0x55, 0x00, 0x03, 0x56, 0x0c, 0x59, 0x5a, 0x0f,
0x0c, 0x59, 0x5a, 0x0f, 0x55, 0x00, 0x03, 0x56, 0x56, 0x03, 0x00, 0x55, 0x0f, 0x5a, 0x59, 0x0c,
0x69, 0x3c, 0x3f, 0x6a, 0x30, 0x65, 0x66, 0x33, 0x33, 0x66, 0x65, 0x30, 0x6a, 0x3f, 0x3c, 0x69,
0x03, 0x56, 0x55, 0x00, 0x5a, 0x0f, 0x0c, 0x59, 0x59, 0x0c, 0x0f, 0x5a, 0x00, 0x55, 0x56, 0x03,
0x66, 0x33, 0x30, 0x65, 0x3f, 0x6a, 0x69, 0x3c, 0x3c, 0x69, 0x6a, 0x3f, 0x65, 0x30, 0x33, 0x66,
0x65, 0x30, 0x33, 0x66, 0x3c, 0x69, 0x6a, 0x3f, 0x3f, 0x6a, 0x69, 0x3c, 0x66, 0x33, 0x30, 0x65,
0x00, 0x55, 0x56, 0x03, 0x59, 0x0c, 0x0f, 0x5a, 0x5a, 0x0f, 0x0c, 0x59, 0x03, 0x56, 0x55, 0x00
static const char invparity[256] = {
1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0,
0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0,
1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0,
1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0,
0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0,
1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0,
1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0,
0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0,
1, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1
};
/*
* bitsperbyte contains the number of bits per byte
* this is only used for testing and repairing parity
* (a precalculated value slightly improves performance)
*/
static const char bitsperbyte[256] = {
0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4,
1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
1, 2, 2, 3, 2, 3, 3, 4, 2, 3, 3, 4, 3, 4, 4, 5,
2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
2, 3, 3, 4, 3, 4, 4, 5, 3, 4, 4, 5, 4, 5, 5, 6,
3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
3, 4, 4, 5, 4, 5, 5, 6, 4, 5, 5, 6, 5, 6, 6, 7,
4, 5, 5, 6, 5, 6, 6, 7, 5, 6, 6, 7, 6, 7, 7, 8,
};
/*
* addressbits is a lookup table to filter out the bits from the xor-ed
* ecc data that identify the faulty location.
* this is only used for repairing parity
* see the comments in nand_correct_data for more details
*/
static const char addressbits[256] = {
0x00, 0x00, 0x01, 0x01, 0x00, 0x00, 0x01, 0x01,
0x02, 0x02, 0x03, 0x03, 0x02, 0x02, 0x03, 0x03,
0x00, 0x00, 0x01, 0x01, 0x00, 0x00, 0x01, 0x01,
0x02, 0x02, 0x03, 0x03, 0x02, 0x02, 0x03, 0x03,
0x04, 0x04, 0x05, 0x05, 0x04, 0x04, 0x05, 0x05,
0x06, 0x06, 0x07, 0x07, 0x06, 0x06, 0x07, 0x07,
0x04, 0x04, 0x05, 0x05, 0x04, 0x04, 0x05, 0x05,
0x06, 0x06, 0x07, 0x07, 0x06, 0x06, 0x07, 0x07,
0x00, 0x00, 0x01, 0x01, 0x00, 0x00, 0x01, 0x01,
0x02, 0x02, 0x03, 0x03, 0x02, 0x02, 0x03, 0x03,
0x00, 0x00, 0x01, 0x01, 0x00, 0x00, 0x01, 0x01,
0x02, 0x02, 0x03, 0x03, 0x02, 0x02, 0x03, 0x03,
0x04, 0x04, 0x05, 0x05, 0x04, 0x04, 0x05, 0x05,
0x06, 0x06, 0x07, 0x07, 0x06, 0x06, 0x07, 0x07,
0x04, 0x04, 0x05, 0x05, 0x04, 0x04, 0x05, 0x05,
0x06, 0x06, 0x07, 0x07, 0x06, 0x06, 0x07, 0x07,
0x08, 0x08, 0x09, 0x09, 0x08, 0x08, 0x09, 0x09,
0x0a, 0x0a, 0x0b, 0x0b, 0x0a, 0x0a, 0x0b, 0x0b,
0x08, 0x08, 0x09, 0x09, 0x08, 0x08, 0x09, 0x09,
0x0a, 0x0a, 0x0b, 0x0b, 0x0a, 0x0a, 0x0b, 0x0b,
0x0c, 0x0c, 0x0d, 0x0d, 0x0c, 0x0c, 0x0d, 0x0d,
0x0e, 0x0e, 0x0f, 0x0f, 0x0e, 0x0e, 0x0f, 0x0f,
0x0c, 0x0c, 0x0d, 0x0d, 0x0c, 0x0c, 0x0d, 0x0d,
0x0e, 0x0e, 0x0f, 0x0f, 0x0e, 0x0e, 0x0f, 0x0f,
0x08, 0x08, 0x09, 0x09, 0x08, 0x08, 0x09, 0x09,
0x0a, 0x0a, 0x0b, 0x0b, 0x0a, 0x0a, 0x0b, 0x0b,
0x08, 0x08, 0x09, 0x09, 0x08, 0x08, 0x09, 0x09,
0x0a, 0x0a, 0x0b, 0x0b, 0x0a, 0x0a, 0x0b, 0x0b,
0x0c, 0x0c, 0x0d, 0x0d, 0x0c, 0x0c, 0x0d, 0x0d,
0x0e, 0x0e, 0x0f, 0x0f, 0x0e, 0x0e, 0x0f, 0x0f,
0x0c, 0x0c, 0x0d, 0x0d, 0x0c, 0x0c, 0x0d, 0x0d,
0x0e, 0x0e, 0x0f, 0x0f, 0x0e, 0x0e, 0x0f, 0x0f
};
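The tables above only pre-compute simple per-byte bit properties. A quick stand-alone check (illustrative, outside the driver) regenerates the first two and can be compared against them; addressbits additionally picks out the address bits of the faulty position and is not reproduced here.

#include <stdio.h>

int main(void)
{
	int i, b, bits;

	for (i = 0; i < 256; i++) {
		for (bits = 0, b = 0; b < 8; b++)
			bits += (i >> b) & 1;
		/* bitsperbyte[i] is the population count of i;
		 * invparity[i] is 1 for an even number of set bits, else 0 */
		printf("%3d: bitsperbyte=%d invparity=%d\n",
		       i, bits, (bits & 1) ? 0 : 1);
	}
	return 0;
}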
/**
* nand_calculate_ecc - [NAND Interface] Calculate 3-byte ECC for 256-byte block
* nand_calculate_ecc - [NAND Interface] Calculate 3-byte ECC for 256/512-byte
* block
* @mtd: MTD block structure
* @dat: raw data
* @ecc_code: buffer for ECC
* @buf: input buffer with raw data
* @code: output buffer with ECC
*/
int nand_calculate_ecc(struct mtd_info *mtd, const u_char *dat,
u_char *ecc_code)
int nand_calculate_ecc(struct mtd_info *mtd, const unsigned char *buf,
unsigned char *code)
{
uint8_t idx, reg1, reg2, reg3, tmp1, tmp2;
int i;
const uint32_t *bp = (uint32_t *)buf;
/* 256 or 512 bytes/ecc */
const uint32_t eccsize_mult =
(((struct nand_chip *)mtd->priv)->ecc.size) >> 8;
uint32_t cur; /* current value in buffer */
/* rp0..rp15..rp17 are the various accumulated parities (per byte) */
uint32_t rp0, rp1, rp2, rp3, rp4, rp5, rp6, rp7;
uint32_t rp8, rp9, rp10, rp11, rp12, rp13, rp14, rp15, rp16;
uint32_t uninitialized_var(rp17); /* to make compiler happy */
uint32_t par; /* the cumulative parity for all data */
uint32_t tmppar; /* the cumulative parity for this iteration;
for rp12, rp14 and rp16 at the end of the
loop */
/* Initialize variables */
par = 0;
rp4 = 0;
rp6 = 0;
rp8 = 0;
rp10 = 0;
rp12 = 0;
rp14 = 0;
rp16 = 0;
/*
* The loop is unrolled a number of times;
* This avoids if statements to decide on which rp value to update
* Also we process the data by longwords.
* Note: passing unaligned data might give a performance penalty.
* It is assumed that the buffers are aligned.
* tmppar is the cumulative sum of this iteration.
* needed for calculating rp12, rp14, rp16 and par
* also used as a performance improvement for rp6, rp8 and rp10
*/
for (i = 0; i < eccsize_mult << 2; i++) {
cur = *bp++;
tmppar = cur;
rp4 ^= cur;
cur = *bp++;
tmppar ^= cur;
rp6 ^= tmppar;
cur = *bp++;
tmppar ^= cur;
rp4 ^= cur;
cur = *bp++;
tmppar ^= cur;
rp8 ^= tmppar;
cur = *bp++;
tmppar ^= cur;
rp4 ^= cur;
rp6 ^= cur;
cur = *bp++;
tmppar ^= cur;
rp6 ^= cur;
cur = *bp++;
tmppar ^= cur;
rp4 ^= cur;
cur = *bp++;
tmppar ^= cur;
rp10 ^= tmppar;
cur = *bp++;
tmppar ^= cur;
rp4 ^= cur;
rp6 ^= cur;
rp8 ^= cur;
cur = *bp++;
tmppar ^= cur;
rp6 ^= cur;
rp8 ^= cur;
cur = *bp++;
tmppar ^= cur;
rp4 ^= cur;
rp8 ^= cur;
cur = *bp++;
tmppar ^= cur;
rp8 ^= cur;
cur = *bp++;
tmppar ^= cur;
rp4 ^= cur;
rp6 ^= cur;
cur = *bp++;
tmppar ^= cur;
rp6 ^= cur;
cur = *bp++;
tmppar ^= cur;
rp4 ^= cur;
cur = *bp++;
tmppar ^= cur;
par ^= tmppar;
if ((i & 0x1) == 0)
rp12 ^= tmppar;
if ((i & 0x2) == 0)
rp14 ^= tmppar;
if (eccsize_mult == 2 && (i & 0x4) == 0)
rp16 ^= tmppar;
}
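/*
 * For clarity (an illustration only, not code the driver executes):
 * ignoring the tmppar short-cuts, the unrolled loop above accumulates
 * the same parities as this straightforward per-longword loop would
 * for a 256-byte block (eccsize_mult == 1):
 *
 *	for (i = 0; i < 64; i++) {
 *		cur = bp[i];
 *		par ^= cur;
 *		if ((i & 0x01) == 0) rp4  ^= cur;
 *		if ((i & 0x02) == 0) rp6  ^= cur;
 *		if ((i & 0x04) == 0) rp8  ^= cur;
 *		if ((i & 0x08) == 0) rp10 ^= cur;
 *		if ((i & 0x10) == 0) rp12 ^= cur;
 *		if ((i & 0x20) == 0) rp14 ^= cur;
 *	}
 *
 * For the 512-byte case there is an additional rp16 term selected by
 * (i & 0x40) == 0.
 */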
/*
* handle the fact that we use longword operations:
* we'll bring rp4..rp14..rp16 back to single byte entities by
* shifting and xoring; first fold the upper and lower 16 bits,
* then the upper and lower 8 bits.
*/
rp4 ^= (rp4 >> 16);
rp4 ^= (rp4 >> 8);
rp4 &= 0xff;
rp6 ^= (rp6 >> 16);
rp6 ^= (rp6 >> 8);
rp6 &= 0xff;
rp8 ^= (rp8 >> 16);
rp8 ^= (rp8 >> 8);
rp8 &= 0xff;
rp10 ^= (rp10 >> 16);
rp10 ^= (rp10 >> 8);
rp10 &= 0xff;
rp12 ^= (rp12 >> 16);
rp12 ^= (rp12 >> 8);
rp12 &= 0xff;
rp14 ^= (rp14 >> 16);
rp14 ^= (rp14 >> 8);
rp14 &= 0xff;
if (eccsize_mult == 2) {
rp16 ^= (rp16 >> 16);
rp16 ^= (rp16 >> 8);
rp16 &= 0xff;
}
/*
* we also need to calculate the row parity for rp0..rp3
* This is present in par, because par is now
* rp3 rp3 rp2 rp2 in little endian and
* rp2 rp2 rp3 rp3 in big endian
* as well as
* rp1 rp0 rp1 rp0 in little endian and
* rp0 rp1 rp0 rp1 in big endian
* First calculate rp2 and rp3
*/
#ifdef __BIG_ENDIAN
rp2 = (par >> 16);
rp2 ^= (rp2 >> 8);
rp2 &= 0xff;
rp3 = par & 0xffff;
rp3 ^= (rp3 >> 8);
rp3 &= 0xff;
#else
rp3 = (par >> 16);
rp3 ^= (rp3 >> 8);
rp3 &= 0xff;
rp2 = par & 0xffff;
rp2 ^= (rp2 >> 8);
rp2 &= 0xff;
#endif
/* reduce par to 16 bits then calculate rp1 and rp0 */
par ^= (par >> 16);
#ifdef __BIG_ENDIAN
rp0 = (par >> 8) & 0xff;
rp1 = (par & 0xff);
#else
rp1 = (par >> 8) & 0xff;
rp0 = (par & 0xff);
#endif
/* finally reduce par to 8 bits */
par ^= (par >> 8);
par &= 0xff;
/*
* and calculate rp5..rp15..rp17
* note that par = rp4 ^ rp5 and due to the commutative property
* of the ^ operator we can say:
* rp5 = (par ^ rp4);
* The & 0xff seems superfluous, but benchmarking showed that
* leaving it out gives slightly worse results. No idea why; probably
* it has to do with the way the Pentium pipeline is organized.
*/
rp5 = (par ^ rp4) & 0xff;
rp7 = (par ^ rp6) & 0xff;
rp9 = (par ^ rp8) & 0xff;
rp11 = (par ^ rp10) & 0xff;
rp13 = (par ^ rp12) & 0xff;
rp15 = (par ^ rp14) & 0xff;
if (eccsize_mult == 2)
rp17 = (par ^ rp16) & 0xff;
/*
* Finally calculate the ecc bits.
* Again here it might seem that there are performance optimisations
* possible, but benchmarks showed that on the system this was developed
* on, the code below is the fastest
*/
#ifdef CONFIG_MTD_NAND_ECC_SMC
code[0] =
(invparity[rp7] << 7) |
(invparity[rp6] << 6) |
(invparity[rp5] << 5) |
(invparity[rp4] << 4) |
(invparity[rp3] << 3) |
(invparity[rp2] << 2) |
(invparity[rp1] << 1) |
(invparity[rp0]);
code[1] =
(invparity[rp15] << 7) |
(invparity[rp14] << 6) |
(invparity[rp13] << 5) |
(invparity[rp12] << 4) |
(invparity[rp11] << 3) |
(invparity[rp10] << 2) |
(invparity[rp9] << 1) |
(invparity[rp8]);
#else
code[1] =
(invparity[rp7] << 7) |
(invparity[rp6] << 6) |
(invparity[rp5] << 5) |
(invparity[rp4] << 4) |
(invparity[rp3] << 3) |
(invparity[rp2] << 2) |
(invparity[rp1] << 1) |
(invparity[rp0]);
code[0] =
(invparity[rp15] << 7) |
(invparity[rp14] << 6) |
(invparity[rp13] << 5) |
(invparity[rp12] << 4) |
(invparity[rp11] << 3) |
(invparity[rp10] << 2) |
(invparity[rp9] << 1) |
(invparity[rp8]);
#endif
if (eccsize_mult == 1)
code[2] =
(invparity[par & 0xf0] << 7) |
(invparity[par & 0x0f] << 6) |
(invparity[par & 0xcc] << 5) |
(invparity[par & 0x33] << 4) |
(invparity[par & 0xaa] << 3) |
(invparity[par & 0x55] << 2) |
3;
else
code[2] =
(invparity[par & 0xf0] << 7) |
(invparity[par & 0x0f] << 6) |
(invparity[par & 0xcc] << 5) |
(invparity[par & 0x33] << 4) |
(invparity[par & 0xaa] << 3) |
(invparity[par & 0x55] << 2) |
(invparity[rp17] << 1) |
(invparity[rp16] << 0);
return 0;
}
EXPORT_SYMBOL(nand_calculate_ecc);
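/*
 * The rp4..rp16 folds above (xor the upper and lower 16 bits, then the
 * upper and lower 8 bits) reduce each 32-bit accumulator to one byte so
 * that a single invparity[] lookup yields the (inverted) parity bit.
 * A minimal sketch of that fold as a helper; the name fold32 is made up
 * here and is not defined by this file:
 */
static inline unsigned char fold32(uint32_t v)
{
v ^= v >> 16; /* fold 32 bits into 16 */
v ^= v >> 8; /* fold 16 bits into 8 */
return (unsigned char)v; /* parity of all 32 bits == parity of this byte */
}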
/**
 * nand_correct_data - [NAND Interface] Detect and correct bit error(s)
 * @mtd: MTD block structure
 * @buf: raw data read from the chip
 * @read_ecc: ECC from the chip
 * @calc_ecc: the ECC calculated from raw data
 *
 * Detect and correct a 1 bit error for 256/512 byte block
 */
int nand_correct_data(struct mtd_info *mtd, unsigned char *buf,
unsigned char *read_ecc, unsigned char *calc_ecc)
{
unsigned char b0, b1, b2;
unsigned char byte_addr, bit_addr;
/* 256 or 512 bytes/ecc */
const uint32_t eccsize_mult =
(((struct nand_chip *)mtd->priv)->ecc.size) >> 8;
/*
* b0 to b2 indicate which bit is faulty (if any)
* we might need the xor result more than once,
* so keep them in a local var
*/
#ifdef CONFIG_MTD_NAND_ECC_SMC
b0 = read_ecc[0] ^ calc_ecc[0];
b1 = read_ecc[1] ^ calc_ecc[1];
#else
b0 = read_ecc[1] ^ calc_ecc[1];
b1 = read_ecc[0] ^ calc_ecc[0];
#endif
b2 = read_ecc[2] ^ calc_ecc[2];
/* Check for a single bit error */
if( ((s0 ^ (s0 >> 1)) & 0x55) == 0x55 &&
((s1 ^ (s1 >> 1)) & 0x55) == 0x55 &&
((s2 ^ (s2 >> 1)) & 0x54) == 0x54) {
/* check if there are any bitfaults */
/* repeated if statements are slightly more efficient than switch ... */
/* ordered in order of likelihood */
if ((b0 | b1 | b2) == 0)
return 0; /* no error */
if ((((b0 ^ (b0 >> 1)) & 0x55) == 0x55) &&
(((b1 ^ (b1 >> 1)) & 0x55) == 0x55) &&
((eccsize_mult == 1 && ((b2 ^ (b2 >> 1)) & 0x54) == 0x54) ||
(eccsize_mult == 2 && ((b2 ^ (b2 >> 1)) & 0x55) == 0x55))) {
/* single bit error */
/*
* rp17/rp15/13/11/9/7/5/3/1 indicate which byte is the faulty
* byte, cp 5/3/1 indicate the faulty bit.
* A lookup table (called addressbits) is used to filter
* the bits from the byte they are in.
* A marginal optimisation is possible by having three
* different lookup tables.
* One as we have now (for b0), one for b2
* (that would avoid the >> 1), and one for b1 (with all values
* << 4). However it was felt that introducing two more tables
* hardly justifies the gain.
*
* The b2 shift is there to get rid of the lowest two bits.
* We could also do addressbits[b2] >> 1 but for the
* performance it does not make any difference
*/
if (eccsize_mult == 1)
byte_addr = (addressbits[b1] << 4) + addressbits[b0];
else
byte_addr = (addressbits[b2 & 0x3] << 8) +
(addressbits[b1] << 4) + addressbits[b0];
bit_addr = addressbits[b2 >> 2];
/* flip the bit */
buf[byte_addr] ^= (1 << bit_addr);
return 1;
}
/* count nr of bits; use table lookup, faster than calculating it */
if ((bitsperbyte[b0] + bitsperbyte[b1] + bitsperbyte[b2]) == 1)
return 1; /* error in ecc data; no action needed */
printk(KERN_ERR "uncorrectable error : ");
return -1;
}
EXPORT_SYMBOL(nand_correct_data);
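/*
 * Sketch of the classic software-ECC read path built on the two exported
 * functions above. The wrapper below is illustrative only: its name, the
 * buffer arguments and the error handling are assumptions, not part of
 * this file.
 */
static inline int example_check_block(struct mtd_info *mtd, unsigned char *buf,
unsigned char *read_ecc)
{
unsigned char calc_ecc[3];
int ret;
/* recompute the ECC over the data just read from the chip */
nand_calculate_ecc(mtd, buf, calc_ecc);
/* compare with the ECC stored in OOB and repair buf if possible */
ret = nand_correct_data(mtd, buf, read_ecc, calc_ecc);
if (ret < 0)
return -EBADMSG; /* uncorrectable */
return ret; /* 0 = clean, 1 = one bit repaired (data or ECC) */
}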
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Steven J. Hill <sjhill@realitydiluted.com>");
MODULE_AUTHOR("Frans Meulenbroeks <fransmeulenbroeks@gmail.com>");
MODULE_DESCRIPTION("Generic NAND ECC support");

View File

@ -38,7 +38,6 @@
#include <linux/delay.h>
#include <linux/list.h>
#include <linux/random.h>
#include <asm/div64.h>
/* Default simulator parameters values */
#if !defined(CONFIG_NANDSIM_FIRST_ID_BYTE) || \

View File

@ -115,55 +115,11 @@ enum {
STATE_PIO_WRITING,
};
struct pxa3xx_nand_timing {
unsigned int tCH; /* Enable signal hold time */
unsigned int tCS; /* Enable signal setup time */
unsigned int tWH; /* ND_nWE high duration */
unsigned int tWP; /* ND_nWE pulse time */
unsigned int tRH; /* ND_nRE high duration */
unsigned int tRP; /* ND_nRE pulse width */
unsigned int tR; /* ND_nWE high to ND_nRE low for read */
unsigned int tWHR; /* ND_nWE high to ND_nRE low for status read */
unsigned int tAR; /* ND_ALE low to ND_nRE low delay */
};
struct pxa3xx_nand_cmdset {
uint16_t read1;
uint16_t read2;
uint16_t program;
uint16_t read_status;
uint16_t read_id;
uint16_t erase;
uint16_t reset;
uint16_t lock;
uint16_t unlock;
uint16_t lock_status;
};
struct pxa3xx_nand_flash {
struct pxa3xx_nand_timing *timing; /* NAND Flash timing */
struct pxa3xx_nand_cmdset *cmdset;
uint32_t page_per_block;/* Pages per block (PG_PER_BLK) */
uint32_t page_size; /* Page size in bytes (PAGE_SZ) */
uint32_t flash_width; /* Width of Flash memory (DWIDTH_M) */
uint32_t dfc_width; /* Width of flash controller(DWIDTH_C) */
uint32_t num_blocks; /* Number of physical blocks in Flash */
uint32_t chip_id;
/* NOTE: these are automatically calculated, do not define */
size_t oob_size;
size_t read_id_bytes;
unsigned int col_addr_cycles;
unsigned int row_addr_cycles;
};
struct pxa3xx_nand_info {
struct nand_chip nand_chip;
struct platform_device *pdev;
struct pxa3xx_nand_flash *flash_info;
const struct pxa3xx_nand_flash *flash_info;
struct clk *clk;
void __iomem *mmio_base;
@ -202,12 +158,20 @@ struct pxa3xx_nand_info {
uint32_t ndcb0;
uint32_t ndcb1;
uint32_t ndcb2;
/* calculated from pxa3xx_nand_flash data */
size_t oob_size;
size_t read_id_bytes;
unsigned int col_addr_cycles;
unsigned int row_addr_cycles;
};
static int use_dma = 1;
module_param(use_dma, bool, 0444);
MODULE_PARM_DESC(use_dma, "enable DMA for data transferring to/from NAND HW");
#ifdef CONFIG_MTD_NAND_PXA3xx_BUILTIN
static struct pxa3xx_nand_cmdset smallpage_cmdset = {
.read1 = 0x0000,
.read2 = 0x0050,
@ -291,11 +255,35 @@ static struct pxa3xx_nand_flash micron1GbX16 = {
.chip_id = 0xb12c,
};
static struct pxa3xx_nand_timing stm2GbX16_timing = {
.tCH = 10,
.tCS = 35,
.tWH = 15,
.tWP = 25,
.tRH = 15,
.tRP = 25,
.tR = 25000,
.tWHR = 60,
.tAR = 10,
};
static struct pxa3xx_nand_flash stm2GbX16 = {
.timing = &stm2GbX16_timing,
.page_per_block = 64,
.page_size = 2048,
.flash_width = 16,
.dfc_width = 16,
.num_blocks = 2048,
.chip_id = 0xba20,
};
static struct pxa3xx_nand_flash *builtin_flash_types[] = {
&samsung512MbX16,
&micron1GbX8,
&micron1GbX16,
&stm2GbX16,
};
#endif /* CONFIG_MTD_NAND_PXA3xx_BUILTIN */
#define NDTR0_tCH(c) (min((c), 7) << 19)
#define NDTR0_tCS(c) (min((c), 7) << 16)
@ -312,7 +300,7 @@ static struct pxa3xx_nand_flash *builtin_flash_types[] = {
#define ns2cycle(ns, clk) (int)(((ns) * (clk / 1000000) / 1000) + 1)
static void pxa3xx_nand_set_timing(struct pxa3xx_nand_info *info,
struct pxa3xx_nand_timing *t)
const struct pxa3xx_nand_timing *t)
{
unsigned long nand_clk = clk_get_rate(info->clk);
uint32_t ndtr0, ndtr1;
@ -354,8 +342,8 @@ static int wait_for_event(struct pxa3xx_nand_info *info, uint32_t event)
static int prepare_read_prog_cmd(struct pxa3xx_nand_info *info,
uint16_t cmd, int column, int page_addr)
{
struct pxa3xx_nand_flash *f = info->flash_info;
struct pxa3xx_nand_cmdset *cmdset = f->cmdset;
const struct pxa3xx_nand_flash *f = info->flash_info;
const struct pxa3xx_nand_cmdset *cmdset = f->cmdset;
/* calculate data size */
switch (f->page_size) {
@ -373,14 +361,14 @@ static int prepare_read_prog_cmd(struct pxa3xx_nand_info *info,
info->ndcb0 = cmd | ((cmd & 0xff00) ? NDCB0_DBC : 0);
info->ndcb1 = 0;
info->ndcb2 = 0;
info->ndcb0 |= NDCB0_ADDR_CYC(f->row_addr_cycles + f->col_addr_cycles);
info->ndcb0 |= NDCB0_ADDR_CYC(info->row_addr_cycles + info->col_addr_cycles);
if (f->col_addr_cycles == 2) {
if (info->col_addr_cycles == 2) {
/* large block, 2 cycles for column address
* row address starts from 3rd cycle
*/
info->ndcb1 |= (page_addr << 16) | (column & 0xffff);
if (f->row_addr_cycles == 3)
if (info->row_addr_cycles == 3)
info->ndcb2 = (page_addr >> 16) & 0xff;
} else
/* small block, 1 cycles for column address
@ -406,7 +394,7 @@ static int prepare_erase_cmd(struct pxa3xx_nand_info *info,
static int prepare_other_cmd(struct pxa3xx_nand_info *info, uint16_t cmd)
{
struct pxa3xx_nand_cmdset *cmdset = info->flash_info->cmdset;
const struct pxa3xx_nand_cmdset *cmdset = info->flash_info->cmdset;
info->ndcb0 = cmd | ((cmd & 0xff00) ? NDCB0_DBC : 0);
info->ndcb1 = 0;
@ -641,8 +629,8 @@ static void pxa3xx_nand_cmdfunc(struct mtd_info *mtd, unsigned command,
int column, int page_addr)
{
struct pxa3xx_nand_info *info = mtd->priv;
struct pxa3xx_nand_flash *flash_info = info->flash_info;
struct pxa3xx_nand_cmdset *cmdset = flash_info->cmdset;
const struct pxa3xx_nand_flash *flash_info = info->flash_info;
const struct pxa3xx_nand_cmdset *cmdset = flash_info->cmdset;
int ret;
info->use_dma = (use_dma) ? 1 : 0;
@ -720,7 +708,7 @@ static void pxa3xx_nand_cmdfunc(struct mtd_info *mtd, unsigned command,
info->use_dma = 0; /* force PIO read */
info->buf_start = 0;
info->buf_count = (command == NAND_CMD_READID) ?
flash_info->read_id_bytes : 1;
info->read_id_bytes : 1;
if (prepare_other_cmd(info, (command == NAND_CMD_READID) ?
cmdset->read_id : cmdset->read_status))
@ -861,8 +849,8 @@ static int pxa3xx_nand_ecc_correct(struct mtd_info *mtd,
static int __readid(struct pxa3xx_nand_info *info, uint32_t *id)
{
struct pxa3xx_nand_flash *f = info->flash_info;
struct pxa3xx_nand_cmdset *cmdset = f->cmdset;
const struct pxa3xx_nand_flash *f = info->flash_info;
const struct pxa3xx_nand_cmdset *cmdset = f->cmdset;
uint32_t ndcr;
uint8_t id_buff[8];
@ -891,7 +879,7 @@ fail_timeout:
}
static int pxa3xx_nand_config_flash(struct pxa3xx_nand_info *info,
struct pxa3xx_nand_flash *f)
const struct pxa3xx_nand_flash *f)
{
struct platform_device *pdev = info->pdev;
struct pxa3xx_nand_platform_data *pdata = pdev->dev.platform_data;
@ -904,25 +892,25 @@ static int pxa3xx_nand_config_flash(struct pxa3xx_nand_info *info,
return -EINVAL;
/* calculate flash information */
f->oob_size = (f->page_size == 2048) ? 64 : 16;
f->read_id_bytes = (f->page_size == 2048) ? 4 : 2;
info->oob_size = (f->page_size == 2048) ? 64 : 16;
info->read_id_bytes = (f->page_size == 2048) ? 4 : 2;
/* calculate addressing information */
f->col_addr_cycles = (f->page_size == 2048) ? 2 : 1;
info->col_addr_cycles = (f->page_size == 2048) ? 2 : 1;
if (f->num_blocks * f->page_per_block > 65536)
f->row_addr_cycles = 3;
info->row_addr_cycles = 3;
else
f->row_addr_cycles = 2;
info->row_addr_cycles = 2;
ndcr |= (pdata->enable_arbiter) ? NDCR_ND_ARB_EN : 0;
ndcr |= (f->col_addr_cycles == 2) ? NDCR_RA_START : 0;
ndcr |= (info->col_addr_cycles == 2) ? NDCR_RA_START : 0;
ndcr |= (f->page_per_block == 64) ? NDCR_PG_PER_BLK : 0;
ndcr |= (f->page_size == 2048) ? NDCR_PAGE_SZ : 0;
ndcr |= (f->flash_width == 16) ? NDCR_DWIDTH_M : 0;
ndcr |= (f->dfc_width == 16) ? NDCR_DWIDTH_C : 0;
ndcr |= NDCR_RD_ID_CNT(f->read_id_bytes);
ndcr |= NDCR_RD_ID_CNT(info->read_id_bytes);
ndcr |= NDCR_SPARE_EN; /* enable spare by default */
info->reg_ndcr = ndcr;
@ -932,12 +920,27 @@ static int pxa3xx_nand_config_flash(struct pxa3xx_nand_info *info,
return 0;
}
static int pxa3xx_nand_detect_flash(struct pxa3xx_nand_info *info)
static int pxa3xx_nand_detect_flash(struct pxa3xx_nand_info *info,
const struct pxa3xx_nand_platform_data *pdata)
{
struct pxa3xx_nand_flash *f;
uint32_t id;
const struct pxa3xx_nand_flash *f;
uint32_t id = -1;
int i;
for (i = 0; i<pdata->num_flash; ++i) {
f = pdata->flash + i;
if (pxa3xx_nand_config_flash(info, f))
continue;
if (__readid(info, &id))
continue;
if (id == f->chip_id)
return 0;
}
#ifdef CONFIG_MTD_NAND_PXA3xx_BUILTIN
for (i = 0; i < ARRAY_SIZE(builtin_flash_types); i++) {
f = builtin_flash_types[i];
@ -951,7 +954,11 @@ static int pxa3xx_nand_detect_flash(struct pxa3xx_nand_info *info)
if (id == f->chip_id)
return 0;
}
#endif
dev_warn(&info->pdev->dev,
"failed to detect configured nand flash; found %04x instead of\n",
id);
return -ENODEV;
}
@ -1014,7 +1021,7 @@ static struct nand_ecclayout hw_largepage_ecclayout = {
static void pxa3xx_nand_init_mtd(struct mtd_info *mtd,
struct pxa3xx_nand_info *info)
{
struct pxa3xx_nand_flash *f = info->flash_info;
const struct pxa3xx_nand_flash *f = info->flash_info;
struct nand_chip *this = &info->nand_chip;
this->options = (f->flash_width == 16) ? NAND_BUSWIDTH_16: 0;
@ -1135,7 +1142,7 @@ static int pxa3xx_nand_probe(struct platform_device *pdev)
goto fail_free_buf;
}
ret = pxa3xx_nand_detect_flash(info);
ret = pxa3xx_nand_detect_flash(info, pdata);
if (ret) {
dev_err(&pdev->dev, "failed to detect flash\n");
ret = -ENODEV;

878
drivers/mtd/nand/sh_flctl.c Normal file
View File

@ -0,0 +1,878 @@
/*
* SuperH FLCTL nand controller
*
* Copyright © 2008 Renesas Solutions Corp.
* Copyright © 2008 Atom Create Engineering Co., Ltd.
*
* Based on fsl_elbc_nand.c, Copyright © 2006-2007 Freescale Semiconductor
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; version 2 of the License.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*
*/
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/delay.h>
#include <linux/io.h>
#include <linux/platform_device.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/nand.h>
#include <linux/mtd/partitions.h>
#include <linux/mtd/sh_flctl.h>
static struct nand_ecclayout flctl_4secc_oob_16 = {
.eccbytes = 10,
.eccpos = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9},
.oobfree = {
{.offset = 12,
.length = 4} },
};
static struct nand_ecclayout flctl_4secc_oob_64 = {
.eccbytes = 10,
.eccpos = {48, 49, 50, 51, 52, 53, 54, 55, 56, 57},
.oobfree = {
{.offset = 60,
.length = 4} },
};
static uint8_t scan_ff_pattern[] = { 0xff, 0xff };
static struct nand_bbt_descr flctl_4secc_smallpage = {
.options = NAND_BBT_SCAN2NDPAGE,
.offs = 11,
.len = 1,
.pattern = scan_ff_pattern,
};
static struct nand_bbt_descr flctl_4secc_largepage = {
.options = 0,
.offs = 58,
.len = 2,
.pattern = scan_ff_pattern,
};
static void empty_fifo(struct sh_flctl *flctl)
{
writel(0x000c0000, FLINTDMACR(flctl)); /* FIFO Clear */
writel(0x00000000, FLINTDMACR(flctl)); /* Clear Error flags */
}
static void start_translation(struct sh_flctl *flctl)
{
writeb(TRSTRT, FLTRCR(flctl));
}
static void wait_completion(struct sh_flctl *flctl)
{
uint32_t timeout = LOOP_TIMEOUT_MAX;
while (timeout--) {
if (readb(FLTRCR(flctl)) & TREND) {
writeb(0x0, FLTRCR(flctl));
return;
}
udelay(1);
}
printk(KERN_ERR "wait_completion(): Timeout occured \n");
writeb(0x0, FLTRCR(flctl));
}
static void set_addr(struct mtd_info *mtd, int column, int page_addr)
{
struct sh_flctl *flctl = mtd_to_flctl(mtd);
uint32_t addr = 0;
if (column == -1) {
addr = page_addr; /* ERASE1 */
} else if (page_addr != -1) {
/* SEQIN, READ0, etc.. */
if (flctl->page_size) {
addr = column & 0x0FFF;
addr |= (page_addr & 0xff) << 16;
addr |= ((page_addr >> 8) & 0xff) << 24;
/* bigger than 128MB */
if (flctl->rw_ADRCNT == ADRCNT2_E) {
uint32_t addr2;
addr2 = (page_addr >> 16) & 0xff;
writel(addr2, FLADR2(flctl));
}
} else {
addr = column;
addr |= (page_addr & 0xff) << 8;
addr |= ((page_addr >> 8) & 0xff) << 16;
addr |= ((page_addr >> 16) & 0xff) << 24;
}
}
writel(addr, FLADR(flctl));
}
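/*
 * Worked example of the packing above (values picked for illustration
 * only): for a large-page chip, set_addr(mtd, 0x10, 0x12345) writes
 *
 *	FLADR  = (0x45 << 16) | (0x23 << 24) | 0x010 = 0x23450010
 *
 * and, when rw_ADRCNT == ADRCNT2_E (chips above 128MB),
 *
 *	FLADR2 = (0x12345 >> 16) & 0xff = 0x01
 */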
static void wait_rfifo_ready(struct sh_flctl *flctl)
{
uint32_t timeout = LOOP_TIMEOUT_MAX;
while (timeout--) {
uint32_t val;
/* check FIFO */
val = readl(FLDTCNTR(flctl)) >> 16;
if (val & 0xFF)
return;
udelay(1);
}
printk(KERN_ERR "wait_rfifo_ready(): Timeout occured \n");
}
static void wait_wfifo_ready(struct sh_flctl *flctl)
{
uint32_t len, timeout = LOOP_TIMEOUT_MAX;
while (timeout--) {
/* check FIFO */
len = (readl(FLDTCNTR(flctl)) >> 16) & 0xFF;
if (len >= 4)
return;
udelay(1);
}
printk(KERN_ERR "wait_wfifo_ready(): Timeout occured \n");
}
static int wait_recfifo_ready(struct sh_flctl *flctl)
{
uint32_t timeout = LOOP_TIMEOUT_MAX;
int checked[4];
void __iomem *ecc_reg[4];
int i;
uint32_t data, size;
memset(checked, 0, sizeof(checked));
while (timeout--) {
size = readl(FLDTCNTR(flctl)) >> 24;
if (size & 0xFF)
return 0; /* success */
if (readl(FL4ECCCR(flctl)) & _4ECCFA)
return 1; /* can't correct */
udelay(1);
if (!(readl(FL4ECCCR(flctl)) & _4ECCEND))
continue;
/* start error correction */
ecc_reg[0] = FL4ECCRESULT0(flctl);
ecc_reg[1] = FL4ECCRESULT1(flctl);
ecc_reg[2] = FL4ECCRESULT2(flctl);
ecc_reg[3] = FL4ECCRESULT3(flctl);
for (i = 0; i < 3; i++) {
data = readl(ecc_reg[i]);
if (data != INIT_FL4ECCRESULT_VAL && !checked[i]) {
uint8_t org;
int index;
index = data >> 16;
org = flctl->done_buff[index];
flctl->done_buff[index] = org ^ (data & 0xFF);
checked[i] = 1;
}
}
writel(0, FL4ECCCR(flctl));
}
printk(KERN_ERR "wait_recfifo_ready(): Timeout occured \n");
return 1; /* timeout */
}
static void wait_wecfifo_ready(struct sh_flctl *flctl)
{
uint32_t timeout = LOOP_TIMEOUT_MAX;
uint32_t len;
while (timeout--) {
/* check FLECFIFO */
len = (readl(FLDTCNTR(flctl)) >> 24) & 0xFF;
if (len >= 4)
return;
udelay(1);
}
printk(KERN_ERR "wait_wecfifo_ready(): Timeout occured \n");
}
static void read_datareg(struct sh_flctl *flctl, int offset)
{
unsigned long data;
unsigned long *buf = (unsigned long *)&flctl->done_buff[offset];
wait_completion(flctl);
data = readl(FLDATAR(flctl));
*buf = le32_to_cpu(data);
}
static void read_fiforeg(struct sh_flctl *flctl, int rlen, int offset)
{
int i, len_4align;
unsigned long *buf = (unsigned long *)&flctl->done_buff[offset];
void *fifo_addr = (void *)FLDTFIFO(flctl);
len_4align = (rlen + 3) / 4;
for (i = 0; i < len_4align; i++) {
wait_rfifo_ready(flctl);
buf[i] = readl(fifo_addr);
buf[i] = be32_to_cpu(buf[i]);
}
}
static int read_ecfiforeg(struct sh_flctl *flctl, uint8_t *buff)
{
int i;
unsigned long *ecc_buf = (unsigned long *)buff;
void *fifo_addr = (void *)FLECFIFO(flctl);
for (i = 0; i < 4; i++) {
if (wait_recfifo_ready(flctl))
return 1;
ecc_buf[i] = readl(fifo_addr);
ecc_buf[i] = be32_to_cpu(ecc_buf[i]);
}
return 0;
}
static void write_fiforeg(struct sh_flctl *flctl, int rlen, int offset)
{
int i, len_4align;
unsigned long *data = (unsigned long *)&flctl->done_buff[offset];
void *fifo_addr = (void *)FLDTFIFO(flctl);
len_4align = (rlen + 3) / 4;
for (i = 0; i < len_4align; i++) {
wait_wfifo_ready(flctl);
writel(cpu_to_be32(data[i]), fifo_addr);
}
}
static void set_cmd_regs(struct mtd_info *mtd, uint32_t cmd, uint32_t flcmcdr_val)
{
struct sh_flctl *flctl = mtd_to_flctl(mtd);
uint32_t flcmncr_val = readl(FLCMNCR(flctl));
uint32_t flcmdcr_val, addr_len_bytes = 0;
/* Set SNAND bit if page size is 2048byte */
if (flctl->page_size)
flcmncr_val |= SNAND_E;
else
flcmncr_val &= ~SNAND_E;
/* default FLCMDCR val */
flcmdcr_val = DOCMD1_E | DOADR_E;
/* Set for FLCMDCR */
switch (cmd) {
case NAND_CMD_ERASE1:
addr_len_bytes = flctl->erase_ADRCNT;
flcmdcr_val |= DOCMD2_E;
break;
case NAND_CMD_READ0:
case NAND_CMD_READOOB:
addr_len_bytes = flctl->rw_ADRCNT;
flcmdcr_val |= CDSRC_E;
break;
case NAND_CMD_SEQIN:
/* This case is that cmd is READ0 or READ1 or READ00 */
flcmdcr_val &= ~DOADR_E; /* ONLY execute 1st cmd */
break;
case NAND_CMD_PAGEPROG:
addr_len_bytes = flctl->rw_ADRCNT;
flcmdcr_val |= DOCMD2_E | CDSRC_E | SELRW;
break;
case NAND_CMD_READID:
flcmncr_val &= ~SNAND_E;
addr_len_bytes = ADRCNT_1;
break;
case NAND_CMD_STATUS:
case NAND_CMD_RESET:
flcmncr_val &= ~SNAND_E;
flcmdcr_val &= ~(DOADR_E | DOSR_E);
break;
default:
break;
}
/* Set address bytes parameter */
flcmdcr_val |= addr_len_bytes;
/* Now actually write */
writel(flcmncr_val, FLCMNCR(flctl));
writel(flcmdcr_val, FLCMDCR(flctl));
writel(flcmcdr_val, FLCMCDR(flctl));
}
static int flctl_read_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip,
uint8_t *buf)
{
int i, eccsize = chip->ecc.size;
int eccbytes = chip->ecc.bytes;
int eccsteps = chip->ecc.steps;
uint8_t *p = buf;
struct sh_flctl *flctl = mtd_to_flctl(mtd);
for (i = 0; eccsteps; eccsteps--, i += eccbytes, p += eccsize)
chip->read_buf(mtd, p, eccsize);
for (i = 0; eccsteps; eccsteps--, i += eccbytes, p += eccsize) {
if (flctl->hwecc_cant_correct[i])
mtd->ecc_stats.failed++;
else
mtd->ecc_stats.corrected += 0;
}
return 0;
}
static void flctl_write_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip,
const uint8_t *buf)
{
int i, eccsize = chip->ecc.size;
int eccbytes = chip->ecc.bytes;
int eccsteps = chip->ecc.steps;
const uint8_t *p = buf;
for (i = 0; eccsteps; eccsteps--, i += eccbytes, p += eccsize)
chip->write_buf(mtd, p, eccsize);
}
static void execmd_read_page_sector(struct mtd_info *mtd, int page_addr)
{
struct sh_flctl *flctl = mtd_to_flctl(mtd);
int sector, page_sectors;
if (flctl->page_size)
page_sectors = 4;
else
page_sectors = 1;
writel(readl(FLCMNCR(flctl)) | ACM_SACCES_MODE | _4ECCCORRECT,
FLCMNCR(flctl));
set_cmd_regs(mtd, NAND_CMD_READ0,
(NAND_CMD_READSTART << 8) | NAND_CMD_READ0);
for (sector = 0; sector < page_sectors; sector++) {
int ret;
empty_fifo(flctl);
writel(readl(FLCMDCR(flctl)) | 1, FLCMDCR(flctl));
writel(page_addr << 2 | sector, FLADR(flctl));
start_translation(flctl);
read_fiforeg(flctl, 512, 512 * sector);
ret = read_ecfiforeg(flctl,
&flctl->done_buff[mtd->writesize + 16 * sector]);
if (ret)
flctl->hwecc_cant_correct[sector] = 1;
writel(0x0, FL4ECCCR(flctl));
wait_completion(flctl);
}
writel(readl(FLCMNCR(flctl)) & ~(ACM_SACCES_MODE | _4ECCCORRECT),
FLCMNCR(flctl));
}
static void execmd_read_oob(struct mtd_info *mtd, int page_addr)
{
struct sh_flctl *flctl = mtd_to_flctl(mtd);
set_cmd_regs(mtd, NAND_CMD_READ0,
(NAND_CMD_READSTART << 8) | NAND_CMD_READ0);
empty_fifo(flctl);
if (flctl->page_size) {
int i;
/* In case that the page size is 2k */
for (i = 0; i < 16 * 3; i++)
flctl->done_buff[i] = 0xFF;
set_addr(mtd, 3 * 528 + 512, page_addr);
writel(16, FLDTCNTR(flctl));
start_translation(flctl);
read_fiforeg(flctl, 16, 16 * 3);
wait_completion(flctl);
} else {
/* In case that the page size is 512b */
set_addr(mtd, 512, page_addr);
writel(16, FLDTCNTR(flctl));
start_translation(flctl);
read_fiforeg(flctl, 16, 0);
wait_completion(flctl);
}
}
static void execmd_write_page_sector(struct mtd_info *mtd)
{
struct sh_flctl *flctl = mtd_to_flctl(mtd);
int i, page_addr = flctl->seqin_page_addr;
int sector, page_sectors;
if (flctl->page_size)
page_sectors = 4;
else
page_sectors = 1;
writel(readl(FLCMNCR(flctl)) | ACM_SACCES_MODE, FLCMNCR(flctl));
set_cmd_regs(mtd, NAND_CMD_PAGEPROG,
(NAND_CMD_PAGEPROG << 8) | NAND_CMD_SEQIN);
for (sector = 0; sector < page_sectors; sector++) {
empty_fifo(flctl);
writel(readl(FLCMDCR(flctl)) | 1, FLCMDCR(flctl));
writel(page_addr << 2 | sector, FLADR(flctl));
start_translation(flctl);
write_fiforeg(flctl, 512, 512 * sector);
for (i = 0; i < 4; i++) {
wait_wecfifo_ready(flctl); /* wait for write ready */
writel(0xFFFFFFFF, FLECFIFO(flctl));
}
wait_completion(flctl);
}
writel(readl(FLCMNCR(flctl)) & ~ACM_SACCES_MODE, FLCMNCR(flctl));
}
static void execmd_write_oob(struct mtd_info *mtd)
{
struct sh_flctl *flctl = mtd_to_flctl(mtd);
int page_addr = flctl->seqin_page_addr;
int sector, page_sectors;
if (flctl->page_size) {
sector = 3;
page_sectors = 4;
} else {
sector = 0;
page_sectors = 1;
}
set_cmd_regs(mtd, NAND_CMD_PAGEPROG,
(NAND_CMD_PAGEPROG << 8) | NAND_CMD_SEQIN);
for (; sector < page_sectors; sector++) {
empty_fifo(flctl);
set_addr(mtd, sector * 528 + 512, page_addr);
writel(16, FLDTCNTR(flctl)); /* set read size */
start_translation(flctl);
write_fiforeg(flctl, 16, 16 * sector);
wait_completion(flctl);
}
}
static void flctl_cmdfunc(struct mtd_info *mtd, unsigned int command,
int column, int page_addr)
{
struct sh_flctl *flctl = mtd_to_flctl(mtd);
uint32_t read_cmd = 0;
flctl->read_bytes = 0;
if (command != NAND_CMD_PAGEPROG)
flctl->index = 0;
switch (command) {
case NAND_CMD_READ1:
case NAND_CMD_READ0:
if (flctl->hwecc) {
/* read page with hwecc */
execmd_read_page_sector(mtd, page_addr);
break;
}
empty_fifo(flctl);
if (flctl->page_size)
set_cmd_regs(mtd, command, (NAND_CMD_READSTART << 8)
| command);
else
set_cmd_regs(mtd, command, command);
set_addr(mtd, 0, page_addr);
flctl->read_bytes = mtd->writesize + mtd->oobsize;
flctl->index += column;
goto read_normal_exit;
case NAND_CMD_READOOB:
if (flctl->hwecc) {
/* read page with hwecc */
execmd_read_oob(mtd, page_addr);
break;
}
empty_fifo(flctl);
if (flctl->page_size) {
set_cmd_regs(mtd, command, (NAND_CMD_READSTART << 8)
| NAND_CMD_READ0);
set_addr(mtd, mtd->writesize, page_addr);
} else {
set_cmd_regs(mtd, command, command);
set_addr(mtd, 0, page_addr);
}
flctl->read_bytes = mtd->oobsize;
goto read_normal_exit;
case NAND_CMD_READID:
empty_fifo(flctl);
set_cmd_regs(mtd, command, command);
set_addr(mtd, 0, 0);
flctl->read_bytes = 4;
writel(flctl->read_bytes, FLDTCNTR(flctl)); /* set read size */
start_translation(flctl);
read_datareg(flctl, 0); /* read and end */
break;
case NAND_CMD_ERASE1:
flctl->erase1_page_addr = page_addr;
break;
case NAND_CMD_ERASE2:
set_cmd_regs(mtd, NAND_CMD_ERASE1,
(command << 8) | NAND_CMD_ERASE1);
set_addr(mtd, -1, flctl->erase1_page_addr);
start_translation(flctl);
wait_completion(flctl);
break;
case NAND_CMD_SEQIN:
if (!flctl->page_size) {
/* output read command */
if (column >= mtd->writesize) {
column -= mtd->writesize;
read_cmd = NAND_CMD_READOOB;
} else if (column < 256) {
read_cmd = NAND_CMD_READ0;
} else {
column -= 256;
read_cmd = NAND_CMD_READ1;
}
}
flctl->seqin_column = column;
flctl->seqin_page_addr = page_addr;
flctl->seqin_read_cmd = read_cmd;
break;
case NAND_CMD_PAGEPROG:
empty_fifo(flctl);
if (!flctl->page_size) {
set_cmd_regs(mtd, NAND_CMD_SEQIN,
flctl->seqin_read_cmd);
set_addr(mtd, -1, -1);
writel(0, FLDTCNTR(flctl)); /* set 0 size */
start_translation(flctl);
wait_completion(flctl);
}
if (flctl->hwecc) {
/* write page with hwecc */
if (flctl->seqin_column == mtd->writesize)
execmd_write_oob(mtd);
else if (!flctl->seqin_column)
execmd_write_page_sector(mtd);
else
printk(KERN_ERR "Invalid address !?\n");
break;
}
set_cmd_regs(mtd, command, (command << 8) | NAND_CMD_SEQIN);
set_addr(mtd, flctl->seqin_column, flctl->seqin_page_addr);
writel(flctl->index, FLDTCNTR(flctl)); /* set write size */
start_translation(flctl);
write_fiforeg(flctl, flctl->index, 0);
wait_completion(flctl);
break;
case NAND_CMD_STATUS:
set_cmd_regs(mtd, command, command);
set_addr(mtd, -1, -1);
flctl->read_bytes = 1;
writel(flctl->read_bytes, FLDTCNTR(flctl)); /* set read size */
start_translation(flctl);
read_datareg(flctl, 0); /* read and end */
break;
case NAND_CMD_RESET:
set_cmd_regs(mtd, command, command);
set_addr(mtd, -1, -1);
writel(0, FLDTCNTR(flctl)); /* set 0 size */
start_translation(flctl);
wait_completion(flctl);
break;
default:
break;
}
return;
read_normal_exit:
writel(flctl->read_bytes, FLDTCNTR(flctl)); /* set read size */
start_translation(flctl);
read_fiforeg(flctl, flctl->read_bytes, 0);
wait_completion(flctl);
return;
}
static void flctl_select_chip(struct mtd_info *mtd, int chipnr)
{
struct sh_flctl *flctl = mtd_to_flctl(mtd);
uint32_t flcmncr_val = readl(FLCMNCR(flctl));
switch (chipnr) {
case -1:
flcmncr_val &= ~CE0_ENABLE;
writel(flcmncr_val, FLCMNCR(flctl));
break;
case 0:
flcmncr_val |= CE0_ENABLE;
writel(flcmncr_val, FLCMNCR(flctl));
break;
default:
BUG();
}
}
static void flctl_write_buf(struct mtd_info *mtd, const uint8_t *buf, int len)
{
struct sh_flctl *flctl = mtd_to_flctl(mtd);
int i, index = flctl->index;
for (i = 0; i < len; i++)
flctl->done_buff[index + i] = buf[i];
flctl->index += len;
}
static uint8_t flctl_read_byte(struct mtd_info *mtd)
{
struct sh_flctl *flctl = mtd_to_flctl(mtd);
int index = flctl->index;
uint8_t data;
data = flctl->done_buff[index];
flctl->index++;
return data;
}
static void flctl_read_buf(struct mtd_info *mtd, uint8_t *buf, int len)
{
int i;
for (i = 0; i < len; i++)
buf[i] = flctl_read_byte(mtd);
}
static int flctl_verify_buf(struct mtd_info *mtd, const u_char *buf, int len)
{
int i;
for (i = 0; i < len; i++)
if (buf[i] != flctl_read_byte(mtd))
return -EFAULT;
return 0;
}
static void flctl_register_init(struct sh_flctl *flctl, unsigned long val)
{
writel(val, FLCMNCR(flctl));
}
static int flctl_chip_init_tail(struct mtd_info *mtd)
{
struct sh_flctl *flctl = mtd_to_flctl(mtd);
struct nand_chip *chip = &flctl->chip;
if (mtd->writesize == 512) {
flctl->page_size = 0;
if (chip->chipsize > (32 << 20)) {
/* bigger than 32MB */
flctl->rw_ADRCNT = ADRCNT_4;
flctl->erase_ADRCNT = ADRCNT_3;
} else if (chip->chipsize > (2 << 16)) {
/* bigger than 128KB */
flctl->rw_ADRCNT = ADRCNT_3;
flctl->erase_ADRCNT = ADRCNT_2;
} else {
flctl->rw_ADRCNT = ADRCNT_2;
flctl->erase_ADRCNT = ADRCNT_1;
}
} else {
flctl->page_size = 1;
if (chip->chipsize > (128 << 20)) {
/* bigger than 128MB */
flctl->rw_ADRCNT = ADRCNT2_E;
flctl->erase_ADRCNT = ADRCNT_3;
} else if (chip->chipsize > (8 << 16)) {
/* bigger than 512KB */
flctl->rw_ADRCNT = ADRCNT_4;
flctl->erase_ADRCNT = ADRCNT_2;
} else {
flctl->rw_ADRCNT = ADRCNT_3;
flctl->erase_ADRCNT = ADRCNT_1;
}
}
if (flctl->hwecc) {
if (mtd->writesize == 512) {
chip->ecc.layout = &flctl_4secc_oob_16;
chip->badblock_pattern = &flctl_4secc_smallpage;
} else {
chip->ecc.layout = &flctl_4secc_oob_64;
chip->badblock_pattern = &flctl_4secc_largepage;
}
chip->ecc.size = 512;
chip->ecc.bytes = 10;
chip->ecc.read_page = flctl_read_page_hwecc;
chip->ecc.write_page = flctl_write_page_hwecc;
chip->ecc.mode = NAND_ECC_HW;
/* 4 symbols ECC enabled */
writel(readl(FLCMNCR(flctl)) | _4ECCEN | ECCPOS2 | ECCPOS_02,
FLCMNCR(flctl));
} else {
chip->ecc.mode = NAND_ECC_SOFT;
}
return 0;
}
static int __init flctl_probe(struct platform_device *pdev)
{
struct resource *res;
struct sh_flctl *flctl;
struct mtd_info *flctl_mtd;
struct nand_chip *nand;
struct sh_flctl_platform_data *pdata;
int ret;
pdata = pdev->dev.platform_data;
if (pdata == NULL) {
printk(KERN_ERR "sh_flctl platform_data not found.\n");
return -ENODEV;
}
flctl = kzalloc(sizeof(struct sh_flctl), GFP_KERNEL);
if (!flctl) {
printk(KERN_ERR "Unable to allocate NAND MTD dev structure.\n");
return -ENOMEM;
}
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res) {
printk(KERN_ERR "%s: resource not found.\n", __func__);
ret = -ENODEV;
goto err;
}
flctl->reg = ioremap(res->start, res->end - res->start + 1);
if (flctl->reg == NULL) {
printk(KERN_ERR "%s: ioremap error.\n", __func__);
ret = -ENOMEM;
goto err;
}
platform_set_drvdata(pdev, flctl);
flctl_mtd = &flctl->mtd;
nand = &flctl->chip;
flctl_mtd->priv = nand;
flctl->hwecc = pdata->has_hwecc;
flctl_register_init(flctl, pdata->flcmncr_val);
nand->options = NAND_NO_AUTOINCR;
/* Set address of hardware control function */
/* 20 us command delay time */
nand->chip_delay = 20;
nand->read_byte = flctl_read_byte;
nand->write_buf = flctl_write_buf;
nand->read_buf = flctl_read_buf;
nand->verify_buf = flctl_verify_buf;
nand->select_chip = flctl_select_chip;
nand->cmdfunc = flctl_cmdfunc;
ret = nand_scan_ident(flctl_mtd, 1);
if (ret)
goto err;
ret = flctl_chip_init_tail(flctl_mtd);
if (ret)
goto err;
ret = nand_scan_tail(flctl_mtd);
if (ret)
goto err;
add_mtd_partitions(flctl_mtd, pdata->parts, pdata->nr_parts);
return 0;
err:
kfree(flctl);
return ret;
}
static int __exit flctl_remove(struct platform_device *pdev)
{
struct sh_flctl *flctl = platform_get_drvdata(pdev);
nand_release(&flctl->mtd);
kfree(flctl);
return 0;
}
static struct platform_driver flctl_driver = {
.probe = flctl_probe,
.remove = flctl_remove,
.driver = {
.name = "sh_flctl",
.owner = THIS_MODULE,
},
};
static int __init flctl_nand_init(void)
{
return platform_driver_register(&flctl_driver);
}
static void __exit flctl_nand_cleanup(void)
{
platform_driver_unregister(&flctl_driver);
}
module_init(flctl_nand_init);
module_exit(flctl_nand_cleanup);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Yoshihiro Shimoda");
MODULE_DESCRIPTION("SuperH FLCTL driver");
MODULE_ALIAS("platform:sh_flctl");

View File

@ -1,206 +0,0 @@
/*
* drivers/mtd/nand/toto.c
*
* Copyright (c) 2003 Texas Instruments
*
* Derived from drivers/mtd/autcpu12.c
*
* Copyright (c) 2002 Thomas Gleixner <tgxl@linutronix.de>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
* Overview:
* This is a device driver for the NAND flash device found on the
* TI fido board. It supports 32MiB and 64MiB cards
*/
#include <linux/slab.h>
#include <linux/init.h>
#include <linux/module.h>
#include <linux/delay.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/nand.h>
#include <linux/mtd/partitions.h>
#include <asm/io.h>
#include <asm/arch/hardware.h>
#include <asm/sizes.h>
#include <asm/arch/toto.h>
#include <asm/arch-omap1510/hardware.h>
#include <asm/arch/gpio.h>
#define CONFIG_NAND_WORKAROUND 1
/*
* MTD structure for TOTO board
*/
static struct mtd_info *toto_mtd = NULL;
static unsigned long toto_io_base = OMAP_FLASH_1_BASE;
/*
* Define partitions for flash devices
*/
static struct mtd_partition partition_info64M[] = {
{ .name = "toto kernel partition 1",
.offset = 0,
.size = 2 * SZ_1M },
{ .name = "toto file sys partition 2",
.offset = 2 * SZ_1M,
.size = 14 * SZ_1M },
{ .name = "toto user partition 3",
.offset = 16 * SZ_1M,
.size = 16 * SZ_1M },
{ .name = "toto devboard extra partition 4",
.offset = 32 * SZ_1M,
.size = 32 * SZ_1M },
};
static struct mtd_partition partition_info32M[] = {
{ .name = "toto kernel partition 1",
.offset = 0,
.size = 2 * SZ_1M },
{ .name = "toto file sys partition 2",
.offset = 2 * SZ_1M,
.size = 14 * SZ_1M },
{ .name = "toto user partition 3",
.offset = 16 * SZ_1M,
.size = 16 * SZ_1M },
};
#define NUM_PARTITIONS32M 3
#define NUM_PARTITIONS64M 4
/*
* hardware specific access to control-lines
*
* ctrl:
* NAND_NCE: bit 0 -> bit 14 (0x4000)
* NAND_CLE: bit 1 -> bit 12 (0x1000)
* NAND_ALE: bit 2 -> bit 1 (0x0002)
*/
static void toto_hwcontrol(struct mtd_info *mtd, int cmd,
unsigned int ctrl)
{
struct nand_chip *chip = mtd->priv;
if (ctrl & NAND_CTRL_CHANGE) {
unsigned long bits;
/* hopefully enough time for the preceding write to clear */
udelay(1);
bits = (~ctrl & NAND_NCE) << 14;
bits |= (ctrl & NAND_CLE) << 12;
bits |= (ctrl & NAND_ALE) >> 1;
#warning Wild guess as gpiosetout() is nowhere defined in the kernel source - tglx
gpiosetout(0x5002, bits);
#ifdef CONFIG_NAND_WORKAROUND
/* "some" dev boards busted, blue wired to rts2 :( */
rts2setout(2, (ctrl & NAND_CLE) << 1);
#endif
/* allow time to ensure gpio state to over take memory write */
udelay(1);
}
if (cmd != NAND_CMD_NONE)
writeb(cmd, chip->IO_ADDR_W);
}
/*
* Main initialization routine
*/
static int __init toto_init(void)
{
struct nand_chip *this;
int err = 0;
/* Allocate memory for MTD device structure and private data */
toto_mtd = kmalloc(sizeof(struct mtd_info) + sizeof(struct nand_chip), GFP_KERNEL);
if (!toto_mtd) {
printk(KERN_WARNING "Unable to allocate toto NAND MTD device structure.\n");
err = -ENOMEM;
goto out;
}
/* Get pointer to private data */
this = (struct nand_chip *)(&toto_mtd[1]);
/* Initialize structures */
memset(toto_mtd, 0, sizeof(struct mtd_info));
memset(this, 0, sizeof(struct nand_chip));
/* Link the private data with the MTD structure */
toto_mtd->priv = this;
toto_mtd->owner = THIS_MODULE;
/* Set address of NAND IO lines */
this->IO_ADDR_R = toto_io_base;
this->IO_ADDR_W = toto_io_base;
this->cmd_ctrl = toto_hwcontrol;
this->dev_ready = NULL;
/* 25 us command delay time */
this->chip_delay = 30;
this->ecc.mode = NAND_ECC_SOFT;
/* Scan to find existence of the device */
if (nand_scan(toto_mtd, 1)) {
err = -ENXIO;
goto out_mtd;
}
/* Register the partitions */
switch (toto_mtd->size) {
case SZ_64M:
add_mtd_partitions(toto_mtd, partition_info64M, NUM_PARTITIONS64M);
break;
case SZ_32M:
add_mtd_partitions(toto_mtd, partition_info32M, NUM_PARTITIONS32M);
break;
default:{
printk(KERN_WARNING "Unsupported Nand device\n");
err = -ENXIO;
goto out_buf;
}
}
gpioreserve(NAND_MASK); /* claim our gpios */
archflashwp(0, 0); /* open up flash for writing */
goto out;
out_mtd:
kfree(toto_mtd);
out:
return err;
}
module_init(toto_init);
/*
* Clean up routine
*/
static void __exit toto_cleanup(void)
{
/* Release resources, unregister device */
nand_release(toto_mtd);
/* Free the MTD device structure */
kfree(toto_mtd);
/* stop flash writes */
archflashwp(0, 1);
/* release gpios to system */
gpiorelease(NAND_MASK);
}
module_exit(toto_cleanup);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Richard Woodruff <r-woodruff2@ti.com>");
MODULE_DESCRIPTION("Glue layer for NAND flash on toto board");

View File

@ -20,7 +20,6 @@
#include <linux/mtd/partitions.h>
int __devinit of_mtd_parse_partitions(struct device *dev,
struct mtd_info *mtd,
struct device_node *node,
struct mtd_partition **pparts)
{

View File

@ -27,8 +27,16 @@ config MTD_ONENAND_GENERIC
help
Support for OneNAND flash via platform device driver.
config MTD_ONENAND_OMAP2
tristate "OneNAND on OMAP2/OMAP3 support"
depends on MTD_ONENAND && (ARCH_OMAP2 || ARCH_OMAP3)
help
Support for a OneNAND flash device connected to an OMAP2/OMAP3 CPU
via the GPMC memory controller.
config MTD_ONENAND_OTP
bool "OneNAND OTP Support"
select HAVE_MTD_OTP
help
One Block of the NAND Flash Array memory is reserved as
a One-Time Programmable Block memory area.

View File

@ -7,6 +7,7 @@ obj-$(CONFIG_MTD_ONENAND) += onenand.o
# Board specific.
obj-$(CONFIG_MTD_ONENAND_GENERIC) += generic.o
obj-$(CONFIG_MTD_ONENAND_OMAP2) += omap2.o
# Simulator
obj-$(CONFIG_MTD_ONENAND_SIM) += onenand_sim.o

802
drivers/mtd/onenand/omap2.c Normal file
View File

@ -0,0 +1,802 @@
/*
* linux/drivers/mtd/onenand/omap2.c
*
* OneNAND driver for OMAP2 / OMAP3
*
* Copyright © 2005-2006 Nokia Corporation
*
* Author: Jarkko Lavinen <jarkko.lavinen@nokia.com> and Juha Yrjölä
* IRQ and DMA support written by Timo Teras
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 as published by
* the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; see the file COPYING. If not, write to the Free Software
* Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*
*/
#include <linux/device.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/onenand.h>
#include <linux/mtd/partitions.h>
#include <linux/platform_device.h>
#include <linux/interrupt.h>
#include <linux/delay.h>
#include <asm/io.h>
#include <asm/mach/flash.h>
#include <asm/arch/gpmc.h>
#include <asm/arch/onenand.h>
#include <asm/arch/gpio.h>
#include <asm/arch/gpmc.h>
#include <asm/arch/pm.h>
#include <linux/dma-mapping.h>
#include <asm/dma-mapping.h>
#include <asm/arch/dma.h>
#include <asm/arch/board.h>
#define DRIVER_NAME "omap2-onenand"
#define ONENAND_IO_SIZE SZ_128K
#define ONENAND_BUFRAM_SIZE (1024 * 5)
struct omap2_onenand {
struct platform_device *pdev;
int gpmc_cs;
unsigned long phys_base;
int gpio_irq;
struct mtd_info mtd;
struct mtd_partition *parts;
struct onenand_chip onenand;
struct completion irq_done;
struct completion dma_done;
int dma_channel;
int freq;
int (*setup)(void __iomem *base, int freq);
};
static void omap2_onenand_dma_cb(int lch, u16 ch_status, void *data)
{
struct omap2_onenand *c = data;
complete(&c->dma_done);
}
static irqreturn_t omap2_onenand_interrupt(int irq, void *dev_id)
{
struct omap2_onenand *c = dev_id;
complete(&c->irq_done);
return IRQ_HANDLED;
}
static inline unsigned short read_reg(struct omap2_onenand *c, int reg)
{
return readw(c->onenand.base + reg);
}
static inline void write_reg(struct omap2_onenand *c, unsigned short value,
int reg)
{
writew(value, c->onenand.base + reg);
}
static void wait_err(char *msg, int state, unsigned int ctrl, unsigned int intr)
{
printk(KERN_ERR "onenand_wait: %s! state %d ctrl 0x%04x intr 0x%04x\n",
msg, state, ctrl, intr);
}
static void wait_warn(char *msg, int state, unsigned int ctrl,
unsigned int intr)
{
printk(KERN_WARNING "onenand_wait: %s! state %d ctrl 0x%04x "
"intr 0x%04x\n", msg, state, ctrl, intr);
}
static int omap2_onenand_wait(struct mtd_info *mtd, int state)
{
struct omap2_onenand *c = container_of(mtd, struct omap2_onenand, mtd);
unsigned int intr = 0;
unsigned int ctrl;
unsigned long timeout;
u32 syscfg;
if (state == FL_RESETING) {
int i;
for (i = 0; i < 20; i++) {
udelay(1);
intr = read_reg(c, ONENAND_REG_INTERRUPT);
if (intr & ONENAND_INT_MASTER)
break;
}
ctrl = read_reg(c, ONENAND_REG_CTRL_STATUS);
if (ctrl & ONENAND_CTRL_ERROR) {
wait_err("controller error", state, ctrl, intr);
return -EIO;
}
if (!(intr & ONENAND_INT_RESET)) {
wait_err("timeout", state, ctrl, intr);
return -EIO;
}
return 0;
}
if (state != FL_READING) {
int result;
/* Turn interrupts on */
syscfg = read_reg(c, ONENAND_REG_SYS_CFG1);
if (!(syscfg & ONENAND_SYS_CFG1_IOBE)) {
syscfg |= ONENAND_SYS_CFG1_IOBE;
write_reg(c, syscfg, ONENAND_REG_SYS_CFG1);
if (cpu_is_omap34xx())
/* Add a delay to let GPIO settle */
syscfg = read_reg(c, ONENAND_REG_SYS_CFG1);
}
INIT_COMPLETION(c->irq_done);
if (c->gpio_irq) {
result = omap_get_gpio_datain(c->gpio_irq);
if (result == -1) {
ctrl = read_reg(c, ONENAND_REG_CTRL_STATUS);
intr = read_reg(c, ONENAND_REG_INTERRUPT);
wait_err("gpio error", state, ctrl, intr);
return -EIO;
}
} else
result = 0;
if (result == 0) {
int retry_cnt = 0;
retry:
result = wait_for_completion_timeout(&c->irq_done,
msecs_to_jiffies(20));
if (result == 0) {
/* Timeout after 20ms */
ctrl = read_reg(c, ONENAND_REG_CTRL_STATUS);
if (ctrl & ONENAND_CTRL_ONGO) {
/*
* The operation seems to be still going
* so give it some more time.
*/
retry_cnt += 1;
if (retry_cnt < 3)
goto retry;
intr = read_reg(c,
ONENAND_REG_INTERRUPT);
wait_err("timeout", state, ctrl, intr);
return -EIO;
}
intr = read_reg(c, ONENAND_REG_INTERRUPT);
if ((intr & ONENAND_INT_MASTER) == 0)
wait_warn("timeout", state, ctrl, intr);
}
}
} else {
int retry_cnt = 0;
/* Turn interrupts off */
syscfg = read_reg(c, ONENAND_REG_SYS_CFG1);
syscfg &= ~ONENAND_SYS_CFG1_IOBE;
write_reg(c, syscfg, ONENAND_REG_SYS_CFG1);
timeout = jiffies + msecs_to_jiffies(20);
while (1) {
if (time_before(jiffies, timeout)) {
intr = read_reg(c, ONENAND_REG_INTERRUPT);
if (intr & ONENAND_INT_MASTER)
break;
} else {
/* Timeout after 20ms */
ctrl = read_reg(c, ONENAND_REG_CTRL_STATUS);
if (ctrl & ONENAND_CTRL_ONGO) {
/*
* The operation seems to be still going
* so give it some more time.
*/
retry_cnt += 1;
if (retry_cnt < 3) {
timeout = jiffies +
msecs_to_jiffies(20);
continue;
}
}
break;
}
}
}
intr = read_reg(c, ONENAND_REG_INTERRUPT);
ctrl = read_reg(c, ONENAND_REG_CTRL_STATUS);
if (intr & ONENAND_INT_READ) {
int ecc = read_reg(c, ONENAND_REG_ECC_STATUS);
if (ecc) {
unsigned int addr1, addr8;
addr1 = read_reg(c, ONENAND_REG_START_ADDRESS1);
addr8 = read_reg(c, ONENAND_REG_START_ADDRESS8);
if (ecc & ONENAND_ECC_2BIT_ALL) {
printk(KERN_ERR "onenand_wait: ECC error = "
"0x%04x, addr1 %#x, addr8 %#x\n",
ecc, addr1, addr8);
mtd->ecc_stats.failed++;
return -EBADMSG;
} else if (ecc & ONENAND_ECC_1BIT_ALL) {
printk(KERN_NOTICE "onenand_wait: correctable "
"ECC error = 0x%04x, addr1 %#x, "
"addr8 %#x\n", ecc, addr1, addr8);
mtd->ecc_stats.corrected++;
}
}
} else if (state == FL_READING) {
wait_err("timeout", state, ctrl, intr);
return -EIO;
}
if (ctrl & ONENAND_CTRL_ERROR) {
wait_err("controller error", state, ctrl, intr);
if (ctrl & ONENAND_CTRL_LOCK)
printk(KERN_ERR "onenand_wait: "
"Device is write protected!!!\n");
return -EIO;
}
if (ctrl & 0xFE9F)
wait_warn("unexpected controller status", state, ctrl, intr);
return 0;
}
static inline int omap2_onenand_bufferram_offset(struct mtd_info *mtd, int area)
{
struct onenand_chip *this = mtd->priv;
if (ONENAND_CURRENT_BUFFERRAM(this)) {
if (area == ONENAND_DATARAM)
return mtd->writesize;
if (area == ONENAND_SPARERAM)
return mtd->oobsize;
}
return 0;
}
#if defined(CONFIG_ARCH_OMAP3) || defined(MULTI_OMAP2)
static int omap3_onenand_read_bufferram(struct mtd_info *mtd, int area,
unsigned char *buffer, int offset,
size_t count)
{
struct omap2_onenand *c = container_of(mtd, struct omap2_onenand, mtd);
struct onenand_chip *this = mtd->priv;
dma_addr_t dma_src, dma_dst;
int bram_offset;
unsigned long timeout;
void *buf = (void *)buffer;
size_t xtra;
volatile unsigned *done;
bram_offset = omap2_onenand_bufferram_offset(mtd, area) + area + offset;
if (bram_offset & 3 || (size_t)buf & 3 || count < 384)
goto out_copy;
if (buf >= high_memory) {
struct page *p1;
if (((size_t)buf & PAGE_MASK) !=
((size_t)(buf + count - 1) & PAGE_MASK))
goto out_copy;
p1 = vmalloc_to_page(buf);
if (!p1)
goto out_copy;
buf = page_address(p1) + ((size_t)buf & ~PAGE_MASK);
}
xtra = count & 3;
if (xtra) {
count -= xtra;
memcpy(buf + count, this->base + bram_offset + count, xtra);
}
dma_src = c->phys_base + bram_offset;
dma_dst = dma_map_single(&c->pdev->dev, buf, count, DMA_FROM_DEVICE);
if (dma_mapping_error(&c->pdev->dev, dma_dst)) {
dev_err(&c->pdev->dev,
"Couldn't DMA map a %d byte buffer\n",
count);
goto out_copy;
}
omap_set_dma_transfer_params(c->dma_channel, OMAP_DMA_DATA_TYPE_S32,
count >> 2, 1, 0, 0, 0);
omap_set_dma_src_params(c->dma_channel, 0, OMAP_DMA_AMODE_POST_INC,
dma_src, 0, 0);
omap_set_dma_dest_params(c->dma_channel, 0, OMAP_DMA_AMODE_POST_INC,
dma_dst, 0, 0);
INIT_COMPLETION(c->dma_done);
omap_start_dma(c->dma_channel);
timeout = jiffies + msecs_to_jiffies(20);
done = &c->dma_done.done;
while (time_before(jiffies, timeout))
if (*done)
break;
dma_unmap_single(&c->pdev->dev, dma_dst, count, DMA_FROM_DEVICE);
if (!*done) {
dev_err(&c->pdev->dev, "timeout waiting for DMA\n");
goto out_copy;
}
return 0;
out_copy:
memcpy(buf, this->base + bram_offset, count);
return 0;
}
static int omap3_onenand_write_bufferram(struct mtd_info *mtd, int area,
const unsigned char *buffer,
int offset, size_t count)
{
struct omap2_onenand *c = container_of(mtd, struct omap2_onenand, mtd);
struct onenand_chip *this = mtd->priv;
dma_addr_t dma_src, dma_dst;
int bram_offset;
unsigned long timeout;
void *buf = (void *)buffer;
volatile unsigned *done;
bram_offset = omap2_onenand_bufferram_offset(mtd, area) + area + offset;
if (bram_offset & 3 || (size_t)buf & 3 || count < 384)
goto out_copy;
/* panic_write() may be in an interrupt context */
if (in_interrupt())
goto out_copy;
if (buf >= high_memory) {
struct page *p1;
if (((size_t)buf & PAGE_MASK) !=
((size_t)(buf + count - 1) & PAGE_MASK))
goto out_copy;
p1 = vmalloc_to_page(buf);
if (!p1)
goto out_copy;
buf = page_address(p1) + ((size_t)buf & ~PAGE_MASK);
}
dma_src = dma_map_single(&c->pdev->dev, buf, count, DMA_TO_DEVICE);
dma_dst = c->phys_base + bram_offset;
if (dma_mapping_error(&c->pdev->dev, dma_dst)) {
dev_err(&c->pdev->dev,
"Couldn't DMA map a %d byte buffer\n",
count);
return -1;
}
omap_set_dma_transfer_params(c->dma_channel, OMAP_DMA_DATA_TYPE_S32,
count >> 2, 1, 0, 0, 0);
omap_set_dma_src_params(c->dma_channel, 0, OMAP_DMA_AMODE_POST_INC,
dma_src, 0, 0);
omap_set_dma_dest_params(c->dma_channel, 0, OMAP_DMA_AMODE_POST_INC,
dma_dst, 0, 0);
INIT_COMPLETION(c->dma_done);
omap_start_dma(c->dma_channel);
timeout = jiffies + msecs_to_jiffies(20);
done = &c->dma_done.done;
while (time_before(jiffies, timeout))
if (*done)
break;
dma_unmap_single(&c->pdev->dev, dma_dst, count, DMA_TO_DEVICE);
if (!*done) {
dev_err(&c->pdev->dev, "timeout waiting for DMA\n");
goto out_copy;
}
return 0;
out_copy:
memcpy(this->base + bram_offset, buf, count);
return 0;
}
#else
int omap3_onenand_read_bufferram(struct mtd_info *mtd, int area,
unsigned char *buffer, int offset,
size_t count);
int omap3_onenand_write_bufferram(struct mtd_info *mtd, int area,
const unsigned char *buffer,
int offset, size_t count);
#endif
#if defined(CONFIG_ARCH_OMAP2) || defined(MULTI_OMAP2)
static int omap2_onenand_read_bufferram(struct mtd_info *mtd, int area,
unsigned char *buffer, int offset,
size_t count)
{
struct omap2_onenand *c = container_of(mtd, struct omap2_onenand, mtd);
struct onenand_chip *this = mtd->priv;
dma_addr_t dma_src, dma_dst;
int bram_offset;
bram_offset = omap2_onenand_bufferram_offset(mtd, area) + area + offset;
/* DMA is not used. Revisit PM requirements before enabling it. */
if (1 || (c->dma_channel < 0) ||
((void *) buffer >= (void *) high_memory) || (bram_offset & 3) ||
(((unsigned int) buffer) & 3) || (count < 1024) || (count & 3)) {
memcpy(buffer, (__force void *)(this->base + bram_offset),
count);
return 0;
}
dma_src = c->phys_base + bram_offset;
dma_dst = dma_map_single(&c->pdev->dev, buffer, count,
DMA_FROM_DEVICE);
if (dma_mapping_error(&c->pdev->dev, dma_dst)) {
dev_err(&c->pdev->dev,
"Couldn't DMA map a %d byte buffer\n",
count);
return -1;
}
omap_set_dma_transfer_params(c->dma_channel, OMAP_DMA_DATA_TYPE_S32,
count / 4, 1, 0, 0, 0);
omap_set_dma_src_params(c->dma_channel, 0, OMAP_DMA_AMODE_POST_INC,
dma_src, 0, 0);
omap_set_dma_dest_params(c->dma_channel, 0, OMAP_DMA_AMODE_POST_INC,
dma_dst, 0, 0);
INIT_COMPLETION(c->dma_done);
omap_start_dma(c->dma_channel);
wait_for_completion(&c->dma_done);
dma_unmap_single(&c->pdev->dev, dma_dst, count, DMA_FROM_DEVICE);
return 0;
}
static int omap2_onenand_write_bufferram(struct mtd_info *mtd, int area,
const unsigned char *buffer,
int offset, size_t count)
{
struct omap2_onenand *c = container_of(mtd, struct omap2_onenand, mtd);
struct onenand_chip *this = mtd->priv;
dma_addr_t dma_src, dma_dst;
int bram_offset;
bram_offset = omap2_onenand_bufferram_offset(mtd, area) + area + offset;
/* DMA is not used. Revisit PM requirements before enabling it. */
if (1 || (c->dma_channel < 0) ||
((void *) buffer >= (void *) high_memory) || (bram_offset & 3) ||
(((unsigned int) buffer) & 3) || (count < 1024) || (count & 3)) {
memcpy((__force void *)(this->base + bram_offset), buffer,
count);
return 0;
}
dma_src = dma_map_single(&c->pdev->dev, (void *) buffer, count,
DMA_TO_DEVICE);
dma_dst = c->phys_base + bram_offset;
if (dma_mapping_error(&c->pdev->dev, dma_dst)) {
dev_err(&c->pdev->dev,
"Couldn't DMA map a %d byte buffer\n",
count);
return -1;
}
omap_set_dma_transfer_params(c->dma_channel, OMAP_DMA_DATA_TYPE_S16,
count / 2, 1, 0, 0, 0);
omap_set_dma_src_params(c->dma_channel, 0, OMAP_DMA_AMODE_POST_INC,
dma_src, 0, 0);
omap_set_dma_dest_params(c->dma_channel, 0, OMAP_DMA_AMODE_POST_INC,
dma_dst, 0, 0);
INIT_COMPLETION(c->dma_done);
omap_start_dma(c->dma_channel);
wait_for_completion(&c->dma_done);
dma_unmap_single(&c->pdev->dev, dma_dst, count, DMA_TO_DEVICE);
return 0;
}
#else
int omap2_onenand_read_bufferram(struct mtd_info *mtd, int area,
unsigned char *buffer, int offset,
size_t count);
int omap2_onenand_write_bufferram(struct mtd_info *mtd, int area,
const unsigned char *buffer,
int offset, size_t count);
#endif
static struct platform_driver omap2_onenand_driver;
static int __adjust_timing(struct device *dev, void *data)
{
int ret = 0;
struct omap2_onenand *c;
c = dev_get_drvdata(dev);
BUG_ON(c->setup == NULL);
/* DMA is not in use so this is all that is needed */
/* Revisit for OMAP3! */
ret = c->setup(c->onenand.base, c->freq);
return ret;
}
int omap2_onenand_rephase(void)
{
return driver_for_each_device(&omap2_onenand_driver.driver, NULL,
NULL, __adjust_timing);
}
static void __devexit omap2_onenand_shutdown(struct platform_device *pdev)
{
struct omap2_onenand *c = dev_get_drvdata(&pdev->dev);
/* With certain content in the buffer RAM, the OMAP boot ROM code
* can recognize the flash chip incorrectly. Zero it out before
* soft reset.
*/
memset((__force void *)c->onenand.base, 0, ONENAND_BUFRAM_SIZE);
}
static int __devinit omap2_onenand_probe(struct platform_device *pdev)
{
struct omap_onenand_platform_data *pdata;
struct omap2_onenand *c;
int r;
pdata = pdev->dev.platform_data;
if (pdata == NULL) {
dev_err(&pdev->dev, "platform data missing\n");
return -ENODEV;
}
c = kzalloc(sizeof(struct omap2_onenand), GFP_KERNEL);
if (!c)
return -ENOMEM;
init_completion(&c->irq_done);
init_completion(&c->dma_done);
c->gpmc_cs = pdata->cs;
c->gpio_irq = pdata->gpio_irq;
c->dma_channel = pdata->dma_channel;
if (c->dma_channel < 0) {
/* if -1, don't use DMA */
c->gpio_irq = 0;
}
r = gpmc_cs_request(c->gpmc_cs, ONENAND_IO_SIZE, &c->phys_base);
if (r < 0) {
dev_err(&pdev->dev, "Cannot request GPMC CS\n");
goto err_kfree;
}
if (request_mem_region(c->phys_base, ONENAND_IO_SIZE,
pdev->dev.driver->name) == NULL) {
dev_err(&pdev->dev, "Cannot reserve memory region at 0x%08lx, "
"size: 0x%x\n", c->phys_base, ONENAND_IO_SIZE);
r = -EBUSY;
goto err_free_cs;
}
c->onenand.base = ioremap(c->phys_base, ONENAND_IO_SIZE);
if (c->onenand.base == NULL) {
r = -ENOMEM;
goto err_release_mem_region;
}
if (pdata->onenand_setup != NULL) {
r = pdata->onenand_setup(c->onenand.base, c->freq);
if (r < 0) {
dev_err(&pdev->dev, "Onenand platform setup failed: "
"%d\n", r);
goto err_iounmap;
}
c->setup = pdata->onenand_setup;
}
if (c->gpio_irq) {
if ((r = omap_request_gpio(c->gpio_irq)) < 0) {
dev_err(&pdev->dev, "Failed to request GPIO%d for "
"OneNAND\n", c->gpio_irq);
goto err_iounmap;
}
omap_set_gpio_direction(c->gpio_irq, 1);
if ((r = request_irq(OMAP_GPIO_IRQ(c->gpio_irq),
omap2_onenand_interrupt, IRQF_TRIGGER_RISING,
pdev->dev.driver->name, c)) < 0)
goto err_release_gpio;
}
if (c->dma_channel >= 0) {
r = omap_request_dma(0, pdev->dev.driver->name,
omap2_onenand_dma_cb, (void *) c,
&c->dma_channel);
if (r == 0) {
omap_set_dma_write_mode(c->dma_channel,
OMAP_DMA_WRITE_NON_POSTED);
omap_set_dma_src_data_pack(c->dma_channel, 1);
omap_set_dma_src_burst_mode(c->dma_channel,
OMAP_DMA_DATA_BURST_8);
omap_set_dma_dest_data_pack(c->dma_channel, 1);
omap_set_dma_dest_burst_mode(c->dma_channel,
OMAP_DMA_DATA_BURST_8);
} else {
dev_info(&pdev->dev,
"failed to allocate DMA for OneNAND, "
"using PIO instead\n");
c->dma_channel = -1;
}
}
dev_info(&pdev->dev, "initializing on CS%d, phys base 0x%08lx, virtual "
"base %p\n", c->gpmc_cs, c->phys_base,
c->onenand.base);
c->pdev = pdev;
c->mtd.name = pdev->dev.bus_id;
c->mtd.priv = &c->onenand;
c->mtd.owner = THIS_MODULE;
if (c->dma_channel >= 0) {
struct onenand_chip *this = &c->onenand;
this->wait = omap2_onenand_wait;
if (cpu_is_omap34xx()) {
this->read_bufferram = omap3_onenand_read_bufferram;
this->write_bufferram = omap3_onenand_write_bufferram;
} else {
this->read_bufferram = omap2_onenand_read_bufferram;
this->write_bufferram = omap2_onenand_write_bufferram;
}
}
if ((r = onenand_scan(&c->mtd, 1)) < 0)
goto err_release_dma;
switch ((c->onenand.version_id >> 4) & 0xf) {
case 0:
c->freq = 40;
break;
case 1:
c->freq = 54;
break;
case 2:
c->freq = 66;
break;
case 3:
c->freq = 83;
break;
}
#ifdef CONFIG_MTD_PARTITIONS
if (pdata->parts != NULL)
r = add_mtd_partitions(&c->mtd, pdata->parts,
pdata->nr_parts);
else
#endif
r = add_mtd_device(&c->mtd);
if (r < 0)
goto err_release_onenand;
platform_set_drvdata(pdev, c);
return 0;
err_release_onenand:
onenand_release(&c->mtd);
err_release_dma:
if (c->dma_channel != -1)
omap_free_dma(c->dma_channel);
if (c->gpio_irq)
free_irq(OMAP_GPIO_IRQ(c->gpio_irq), c);
err_release_gpio:
if (c->gpio_irq)
omap_free_gpio(c->gpio_irq);
err_iounmap:
iounmap(c->onenand.base);
err_release_mem_region:
release_mem_region(c->phys_base, ONENAND_IO_SIZE);
err_free_cs:
gpmc_cs_free(c->gpmc_cs);
err_kfree:
kfree(c);
return r;
}
static int __devexit omap2_onenand_remove(struct platform_device *pdev)
{
struct omap2_onenand *c = dev_get_drvdata(&pdev->dev);
BUG_ON(c == NULL);
#ifdef CONFIG_MTD_PARTITIONS
if (c->parts)
del_mtd_partitions(&c->mtd);
else
del_mtd_device(&c->mtd);
#else
del_mtd_device(&c->mtd);
#endif
onenand_release(&c->mtd);
if (c->dma_channel != -1)
omap_free_dma(c->dma_channel);
omap2_onenand_shutdown(pdev);
platform_set_drvdata(pdev, NULL);
if (c->gpio_irq) {
free_irq(OMAP_GPIO_IRQ(c->gpio_irq), c);
omap_free_gpio(c->gpio_irq);
}
iounmap(c->onenand.base);
release_mem_region(c->phys_base, ONENAND_IO_SIZE);
kfree(c);
return 0;
}
static struct platform_driver omap2_onenand_driver = {
.probe = omap2_onenand_probe,
.remove = omap2_onenand_remove,
.shutdown = omap2_onenand_shutdown,
.driver = {
.name = DRIVER_NAME,
.owner = THIS_MODULE,
},
};
static int __init omap2_onenand_init(void)
{
printk(KERN_INFO "OneNAND driver initializing\n");
return platform_driver_register(&omap2_onenand_driver);
}
static void __exit omap2_onenand_exit(void)
{
platform_driver_unregister(&omap2_onenand_driver);
}
module_init(omap2_onenand_init);
module_exit(omap2_onenand_exit);
MODULE_ALIAS(DRIVER_NAME);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Jarkko Lavinen <jarkko.lavinen@nokia.com>");
MODULE_DESCRIPTION("Glue layer for OneNAND flash on OMAP2 / OMAP3");


@@ -1794,7 +1794,7 @@ static int onenand_erase(struct mtd_info *mtd, struct erase_info *instr)
 		return -EINVAL;
 	}
-	instr->fail_addr = 0xffffffff;
+	instr->fail_addr = MTD_FAIL_ADDR_UNKNOWN;
 	/* Grab the lock and see if the device is available */
 	onenand_get_device(mtd, FL_ERASING);


@@ -321,8 +321,7 @@ static void ssfdcr_add_mtd(struct mtd_blktrans_ops *tr, struct mtd_info *mtd)
 	DEBUG(MTD_DEBUG_LEVEL1,
 		"SSFDC_RO: cis_block=%d,erase_size=%d,map_len=%d,n_zones=%d\n",
 		ssfdc->cis_block, ssfdc->erase_size, ssfdc->map_len,
-		(ssfdc->map_len + MAX_PHYS_BLK_PER_ZONE - 1) /
-		MAX_PHYS_BLK_PER_ZONE);
+		DIV_ROUND_UP(ssfdc->map_len, MAX_PHYS_BLK_PER_ZONE));
 	/* Set geometry */
 	ssfdc->heads = 16;
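The new call uses the kernel's DIV_ROUND_UP() helper from <linux/kernel.h>, which is defined as (((n) + (d) - 1) / (d)) and therefore computes exactly the expression it replaces. A small stand-alone C sketch of that identity (plain user-space code, not part of the patch):

#include <assert.h>

/* Same definition as the helper in <linux/kernel.h>. */
#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	/* Rounds up: 2048 items in chunks of 1024 need 2 chunks, 2049 need 3. */
	assert(DIV_ROUND_UP(2048, 1024) == 2);
	assert(DIV_ROUND_UP(2049, 1024) == 3);
	assert(DIV_ROUND_UP(1, 1024) == 1);
	return 0;
}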


@@ -104,12 +104,9 @@ static int vol_cdev_open(struct inode *inode, struct file *file)
 	struct ubi_volume_desc *desc;
 	int vol_id = iminor(inode) - 1, mode, ubi_num;
-	lock_kernel();
 	ubi_num = ubi_major2num(imajor(inode));
-	if (ubi_num < 0) {
-		unlock_kernel();
+	if (ubi_num < 0)
 		return ubi_num;
-	}
 	if (file->f_mode & FMODE_WRITE)
 		mode = UBI_READWRITE;
@@ -119,7 +116,6 @@ static int vol_cdev_open(struct inode *inode, struct file *file)
 	dbg_gen("open volume %d, mode %d", vol_id, mode);
 	desc = ubi_open_volume(ubi_num, vol_id, mode);
-	unlock_kernel();
 	if (IS_ERR(desc))
 		return PTR_ERR(desc);


@@ -387,7 +387,7 @@ int ubi_scan_add_used(struct ubi_device *ubi, struct ubi_scan_info *si,
 		pnum, vol_id, lnum, ec, sqnum, bitflips);
 	sv = add_volume(si, vol_id, pnum, vid_hdr);
-	if (IS_ERR(sv) < 0)
+	if (IS_ERR(sv))
 		return PTR_ERR(sv);
 	if (si->max_sqnum < sqnum)
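IS_ERR() only ever returns 0 or 1, so the old test "IS_ERR(sv) < 0" could never be true and an error pointer returned by add_volume() would never be caught here. A simplified user-space sketch of the ERR_PTR/IS_ERR/PTR_ERR convention the fixed test relies on (the helpers are re-implemented below for illustration; the kernel versions live in <linux/err.h>):

#include <stdio.h>

#define MAX_ERRNO	4095
#define IS_ERR_VALUE(x)	((x) >= (unsigned long)-MAX_ERRNO)

static inline void *ERR_PTR(long error)      { return (void *)error; }
static inline long  PTR_ERR(const void *ptr) { return (long)ptr; }
static inline long  IS_ERR(const void *ptr)  { return IS_ERR_VALUE((unsigned long)ptr); }

int main(void)
{
	void *sv = ERR_PTR(-12);	/* e.g. an -ENOMEM error pointer */

	if (IS_ERR(sv) < 0)		/* old test: IS_ERR() is 0 or 1, never negative */
		printf("never reached\n");
	if (IS_ERR(sv))			/* fixed test */
		printf("error %ld\n", PTR_ERR(sv));
	return 0;
}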


@@ -244,8 +244,8 @@ static int vtbl_check(const struct ubi_device *ubi,
 	}
 	if (reserved_pebs > ubi->good_peb_count) {
-		dbg_err("too large reserved_pebs, good PEBs %d",
-			ubi->good_peb_count);
+		dbg_err("too large reserved_pebs %d, good PEBs %d",
+			reserved_pebs, ubi->good_peb_count);
 		err = 9;
 		goto bad;
 	}


@@ -21,7 +21,7 @@
  * between the ROM and other resources, so enabling it may disable access
  * to MMIO registers or other card memory.
  */
-static int pci_enable_rom(struct pci_dev *pdev)
+int pci_enable_rom(struct pci_dev *pdev)
 {
 	struct resource *res = pdev->resource + PCI_ROM_RESOURCE;
 	struct pci_bus_region region;
@@ -45,7 +45,7 @@ static int pci_enable_rom(struct pci_dev *pdev)
  * Disable ROM decoding on a PCI device by turning off the last bit in the
  * ROM BAR.
  */
-static void pci_disable_rom(struct pci_dev *pdev)
+void pci_disable_rom(struct pci_dev *pdev)
 {
 	u32 rom_addr;
 	pci_read_config_dword(pdev, pdev->rom_base_reg, &rom_addr);
@@ -260,3 +260,5 @@ void pci_cleanup_rom(struct pci_dev *pdev)
 EXPORT_SYMBOL(pci_map_rom);
 EXPORT_SYMBOL(pci_unmap_rom);
+EXPORT_SYMBOL_GPL(pci_enable_rom);
+EXPORT_SYMBOL_GPL(pci_disable_rom);


@@ -1168,195 +1168,7 @@ config EFS_FS
To compile the EFS file system support as a module, choose M here: the
module will be called efs.
config JFFS2_FS
tristate "Journalling Flash File System v2 (JFFS2) support"
select CRC32
depends on MTD
help
JFFS2 is the second generation of the Journalling Flash File System
for use on diskless embedded devices. It provides improved wear
levelling, compression and support for hard links. You cannot use
this on normal block devices, only on 'MTD' devices.
Further information on the design and implementation of JFFS2 is
available at <http://sources.redhat.com/jffs2/>.
config JFFS2_FS_DEBUG
int "JFFS2 debugging verbosity (0 = quiet, 2 = noisy)"
depends on JFFS2_FS
default "0"
help
This controls the amount of debugging messages produced by the JFFS2
code. Set it to zero for use in production systems. For evaluation,
testing and debugging, it's advisable to set it to one. This will
enable a few assertions and will print debugging messages at the
KERN_DEBUG loglevel, where they won't normally be visible. Level 2
is unlikely to be useful - it enables extra debugging in certain
areas which at one point needed debugging, but when the bugs were
located and fixed, the detailed messages were relegated to level 2.
If reporting bugs, please try to have available a full dump of the
messages at debug level 1 while the misbehaviour was occurring.
config JFFS2_FS_WRITEBUFFER
bool "JFFS2 write-buffering support"
depends on JFFS2_FS
default y
help
This enables the write-buffering support in JFFS2.
This functionality is required to support JFFS2 on the following
types of flash devices:
- NAND flash
- NOR flash with transparent ECC
- DataFlash
config JFFS2_FS_WBUF_VERIFY
bool "Verify JFFS2 write-buffer reads"
depends on JFFS2_FS_WRITEBUFFER
default n
help
This causes JFFS2 to read back every page written through the
write-buffer, and check for errors.
config JFFS2_SUMMARY
bool "JFFS2 summary support (EXPERIMENTAL)"
depends on JFFS2_FS && EXPERIMENTAL
default n
help
This feature makes it possible to use summary information
for faster filesystem mount.
The summary information can be inserted into a filesystem image
by the utility 'sumtool'.
If unsure, say 'N'.
config JFFS2_FS_XATTR
bool "JFFS2 XATTR support (EXPERIMENTAL)"
depends on JFFS2_FS && EXPERIMENTAL
default n
help
Extended attributes are name:value pairs associated with inodes by
the kernel or by users (see the attr(5) manual page, or visit
<http://acl.bestbits.at/> for details).
If unsure, say N.
config JFFS2_FS_POSIX_ACL
bool "JFFS2 POSIX Access Control Lists"
depends on JFFS2_FS_XATTR
default y
select FS_POSIX_ACL
help
Posix Access Control Lists (ACLs) support permissions for users and
groups beyond the owner/group/world scheme.
To learn more about Access Control Lists, visit the Posix ACLs for
Linux website <http://acl.bestbits.at/>.
If you don't know what Access Control Lists are, say N
config JFFS2_FS_SECURITY
bool "JFFS2 Security Labels"
depends on JFFS2_FS_XATTR
default y
help
Security labels support alternative access control models
implemented by security modules like SELinux. This option
enables an extended attribute handler for file security
labels in the jffs2 filesystem.
If you are not using a security module that requires using
extended attributes for file security labels, say N.
config JFFS2_COMPRESSION_OPTIONS
bool "Advanced compression options for JFFS2"
depends on JFFS2_FS
default n
help
Enabling this option allows you to explicitly choose which
compression modules, if any, are enabled in JFFS2. Removing
compressors can mean you cannot read existing file systems,
and enabling experimental compressors can mean that you
write a file system which cannot be read by a standard kernel.
If unsure, you should _definitely_ say 'N'.
config JFFS2_ZLIB
bool "JFFS2 ZLIB compression support" if JFFS2_COMPRESSION_OPTIONS
select ZLIB_INFLATE
select ZLIB_DEFLATE
depends on JFFS2_FS
default y
help
Zlib is designed to be a free, general-purpose, legally unencumbered,
lossless data-compression library for use on virtually any computer
hardware and operating system. See <http://www.gzip.org/zlib/> for
further information.
Say 'Y' if unsure.
config JFFS2_LZO
bool "JFFS2 LZO compression support" if JFFS2_COMPRESSION_OPTIONS
select LZO_COMPRESS
select LZO_DECOMPRESS
depends on JFFS2_FS
default n
help
minilzo-based compression. Generally works better than Zlib.
This feature was added in July, 2007. Say 'N' if you need
compatibility with older bootloaders or kernels.
config JFFS2_RTIME
bool "JFFS2 RTIME compression support" if JFFS2_COMPRESSION_OPTIONS
depends on JFFS2_FS
default y
help
Rtime does manage to recompress already-compressed data. Say 'Y' if unsure.
config JFFS2_RUBIN
bool "JFFS2 RUBIN compression support" if JFFS2_COMPRESSION_OPTIONS
depends on JFFS2_FS
default n
help
RUBINMIPS and DYNRUBIN compressors. Say 'N' if unsure.
choice
prompt "JFFS2 default compression mode" if JFFS2_COMPRESSION_OPTIONS
default JFFS2_CMODE_PRIORITY
depends on JFFS2_FS
help
You can set here the default compression mode of JFFS2 from
the available compression modes. Don't touch if unsure.
config JFFS2_CMODE_NONE
bool "no compression"
help
Uses no compression.
config JFFS2_CMODE_PRIORITY
bool "priority"
help
Tries the compressors in a predefined order and chooses the first
successful one.
config JFFS2_CMODE_SIZE
bool "size (EXPERIMENTAL)"
help
Tries all compressors and chooses the one which has the smallest
result.
config JFFS2_CMODE_FAVOURLZO
bool "Favour LZO"
help
Tries all compressors and chooses the one which has the smallest
result but gives some preference to LZO (which has faster
decompression) at the expense of size.
endchoice
source "fs/jffs2/Kconfig"
# UBIFS File system configuration
source "fs/ubifs/Kconfig"

fs/jffs2/Kconfig

@@ -0,0 +1,188 @@
config JFFS2_FS
tristate "Journalling Flash File System v2 (JFFS2) support"
select CRC32
depends on MTD
help
JFFS2 is the second generation of the Journalling Flash File System
for use on diskless embedded devices. It provides improved wear
levelling, compression and support for hard links. You cannot use
this on normal block devices, only on 'MTD' devices.
Further information on the design and implementation of JFFS2 is
available at <http://sources.redhat.com/jffs2/>.
config JFFS2_FS_DEBUG
int "JFFS2 debugging verbosity (0 = quiet, 2 = noisy)"
depends on JFFS2_FS
default "0"
help
This controls the amount of debugging messages produced by the JFFS2
code. Set it to zero for use in production systems. For evaluation,
testing and debugging, it's advisable to set it to one. This will
enable a few assertions and will print debugging messages at the
KERN_DEBUG loglevel, where they won't normally be visible. Level 2
is unlikely to be useful - it enables extra debugging in certain
areas which at one point needed debugging, but when the bugs were
located and fixed, the detailed messages were relegated to level 2.
If reporting bugs, please try to have available a full dump of the
messages at debug level 1 while the misbehaviour was occurring.
config JFFS2_FS_WRITEBUFFER
bool "JFFS2 write-buffering support"
depends on JFFS2_FS
default y
help
This enables the write-buffering support in JFFS2.
This functionality is required to support JFFS2 on the following
types of flash devices:
- NAND flash
- NOR flash with transparent ECC
- DataFlash
config JFFS2_FS_WBUF_VERIFY
bool "Verify JFFS2 write-buffer reads"
depends on JFFS2_FS_WRITEBUFFER
default n
help
This causes JFFS2 to read back every page written through the
write-buffer, and check for errors.
config JFFS2_SUMMARY
bool "JFFS2 summary support (EXPERIMENTAL)"
depends on JFFS2_FS && EXPERIMENTAL
default n
help
This feature makes it possible to use summary information
for faster filesystem mount.
The summary information can be inserted into a filesystem image
by the utility 'sumtool'.
If unsure, say 'N'.
config JFFS2_FS_XATTR
bool "JFFS2 XATTR support (EXPERIMENTAL)"
depends on JFFS2_FS && EXPERIMENTAL
default n
help
Extended attributes are name:value pairs associated with inodes by
the kernel or by users (see the attr(5) manual page, or visit
<http://acl.bestbits.at/> for details).
If unsure, say N.
config JFFS2_FS_POSIX_ACL
bool "JFFS2 POSIX Access Control Lists"
depends on JFFS2_FS_XATTR
default y
select FS_POSIX_ACL
help
Posix Access Control Lists (ACLs) support permissions for users and
groups beyond the owner/group/world scheme.
To learn more about Access Control Lists, visit the Posix ACLs for
Linux website <http://acl.bestbits.at/>.
If you don't know what Access Control Lists are, say N
config JFFS2_FS_SECURITY
bool "JFFS2 Security Labels"
depends on JFFS2_FS_XATTR
default y
help
Security labels support alternative access control models
implemented by security modules like SELinux. This option
enables an extended attribute handler for file security
labels in the jffs2 filesystem.
If you are not using a security module that requires using
extended attributes for file security labels, say N.
config JFFS2_COMPRESSION_OPTIONS
bool "Advanced compression options for JFFS2"
depends on JFFS2_FS
default n
help
Enabling this option allows you to explicitly choose which
compression modules, if any, are enabled in JFFS2. Removing
compressors can mean you cannot read existing file systems,
and enabling experimental compressors can mean that you
write a file system which cannot be read by a standard kernel.
If unsure, you should _definitely_ say 'N'.
config JFFS2_ZLIB
bool "JFFS2 ZLIB compression support" if JFFS2_COMPRESSION_OPTIONS
select ZLIB_INFLATE
select ZLIB_DEFLATE
depends on JFFS2_FS
default y
help
Zlib is designed to be a free, general-purpose, legally unencumbered,
lossless data-compression library for use on virtually any computer
hardware and operating system. See <http://www.gzip.org/zlib/> for
further information.
Say 'Y' if unsure.
config JFFS2_LZO
bool "JFFS2 LZO compression support" if JFFS2_COMPRESSION_OPTIONS
select LZO_COMPRESS
select LZO_DECOMPRESS
depends on JFFS2_FS
default n
help
minilzo-based compression. Generally works better than Zlib.
This feature was added in July, 2007. Say 'N' if you need
compatibility with older bootloaders or kernels.
config JFFS2_RTIME
bool "JFFS2 RTIME compression support" if JFFS2_COMPRESSION_OPTIONS
depends on JFFS2_FS
default y
help
Rtime does manage to recompress already-compressed data. Say 'Y' if unsure.
config JFFS2_RUBIN
bool "JFFS2 RUBIN compression support" if JFFS2_COMPRESSION_OPTIONS
depends on JFFS2_FS
default n
help
RUBINMIPS and DYNRUBIN compressors. Say 'N' if unsure.
choice
prompt "JFFS2 default compression mode" if JFFS2_COMPRESSION_OPTIONS
default JFFS2_CMODE_PRIORITY
depends on JFFS2_FS
help
You can set here the default compression mode of JFFS2 from
the available compression modes. Don't touch if unsure.
config JFFS2_CMODE_NONE
bool "no compression"
help
Uses no compression.
config JFFS2_CMODE_PRIORITY
bool "priority"
help
Tries the compressors in a predefined order and chooses the first
successful one.
config JFFS2_CMODE_SIZE
bool "size (EXPERIMENTAL)"
help
Tries all compressors and chooses the one which has the smallest
result.
config JFFS2_CMODE_FAVOURLZO
bool "Favour LZO"
help
Tries all compressors and chooses the one which has the smallest
result but gives some preference to LZO (which has faster
decompression) at the expense of size.
endchoice


@@ -53,8 +53,8 @@ static int jffs2_is_best_compression(struct jffs2_compressor *this,
 }
 /* jffs2_compress:
- * @data: Pointer to uncompressed data
- * @cdata: Pointer to returned pointer to buffer for compressed data
+ * @data_in: Pointer to uncompressed data
+ * @cpage_out: Pointer to returned pointer to buffer for compressed data
  * @datalen: On entry, holds the amount of data available for compression.
  *	On exit, expected to hold the amount of data actually compressed.
  * @cdatalen: On entry, holds the amount of space available for compressed


@@ -311,7 +311,7 @@ static int jffs2_symlink (struct inode *dir_i, struct dentry *dentry, const char
 	/* FIXME: If you care. We'd need to use frags for the target
 	   if it grows much more than this */
 	if (targetlen > 254)
-		return -EINVAL;
+		return -ENAMETOOLONG;
 	ri = jffs2_alloc_raw_inode();


@@ -68,7 +68,7 @@ static void jffs2_erase_block(struct jffs2_sb_info *c,
 	instr->len = c->sector_size;
 	instr->callback = jffs2_erase_callback;
 	instr->priv = (unsigned long)(&instr[1]);
-	instr->fail_addr = 0xffffffff;
+	instr->fail_addr = MTD_FAIL_ADDR_UNKNOWN;
 	((struct erase_priv_struct *)instr->priv)->jeb = jeb;
 	((struct erase_priv_struct *)instr->priv)->c = c;
@@ -175,7 +175,7 @@ static void jffs2_erase_failed(struct jffs2_sb_info *c, struct jffs2_eraseblock
 {
 	/* For NAND, if the failure did not occur at the device level for a
 	   specific physical page, don't bother updating the bad block table. */
-	if (jffs2_cleanmarker_oob(c) && (bad_offset != 0xffffffff)) {
+	if (jffs2_cleanmarker_oob(c) && (bad_offset != MTD_FAIL_ADDR_UNKNOWN)) {
 		/* We had a device-level failure to erase. Let's see if we've
 		   failed too many times. */
 		if (!jffs2_write_nand_badblock(c, jeb, bad_offset)) {


@@ -207,6 +207,8 @@ int jffs2_statfs(struct dentry *dentry, struct kstatfs *buf)
 	buf->f_files = 0;
 	buf->f_ffree = 0;
 	buf->f_namelen = JFFS2_MAX_NAME_LEN;
+	buf->f_fsid.val[0] = JFFS2_SUPER_MAGIC;
+	buf->f_fsid.val[1] = c->mtd->index;
 	spin_lock(&c->erase_completion_lock);
 	avail = c->dirty_size + c->free_size;
@@ -440,14 +442,14 @@ struct inode *jffs2_new_inode (struct inode *dir_i, int mode, struct jffs2_raw_i
 	memset(ri, 0, sizeof(*ri));
 	/* Set OS-specific defaults for new inodes */
-	ri->uid = cpu_to_je16(current->fsuid);
+	ri->uid = cpu_to_je16(current_fsuid());
 	if (dir_i->i_mode & S_ISGID) {
 		ri->gid = cpu_to_je16(dir_i->i_gid);
 		if (S_ISDIR(mode))
 			mode |= S_ISGID;
 	} else {
-		ri->gid = cpu_to_je16(current->fsgid);
+		ri->gid = cpu_to_je16(current_fsgid());
 	}
 	/* POSIX ACLs have to be processed now, at least partly.


@@ -261,6 +261,10 @@ static int jffs2_find_nextblock(struct jffs2_sb_info *c)
 	jffs2_sum_reset_collected(c->summary); /* reset collected summary */
+	/* adjust write buffer offset, else we get a non contiguous write bug */
+	if (!(c->wbuf_ofs % c->sector_size) && !c->wbuf_len)
+		c->wbuf_ofs = 0xffffffff;
 	D1(printk(KERN_DEBUG "jffs2_find_nextblock(): new nextblock = 0x%08x\n", c->nextblock->offset));
 	return 0;


@@ -679,10 +679,7 @@ static int __jffs2_flush_wbuf(struct jffs2_sb_info *c, int pad)
 	memset(c->wbuf,0xff,c->wbuf_pagesize);
 	/* adjust write buffer offset, else we get a non contiguous write bug */
-	if (SECTOR_ADDR(c->wbuf_ofs) == SECTOR_ADDR(c->wbuf_ofs+c->wbuf_pagesize))
-		c->wbuf_ofs += c->wbuf_pagesize;
-	else
-		c->wbuf_ofs = 0xffffffff;
+	c->wbuf_ofs += c->wbuf_pagesize;
 	c->wbuf_len = 0;
 	return 0;
 }


@@ -12,6 +12,7 @@
 #include <linux/mtd/flashchip.h>
 #include <linux/mtd/map.h>
 #include <linux/mtd/cfi_endian.h>
+#include <linux/mtd/xip.h>
 #ifdef CONFIG_MTD_CFI_I1
 #define cfi_interleave(cfi) 1
@@ -430,7 +431,6 @@ static inline uint32_t cfi_send_gen_cmd(u_char cmd, uint32_t cmd_addr, uint32_t
 {
 	map_word val;
 	uint32_t addr = base + cfi_build_cmd_addr(cmd_addr, cfi_interleave(cfi), type);
 	val = cfi_build_cmd(cmd, map, cfi);
 	if (prev_val)
@@ -483,6 +483,13 @@ static inline void cfi_udelay(int us)
 	}
 }
+int __xipram cfi_qry_present(struct map_info *map, __u32 base,
+			     struct cfi_private *cfi);
+int __xipram cfi_qry_mode_on(uint32_t base, struct map_info *map,
+			     struct cfi_private *cfi);
+void __xipram cfi_qry_mode_off(uint32_t base, struct map_info *map,
+			       struct cfi_private *cfi);
 struct cfi_extquery *cfi_read_pri(struct map_info *map, uint16_t adr, uint16_t size,
 			  const char* name);
 struct cfi_fixup {
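The three cfi_qry_* declarations added above wrap entering and leaving CFI query ("QRY") mode. A hedged sketch of how a probe path might use them; the function below, its name and its return convention are illustrative assumptions, only the three helper calls come from the declarations above:

/* Hypothetical caller: probe one chip using the new query-mode helpers. */
static int example_probe_chip(struct map_info *map, uint32_t base,
			      struct cfi_private *cfi)
{
	if (!cfi_qry_mode_on(base, map, cfi))
		return 0;		/* no CFI query response at this base */

	/* ... read the CFI ident and geometry fields while QRY is active ... */

	cfi_qry_mode_off(base, map, cfi);	/* back to read-array mode */
	return 1;
}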


@@ -73,6 +73,10 @@ struct flchip {
 	int buffer_write_time;
 	int erase_time;
+	int word_write_time_max;
+	int buffer_write_time_max;
+	int erase_time_max;
 	void *priv;
 };


@@ -25,8 +25,10 @@
 #define MTD_ERASE_DONE 0x08
 #define MTD_ERASE_FAILED 0x10
+#define MTD_FAIL_ADDR_UNKNOWN 0xffffffff
 /* If the erase fails, fail_addr might indicate exactly which block failed. If
-   fail_addr = 0xffffffff, the failure was not at the device level or was not
+   fail_addr = MTD_FAIL_ADDR_UNKNOWN, the failure was not at the device level or was not
    specific to any particular block. */
 struct erase_info {
 	struct mtd_info *mtd;
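With the named constant in place, an erase caller can distinguish "this exact block failed" from "no block-level information". A hedged sketch of that pattern, modelled on the JFFS2 erase path changed earlier in this series; the helper below and its policy are illustrative, not part of this patch:

/* Hypothetical helper: react to a failed erase request. */
static void example_handle_erase_failure(struct mtd_info *mtd,
					 struct erase_info *instr)
{
	if (instr->state != MTD_ERASE_FAILED)
		return;

	if (instr->fail_addr == MTD_FAIL_ADDR_UNKNOWN)
		return;		/* failure not tied to one block, nothing to mark */

	if (mtd->block_markbad)
		mtd->block_markbad(mtd, instr->fail_addr);	/* mark the reported block bad */
}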


@@ -0,0 +1,19 @@
#ifndef __LINUX_MTD_NAND_GPIO_H
#define __LINUX_MTD_NAND_GPIO_H
#include <linux/mtd/nand.h>
struct gpio_nand_platdata {
int gpio_nce;
int gpio_nwp;
int gpio_cle;
int gpio_ale;
int gpio_rdy;
void (*adjust_parts)(struct gpio_nand_platdata *, size_t);
struct mtd_partition *parts;
unsigned int num_parts;
unsigned int options;
int chip_delay;
};
#endif
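A hedged example of the board-level platform data this header expects, assuming <linux/mtd/partitions.h> is also included; the GPIO numbers, partition layout and chip delay below are made up for illustration and not taken from any real board file:

static struct mtd_partition example_nand_parts[] = {
	{
		.name	= "bootloader",
		.offset	= 0,
		.size	= 1024 * 1024,
	}, {
		.name	= "rootfs",
		.offset	= MTDPART_OFS_APPEND,
		.size	= MTDPART_SIZ_FULL,
	},
};

static struct gpio_nand_platdata example_nand_pdata = {
	.gpio_nce	= 10,		/* hypothetical GPIO numbers */
	.gpio_nwp	= 11,
	.gpio_cle	= 12,
	.gpio_ale	= 13,
	.gpio_rdy	= 14,
	.parts		= example_nand_parts,
	.num_parts	= ARRAY_SIZE(example_nand_parts),
	.chip_delay	= 25,
};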


@@ -248,6 +248,7 @@ struct nand_hw_control {
  * @read_page_raw: function to read a raw page without ECC
  * @write_page_raw: function to write a raw page without ECC
  * @read_page: function to read a page according to the ecc generator requirements
+ * @read_subpage: function to read parts of the page covered by ECC.
  * @write_page: function to write a page according to the ecc generator requirements
  * @read_oob: function to read chip OOB data
  * @write_oob: function to write chip OOB data


@@ -152,6 +152,8 @@
 #define ONENAND_SYS_CFG1_INT (1 << 6)
 #define ONENAND_SYS_CFG1_IOBE (1 << 5)
 #define ONENAND_SYS_CFG1_RDY_CONF (1 << 4)
+#define ONENAND_SYS_CFG1_HF (1 << 2)
+#define ONENAND_SYS_CFG1_SYNC_WRITE (1 << 1)
 /*
  * Controller Status Register F240h (R)
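The two new SYS_CFG1 bits cover the high-frequency (HF) and synchronous-write modes. A hedged sketch of how a host driver might set one of them; the helper below and its use of the chip's read_word/write_word accessors are assumptions, only the register and bit names come from this header:

/* Hypothetical helper: turn on synchronous writes in SYS_CFG1. */
static void example_enable_sync_write(struct onenand_chip *this)
{
	unsigned short cfg;

	cfg = this->read_word(this->base + ONENAND_REG_SYS_CFG1);
	cfg |= ONENAND_SYS_CFG1_SYNC_WRITE;
	this->write_word(cfg, this->base + ONENAND_REG_SYS_CFG1);
}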


@@ -73,7 +73,6 @@ struct device;
 struct device_node;
 int __devinit of_mtd_parse_partitions(struct device *dev,
-				      struct mtd_info *mtd,
 				      struct device_node *node,
 				      struct mtd_partition **pparts);


@@ -0,0 +1,125 @@
/*
* SuperH FLCTL nand controller
*
* Copyright © 2008 Renesas Solutions Corp.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; version 2 of the License.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef __SH_FLCTL_H__
#define __SH_FLCTL_H__
#include <linux/mtd/mtd.h>
#include <linux/mtd/nand.h>
#include <linux/mtd/partitions.h>
/* FLCTL registers */
#define FLCMNCR(f) (f->reg + 0x0)
#define FLCMDCR(f) (f->reg + 0x4)
#define FLCMCDR(f) (f->reg + 0x8)
#define FLADR(f) (f->reg + 0xC)
#define FLADR2(f) (f->reg + 0x3C)
#define FLDATAR(f) (f->reg + 0x10)
#define FLDTCNTR(f) (f->reg + 0x14)
#define FLINTDMACR(f) (f->reg + 0x18)
#define FLBSYTMR(f) (f->reg + 0x1C)
#define FLBSYCNT(f) (f->reg + 0x20)
#define FLDTFIFO(f) (f->reg + 0x24)
#define FLECFIFO(f) (f->reg + 0x28)
#define FLTRCR(f) (f->reg + 0x2C)
#define FL4ECCRESULT0(f) (f->reg + 0x80)
#define FL4ECCRESULT1(f) (f->reg + 0x84)
#define FL4ECCRESULT2(f) (f->reg + 0x88)
#define FL4ECCRESULT3(f) (f->reg + 0x8C)
#define FL4ECCCR(f) (f->reg + 0x90)
#define FL4ECCCNT(f) (f->reg + 0x94)
#define FLERRADR(f) (f->reg + 0x98)
/* FLCMNCR control bits */
#define ECCPOS2 (0x1 << 25)
#define _4ECCCNTEN (0x1 << 24)
#define _4ECCEN (0x1 << 23)
#define _4ECCCORRECT (0x1 << 22)
#define SNAND_E (0x1 << 18) /* SNAND (0=512 1=2048)*/
#define QTSEL_E (0x1 << 17)
#define ENDIAN (0x1 << 16) /* 1 = little endian */
#define FCKSEL_E (0x1 << 15)
#define ECCPOS_00 (0x00 << 12)
#define ECCPOS_01 (0x01 << 12)
#define ECCPOS_02 (0x02 << 12)
#define ACM_SACCES_MODE (0x01 << 10)
#define NANWF_E (0x1 << 9)
#define SE_D (0x1 << 8) /* Spare area disable */
#define CE1_ENABLE (0x1 << 4) /* Chip Enable 1 */
#define CE0_ENABLE (0x1 << 3) /* Chip Enable 0 */
#define TYPESEL_SET (0x1 << 0)
/* FLCMDCR control bits */
#define ADRCNT2_E (0x1 << 31) /* 5byte address enable */
#define ADRMD_E (0x1 << 26) /* Sector address access */
#define CDSRC_E (0x1 << 25) /* Data buffer selection */
#define DOSR_E (0x1 << 24) /* Status read check */
#define SELRW (0x1 << 21) /* 0:read 1:write */
#define DOADR_E (0x1 << 20) /* Address stage execute */
#define ADRCNT_1 (0x00 << 18) /* Address data bytes: 1byte */
#define ADRCNT_2 (0x01 << 18) /* Address data bytes: 2byte */
#define ADRCNT_3 (0x02 << 18) /* Address data bytes: 3byte */
#define ADRCNT_4 (0x03 << 18) /* Address data bytes: 4byte */
#define DOCMD2_E (0x1 << 17) /* 2nd cmd stage execute */
#define DOCMD1_E (0x1 << 16) /* 1st cmd stage execute */
/* FLTRCR control bits */
#define TRSTRT (0x1 << 0) /* translation start */
#define TREND (0x1 << 1) /* translation end */
/* FL4ECCCR control bits */
#define _4ECCFA (0x1 << 2) /* 4 symbols correct fault */
#define _4ECCEND (0x1 << 1) /* 4 symbols end */
#define _4ECCEXST (0x1 << 0) /* 4 symbols exist */
#define INIT_FL4ECCRESULT_VAL 0x03FF03FF
#define LOOP_TIMEOUT_MAX 0x00010000
#define mtd_to_flctl(mtd) container_of(mtd, struct sh_flctl, mtd)
struct sh_flctl {
struct mtd_info mtd;
struct nand_chip chip;
void __iomem *reg;
uint8_t done_buff[2048 + 64]; /* max size 2048 + 64 */
int read_bytes;
int index;
int seqin_column; /* column in SEQIN cmd */
int seqin_page_addr; /* page_addr in SEQIN cmd */
uint32_t seqin_read_cmd; /* read cmd in SEQIN cmd */
int erase1_page_addr; /* page_addr in ERASE1 cmd */
uint32_t erase_ADRCNT; /* bits of FLCMDCR in ERASE1 cmd */
uint32_t rw_ADRCNT; /* bits of FLCMDCR in READ WRITE cmd */
int hwecc_cant_correct[4];
unsigned page_size:1; /* NAND page size (0 = 512, 1 = 2048) */
unsigned hwecc:1; /* Hardware ECC (0 = disabled, 1 = enabled) */
};
struct sh_flctl_platform_data {
struct mtd_partition *parts;
int nr_parts;
unsigned long flcmncr_val;
unsigned has_hwecc:1;
};
#endif /* __SH_FLCTL_H__ */


@@ -631,6 +631,8 @@ int __must_check pci_assign_resource(struct pci_dev *dev, int i);
 int pci_select_bars(struct pci_dev *dev, unsigned long flags);
 /* ROM control related routines */
+int pci_enable_rom(struct pci_dev *pdev);
+void pci_disable_rom(struct pci_dev *pdev);
 void __iomem __must_check *pci_map_rom(struct pci_dev *pdev, size_t *size);
 void pci_unmap_rom(struct pci_dev *pdev, void __iomem *rom);
 size_t pci_get_rom_size(void __iomem *rom, size_t size);
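These declarations make the ROM enable/disable helpers available to drivers alongside the existing mapping routines. A hedged sketch of the usual call sequence; the function below is illustrative, only the pci_* calls come from the declarations above:

/* Hypothetical caller: map a device's expansion ROM, use it, unmap it. */
static int example_read_rom(struct pci_dev *pdev)
{
	void __iomem *rom;
	size_t size;

	rom = pci_map_rom(pdev, &size);	/* typically enables decoding and maps the ROM BAR */
	if (!rom)
		return -ENODEV;

	/* ... inspect up to 'size' bytes of the ROM image here ... */

	pci_unmap_rom(pdev, rom);	/* unmaps and normally disables decoding again */
	return 0;
}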