Fixed the ordering of Unicode vowels; [ku+A], for example, now gives the
correct Unicode.
EWTS->TMW looks better for some wacky vowels like, I'm guessing here, [ku+A].
EWTS->TMW should now give errors any time the full input isn't consumed.
Previously, wacky vowels like [kai+-i] would cause parts of the input to be
silently dropped.
EWTS->TMW->Unicode testing is now in effect. This found a ton of
EWTS->TMW bugs, most or all of which are fixed now.
TMW->Unicode is improved/fixed for { \u5350, \u534D, \u0F88+k,
\u0F88+kh, U }. (Why U? "\u0f75" is discouraged in favor of
"\u0f71\u0f74".)
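The U+0F75 point is easy to check against the Unicode data. A quick
sketch using java.text.Normalizer (not the code path our converters use;
it just exposes the canonical decomposition):

    import java.text.Normalizer;

    public class DecomposeUU {
        public static void main(String[] args) {
            // U+0F75 carries the canonical decomposition U+0F71 U+0F74,
            // so plain NFD already replaces the discouraged form:
            System.out.println(
                Normalizer.normalize("\u0F75", Normalizer.Form.NFD)
                          .equals("\u0F71\u0F74")); // prints true
        }
    }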
NOTE: TMW_RTF_TO_THDL_WYLIETest is still disabled for the nightly
builds' sake, but I ran it in my sandbox and it passed.
table exactly, and I fear that it makes the ACIP->Tibetan converter code
a lot uglier. The TODO(DLC)[EWTS->Tibetan] comments littered throughout
are part of that ugliness, and they point to more of it. If each were
addressed, cleanliness could perhaps be achieved.
I've largely forgotten exactly what this change does, but it attempts to
improve EWTS->Tibetan conversion. The lexer is probably really, really
primitive. I concentrate here on converting a single tsheg bar rather than
a whole document.
Eclipse was used during part of my journey here and some imports were
reorganized merely because I could. :)
(Eclipse was needed when the usual ant build failed to run a new test
EWTSTest. And I wanted its debugger.)
Next steps: end-to-end EWTS tests should bring many problems to light. Fix
those. Triage all the TODO comments.
I don't know that I'll ever really trust the implementation. The tests are
valuable, though. A clean implementation of EWTS->Tibetan in Jython
might hold enough interest for me; I'd like to learn Python.
well-formed. They still do, but they do it less often.
Chris Fynn wrote this a while back:
By normal Tibetan & Dzongkha spelling, writing, and input rules
Tibetan script stacks should be entered and written: 1 headline
consonant (0F40–0F6A), any subjoined consonant(s) (0F90–0F9C),
achung (0F71), shabkyu (0F74), any above-headline vowel(s) (0F72,
0F7A, 0F7B, 0F7C, 0F7D, and 0F80); any ngaro (0F7E, 0F82, and 0F83).
Now efforts are made to ensure that the converters conform to the
above rules.
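Conformance is mostly a stable reordering of each stack's marks. A
minimal sketch of the idea (not our converters' actual code; the ranks
below just transcribe the quoted rules):

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    public class StackOrder {
        // Rank per the rules quoted above; lower ranks come first.
        static int rank(char c) {
            if (c >= '\u0F90' && c <= '\u0F9C') return 1; // subjoined
            if (c == '\u0F71') return 2;                  // achung
            if (c == '\u0F74') return 3;                  // shabkyu
            if (c == '\u0F72' || (c >= '\u0F7A' && c <= '\u0F7D')
                || c == '\u0F80') return 4;  // above-headline vowels
            if (c == '\u0F7E' || c == '\u0F82' || c == '\u0F83')
                return 5;                                 // ngaro
            return 0;                        // headline consonant
        }

        // Stable sort of one stack's codepoints into canonical order.
        static String reorder(String stack) {
            List<Character> cs = new ArrayList<Character>();
            for (char c : stack.toCharArray()) cs.add(c);
            cs.sort(Comparator.comparingInt(c -> rank(c)));
            StringBuilder sb = new StringBuilder();
            for (char c : cs) sb.append(c);
            return sb.toString();
        }

        public static void main(String[] args) {
            // [ku+A] entered shabkyu-first comes out achung-first:
            System.out.println(reorder("\u0F40\u0F74\u0F71")
                .equals("\u0F40\u0F71\u0F74")); // prints true
        }
    }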
Errors and warnings in ACIP->Tibetan conversion are now much more
configurable. You can now choose between short and long error messages,
for one thing, and you can change the severity of almost all warnings.
Each error and warning has an error code. Errors and warnings are better
tested.
The converter GUI has a new checkbox for short messages; the converter
CLI has a new mandatory option for short messages.
I also fixed a bug whereby certain errors were not being appended to the
'errors' StringBuffer.
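The overall shape, as a hypothetical sketch (names, codes, and messages
invented here; the real classes differ):

    public class ConverterWarning {
        enum Severity { SUPPRESSED, WARNING, ERROR }

        final int code;        // stable error code
        final String shortMsg; // terse form for the new short-message mode
        final String longMsg;  // verbose form with full context
        Severity severity;     // almost every warning can be re-leveled

        ConverterWarning(int code, String shortMsg, String longMsg,
                         Severity severity) {
            this.code = code;
            this.shortMsg = shortMsg;
            this.longMsg = longMsg;
            this.severity = severity;
        }

        String render(boolean shortMessages) {
            return "[" + code + "] " + (shortMessages ? shortMsg : longMsg);
        }

        public static void main(String[] args) {
            ConverterWarning w = new ConverterWarning(501, "bad stack",
                "the stack at this position is not a legal Tibetan stack",
                Severity.WARNING);
            w.severity = Severity.ERROR;        // severity is configurable
            System.out.println(w.render(true)); // [501] bad stack
        }
    }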
that say "ya can take a ga prefix" etc.
The ACIP->Unicode converter now gives warnings (optionally, and by
default, inline). The converter also produces output even when lexical
errors occur; the errors and warnings then appear inline in the output.
\, the Sanskrit virama, is not used. Of the 1370-odd ACIP texts I've
got here, about 57% make it through the gauntlet (fewer if you demand
a vowel or disambiguator on every stack of a non-Tibetan tsheg bar).
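The keep-going-on-error behavior looks roughly like this (a hypothetical
sketch with a toy two-entry lexicon; the real converter's tables and
message format differ):

    import java.util.Arrays;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class InlineWarnings {
        static final Map<String, String> ACIP = new HashMap<String, String>();
        static {
            ACIP.put("KA", "\u0F40");
            ACIP.put("GA", "\u0F42");
        }

        // On a lexical error, embed the complaint inline and keep
        // producing output rather than aborting.
        static String convert(List<String> tokens) {
            StringBuilder out = new StringBuilder();
            for (String tok : tokens) {
                String u = ACIP.get(tok);
                if (u == null) {
                    out.append("[#WARNING: cannot lex ").append(tok).append("]");
                } else {
                    out.append(u);
                }
            }
            return out.toString();
        }

        public static void main(String[] args) {
            System.out.println(convert(Arrays.asList("KA", "QX", "GA")));
        }
    }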
UnicodeCodepointToThdlWylie.java.
Added a new class, UnicodeGraphemeCluster, that can tell you
the components of a grapheme cluster from top to bottom. It does not
yet have good error checking; it is not yet finished.
Next is to parse clean Unicode into GraphemeClusters. After that comes
scanning dirty Unicode into best-guess GraphemeClusters, and scanning
dirty Unicode to get nice error messages.
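As a rough illustration of the best-guess step (the ranges and names here
are assumptions for the sketch, not UnicodeGraphemeCluster's actual
logic), a clusterer can simply attach each run of combining signs to the
preceding base character:

    import java.util.ArrayList;
    import java.util.List;

    public class GraphemeSplit {
        // True for Tibetan combining signs that attach to the preceding
        // base: vowels and ngaro (0F71-0F84) and subjoined consonants
        // (0F90-0FBC).
        static boolean combines(char c) {
            return (c >= '\u0F71' && c <= '\u0F84')
                || (c >= '\u0F90' && c <= '\u0FBC');
        }

        // Break a string into clusters: each base character plus the
        // run of combining marks that follows it.
        static List<String> clusters(String s) {
            List<String> out = new ArrayList<String>();
            int i = 0;
            while (i < s.length()) {
                int j = i + 1;
                while (j < s.length() && combines(s.charAt(j))) j++;
                out.add(s.substring(i, j));
                i = j;
            }
            return out;
        }

        public static void main(String[] args) {
            // bkra (U+0F56, U+0F40, U+0FB2): ba, then ka with subjoined ra.
            System.out.println(clusters("\u0F56\u0F40\u0FB2"));
        }
    }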
because a Japanese scholar has an "Extended Wylie" also.
NFKD and NFD have a new brother, NFTHDL. I wish there weren't a need,
but as my yet-to-be-put-into-CVS break-unicode-into-grapheme-clusters code
demonstrates, the need is there. (Forgive me the hyphens; it's late.)
characters, for example.
Normalization forms NFKD and NFD are supported for the Tibetan Unicode
range. I don't like either, actually. I've tested NFKD, but I've not yet
committed the tests.
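For the Tibetan range the two forms differ on characters like U+0F77,
which has only a compatibility decomposition. Another quick check with
java.text.Normalizer (a sketch against the Unicode data, not the
uncommitted tests):

    import java.text.Normalizer;

    public class NfdVersusNfkd {
        public static void main(String[] args) {
            // NFD leaves U+0F77 alone (its mapping is <compat> 0FB2 0F81):
            System.out.println(
                Normalizer.normalize("\u0F77", Normalizer.Form.NFD)
                          .equals("\u0F77")); // prints true
            // NFKD expands it, then canonically decomposes the
            // resulting U+0F81 into U+0F71 U+0F80:
            System.out.println(
                Normalizer.normalize("\u0F77", Normalizer.Form.NFKD)
                          .equals("\u0FB2\u0F71\u0F80")); // prints true
        }
    }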
and the build system is not yet aware of them.
I'm adding some classes for representing legal tsheg-bars (syllables, for the
most part) in Unicode. These classes were designed bottom-up (OK, OK --
they weren't designed designed, but I had to write down everything I knew
about Tibetan syntax somewhere). The classes are aware of Extended
Wylie. I doubt the Javadocs work yet, and I'm still testing (and am not
committing my testing code with these as it is not yet ready).
Next on my list: fix these up to reflect my new awareness of suffix particles
(like le'u'i'o) and add classes to support syntactically incorrect Unicode
sequences. Then add a UnicodeReader, and we've got the back end of
a Tibetan Unicode shaping system (like half of MS's Uniscribe or Apple's
WorldScript or FreeType Layout or Omega's OTPs).
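Roughly the shape I have in mind (a hypothetical layout, not the
committed classes):

    public class TshegBarSketch {
        char prefix;       // ga, da, ba, ma, or achung; '\0' if absent
        char superscribed; // ra, la, or sa; '\0' if absent
        char root;         // the one mandatory headline consonant
        char subscribed;   // ya, ra, la, or wa; '\0' if absent
        char vowel;        // '\0' means the inherent a
        char suffix;       // one of the ten legal suffixes; '\0' if absent
        char postsuffix;   // sa or da; '\0' if absent
        String particles;  // appended particles, e.g. the 'u/'i/'o chain

        public static void main(String[] args) {
            TshegBarSketch bkra = new TshegBarSketch();
            bkra.prefix = '\u0F56';     // ba
            bkra.root = '\u0F40';       // ka
            bkra.subscribed = '\u0FB2'; // subjoined ra
            System.out.println("" + bkra.prefix + bkra.root + bkra.subscribed);
        }
    }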
A top-down design would not have included LegalTshegBar. But now that
my itch has been scratched, potential uses are lingering about. For example,
it would be nice to scan some input and break it into LegalTshegBars,
punctuation/marks/signs, and illegal stacks. Then we could alert the client
of the illegality, its precise form, and its precise location.
The real system for turning a Unicode stream into an internal representation
suitable for conversion to EWTS/ACIP/XHTML/what-have-you need not be
aware of Tibetan syntax. But to make the very best conversion from
Unicode to, e.g., EWTS, it is necessary to know that gaskad is better
represented as gskad, but that jaskad is not the same as jskad.