\, the Sanskrit virama, is not used. Of the 1370-odd ACIP texts I've
got here, about 57% make it through the gauntlet (fewer if you demand
a vowel or disambiguator on every stack of a non-Tibetan tsheg bar).
b, c, d, e, ... do not belong in ACIP, so the scanner rejects them.
This should make it even easier to distinguish automatically between
Tibetan and English texts.
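For illustration, here is a minimal sketch of that kind of rejection test
(class and method names are invented, and the punctuation set shown is an
assumption, not the scanner's exact list):

    public class AcipCharSketch {
        // Illustrative only: the real scanner's legality rules are more
        // involved (comments, corrections, etc.).  Lowercase letters are
        // the point here: they are rejected outright.
        static boolean isLegalAcipChar(char c) {
            if (c >= 'A' && c <= 'Z') return true;  // ACIP consonants and vowels
            if (c >= '0' && c <= '9') return true;  // page numbers and the like
            return " \t\r\n-+.'*,@#()[]/".indexOf(c) >= 0;  // assumed punctuation
        }
    }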
is a correction, that's a comment, this is Tibetan, that's Latin
(English), that's Tibetan inter-tsheg-bar punctuation, etc.) It now
accepts more real-world ACIP files, i.e., it tolerates illegal
constructs instead of giving up on them. The error checking is more
user-friendly. There are now
tests.
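A hypothetical sketch of the token categories involved (the names are
invented; the real lexer's set may differ):

    // Invented names: one constant per kind of thing the lexer must
    // tell apart in an ACIP file.
    enum AcipTokenKind {
        CORRECTION,   // an editorial correction in the input
        COMMENT,      // an annotation, not text to be converted
        TIBETAN,      // a tsheg bar to be parsed
        LATIN,        // embedded English text
        PUNCTUATION,  // Tibetan inter-tsheg-bar punctuation
        ERROR         // anything the scanner cannot classify
    }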
Added to the tests some tsheg bars that Peter E. Hauer of Linguasoft
sent me. Many thanks, Peter. I still need to implement rules that say,
"This is not Tibetan, it must be Sanskrit, because that letter doesn't
take a MA prefix."
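The MA-prefix rule itself is standard Tibetan orthography; here is a sketch
of how such a check might look (class and method names are made up):

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    // MA combines only with these eleven root letters in native Tibetan;
    // MA before anything else is evidence of Sanskrit.
    public class PrefixRuleSketch {
        private static final Set<String> TAKES_MA_PREFIX = new HashSet<String>(
            Arrays.asList("KH", "G", "NG", "CH", "J", "NY",
                          "TH", "D", "N", "TSH", "DZ"));

        /** True if the root letter can legally follow the prefix MA. */
        public static boolean takesMaPrefix(String root) {
            return TAKES_MA_PREFIX.contains(root);
        }
    }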
up that String into tsheg bars, punctuation, etc., while finding
errors. I've tested it some, but I'm not yet committing the tests.
Next step: a converter that takes an ACIP file as input and outputs
TMW+Latin.
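Very roughly, what such a scanner's entry point might look like (all names
here are invented, and the legality test is a crude stand-in):

    import java.util.ArrayList;
    import java.util.List;

    // Rough sketch: break an ACIP String into tsheg bars and punctuation,
    // collecting error messages as we go.
    public class ScannerSketch {
        public static List<String> scan(String acip, List<String> errors) {
            List<String> pieces = new ArrayList<String>();
            for (String piece : acip.split("[ \t\r\n,]+")) {  // crude delimiting
                if (piece.length() == 0) continue;
                if (piece.matches("[A-Z0-9'+\\-.*@#/]+")) {
                    pieces.add(piece);          // plausibly a tsheg bar
                } else {
                    errors.add("Illegal construct: " + piece);
                }
            }
            return pieces;
        }
    }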
and it can produce error messages and warnings that
make sense to the user. One can now get the correct parse, if one
exists, for an ACIP tsheg bar.
One could even feed in ACIP and get a list of warnings about things as
innocuous as PADMA, which a dumb converter would have trouble with.
One could then turn ACIP into well-behaved ACIP for that dumb
converter, if one really wanted to.
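To spell out the PADMA case: within that single tsheg bar a naive converter
sees three baseline letters, while the intended Sanskrit reading stacks the
DA and MA. A toy illustration of the warning idea (not the real converter's
code):

    import java.util.Arrays;
    import java.util.List;

    // Toy illustration only.  PADMA admits two readings, and only knowledge
    // of Tibetan (or Sanskrit) letter rules picks the right one.
    public class PadmaWarning {
        public static void main(String[] args) {
            List<String> readings = Arrays.asList(
                "PA DA MA",  // naive: three letters on the baseline
                "PA D+MA");  // intended: pa, then a da-ma stack (Sanskrit padma)
            if (readings.size() > 1)
                System.out.println("Warning: PADMA is ambiguous: " + readings);
        }
    }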
Still to do:
o Scan ACIP files into tsheg bars.
o Produce TMW/Latin (from which you can get Unicode, etc.).
o E-mail the illegal tsheg bars to the ACIP fellows so they can fix
the affected documents (most of the Kangyur has unparseable
creatures).
Our disambiguation is now perfect, happening when and only when it is
necessary. These are all illegal, so it shouldn't affect many
existing conversions. But if there were typos, it could.
This way, by clicking through the application's wizard, one can choose
to connect to the available on-line dicts, open a local dict, or generate a dict database.
(which I found on suigeneris.org, not apache.org) in order to bulletproof the
Tibetan Converter tests. They used to fail due to nondeterminism in the
Java RTF writer; they should no longer fail.
I've also changed it so that the Tibetan Converter tests run in headless
mode, which means that they'll run on the nightly builds server.
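For reference, headless mode is the standard AWT mechanism; it can be
requested on the command line with -Djava.awt.headless=true or
programmatically, as long as that happens before any GUI classes load:

    public class HeadlessCheck {
        public static void main(String[] args) {
            // Must be set before any AWT/Swing class is initialized; the
            // nightly builds server has no display attached.
            System.setProperty("java.awt.headless", "true");
            System.out.println(java.awt.GraphicsEnvironment.isHeadless());
        }
    }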
or <?Numbers?> commands; it instead hard-codes the appropriate comma-
delimited lists. This is cleaner because WylieWord and Jskad had different
values for these lists.
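Roughly what the hard-coding amounts to (the values shown are made up, not
the real lists):

    import java.util.Arrays;
    import java.util.List;

    // Made-up values: one shared, hard-coded, comma-delimited list instead
    // of divergent <?Numbers?>-style commands in WylieWord and Jskad.
    public class HardcodedListSketch {
        private static final String NUMBERS = "1,2,3,4,5";  // hypothetical
        public static List<String> numbers() {
            return Arrays.asList(NUMBERS.split(","));
        }
    }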
Some TMW->Wylie conversions with the new-and-improved TMW->Wylie
algorithm were faulty. Now the disambiguator is used a little more than
necessary, e.g. b.lha instead of blha is generated because bla and
b.la are ambiguous.
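A minimal sketch of the rule (names hypothetical; the pair set below is
illustrative, with "bl" taken straight from the bla/b.la example):

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    // Insert "." between two letters whenever the undotted concatenation
    // could also be read as a stack.  Note this over-applies exactly as
    // described above: join("b", "lha") yields "b.lha" even though blha
    // itself is unambiguous.
    public class DisambiguatorSketch {
        private static final Set<String> AMBIGUOUS_PAIRS =
            new HashSet<String>(Arrays.asList("bl", "gy"));

        public static String join(String left, String right) {
            String pair = left.substring(left.length() - 1)
                        + right.substring(0, 1);
            return AMBIGUOUS_PAIRS.contains(pair)
                ? left + "." + right : left + right;
        }
    }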
like \bullet, \emdash, etc., and this fix only works for Windows or OS/2 RTF
files, not for Mac RTF files. So if you want a TM->TMW conversion to work,
use MS Word for Windows, not MS Word for the Mac.
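A guess at the shape of such a fix (the replacement table here is
illustrative, not the actual one): rewrite the troublesome control words as
RTF Unicode escapes before the Java RTF reader sees them.

    // Illustrative only: swap control words Java's RTF support mishandles
    // for \uN? Unicode escapes (decimal code point, '?' fallback).
    public class RtfCleanupSketch {
        public static String clean(String rtf) {
            return rtf.replace("\\bullet ", "\\u8226?")   // U+2022 bullet
                      .replace("\\emdash ", "\\u8212?");  // U+2014 em dash
        }
    }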