20001015
... and now to something completely different:
Internationalizing... with UNICODE

Contents:
- MS Windows
- MS Windows CE
- MS Office 2000
- WAP
- Notes on Unicode, multibyte encodings and national code pages (a demonstration of Unicode is here)
- Java and Internationalization
- Links for Languages
- Classic literature: Chinese/French
- Guestbook - your thoughts, tips, links; do have a look at it: various notes on Windows CE internationalization, some kind of FAQ
_______
When I started this page in 1998, some companies made the strategic
decision not to go with Unicode for operating systems or low level engines. By now, it's obvious that this was a mistake.
MS Windows
- IE 5 for i18n - any language
- Chinese/Japanese/Korean on Windows
- NT 4
- Windows 2000

Any language: IE 5
- IE 5 has some unique and very useful features for internationalization;
IE 4's nuisances have been ironed out.
- Very stable on NT 4 sp4. No backup browser required for me.
- Can handle Hebrew/Arabic and right-to-left text, in addition to the Far East
support IE 4 already has.
- Beats Netscape 4.5 hands down, in almost all respects, and especially if
you use languages/encodings other than Latin1.
Japanese, Korean and Chinese for Windows
- Language Packs by Microsoft (add-on for MS IE 3.0 or higher); by now
integrated into the IE 5 setup
- will enable display of these languages in IE and other applications
- MS' page on multilanguage support for Windows
- Input Method Editors for Japanese, Korean, Chinese Traditional and
Simplified for IE 4.0; they also work in Word/Outlook/FrontPage of Office
2000, in MS Outlook 98 for HTML messages, in Outlook Express for text and
HTML, and in MSN Messenger.
- Unicode based, they will accept any language's input. Global IMEs are
supported: for example, you can chat via the internet using Japanese and
German at the same time on a Western Windows 9x or NT system. I have not
tested their support for right-to-left text. ICQ didn't have this capability
as of the start of 1999; I doubt it has it now.
- Chinese input methods for MS Word97:
- EasyWord97, free for personal use
- Word97 Chinese Input, also free
General notes on NT 4
- install the latest service pack and IE to get maximum i18n support
- uses Unicode internally, but the top layer is localized (eg. S-JIS for
Japanese NT)
- supports Unicode-filenames
- cmd.exe can be started in Unicode mode with the /u switch. For example,
"echo >test. Helo" will then produce a Unicode (UCS-2) encoded file named
"test." containing the text Helo. Set the console font to a non-bitmap font;
this should contain glyphs for Greek and Russian in addition to Latin1 (or
the respective system encoding).
- if text using special fonts doesn't display correctly, try deleting any
unneeded fonts from the system and reinstalling the problematic fonts. NT 4
tends to choke if many fonts are installed.
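The /u trick above produces UTF-16 little-endian ("Unicode") text. A minimal Python sketch of reading such a file, with or without a byte-order mark (the file name and contents just mirror the example above; this simulates the output rather than running cmd.exe):

```python
# Read a text file written by "cmd /u" redirection, which produces
# UTF-16 little-endian text (usually without a byte-order mark).
import os
import tempfile

def read_cmd_u_output(path):
    """Decode a UTF-16LE file; tolerate an optional BOM."""
    with open(path, "rb") as f:
        data = f.read()
    if data.startswith(b"\xff\xfe"):      # UTF-16LE BOM, if present
        data = data[2:]
    return data.decode("utf-16-le")

# Simulate what 'echo >test. Helo' writes under cmd /u:
path = os.path.join(tempfile.mkdtemp(), "test.")
with open(path, "wb") as f:
    f.write("Helo\r\n".encode("utf-16-le"))

print(read_cmd_u_output(path).rstrip())  # Helo
```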
Windows 2000 (fka NT 5.0)
- will be your choice (apart from Unix) if you need several languages: all
languages, complete with IMEs, are said to come on the system discs. Japanese,
Chinese, Korean, Arabic, Hebrew and Yiddish will be supported on the same
platform; there'll be only one comprehensive NT 5.0 build.
- said to be fully Unicode based, no code page top layer. Correction: there is
a code page top layer after all, necessary for compatibility with old
programs. Apparently it is not possible to manually set the code page a
program expects; for example, an old Japanese program will expect an S-JIS
environment, which cannot be set on a Western Windows 2000 system. Still,
i18n support is quite good and the kernel has been unified. New software
should display correctly, no matter its provenance. The GUI automatically
uses an alternate font to display e.g. filenames containing characters not in
the font set in the display properties. The IMEs which come with the system
work with any program, as far as I can tell; I successfully tested the
Japanese and Chinese IMEs.
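The code page mismatch described above is easy to illustrate: the same bytes are correct Japanese under S-JIS but garbage under the Western code page 1252. A small Python sketch (the sample string is arbitrary):

```python
# A Japanese string as an old S-JIS program would store it:
sjis_bytes = "こんにちは".encode("shift_jis")

# Interpreted correctly, on a system whose top layer is S-JIS:
print(sjis_bytes.decode("shift_jis"))  # こんにちは

# Interpreted by a Western (code page 1252) top layer - mojibake:
print(sjis_bytes.decode("cp1252", errors="replace"))
```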
MS Windows CE
A few hints for programmers.
Please view the Guestbook for additional notes on Chinese, Japanese, Hebrew.
If you own a CE 2.11 or better device, have a quick look at this demo page!
For CE 2.0, you'll have to use software like CE-Star for viewing.

Contents:
- CE 2.11 H/PC-Pro
- general i18n features of CE 2.0 - all languages
- Chinese on CE
- Japanese on CE
- i18n problems of CE
- (non-i18n) Tips & Tricks for CE
Windows CE 2.11 - H/PC-Pro
Based on the CE 2.0 features, 2.11 has important improvements and bug
fixes for i18n issues.
- PocketIE 3.0 handles UTF-8 encoded HTML and also supports different European
encodings (Baltic, Cyrillic, Greek, Turkish, Ukrainian, Latin-3). However, it
cannot open UCS-2 encoded text or HTML - a bug. Also, some UTF-8 HTML is not
rendered at all, I don't know why; but HTML pages saved by IE 5 in UTF-8 will
always be displayed correctly.
- PocketWord 3.0 can read UCS-2 encoded text files. The i18n bugs of PW 2.0
listed below are fixed.
- PocketAccess (new in the HPCPro) is Unicode based. However, CE Services
2.2 will corrupt Unicode data when synchronizing with Access of Office 2000.
SP-1 for CE 2.11 adds important features
- PocketOutlook now has the ability to send/read mail in various encodings.
All encodings already supported by PocketIE can now be used to
read/write mail. Via UTF-8 encoding, any language can be used for mail. To use
languages not supported by the system font, like Chinese or Japanese,
install an appropriate font and change the registry setting
HKLM\SYSTEM\GDI\SYSFNT\Nm to have
PocketOutlook use this font for mail display.
Windows CE 2.0 internationalization in general
- is fully Unicode 2.0 based
- thus it can support about any language with minimal effort. You need a
TrueType font for your language (if not already covered by the system fonts)
and a keyboard remapper (ParaWin
CE is good) or an input method editor (IME, for
Chinese/Japanese/Thai/Korean).
- PocketWord and PocketExcel should be able to display any language,
provided the font is there
- for Japanese there is a compiled font, an IME and other language related
progs. See above.
- try it for yourself: download this (zipped) PocketWord file and open it. Lucida Sans Unicode, CEFONT (see
above) and Arabic Karbala should be installed. Or enjoy
the screenshot. I've also put up a Word97 rtf
file with Chinese text, using MS Song and MingLiU.
It can be displayed in PocketWord if the font is present.
- To send/receive mail in any language (using the UTF-8 format): By
now possible for CE 2.11 with SP-1.
- If you're thinking of using languages other than those covered in Latin1,
take CE 2.11 or higher. Many problems described below (for CE 2.0) have been
fixed for that version.
Languages that can be displayed in CE 2.0
- basically: any. Install the font (TrueType)
- already present are the fonts for Cyrillic, Greek, Latin Extended 1 and 2
and a few more. Use a keyboard remapper that is Unicode compliant, then you
won't have to install additional fonts.
- I've successfully tried MS Gothic for Japanese (4 MB uncompressed), MS Song
for Simplified Chinese (2.5 MB), an Arabic font, and Lucida Sans Unicode for
Hebrew. Summing up: anything true to the specs goes.
- for those fonts that are not present, like Chinese: Someone should compile
a comprehensive Unicode font that encompasses about all languages as it won't
make a big difference in terms of storage footprint. The public domain Uni16m.bdf
seems to be fit for this purpose. Apart from that, you'll have to use the
original TT font of the language you require, eg. MS Song. Disadvantage: For
Asian languages, the font is huge. Yet it works.
- here's a CE screenshot with some
international nonsense (same as above).
Chinese for Windows CE
- Meanwhile, a lot of utilities to enter Chinese and to display files in
conventional Chinese encodings (Big5, GB/HZ) are out. Consult Jango's site. Notably there's CE-Star.
- If you have a CE 2.11 device, you can open a Unicode (UCS-2) encoded text
file containing Chinese characters in PocketWord. For the characters to
display, copy a Chinese TT font to \windows\fonts and set that font in
PocketWord.
Japanese for Windows CE (mostly 2.0) H/PCs
- the Monash Nihongo page (see Links) has an
EDICT version for WinCE
- on that page there's also an info file by myself describing how to set up
Japanese on CE. The most recent version is here.
- or go directly to Ito's page for
Japanese IMEs and utilities
- Glenn Rosenthal has done JWPce, a rewrite of JWP for CE, a word processor specific for
Japanese. Freeware!
- also check the links given at this page (Cassiopeia Homepage Freeware) - it helps if
you can read Japanese
- for Ps/PCs: try the same as above, but the fonts are better: copy & convert
the TrueType font you want to the H/PC's \windows\fonts folder. The conversion
setting is accessible via (Desktop) Mobile Devices/Tools/File Conversion/
Desktop->Device/TrueType. Set it to Mobile Device Raster Font; this will
convert TT to *.fnt fonts with a minimal storage footprint. A CE 2.0 H/PC
can't recognize these fonts, as far as I know. Tell me if you know otherwise.
i18n problems of CE: Problems in using languages...
... with ActiveSync 3.1
Unfortunately, version 3.1 retains most bugs of version 3.0. Most notably, Unicode filename support
is still broken.
... with ActiveSync 3.0
- ActiveSync 3.0 (Build 9204) cannot synchronize files whose filename
contains characters not supported by the hosting OS. The files also cannot be
copied using the Explorer, and if a backup is attempted, these files won't be
saved. This is nasty, as both CE and NT support Unicode
filenames, which (for example) makes it possible to have a filename containing
Japanese and French characters at the same time. So there's Unicode support on
both sides, but the interfacing software trashes it. (Tested with NT 5 beta
3, NT 4 sp5)
- ActiveSync 3.0 will corrupt Hebrew and Arabic if synchronizing an Outlook
contacts entry which contains these and other languages in the notes section.
This is regrettable, as Hebrew and Arabic characters are already present in
the font Tahoma of CE 2.11.
- ActiveSync 3.0 will corrupt character data not supported by the top-layer
API of the PC Windows system when synchronizing PocketAccess and Access 2000.
Both PocketAccess and Access 2000 fully support Unicode, but due to this bug
they possibly cannot be used in combination. Example: for a Western CE and PC
system, Russian, Hebrew, Arabic, Chinese, Japanese, Thai and Esperanto data
will get lost or corrupted.
... with CE 3.0 / WindowsPowered on the PocketPC
I recently acquired a Compaq iPAQ 3630; unfortunately, I had to send it back
for repairs after just a few trial runs. This section will be updated as soon
as I get the PPC back.
- The MS Reader bookreader can't display characters not in a certain system font. This has been confirmed for Japanese, Chinese and Russian characters.
It's a definite bug, not a question of incorrectly installed or incorrectly specified fonts.
- PocketIE does not recognize the encoding tag of an HTML page. If you want to
view a UTF-8 encoded page like the Unicode demonstration, you'll have to
manually set the default encoding to UTF-8 in PIE's options. PIE doesn't
remember this setting.
... for CE 2.11 -- see above
... on CE 2.0
- CEFONT only encompasses Japanese, UniSun covers Chinese and Japanese - but
there's no comprehensive font that covers all languages. Somebody should
compile a small footprint font for all CJK. And one for the rest of Unicode
(like I said, how about converting Uni16m.bdf?).
Currently, CE 2.0 (the H/PC version) can, without additional work, handle only
TrueType fonts; 2.01 (Ps/PC) additionally handles bitmap fonts (*.fnt). Asian
TT fonts are very space intensive but can be used on CE. The workaround for
this space problem is to compile a TT font that contains bitmaps but no
mathematical outline data or hints; an example is the Japanese CEFONT. It
should be possible, given TT tools, to make a new bitmap TT font by importing
*.fnt fonts into an empty TT template (watch the copyright). As mentioned, TT
fonts can be converted to the *.fnt type with the Mobile Devices converters.
- Problems with Word97/PocketWord file exchange:
CE 2.0 SP-1: For some languages rtf/rtf works best, for others the pwd/doc
format. Try it out; CJK apparently will be read correctly in files in rtf
format with no conversion, however the font has to be the same on both sides.
PocketWord 2.0 with SP-1 won't correctly read the following Unicode code
ranges in files originating from Word97 (sp-1); the characters won't be
correctly recognized:
Latin Extended-A (tested character 0x0156)
Basic Greek (tested 039e, 03be)
Cyrillic (042f, 044f)
If originating from PocketWord, Word97 as well as PocketWord will read these
code ranges correctly.
The following Unicode code ranges were also tested without such problems
(i.e., if the font is there, there shouldn't be any problems):
Latin Extended-B, Japanese, Chinese, Hebrew.
CE 2.0 without SP-1: Chinese and Japanese chars
contained in a file originating from Word97 (rtf or doc original) won't be
read correctly by PocketWord. This is a bug: PocketWord can produce and read
files (rtf and pwd) containing CJK (and all other Unicode ranges I was able
to test) that will be correctly read by Word97. Feel free to tell MS feedback.
Arabic also appears to be in the misinterpreted code range. According
to my own q&d test, the following Unicode character ranges aren't read
correctly (in hex):
2500-262f
3000-33ef CJK symbols, punctuation, miscellaneous, Japanese/Chinese syllabaries
4e00-9fef CJK Unified Ideographs
? ac00-d7ef Hangul
? f000-f0af ?
fe00-ffef
f900-faff CJK Compatibility Ideographs
- PocketIE 2.0 of CE 2.0 has no notable international support. PocketIE 3.0
of CE 2.11 supports UTF encoded HTML and European encodings.
- same for Inbox (workaround: send .rtf files as attachment)
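The ranges from my quick-and-dirty test above can be turned into a small checker; a Python sketch (the ranges are simply those listed, including the uncertain ones marked "?"):

```python
# Unicode ranges that PocketWord 2.0 (CE 2.0 without SP-1) misreads
# in files originating from Word97, per the quick-and-dirty test above.
# Ranges marked "?" in the list are included here but unconfirmed.
PROBLEM_RANGES = [
    (0x2500, 0x262F),
    (0x3000, 0x33EF),  # CJK symbols/punctuation, kana etc.
    (0x4E00, 0x9FEF),  # CJK Unified Ideographs
    (0xAC00, 0xD7EF),  # Hangul (unconfirmed)
    (0xF000, 0xF0AF),  # (unconfirmed)
    (0xF900, 0xFAFF),  # CJK Compatibility Ideographs
    (0xFE00, 0xFFEF),
]

def is_problem_char(ch):
    """True if this character falls in one of the misread ranges."""
    cp = ord(ch)
    return any(lo <= cp <= hi for lo, hi in PROBLEM_RANGES)

print(is_problem_char("中"))  # True  (U+4E2D, CJK ideograph)
print(is_problem_char("A"))   # False (plain ASCII survives)
```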
(non-i18n) Tips & Tricks for CE
- To disable the (nonsensical) exploding windows animation of CE, add a
DWORD registry entry: HKLM\SYSTEM\GWI\Animate. Give it the value 0 (1
to enable animation). This was taken from a news post by David Scott -
thanks David
- If the unit seems sluggish, try giving it more Program Memory (Control
Panel - System Settings -> Memory). How much you need depends on the
application you use, but generally about 2 - 3 mb are optimum.
Microsoft Office 2000
- Outlook, Word, Access, Excel and FrontPage are fully Unicode based, i.e.
any language can be used with them, in any combination with other languages.
- The IMEs of IE 5 can be used in Word, Outlook and FrontPage (see above) -
but not in Access or Excel. I haven't yet tested PowerPoint, but it should
fully support Unicode.
- So far, there are IMEs for Chinese Traditional/Simplified, Japanese and
Korean.
WAP
- WAP 1.2 allows Unicode encoding to be used in the lightweight Wireless
Application Protocol standard. More generally, any character set that is a
proper subset of Unicode 2.0 is allowed; the full repertoire of Unicode 2.0
is available via UTF-8, as it is a transformation format. Using UTF-8 lets
you reap the full benefit of WAP's internationalization, at the cost of a
somewhat larger data size, the extent of which depends on the ranges of
characters encoded (see a description of the UTF-8 format). Refer to the WAP
1.2 specification, especially the section "Wireless Application Environment
Overview / 7.6 Internationalization". As soon as I find a WAP 1.2 compatible
browser for Windows, I'll port the Unicode demo page to WAP.
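The size cost mentioned above is easy to quantify: UTF-8 uses 1 byte per ASCII character, 2 bytes for most other alphabetic scripts, and 3 bytes for kana and CJK. A Python sketch (sample strings arbitrary):

```python
# Bytes per character in UTF-8, by script:
samples = {
    "ASCII":    "Hello",
    "Greek":    "αβγδε",
    "Cyrillic": "абвгд",
    "Japanese": "こんにちは",
}
for script, text in samples.items():
    size = len(text.encode("utf-8"))
    print(f"{script}: {len(text)} chars -> {size} bytes")
# ASCII costs 1 byte per char, Greek/Cyrillic 2, kana/CJK 3.
```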
Notes on Unicode, multibyte encodings and national code pages

Contents:
- Unicode
- Editors
- Unicode vs. traditional encoding methods
- Problems
- The future - mapping 48,000 Han characters, and more
Unicode
- www.unicode.org
- Unicode 2.0 is a standard defining most languages' characters in an
address space of 16 bit. Thus, basically, one letter uses 2 bytes instead of
one. There are different ways (coding schemes) to store the text information;
a sufficiently space-efficient one is UTF-8: English (ASCII) will be encoded
using only 1 byte per character, the rest 2-3 bytes.
- the Unicode coding schemes I know of are UTF-7, UTF-8, UCS-2, UTF-16 and
UCS-4 (a 32-bit encoding for a character address space of > 2*10^9)
- this is a page demonstrating the
capabilities of Unicode
- more links for Unicode here
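The coding schemes listed above are just different byte serializations of the same code points; a Python sketch comparing a few of them for one mixed-language string (Python's utf-16/utf-32 codecs stand in for UCS-2/UCS-4 on BMP characters):

```python
text = "Aω気"   # Latin, Greek, CJK - three ranges of Unicode

for codec in ("utf-8", "utf-16-le", "utf-32-le"):
    data = text.encode(codec)
    print(codec, len(data), data.hex())

# utf-8:     variable width (1, 2 and 3 bytes here)
# utf-16-le: 2 bytes per BMP character (what UCS-2 files contain)
# utf-32-le: fixed 4 bytes per character (the full UCS-4 space)
```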
Editors
- a good but minimal freeware Unicode editor for Windows NT was written by
H. Eichmann: UnicEdit (sorry, don't have a link)
- SC UniPad by Sharmahd
Computing is a Unicode text editor "intended to finally support all
scripts, characters and symbols of the Unicode standard without the need for
additional fonts, modules or whatever". Version 1.0 will include
bi-directionality support (Arab, Hebrew). Freeware for now, and in future
freeware for non-commercial use.
- UniEdit by Duke University, a very comprehensive editor. Commercial, but
there's a trial version
- NT Notepad can save as Unicode (UCS-2) and is able to handle texts > 2 MB
- Windows 2000 Notepad can open/save Unicode (UCS-2, UTF, UCS-2 big endian)
text
- UltraEdit-32 can open/edit UCS-2 files, but the filename has to consist of
characters of the OS top layer encoding
- Windows 98 WordPad can handle Unicode (UCS-2) files
- MS Office 2000 is fully Unicode based
- MS Word97 and MS Excel97 are fully Unicode based and thus can use any
language's chars. MS Access97 is not; I don't know about MS PowerPoint97
- Netscape 4.x, MS Frontpage Express, MS FrontPage 2000 can produce UTF-8 HTML pages. You'll
want that if you want to display more than one language on a page.
- not an editor, but IE 4 can save as UTF-8 and UCS-2, as text or HTML. Very
convenient for standardizing, if you have local HTML files in different
languages: set UTF-8 for all and you won't have to adjust the encodings any
more. IE 4 can convert between most encodings. IE 5 has some additional, very
useful internationalization features.
Unicode vs. the traditional encodings
(multi-byte and national code pages)
- Why Unicode? Because it's unambiguous and universal. Try to do this without Unicode.
- most new OSes are fully Unicode based: Windows 2000 (fka NT 5.0),
Windows CE 2.0, BeOS, Java 1.1 (for setting up CJK for Java, consult NS'
pages for International Users; see the Links page)
- Unicode can handle all languages, together. There's only one standard for
all languages, the standard is unambiguous.
- multi-byte encodings and code pages can only handle their own language,
plus English at most - but not languages of different code pages or
different multi-byte encodings. Also, for one language there are usually
several encodings that have evolved. For example (I list those that I know
of; there are more in some cases): Japanese: EUC, JIS, S-JIS. Chinese: Big5,
GB, HZ. Russian/Cyrillic: KOI8-R, ISO-8859-5, Windows-1251. Korean: 3
standards. Greek: chaotic. Also, some standards aren't well defined: for
example, there are 2 slightly different Big5 versions and at least 3 versions
of JIS - several JIS definitions with minor typeface differences have been
published.
- Unicode covers more characters for one language than the traditional
encodings. For example the Unicode characters 7AC8, 6901, 8FF9, 6673, 979F,
671E (Chinese) don't exist in Big5 (don't even ask for GB).
- thus, if you expect an international audience equipped with the tools to
read Unicode (= Netscape 3 or above, IE 3 or above), use Unicode. In the long
term, this should be the least trouble. MS did it, with Office 97 and WinCE -
there's a reason why they did it.
- this is a UTF-8 webpage with different
languages' chars. A demonstration. UTF will become the standard for emails in
the near future, bet on it.
- a free Windows TrueType Unicode font for about all languages is at Bitstream's site: Cyberbit (search for it
on their site). MS Office 2000 comes with the font Arial Unicode MS, which has
a few characters more than Cyberbit.
- TrueType fonts done true to the specifications will contain a table
remapping the chars they contain to the Unicode address space. However, there
are a lot of TT fonts not true to the specs out there...
- MS' resource for TT (and the rest, eg. OpenType) is its Typography site.
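Two of the points above - several incompatible legacy encodings per language, and legacy sets covering fewer characters than Unicode - can be checked directly with Python's bundled codecs (a sketch; the sample characters are arbitrary):

```python
# One Cyrillic letter, three incompatible legacy byte values,
# but a single Unicode code point (U+044F):
for codec in ("koi8_r", "cp1251", "iso8859_5"):
    print(codec, "я".encode(codec).hex())
# koi8_r d1 / cp1251 ff / iso8859_5 ef - all different.

def representable(ch, codec):
    """Can this character be stored in the given legacy encoding?"""
    try:
        ch.encode(codec)
        return True
    except UnicodeEncodeError:
        return False

print(representable("中", "big5"))   # True:  a common ideograph
print(representable("中", "ascii"))  # False: outside the legacy set
```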
Problems: Difficulties with Unicode and the traditional
encodings
Incompatibilities
A Windows centric view
Incompatibilities?
- The fear that Unicode would kill the specific character forms of
Japanese Kanji as compared to Chinese Hanzi, or of Traditional vs.
Simplified Chinese forms, is common. It is unfounded. Some examples:
character name | Japanese | Chinese Traditional | Chinese Simplified
masses         | 衆       | 眾                  | 众
air            | 気       | 氣                  | 气
through        | 経       | 經                  | 经
(Set your browser to UTF-8 encoding to display these examples. Glyphs not
present on the system or not provided by the browser will appear as boxes,
" ? " or " | ". If your system/browser can't display all glyphs: here's a
screenshot of what these examples look like on IE 5 using the comprehensive
font Arial Unicode MS.)
However, a small number of characters with marginally different writings
have been unified. This generally means that there is one version of a
character in common use in the Japanese/Chinese-Traditional/Korean typefaces
and one slightly different version in Chinese-Simplified. Example: bone 骨
(click to check the character entry in the Unihan database).
For Chinese/Japanese/Korean, Unicode v2.1 covers 21,204 Han characters, v3.0
about 27,786; for Korean, the Hangul syllables are additionally provided.
More than 6,500 additional characters have been included in the next Unicode
version, 3.0. Even in its "old" version 2.1, Unicode defines considerably
more Han characters than the commonly used traditional CJK encodings*. In
view of these numbers, Unicode is clearly preferable to the other Asian
encodings in general use. Update: as of 03/2000, the characters of the Kangxi
Dictionary (康熙辭典) have been assigned codepoints in the 32-bit character
space, as have other characters, e.g. the Japanese dentists' symbols (in
16-bit Unicode).
- No significant incompatibilities exist between Unicode and: S-JIS (the
Japanese part of Unicode was modelled on S-JIS), Big5, GB/HZ, Korean, Thai,
and all Western/European/Near East encodings -
provided, of course, that the character in question is defined in the
target when converting from Unicode to another encoding. Unicode defines more
characters for any given language than any single encoding in general use.
- Ambiguities do exist when roundtrip-converting Unicode to JIS/EUC
in the symbols area. These are questions of definition, mirroring the
problems occurring when transcoding between S-JIS <-> JIS/EUC. The reason is
that Unicode tries to restrict itself to plain-text characters, but the JIS
characters in question can be viewed as formatted. Kanji mapping is one to
one, provided the Kanji does exist in JIS/EUC.
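A classic instance of this symbol-area ambiguity is the wave dash: the same S-JIS byte pair decodes to two different Unicode code points depending on which mapping table is used. Python ships both tables, so this can be shown directly (a sketch):

```python
# The S-JIS byte pair 0x81 0x60, the "wave dash":
raw = b"\x81\x60"

# The standard Shift-JIS mapping table gives U+301C WAVE DASH ...
print(hex(ord(raw.decode("shift_jis"))))  # 0x301c

# ... while Microsoft's cp932 variant gives U+FF5E FULLWIDTH TILDE:
print(hex(ord(raw.decode("cp932"))))      # 0xff5e

# Kanji, by contrast, round-trip one to one:
assert "漢".encode("shift_jis").decode("shift_jis") == "漢"
```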
A Windows centric view
- NT 4.0 has a Unicode kernel but on top there's a layer in the local
encoding (eg. S-JIS for Japanese), thus there are problems. A Japanese
program's string data may be encoded in S-JIS, which won't be converted on a
Western system (try it out). Strings passed to the APIs may become corrupted,
because the system thinks they're of the top layer encoding while they're not.
NT 5.0 still has this problem, but modern compilers (at least MS') should by
default use Unicode, thus solving this problem.
- several programs will take your Unicode string and corrupt it, even on NT:
Not the fault of NT but of the program.
- calling a wide API (for Unicode strings) is difficult in VisualBasic up to
ver 5; you'd better use VisualC++ (ver ?). Update: I hear VB 6 fully
supports Unicode strings (in such a way that you won't notice you're not
dealing with 8-bit strings).
- if you have a choice: use pure Unicode, no - repeat no! - traditional
encodings. Then there's no trouble.
The future - or how to map 48.000 Chinese chars, and
more
Unicode is normally understood as denoting the Unicode standard of version
2.0 (2.1 is supported by newer software; 3.0 is finalized by now) and/or the
UCS-2 encoding. UCS-2 has space for about 65,000 different characters, which
is sufficient for most purposes. However, there exist more than 48,000
different Chinese characters, but currently, in UCS-2 of Unicode 2.1, only
some 21,000 Han characters (Chinese characters used for Chinese, Japanese and
Korean) are defined. To my knowledge, a space of about 30,000 characters is
reserved for Han characters. Several (largely unofficial) standards in
academic and military use try to map all 48,000 Han characters, the most
common being CCCII.
Unicode does offer sufficient space to accommodate all Han characters in its
UCS-4 coding method. UCS-2 is a subset of UCS-4; that is, adding so far
undefined characters to the standard won't change the definitions that have
been fixed in Unicode 2.0. It is desirable to expand the existing Unicode
standard, using UCS-4 as the coding method, to encompass the Han characters
that do not fit into UCS-2. The definition process is currently under way.
Update: the characters of the Kangxi Dictionary (康熙辭典) have, as of
03/2000, been assigned codepoints in the 32-bit character space and will
probably be part of the next major Unicode version, as will other characters,
e.g. the Japanese dentists' symbols (in 16-bit Unicode).
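Characters beyond the 65,000-slot UCS-2 space remain reachable from 16-bit software via UTF-16 surrogate pairs. A Python sketch using U+20000, the first code point of the supplementary CJK area where such additions went (the specific example character is arbitrary):

```python
# U+20000: a CJK ideograph outside the 16-bit UCS-2 range.
ch = "\U00020000"

# UCS-4 / UTF-32 stores it directly in 4 bytes:
print(ch.encode("utf-32-be").hex())  # 00020000

# UTF-16 splits it into a surrogate pair of two 16-bit units:
print(ch.encode("utf-16-be").hex())  # d840dc00

# The pair is computed from the code point like this:
cp = 0x20000 - 0x10000
high = 0xD800 + (cp >> 10)      # 0xD840, high surrogate
low  = 0xDC00 + (cp & 0x3FF)    # 0xDC00, low surrogate
```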
Java and
Internationalization
From the perspective of a MS Windows
user
Java is fully Unicode 2.0 based since jdk 1.1...
Sun's VM and JDK
- however, as of jdk 1.1.7, the Windows system clipboard access is limited
to single byte strings on Western NT (win95/98 will be the same)
- jdk1.2 will support input method editors provided by the system. Support
for input methods written in Java "will be added later".
- there are problems displaying Unicode chars > \u00FF (that is,
everything non-Latin1: Russian, Greek, Chinese, Japanese etc.) with some
methods, as of jdk 1.1.7 and jdk 1.2 RC-1. The workaround is not to use those
methods (like TextArea). Consult the Java Bug Parade.
- javac (the compiler) can't read Unicode UCS encoded source files
- for a how-to on setting up Chinese/Japanese/Korean support for Java, see
Netscape's internationalization pages (see the Links page)
MS' VM and SDK
- will not remap chars > \u00ff to the correct system fonts by itself. If
the system font preset for the VM does contain a specific glyph, it will be
displayed (in contrast to Sun's VM).
- however, for most methods the chars are interpreted correctly and, using
MS' Java extension com.ms.awt.FontX, a fitting Windows TT font can be
selected.
- clipboard access is Unicode
- jvc (the compiler) can read Unicode UCS encoded source files
- there's support for Java IME's, but that's probably MS specific
- Update 99/03: the newest MS JVM has additional features useful for
i18n. You risk losing the "write once..." promise if you use these, but most
Java programs (not "applets") are VM specific anyway, so forget it and make
your life easier.
- Java is supposedly the Holy Grail of Internationalization, but reality is
different. If you want to use it for internationalized stuff then either you
rely on Sun's JDK and can't use important text methods or you rely on MS' VM,
have less trouble and make a MS-only Java program. Sorry, Sun.
Note
First, thanks to the individuals and companies mentioned here
that made low cost internationalization possible.
How were these
(UTF-8 encoded) pages done? MS FrontPage on Windows 2000 with some source
editing.
Checked in IE 5. Have used NT Notepad and IE5 for the same purpose before.
Internationalization and languages are
my personal interest: this is just a homepage, with no guarantee for
anything. It may be full of errors. All trademarks on this page are
acknowledged as the property of their respective owners. TrueType in
this context refers to the Microsoft TrueType version; I don't know a thing
about the Mac/Adobe version.
copyright Rafael Humpert, 1998-2000 - mail: rhumpert @
iname.com. Try to catch me with MSN Messenger: Rafael_Humpert @
hotmail.com