Fundamentally, computers just deal with numbers. They store letters and other characters by
assigning a number for each one. Before Unicode was invented, there were hundreds of different
encoding systems for assigning these numbers. No single encoding could contain enough characters: for
example, the European Union alone requires several different encodings to cover all its languages.
Even for a single language like English, no single encoding was adequate for all the letters,
punctuation, and technical symbols in common use. These encoding systems also conflict with one
another. That is, two encodings can use the same number for two different characters, or use different
numbers for the same character. Any given computer needs to support many different encodings; yet
whenever data is passed between different encodings or platforms, that data always runs the risk of
corruption.
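As a minimal illustration (a Python sketch using only the standard library; the particular legacy encodings are chosen purely for demonstration), the same byte value can stand for two different characters, and the same character can be given two different numbers:

    # The byte 0xA4 decodes to different characters under two legacy encodings.
    raw = bytes([0xA4])
    print(raw.decode("iso-8859-1"))    # '¤' (currency sign)
    print(raw.decode("iso-8859-15"))   # '€' (euro sign)

    # The same character 'é' is assigned different numbers by different encodings.
    print("é".encode("latin-1"))       # b'\xe9'
    print("é".encode("cp850"))         # b'\x82'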
Unicode is changing all that!
Unicode provides a unique number for every character, no matter what the platform, no matter what the
program, no matter what the language. The Unicode Standard has been adopted by such industry leaders
as Apple, HP, IBM, JustSystem, Microsoft, Oracle, SAP, Sun, Sybase, Unisys and many others. Unicode is
required by modern standards such as XML, Java, JavaScript, LDAP, CORBA 3.0, WML, etc., and is the
official way to implement ISO 10646. It is supported in many operating systems, all modern browsers,
and many other products. The emergence of the Unicode Standard, and the availability of tools
supporting it, are among the most significant recent global software technology trends. Incorporating
Unicode into client-server or multi-tiered applications and websites offers significant cost savings
over the use of legacy character sets. Unicode enables a single software product or a single website
to be targeted across multiple platforms, languages and countries without re-engineering. It allows
data to be transported through many different systems without corruption.
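As a brief sketch of what that means in practice (again in Python, purely as an illustration), every character has exactly one Unicode code point, and a Unicode encoding such as UTF-8 can carry text from many scripts in a single byte stream:

    # Each character has a single, unique code point, independent of platform or language.
    for ch in "Aé€あ":
        print(ch, hex(ord(ch)))        # 0x41, 0xe9, 0x20ac, 0x3042

    # UTF-8 carries all of them in one byte stream and round-trips without loss.
    data = "Aé€あ".encode("utf-8")
    print(data.decode("utf-8") == "Aé€あ")   # True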
More information can be found at Unicode.