Character encoding is the process of assigning numbers to graphical characters, especially the written characters of human language, allowing them to be stored, transmitted, and transformed using digital computers.[1] The numerical values that make up a character encoding are known as “code points” and collectively comprise a “code space”, a “code page”, or a “character map”.
Early character codes associated with the optical or electrical telegraph could only represent a subset of the characters used in written languages, sometimes restricted to upper case letters, numerals and some punctuation only. The low cost of digital representation of data in modern computer systems allows more elaborate character codes (such as Unicode) which represent most of the characters used in many written languages. Character encoding using internationally accepted standards permits worldwide interchange of text in electronic form.

History

The history of character codes illustrates the evolving need to convey machine-mediated, character-based symbolic information over a distance, using once-novel electrical means.

Manual codes: Bacon’s cipher, Braille, International maritime signal flags, and the 4-digit encoding of Chinese characters for a Chinese telegraph code (Hans Schjellerup, 1869).
Electrical and electro-mechanical codes: The earliest well-known electrically transmitted character code, Morse code, introduced in the 1840s, used a system of four “symbols” (short signal, long signal, short space, long space) to generate codes of variable length. Though some commercial use of Morse code was via machinery, it was often used as a manual code, generated by hand on a telegraph key and decipherable by ear, and it persists in amateur radio and aeronautical use.

Most codes are of fixed per-character length or variable-length sequences of fixed-length codes (e.g. Unicode).[2]

Common examples of character encoding systems include Morse code, the Baudot code, the American Standard Code for Information Interchange (ASCII) and Unicode. Unicode, a well defined and extensible encoding system, has supplanted most earlier character encodings, but the path of code development to the present is fairly well known.

  1. The Baudot code, a five-bit encoding, was created by Émile Baudot in 1870, patented in 1874, modified by Donald Murray in 1901, and standardized by CCITT as International Telegraph Alphabet No. 2 (ITA2) in 1930. The name “baudot” has been erroneously applied to ITA2 and its many variants. ITA2 suffered from many shortcomings and was often “improved” by many equipment manufacturers, sometimes creating compatibility issues.
  2. In 1959 the U.S. military defined its Fieldata code, a six- or seven-bit code introduced by the U.S. Army Signal Corps. While Fieldata addressed many of the then-modern issues (e.g. letter and digit codes arranged for machine collation), it fell short of its goals and was short-lived. In 1963 the first ASCII (American Standard Code for Information Interchange) code was released (X3.4-1963) by the ASCII committee (which contained at least one member of the Fieldata committee, W. F. Leubbert), and it addressed most of the shortcomings of Fieldata, using a simpler code. Many of the changes were subtle, such as collatable character sets within certain numeric ranges. ASCII63 was a success, widely adopted by industry, and with the follow-up issue of the 1967 ASCII code (which added lower-case letters and fixed some “control code” issues) ASCII67 was adopted fairly widely. ASCII67’s American-centric nature was somewhat addressed in the European ECMA-6 standard.[3]

The need to support more writing systems for different languages, including the CJK family of East Asian scripts, required support for a far larger number of characters and demanded a systematic approach to character encoding rather than the previous ad hoc approaches.[citation needed]

In trying to develop universally interchangeable character encodings, researchers in the 1980s faced the dilemma that, on the one hand, it seemed necessary to add more bits to accommodate additional characters, but on the other hand, for the users of the relatively small character set of the Latin alphabet (who still constituted the majority of computer users), those additional bits were a colossal waste of then-scarce and expensive computing resources (as they would always be zeroed out for such users). In 1985, the average personal computer user’s hard disk drive could store only about 10 megabytes, and it cost approximately US$250 on the wholesale market (and much higher if purchased separately at retail),[5] so it was very important at the time to make every bit count.

The compromise solution that was eventually found and developed into Unicode was to break the assumption (dating back to telegraph codes) that each character should always directly correspond to a particular sequence of bits. Instead, characters would first be mapped to a universal intermediate representation in the form of abstract numbers called code points. Code points would then be represented in a variety of ways and with various default numbers of bits per character (code units) depending on context. To encode code points beyond what a single code unit can hold, such as values above 255 for 8-bit units, the solution was to implement variable-width encodings, in which an escape sequence signals that subsequent bits should be parsed as a higher code point.
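UTF-8 works this way in practice: the high bits of the first code unit announce how many continuation units follow. The sketch below (Python, illustrative only, and deliberately limited to one- and two-byte sequences, i.e. code points up to U+07FF) shows the principle rather than the full UTF-8 algorithm.

```python
def utf8_encode_small(cp: int) -> bytes:
    """Illustrative UTF-8 encoder limited to code points up to U+07FF."""
    if cp < 0x80:
        # One code unit: the 7-bit code point is the byte itself.
        return bytes([cp])
    if cp < 0x800:
        # Two code units: a lead byte of the form 110xxxxx announces that
        # one continuation byte (10xxxxxx) carries the remaining six bits.
        return bytes([0xC0 | (cp >> 6), 0x80 | (cp & 0x3F)])
    raise ValueError("code point above U+07FF needs three or four bytes")

assert utf8_encode_small(0x41) == b"\x41"              # 'A', one byte
assert utf8_encode_small(0xE9) == "é".encode("utf-8")  # 'é' -> C3 A9, two bytes
```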

Terminology

Terminology related to character encoding

  • A character is a minimal unit of text that has semantic value.
  • A character set is a collection of characters that might be used by multiple languages. Example: The Latin character set is used by English and most European languages, while the Greek character set is used only by the Greek language.
  • A coded character set is a character set in which each character corresponds to a unique number.
  • A code point of a coded character set is any allowed value in the character set or code space.
  • A code space is a range of integers whose values are code points.
  • A code unit is a bit sequence used to encode each character of a repertoire within a given encoding form. This is referred to as a code value in some documents.[6]

Character repertoire (the abstract set of characters)
The character repertoire is an abstract set of more than one million characters found in a wide variety of scripts including Latin, Cyrillic, Chinese, Korean, Japanese, Hebrew, and Aramaic. Other symbols such as musical notation are also included in the character repertoire. Both the Unicode and GB18030 standards have a character repertoire. As new characters are added to one standard, the other standard also adds those characters, to maintain parity.
The code unit size is equivalent to the bit measurement for the particular encoding:

  • A code unit in US-ASCII consists of 7 bits;
  • A code unit in UTF-8, EBCDIC and GB18030 consists of 8 bits;
  • A code unit in UTF-16 consists of 16 bits;
  • A code unit in UTF-32 consists of 32 bits.

Example of a code unit: Consider a string of the letters “abc” followed by U+10400 𐐀 DESERET CAPITAL LETTER LONG I (represented with 1 char32_t, 2 char16_t or 4 char8_t). That string contains:

  • four characters;
  • four code points;
  • either: four code units in UTF-32 (00000061, 00000062, 00000063, 00010400), five code units in UTF-16 (0061, 0062, 0063, d801, dc00), or seven code units in UTF-8 (61, 62, 63, f0, 90, 90, 80).
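These counts can be reproduced with a minimal Python sketch (assuming a standard interpreter with its built-in codecs), by encoding the string and dividing the byte length by the size of one code unit:

```python
s = "abc\U00010400"   # "abc" followed by U+10400 DESERET CAPITAL LETTER LONG I

print(len(s))                            # 4 code points (here also 4 characters)
print(len(s.encode("utf-32-be")) // 4)   # 4 code units in UTF-32
print(len(s.encode("utf-16-be")) // 2)   # 5 code units in UTF-16
print(len(s.encode("utf-8")))            # 7 code units in UTF-8
print(s.encode("utf-8").hex(" "))        # 61 62 63 f0 90 90 80
```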

The convention to refer to a character in Unicode is to start with ‘U+’ followed by the code point value in hexadecimal. The range of valid code points for the Unicode standard is U+0000 to U+10FFFF, inclusive, divided into 17 planes, identified by the numbers 0 to 16. Characters in the range U+0000 to U+FFFF are in plane 0, called the Basic Multilingual Plane (BMP). This plane contains the most commonly used characters. Characters in the range U+10000 to U+10FFFF in the other planes are called supplementary characters.
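Because each plane spans 0x10000 code points, the plane of a code point can be found by dividing its value by 0x10000 (equivalently, shifting it right by 16 bits). A small illustrative check in Python:

```python
def plane(cp: int) -> int:
    # Each plane holds 0x10000 code points, so the plane index is cp // 0x10000.
    return cp >> 16

assert plane(0x0041) == 0    # 'A' lies in the Basic Multilingual Plane
assert plane(0x10400) == 1   # U+10400 is a supplementary character (plane 1)
```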
The following table shows examples of code point values:

Character                   Unicode code point   Glyph
Latin A                     U+0041               A
Latin sharp S               U+00DF               ß
Han for East                U+6771               東
Ampersand                   U+0026               &
Inverted exclamation mark   U+00A1               ¡
Section sign                U+00A7               §

A code point is represented by a sequence of code units. The mapping is defined by the encoding. Thus, the number of code units required to represent a code point depends on the encoding:

  • UTF-8: code points map to a sequence of one, two, three or four code units.
  • UTF-16: code units are twice as long as 8-bit code units. Therefore, any code point with a scalar value less than U+10000 is encoded with a single code unit. Code points with a value U+10000 or higher require two code units each. These pairs of code units have a unique term in UTF-16: “Unicode surrogate pairs”.
  • UTF-32: the 32-bit code unit is large enough that every code point is represented as a single code unit.
  • GB18030: multiple code units per code point are common, because of the small code units. Code points are mapped to one, two, or four code units.[7] Its full name is “Information Technology: Chinese Coded Character Set”, a variable-width multi-byte character set defined as a national standard of the People’s Republic of China. It is fully backward compatible with GB 2312-1980, largely backward compatible with GBK, and supports all code points of Unicode (GB 13000). GB 18030-2005 encodes 70,244 Chinese characters.
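A short Python sketch (using only the interpreter’s built-in codecs) that shows how many code units U+6771 and U+10400 occupy in each of these encodings, and how the UTF-16 surrogate pair for a supplementary code point is derived:

```python
def code_units(ch: str, encoding: str, unit_bytes: int) -> int:
    # Number of code units = encoded byte length / size of one code unit.
    return len(ch.encode(encoding)) // unit_bytes

for ch in ("\u6771", "\U00010400"):   # 東 (U+6771) and U+10400
    print(f"U+{ord(ch):04X}:",
          code_units(ch, "utf-8", 1), "UTF-8,",
          code_units(ch, "utf-16-be", 2), "UTF-16,",
          code_units(ch, "utf-32-be", 4), "UTF-32,",
          code_units(ch, "gb18030", 1), "GB18030 code units")

# Deriving the UTF-16 surrogate pair for U+10400:
offset = 0x10400 - 0x10000            # 20-bit offset above the BMP
high = 0xD800 + (offset >> 10)        # high (lead) surrogate
low = 0xDC00 + (offset & 0x3FF)       # low (trail) surrogate
print(hex(high), hex(low))            # 0xd801 0xdc00
```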