UnicodeEncoding

Starting with GNU Emacs 23, there is relatively complete Unicode support. See the Emacs manual for more information. Some starter tips about entering (inserting) Unicode characters:

In both cases, the commands created have the same names as the characters they insert, except that ‘SPC’ characters in the character names are replaced by hyphens (‘-’), and the command names are lowercase, not uppercase like the character names.

Input Characters Using an Input Method

Assume you have a UTF-8 file, but your locale uses ISO-8859-1 (Latin-1). Typically, when you open the file it will be garbled, depending on its exact contents and your setup. Auto-detection of UTF-8 is effectively disabled by default in GNU Emacs 21.3 and below. You can place it just below your preferred coding system by specifying utf-8 with ‘M-x prefer-coding-system’, and then repeat the command to move it to the very front of the priority list (‘coding-category-list’).
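
On Emacs 23 and later you can do the same from Lisp. A minimal sketch, assuming you simply want utf-8 tried first during auto-detection:

    ;; Put utf-8 at the head of the coding-system detection priority list.
    (set-coding-system-priority 'utf-8)
    ;; Evaluate (coding-system-priority-list) to inspect the result.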

Otherwise, you need to tell Emacs that the file is in UTF-8 before reading it by prefixing your ‘C-x C-f’ or ‘C-x C-v’ command with ‘C-x C-m c utf-8 RET’. It should display correctly.
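
If the file is already open and garbled, ‘C-x C-m r utf-8 RET’ (‘revert-buffer-with-coding-system’) re-reads it with the coding system you name. From Lisp, a sketch using the standard ‘coding-system-for-read’ variable (the file name here is hypothetical):

    ;; Force UTF-8 decoding for a single visit.
    (let ((coding-system-for-read 'utf-8))
      (find-file "~/example.txt"))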

To edit the file, just type. If you need to insert characters that your keyboard does not have, you need an InputMethod. Choose one using ‘M-x set-input-method’. Once you have an input method active, toggle it on and off using `C-\’.
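
If you settle on one input method, you can make it the default, so that ‘C-\’ enables it directly. A sketch, assuming the latin-1-postfix method:

    ;; `C-\' toggles this method without prompting first.
    (setq default-input-method "latin-1-postfix")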

To describe input methods, use ‘C-h I’.

I also bind ‘set-input-method’ as follows:

    (global-set-key (kbd "C-x C-m i") 'set-input-method)

‘C-\’ can also set the input method: the first time you use it, it prompts for one. --XueFuqiao

If you always prefer UTF-8 to ISO-8859-1, you can use this:

    (prefer-coding-system 'utf-8)

Alternatively, you might just want to use the UTF-8 language environment (available since Emacs 21.3).
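
From your init file, a one-line sketch equivalent to ‘M-x set-language-environment RET UTF-8 RET’:

    ;; Select the UTF-8 language environment at startup.
    (set-language-environment "UTF-8")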

Input Characters by Code Point

This function lets you enter Unicode characters by code point, which is handy if you do not know which input method to use but can find the character’s code point. Bind it to a key of your choice:

    (defun unicode-insert (char)
      "Read a Unicode code point and insert said character.
    Input uses `read-quoted-char-radix'.  If you want to copy
    the values from the Unicode charts, you should set it to 16."
      (interactive (list (read-quoted-char "Char: ")))
      ;; `insert-char' replaces the old `ucs-insert', which is
      ;; obsolete since Emacs 24.3.
      (insert-char char))
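
For instance, following the ‘C-x C-m’ prefix used above (the key chosen here is just an example):

    (global-set-key (kbd "C-x C-m u") 'unicode-insert)

On Emacs 23 and later, the built-in ‘C-x 8 RET’ offers similar functionality.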

You can find charts of the characters here: http://www.unicode.org/charts/

If you use those charts, you might want the following setting, so that ‘C-q’ (‘quoted-insert’) reads code points in hexadecimal:

    (setq read-quoted-char-radix 16)

`auto-coding-regexp-alist'

Use ‘M-x customize-option RET auto-coding-regexp-alist RET’. You can then insert items such as:

  <meta\b[^>]*\bcontent="text/html; charset=UTF-8"[^>]*>
  utf-8
  
  <\?xml\b[^>]*\bencoding="utf-8"[^>]*\?>
  utf-8
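
The same pairs can go straight into your init file instead. A sketch using the regexps above (note the doubled backslashes in Lisp strings):

    ;; Recognize UTF-8 declared in HTML meta tags and XML declarations.
    (add-to-list 'auto-coding-regexp-alist
                 '("<meta\\b[^>]*\\bcontent=\"text/html; charset=UTF-8\"[^>]*>" . utf-8))
    (add-to-list 'auto-coding-regexp-alist
                 '("<\\?xml\\b[^>]*\\bencoding=\"utf-8\"[^>]*\\?>" . utf-8))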

There may be a better way – these patterns are a little too rigid for reliable identification of all Unicode documents. At least you can edit them to cope with the set of documents you normally deal with.

You can use ‘file-coding-system-alist’ to specify that, say, .xml and .utf8 files are utf-8-encoded.
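
For example, a minimal sketch of such entries:

    ;; Decode (and encode) .xml and .utf8 files as UTF-8, by file name.
    (add-to-list 'file-coding-system-alist '("\\.xml\\'" . utf-8))
    (add-to-list 'file-coding-system-alist '("\\.utf8\\'" . utf-8))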

Saving CJK as Unicode

Some CJK (Chinese, Japanese, Korean) characters have been “unified” in the Unicode standard. What does that mean? The Unicode FAQ by Markus Kuhn [1] says:

ISO 10646 defines formally a 31-bit character set. The most commonly used characters, including all those found in older encoding standards, have been placed in one of the first 65534 positions (0x0000 to 0xFFFD). This 16-bit subset of UCS is called the Basic Multilingual Plane (BMP) or Plane 0. Initially, 29,000 CJK characters were placed in the CJK Unified Ideographs block starting at U+4E00. Since then three more blocks have been allocated: CJK Unified Ideographs Extension A, starting at U+3400; CJK Compatibility Ideographs, starting at U+F900; and CJK Unified Ideographs Extension B, in Plane 1, starting at U+020000. Further additions are planned, and there is an entire plane of more than 65,000 code points reserved for CJK, which is expected to reach somewhere between 100,000 and 150,000 code points.

Some compilations of Chinese characters alone already run to 50,000 characters. Putting them all in the BMP would not leave enough space for all the other characters there, so Unicode unified sets of character shapes that have the same meaning but vary only in inessential detail. The Han Unification in Unicode article by Otfried Cheong [2] uses the following two characters to illustrate what kinds of characters have been unified, even though this particular pair has not been unified:

Another example of a character element that appears in two variants is the “black” radical. It can be written either in its traditional form as U+9ed1 (with two little dots), or in its simplified form as U+9ed2 (where the two dots have been replaced by a single stroke):
黑 黒

footnote: In fact, the radical on the left is the simplified form. Simplified Chinese characters are not always simpler than other variants; simplification is more a matter of fixing a unified standard form. In any case, there are indeed a lot of variants, which cause communication difficulty to some extent in the Chinese-speaking world. – A reader from Beijing

For some users, this unification was not acceptable, however. Otfried Cheong says:

I believe the JIS standard actually prescribes the shapes of the glyphs for each character, and this is perhaps exactly the grief that the Japanese have with Unicode. If you are used to thinking of a codepoint as being associated with a well-defined shape, the loose view that Unicode takes seems rather careless.
Note that the situation is similar for mathematical characters. E.g. the black-letter I ℑ (U+2111) and the script I ℐ (U+2110) mean very different things to a physicist, and Unicode actually defines many mathematical letter shapes as separate characters.

Mokurai 黙雷 Cherlin says:

Historically, however, shapes of Unified characters that many Japanese now consider distinct were taught as being equivalent even in Japan. There are only a few characters about which there is any fuss, and all of them fall into this category, where there was a difference between Chinese and Japanese calligraphy, which continues to be respected in fonts specifically meant for one language, but not in generic Unicode fonts.
Another point of contention is the Japanese modification of ASCII, JIS X 0201, to have the Yen sign (now U+00A5) in the location of the backslash, or REVERSE SOLIDUS, ‘\’ (ASCII 0x5C, U+005C), and to substitute overline ‘¯’, now U+00AF, for tilde ‘~’ at 0x7E. This created inherent ambiguity in programming for Windows, where ‘\’ is the separator in file paths, except in systems where ‘\’ has been moved to 80h, thus breaking compatibility with the ISO 8859 encodings.
Some Japanese programmers claim that Unicode is broken for not permitting the practice of overloading 0x5C to continue, and for insisting that Japanese programmers should either label their files with the proper legacy encoding or translate them properly to Unicode. This may require manual intervention to resolve the ambiguity throughout a huge code base. Note that some Microsoft Windows fonts for Japan and Korea that claim to be Unicode-compliant have the Yen sign or the Korean Won sign at this code point.

Pasting

It may be worth noting that Windows, Mac, and Linux all have character-selection tools designed to make cutting and pasting special characters a little easier (e.g., Linux utilities include gucharmap and kcharselect).

If you have trouble pasting from other applications: Emacs by default requests a selection of type COMPOUND_TEXT. It seems that xterm (and other applications) sometimes ignore this and reply with UTF-8-encoded text.

Here’s how to tell Emacs to request UTF-8 first:

   (setq x-select-request-type '(UTF8_STRING COMPOUND_TEXT TEXT STRING))

Combining characters

It seems Emacs does not support some combining characters. For instance, if you paste or read from a file the word страни́ца (as seen in Wiktionary), you will see страни´ца. – DanielClemente, Apr 2012, 24.1.50.4 under X.org on Debian

Combining characters work for me with the FreeMono font. Interestingly, no other (monospace) fonts seem to work.

CategoryInternationalization