In particular, you should note that functions such as print and cat silently re-encode each string so that it can be shown properly, e.g., in R's console (see also stri_enc_isutf8). Most of the computations in stringi are performed internally
using either the UTF-8 or UTF-16 encoding (which one depends on the type of service you request: some ICU services are designed to work only with UTF-16). Thanks to this choice, stringi gives you the same results on every platform, which is -- unfortunately -- not the case for base R's functions (it is known, for example, that performing a regular expression search on some texts under Linux may give results different from those obtained under Windows). We really had portability in mind while developing our package! We have observed that R correctly handles UTF-8 strings regardless of your platform's native encoding (see below). Therefore, we decided that most functions in stringi output their results in UTF-8 -- this speeds up computations on cascading calls to our functions: the strings do not have to be re-encoded each time.

Note that some Unicode characters may have an ambiguous representation. For example, ``a with ogonek'' (one code point) and ``a''+``combining ogonek'' (two code points) are semantically the same. stringi provides functions to normalize such character sequences; see stri_trans_nfc for a discussion. However, denormalized strings appear only very rarely in typical string processing activities. Additionally, do note that stringi silently removes byte order marks
(BOMs -- they may incidentally appear in strings read from text files) from UTF-8-encoded strings; see stri_enc_toutf8. To read the declared encodings of the strings in a character vector, use stri_enc_mark.
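The canonical-equivalence behavior discussed above can be verified interactively; here is a minimal sketch, assuming the stringi package is installed:

```r
library("stringi")

x <- "\u0105"    # LATIN SMALL LETTER A WITH OGONEK (already composed, NFC)
y <- "a\u0328"   # "a" followed by COMBINING OGONEK (decomposed form)

x == y                   # FALSE: distinct code point sequences
stri_trans_nfc(y) == x   # TRUE: equal after NFC normalization
```

Both strings render identically, yet they compare as unequal until one of them is normalized.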
There is an implicit assumption that your platform's default (native) encoding is always a superset of ASCII -- stringi verifies this when your native encoding is detected automatically upon ICU's initialization, and each time you change it manually by calling stri_enc_set.

Character strings in R can internally be declared to be in:

* UTF-8;
* latin1, i.e., ISO-8859-1 (Western European);
* bytes -- for strings that should be manipulated as sequences of bytes;
* native (a.k.a. unknown in Encoding; quite a misleading name, as it merely denotes the lack of an explicit encoding mark) -- for strings that are assumed to be in your platform's native (default) encoding. This can represent UTF-8 if you are an OS X user, or some 8-bit Windows code page, for example. The native encoding used by R may be determined by examining the LC_CTYPE locale category; see Sys.getlocale.
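The encoding marks listed above can be inspected both from base R and from stringi; a small sketch (assuming stringi is installed; the reported marks may vary slightly with your platform and stringi version):

```r
library("stringi")

x <- c("abc", "\u0105bc", "pe\xf1a")
Encoding(x[3]) <- "latin1"   # declare the third string to be ISO-8859-1

Encoding(x)        # base R's view of the declared encodings
stri_enc_mark(x)   # stringi's view, e.g., "ASCII", "UTF-8", "latin1"
```

Note that base R's Encoding reports "unknown" for pure-ASCII strings, whereas stri_enc_mark distinguishes ASCII explicitly.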
The currently used default (native) encoding may be read with stri_enc_get. Unless you know what you are doing, the default encoding should be changed only if the automatic encoding detection process fails when stringi is loaded.

Functions that allow "bytes" encoding marks are very rare in stringi and have been carefully selected. These are: stri_enc_toutf8 (with the argument is_unknown_8bit=TRUE), stri_enc_toascii, and stri_encode.

Finally, note that R lets strings in ASCII, UTF-8, and your platform's
native encoding coexist peacefully. A character vector printed with print, cat, etc. is silently re-encoded so that each string can be shown properly, e.g., on the console.

See stri_enc_list for the list of encodings supported by ICU.
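A quick way to peek at the available converters (a sketch; the exact return structure of stri_enc_list may differ between stringi versions):

```r
library("stringi")

# stri_enc_list() enumerates the converters known to ICU,
# together with their aliases
str(head(stri_enc_list()))
```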
Note that converter names are case-insensitive, and ICU tries to normalize encoding specifiers: leading zeros are ignored in sequences of digits (if further digits follow), and all non-alphanumeric characters are ignored. Thus, the names "UTF-8", "utf_8", "u*Tf08", and "Utf 8" are equivalent. The stri_encode function allows you to convert strings between any two given encodings (in some cases you will obtain bytes-marked strings, or even lists of raw vectors, e.g., for UTF-16).
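For instance (a sketch assuming stringi is installed; note how two differently spelled converter names refer to the same encoding):

```r
library("stringi")

# "utf_8" and "u*Tf08" both normalize to UTF-8, so this is an identity conversion
stri_encode("abc", from="utf_8", to="u*Tf08")

# converting to UTF-16LE yields a list of raw vectors,
# as UTF-16 code units cannot be stored in an R character vector
stri_encode("abc", from="UTF-8", to="UTF-16LE", to_raw=TRUE)
```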
There are also some more specialized functions, like stri_enc_toutf32 (which converts a character vector to a list of integers, where one code point corresponds to exactly one numeric value) or stri_enc_toascii (which substitutes all non-ASCII bytes with the SUBSTITUTE CHARACTER, playing a role similar to that of R's NA value). Moreover, there are routines for automated encoding detection;
see, e.g., stri_enc_detect.

"Unicode provides a single character set that covers the major languages of the world, and a small number of machine-friendly encoding forms and schemes to fit the needs of existing applications and protocols. It is designed for best interoperability with both ASCII and ISO-8859-1 (the most widely used character sets) to make it easier for Unicode to be used in almost all applications and protocols" (see the ICU User Guide).
The Unicode Standard determines how to map any possible character to a numeric value -- a so-called code point. Such code points, however, have to be stored somehow in a computer's memory. The Unicode Standard encodes characters in the range U+0000..U+10FFFF, which amounts to a 21-bit code space. Depending on the encoding form (UTF-8, UTF-16, or UTF-32), each character will be represented as a sequence of one to four 8-bit bytes, one or two 16-bit code units, or a single 32-bit integer, respectively (compare the ICU FAQ).
In most cases, Unicode is a superset of the characters supported by any given code page.
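As an aside, the automated encoding detection mentioned earlier can be sketched as follows (detection is heuristic, so treat the output as guesses ranked by confidence rather than definitive answers):

```r
library("stringi")

# re-encode a UTF-8 text to ISO-8859-2 and ask stringi to guess its encoding
x <- stri_encode("za\u017c\u00f3\u0142\u0107 g\u0119\u015bl\u0105 ja\u017a\u0144",
    from="UTF-8", to="ISO-8859-2")
stri_enc_detect(x)[[1]]  # a data frame of candidate encodings,
                         # languages, and confidence scores
```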
Conversion -- ICU User Guide, http://userguide.icu-project.org/conversion
Converters -- ICU User Guide, http://userguide.icu-project.org/conversion/converters (technical details)
UTF-8, UTF-16, UTF-32 & BOM -- ICU FAQ, http://www.unicode.org/faq/utf_bom.html
Other encoding_conversion: stri_enc_fromutf32, stri_enc_toascii, stri_enc_tonative, stri_enc_toutf32, stri_enc_toutf8, stri_encode

Other encoding_detection: stri_enc_detect2, stri_enc_detect, stri_enc_isascii, stri_enc_isutf16be, stri_enc_isutf8

Other encoding_management: stri_enc_info, stri_enc_list, stri_enc_mark, stri_enc_set

Other stringi_general_topics: stringi-arguments, stringi-locale, stringi-package, stringi-search-boundaries, stringi-search-charclass, stringi-search-coll, stringi-search-fixed, stringi-search-regex, stringi-search