To process strings it is often very useful to tokenize them into graphemes (i.e. functional units of the orthography), and possibly to replace those graphemes by other symbols to harmonize the representation of different orthographies ('transcription/transliteration'). As a quick and easy way to specify, save, and document the decisions taken for the tokenization, we propose using an orthography profile.
This function is the main function to produce, test and apply orthography profiles.
tokenize(strings,
profile = NULL, transliterate = NULL,
method = "global", ordering = c("size", "context", "reverse"),
sep = " ", sep.replace = NULL, missing = "\u2047", normalize = "NFC",
regex = FALSE, silent = FALSE,
file.out = NULL)
Without specification of file.out, the function tokenize will return a list of four:
a dataframe with the original and the tokenized/transliterated strings
a dataframe with the graphemes and their frequencies. The dataframe is ordered according to the order that resulted from the specifications in ordering.
a dataframe with all original strings that contain unmatched parts.
a dataframe with the graphemes that are missing from the original orthography profile, as indicated in the errors. Note that the report of missing characters does not currently lead to correct results for transliterated strings.
When file.out is specified, these four tables will be written to four different tab-separated files (with header lines): file_strings.tsv for the strings, file_profile.tsv for the orthography profile, file_errors.tsv for the strings that have unidentifiable parts, and file_missing.tsv for the graphemes that seem to be missing. When there is nothing missing, no file for the missing graphemes is produced.
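For illustration, a minimal sketch of accessing the returned tables (the component names strings, profile, errors and missing are assumed here to mirror the file suffixes above; only strings is attested in the Examples):

# hypothetical sketch: inspect the four tables returned by tokenize()
result <- tokenize(c("nana", "ach"), profile = c("a", "ch", "n"))
result$strings   # original and tokenized strings
result$profile   # graphemes with frequencies
result$errors    # strings with unmatched parts
result$missing   # graphemes missing from the profile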
Vector of strings to be tokenized. It is also possible to pass a filename, which will then simply be read as scan(strings, sep = "\n", what = "character").
Orthography profile specifying the graphemes for the tokenization, and possibly any replacements of the available graphemes. Can be a reference to a file or an R object. If NULL, then the orthography profile will be created on the fly using the defaults of write.profile.
Default NULL, meaning no transliteration is to be performed. Alternatively, specify the name of the column in the orthography profile that should be used for replacement.
Method to be used for parsing the strings into graphemes. Currently two options are implemented: global and linear. See Details for further explanation.
Method for ordering. Currently four different methods are implemented, which can be combined (see Details below): size, context, reverse and frequency. Use NULL to prevent ordering and use the top-to-bottom order as specified in the orthography profile.
Separator to be inserted between graphemes. Defaults to space. This function assumes that the separator specified here does not occur in the data. If it does, unexpected things might happen. Consider removing the chosen separator from your strings first, e.g. by using gsub, or use the option sep.replace.
Sometimes the chosen separator (see above) occurs in the strings to be parsed. This is technically not a problem, but the result might show unexpected sequences. When sep.replace is specified, this replacement marker is inserted in the output string at those places where the sep character occurs in the input. Typical usage in linguistics would be sep = " ", sep.replace = "#", adding spaces between graphemes and replacing spaces in the input string by hashes in the output string.
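As an illustrative sketch of this typical usage (exact output depends on the profile, and warnings may be issued for characters not covered by it):

# sketch: spaces separate graphemes, original spaces become "#"
tokenize("ach nana", profile = c("a", "ch", "n"),
         sep = " ", sep.replace = "#")$strings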
Character to be inserted at transliteration when no transliteration is specified. Defaults to DOUBLE QUESTION MARK at U+2047. Change this when this character appears in the input string.
Which normalization to use before tokenization, defaults to "NFC". The other option is "NFD". Any other input will result in no normalization being performed.
Logical: when regex = FALSE, the matching of graphemes is done exactly, i.e. without using regular expressions. When regex = TRUE, ICU-style regular expressions (see stringi-search-regex) are used for all content in the profile (including the Grapheme column!), so any reserved characters have to be escaped in the orthography profile. Specifically, add a backslash "\" before any occurrence of the characters [](){}|+*.-!?^$\ in your profile (except of course when these characters are used in their regular expression meaning). Note that this parameter also influences whether contexts should be considered in the tokenization (internally, contextual searching uses regular expressions). By default, when regex = FALSE, context is ignored. If regex = TRUE, then the function checks whether there are columns called Left (for the left context) and Right (for the right context), and optionally a column called Class (for the specification of grapheme classes) in the orthography profile. These are hard-coded column names, so please adapt your orthography profile accordingly. The columns Left and Right allow for regular expressions to specify context.
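A hedged sketch of a contextual profile using these hard-coded column names (illustrative only; the exact result depends on the ordering and on the rest of the profile):

# sketch: "n" is replaced by "N" only when followed by "g"
context.profile <- cbind(
    Grapheme = c("a", "n", "n", "g"),
    Right    = c("",  "g", "",  ""),
    Trans    = c("a", "N", "n", "g"))
tokenize("nag", context.profile, transliterate = "Trans", regex = TRUE)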
Logical: by default, missing characters in the strings are reported with a warning. Use silent = TRUE to suppress these warnings.
Filename for results to be written. No suffix should be specified, as various different files with different suffixes are produced (see Details above). When file.out is specified, the data is written to disk AND the R dataframe is returned invisibly.
Michael Cysouw <cysouw@mac.com>
Given a set of graphemes, there are at least two different methods to tokenize strings. The first is called global here: this approach takes the first grapheme, matches this grapheme globally at all places in the string, and then turns to the next grapheme. The other approach is called linear here: this approach walks through the string from left to right. At the first character it checks all graphemes for a match, then jumps to the end of the match and starts again. In some special cases these two methods can lead to different results (see Examples).
The ordering of the lines in the orthography profile is of crucial importance, and different orderings will lead to radically different results. To simply use the top-to-bottom ordering as specified in the profile, use ordering = NULL. Currently, there are four different ordering strategies implemented: size, context, reverse and frequency. By specifying more than one in a vector, these orderings are used to break ties, e.g. the default specification c("size", "context", "reverse") will first order by size, and for those with the same size, it will order by whether any context is specified (with context coming first). For lines that are still tied (i.e. they have the same size and all either have or have no context) the order will be reversed in comparison to the order as attested in the profile. Reversing the order can be useful, because hand-written profiles tend to put general rules before specific rules, which mostly should be applied in reverse order.
size: order the lines in the profile by the size of the grapheme, largest first. Size is measured by the number of Unicode characters after normalization as specified in the option normalize. For example, é has a size of 1 with normalize = "NFC", but a size of 2 with normalize = "NFD".
context: order the lines by whether they have any context specified, lines with context coming first. Note that this only works when the option regex = TRUE is also chosen.
reverse: order the lines from bottom to top.
frequency: order the lines by the frequency with which they match in the specified strings before tokenization, least frequent coming first. This frequency of course depends crucially on the available strings, so it will lead to different orderings when applied to different data. Also note that this frequency is (necessarily) measured before graphemes are identified, so these ordering frequencies are not the same as the final frequencies shown in the output. The frequency also depends strongly on whether context is used for the matching through regex = TRUE.
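For example, frequency ordering can be compared directly with the other strategies (a sketch; the resulting order depends on the strings at hand):

# sketch: ordering by match frequency vs. by size with reversed tie-breaking
tokenize("aaa", c("a", "aa"), ordering = "frequency")$strings
tokenize("aaa", c("a", "aa"), ordering = c("size", "reverse"))$strings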
Moran & Cysouw (forthcoming)
See also write.profile for preparing a skeleton orthography profile.
# simple example with interesting warning and error reporting
# the string might look like "AABB" but it isn't...
(string <- "\u0041\u0410\u0042\u0412")
tokenize(string, c("A","B"))
# make an ad-hoc orthography profile
profile <- cbind(
Grapheme = c("a","ä","n","ng","ch","sch"),
Trans = c("a","e","n","N","x","sh"))
# tokenization
tokenize(c("nana", "änngschä", "ach"), profile)
# with replacements and a warning
tokenize(c("Naná", "änngschä", "ach"), profile, transliterate = "Trans")
# different results of ordering
tokenize("aaa", c("a","aa"), order = NULL)
tokenize("aaa", c("a","aa"), order = "size")
# regex matching does not catch overlap, which can lead to wrong results
# the second example results in a warning instead of just parsing "ab bb"
# this should occur only rarely in natural language
tokenize("abbb", profile = c("ab","bb"), ordering = NULL)
tokenize("abbb", profile = c("ab","bb"), ordering = NULL, regex = TRUE)
# different parsing methods can lead to different results
# note that in natural language this is VERY unlikely to happen
tokenize("abc", c("bc","ab","a","c"), order = NULL, method = "global")$strings
tokenize("abc", c("bc","ab","a","c"), order = NULL, method = "linear")$strings