How to define special "untokenizable" words for nltk.word_tokenize?

Problem description:

I'm using nltk.word_tokenize to tokenize sentences that mention programming languages, frameworks, etc., and those names get tokenized incorrectly.

For example:

>>> from nltk import tokenize
>>> tokenize.word_tokenize("I work with C#.")
['I', 'work', 'with', 'C', '#', '.']

Is there a way to give the tokenizer a list of "exceptions" like this? I have already compiled a list of all the things (languages, etc.) that I don't want split.

Answer:

The Multi Word Expression Tokenizer (nltk.tokenize.MWETokenizer) should be what you need.

You add the list of exceptions as tuples and pass it the already tokenized sentences:

>>> import nltk
>>> tokenizer = nltk.tokenize.MWETokenizer()
>>> tokenizer.add_mwe(('C', '#'))
>>> tokenizer.add_mwe(('F', '#'))
>>> tokenizer.tokenize(['I', 'work', 'with', 'C', '#', '.'])
['I', 'work', 'with', 'C_#', '.']
>>> tokenizer.tokenize(['I', 'work', 'with', 'F', '#', '.'])
['I', 'work', 'with', 'F_#', '.']