API Reference (auto-generated)

Morphological Analyzer

class pymorphy2.analyzer.MorphAnalyzer(path=None, result_type=<class 'pymorphy2.analyzer.Parse'>, units=None)[source]

Morphological analyzer for the Russian language.

For a given word it can find all possible inflectional paradigms and thus compute all possible tags and normal forms.

The analyzer uses morphological word features and a lexicon (a dictionary compiled from the XML available at OpenCorpora.org); a heuristic algorithm is used for unknown words.

Create a MorphAnalyzer object:

>>> import pymorphy2
>>> morph = pymorphy2.MorphAnalyzer()

MorphAnalyzer uses dictionaries from the pymorphy2-dicts package (which can be installed via pip install pymorphy2-dicts).

Alternatively (e.g. if you have your own precompiled dictionaries), either set the PYMORPHY2_DICT_PATH environment variable to the path of the dictionaries, or pass the path argument to the pymorphy2.MorphAnalyzer constructor:

>>> morph = pymorphy2.MorphAnalyzer('/path/to/dictionaries') 

By default, methods of this class return parsing results as Parse namedtuples. This has performance implications under CPython, so if you need maximum speed, pass result_type=None to make the analyzer return plain unwrapped tuples:

>>> morph = pymorphy2.MorphAnalyzer(result_type=None)
DEFAULT_UNITS = [<class 'pymorphy2.units.by_lookup.DictionaryAnalyzer'>, <class 'pymorphy2.units.by_shape.NumberAnalyzer'>, <class 'pymorphy2.units.by_shape.PunctuationAnalyzer'>, <class 'pymorphy2.units.by_shape.LatinAnalyzer'>, <class 'pymorphy2.units.by_hyphen.HyphenSeparatedParticleAnalyzer'>, <class 'pymorphy2.units.by_hyphen.HyphenAdverbAnalyzer'>, <class 'pymorphy2.units.by_hyphen.HyphenatedWordsAnalyzer'>, <class 'pymorphy2.units.by_analogy.KnownPrefixAnalyzer'>, <class 'pymorphy2.units.by_analogy.UnknownPrefixAnalyzer'>, <class 'pymorphy2.units.by_analogy.KnownSuffixAnalyzer'>]
ENV_VARIABLE = u'PYMORPHY2_DICT_PATH'
TagClass[source]
classmethod choose_dictionary_path(path=None)[source]
get_lexeme(form)[source]

Return the lexeme this parse belongs to.

iter_known_word_parses(prefix=u'')[source]

Return an iterator over parses of dictionary words that start with a given prefix (the default empty prefix means “all words”).
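
A hedged sketch (the parses and their order depend on the installed dictionary, so no literal output is shown):

>>> from itertools import islice
>>> some = list(islice(morph.iter_known_word_parses('кош'), 3))   # first few parses of words starting with 'кош'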

normal_forms(word)[source]

Return a list of word normal forms.
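
A hedged example (the exact forms and their order depend on the installed dictionary):

>>> morph.normal_forms('стали')          # e.g. ['стать', 'сталь']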

parse(word)[source]

Analyze the word and return a list of pymorphy2.analyzer.Parse namedtuples:

Parse(word, tag, normal_form, para_id, idx, _estimate)

(or plain tuples if result_type=None was used in constructor).
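
A hedged usage sketch (the concrete parses, their order and estimates depend on the installed dictionary, so no literal output is shown):

>>> parses = morph.parse('стали')        # a list of Parse namedtuples
>>> parses[0].normal_form                # e.g. 'стать' or 'сталь', dictionary-dependent
>>> parses[0].tag.POS                    # part of speech of the first parse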

tag(word)[source]
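
A hedged sketch of the assumed behaviour (tag() is taken here to return the list of possible OpencorporaTag objects for the word, without building Parse wrappers; results depend on the dictionary):

>>> morph.tag('стали')                   # e.g. a list of OpencorporaTag instances
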
word_is_known(word, strict_ee=False)[source]

Check if a word is in the dictionary. Pass strict_ee=True if word is guaranteed to have correct е/ё letters.

Note

Dictionary words are not always correct words; the dictionary also contains incorrect forms which are commonly used. So for spellchecking tasks this method should be used with extra care.
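
A hedged example (actual results depend on the dictionary version):

>>> morph.word_is_known('кошка')         # expected: True for a common dictionary word
>>> morph.word_is_known('абырвалг')      # expected: False for a made-up word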

class pymorphy2.analyzer.Parse[source]

Parse result wrapper.

inflect(required_grammemes)[source]
is_known[source]

True if this form is a known dictionary form.

lexeme[source]

A lexeme this form belongs to.

normalized[source]

A Parse instance for self.normal_form.
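
A hedged usage sketch for these attributes (concrete forms depend on the dictionary; inflect() is assumed to return a new Parse matching the requested grammemes, or None if no such form exists):

>>> p = morph.parse('кошками')[0]
>>> p.is_known                           # True for dictionary words
>>> p.normalized.word                    # the normal form, wrapped in a Parse (e.g. 'кошка')
>>> p.lexeme                             # all forms of the lexeme this parse belongs to
>>> p.inflect({'sing', 'datv'})          # assumed: a Parse for the singular dative form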

Analyzer units

Dictionary analyzer unit

class pymorphy2.units.by_lookup.DictionaryAnalyzer(morph)[source]

Analyzer unit that analyzes a word using the dictionary.

get_lexeme(form)[source]

Return a lexeme (given a parsed word).

parse(word, word_lower, seen_parses)[source]

Parse a word using this dictionary.

tag(word, word_lower, seen_tags)[source]

Tag a word using this dictionary.

Analogy analyzer units

This module provides analyzer units that analyze unknown words by looking at how similar known words are analyzed.

class pymorphy2.units.by_analogy.KnownPrefixAnalyzer(morph)[source]

Parse the word by checking if it starts with a known prefix and parsing the remainder.

Example: псевдокошка -> (псевдо) + кошка.

class pymorphy2.units.by_analogy.KnownSuffixAnalyzer(morph)[source]

Parse the word by checking how words with similar suffixes are parsed.

Example: бутявкать -> ...вкать

class FakeDictionary(morph)[source]

This is just a DictionaryAnalyzer with a different __repr__.

class pymorphy2.units.by_analogy.UnknownPrefixAnalyzer(morph)[source]

Parse the word by parsing only the word suffix (with restrictions on prefix & suffix lengths).

Example: байткод -> (байт) + код

Analyzer units for unknown words with hyphens

class pymorphy2.units.by_hyphen.HyphenAdverbAnalyzer(morph)[source]

Detect adverbs that start with “по-”.

Example: по-западному

class pymorphy2.units.by_hyphen.HyphenSeparatedParticleAnalyzer(morph)[source]

Parse the word by analyzing it without a particle after a hyphen.

Example: смотри-ка -> смотри + “-ка”.

Note

This analyzer doesn’t remove particles from the result, so for normalization you may need to handle particles at the tokenization level.

class pymorphy2.units.by_hyphen.HyphenatedWordsAnalyzer(morph)[source]

Parse the word by parsing its hyphen-separated parts.

Examples:

  • интернет-магазин -> “интернет-” + магазин
  • человек-гора -> человек + гора

Analyzer units that analyze non-word tokens

class pymorphy2.units.by_shape.LatinAnalyzer(morph)[source]

This analyzer marks Latin words with the “LATN” tag. Example: “pdf” -> LATN

class pymorphy2.units.by_shape.NumberAnalyzer(morph)[source]

This analyzer marks numbers with the “NUMB” tag. Example: “12” -> NUMB

Note

Don’t confuse it with “NUMR”: “тридцать” -> NUMR

class pymorphy2.units.by_shape.PunctuationAnalyzer(morph)[source]

This analyzer tags punctuation marks as “PNCT”. Example: “,” -> PNCT

Tagset

Utils for working with grammatical tags.

class pymorphy2.tagset.OpencorporaTag(tag)[source]

Wrapper class for OpenCorpora.org tags.

Warning

In order to work properly, the class has to be globally initialized with the actual grammemes (using the _init_grammemes method).

Pymorphy2 initializes it when loading a dictionary; it may not be a good idea to use this class directly. If possible, use morph_analyzer.TagClass instead.

Example:

>>> from pymorphy2 import MorphAnalyzer
>>> morph = MorphAnalyzer()
>>> Tag = morph.TagClass # get an initialized Tag class
>>> tag = Tag('VERB,perf,tran plur,impr,excl')
>>> tag
OpencorporaTag('VERB,perf,tran plur,impr,excl')

Tag instances have attributes for accessing grammemes:

>>> print(tag.POS)
VERB
>>> print(tag.number)
plur
>>> print(tag.case)
None

Available attributes are: POS, animacy, aspect, case, gender, involvement, mood, number, person, tense, transitivity and voice.

You may check if a grammeme is in tag or if all grammemes from a given set are in tag:

>>> 'perf' in tag
True
>>> 'nomn' in tag
False
>>> 'Geox' in tag
False
>>> set(['VERB', 'perf']) in tag
True
>>> set(['VERB', 'perf', 'sing']) in tag
False

To guard against typos, an exception is raised for unknown grammemes:

>>> 'foobar' in tag
Traceback (most recent call last):
...
ValueError: Grammeme is unknown: foobar
>>> set(['NOUN', 'foo', 'bar']) in tag
Traceback (most recent call last):
...
ValueError: Grammemes are unknown: {'bar', 'foo'}

This also works for attributes:

>>> tag.POS == 'plur'
Traceback (most recent call last):
...
ValueError: 'plur' is not a valid grammeme for this attribute.
grammemes[source]

A frozenset with grammemes for this tag.

updated_grammemes(required)[source]

Return a new set of grammemes with required grammemes added and incompatible grammemes removed.
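
A hedged sketch (grammeme compatibility comes from the loaded dictionary, so the exact result may differ):

>>> tag = morph.parse('кошка')[0].tag              # e.g. NOUN,anim,femn sing,nomn
>>> tag.updated_grammemes(set(['plur']))           # expected: the same grammemes with 'sing' replaced by 'plur'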

Command-Line Interface

Usage:

pymorphy dict compile <XML_FILE> [--out <PATH>] [--force] [--verbose] [--min_ending_freq <NUM>] [--min_paradigm_popularity <NUM>] [--max_suffix_length <NUM>]
pymorphy dict download_xml <OUT_FILE> [--verbose]
pymorphy dict mem_usage [--dict <PATH>] [--verbose]
pymorphy dict make_test_suite <XML_FILE> <OUT_FILE> [--limit <NUM>] [--verbose]
pymorphy dict meta [--dict <PATH>]
pymorphy _parse <IN_FILE> <OUT_FILE> [--dict <PATH>] [--verbose]
pymorphy -h | --help
pymorphy --version

Options:

-v --verbose                        Be more verbose
-f --force                          Overwrite target folder
-o --out <PATH>                     Output folder name [default: dict]
--limit <NUM>                       Min. number of words per gram. tag [default: 100]
--min_ending_freq <NUM>             Prediction: min. number of suffix occurrences [default: 2]
--min_paradigm_popularity <NUM>     Prediction: min. number of lexemes for the paradigm [default: 3]
--max_suffix_length <NUM>           Prediction: max. length of prediction suffixes [default: 5]
--dict <PATH>                       Dictionary folder path
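
For example, a typical dictionary-building workflow might look like this (file and folder names are illustrative):

pymorphy dict download_xml dict.xml --verbose
pymorphy dict compile dict.xml --out my_dicts --verbose
pymorphy dict mem_usage --dict my_dicts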

Utilities for OpenCorpora Dictionaries

class pymorphy2.opencorpora_dict.wrapper.Dictionary(path)[source]

OpenCorpora dictionary wrapper class.

build_normal_form(para_id, idx, fixed_word)[source]

Build a normal form.

build_paradigm_info(para_id)[source]

Return a list of

(prefix, tag, suffix)

tuples representing the paradigm.

build_stem(paradigm, idx, fixed_word)[source]

Return word stem (given a word, paradigm and the word index).

build_tag_info(para_id, idx)[source]

Return tag as a string.

iter_known_words(prefix=u'')[source]

Return an iterator over (word, tag, normal_form, para_id, idx) tuples with dictionary words that start with a given prefix (the default empty prefix means “all words”).
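
A hedged sketch (the path and the yielded tuples are illustrative; real values depend on the compiled dictionary):

>>> from pymorphy2.opencorpora_dict.wrapper import Dictionary
>>> d = Dictionary('/path/to/dictionaries')
>>> word, tag, normal_form, para_id, idx = next(d.iter_known_words('кош'))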

word_is_known(word, strict_ee=False)[source]

Check if a word is in the dictionary. Pass strict_ee=True if word is guaranteed to have correct е/ё letters.

Note

Dictionary words are not always correct words; the dictionary also contains incorrect forms which are commonly used. So for spellchecking tasks this method should be used with extra care.

Various Utilities

pymorphy2.shapes.is_latin(token)[source]

Return True if all token letters are Latin and there is at least one Latin letter in the token:

>>> is_latin('foo')
True
>>> is_latin('123-FOO')
True
>>> is_latin('123')
False
>>> is_latin(':)')
False
>>> is_latin('')
False
pymorphy2.shapes.is_punctuation(token)[source]

Return True if a word contains only spaces and punctuation marks and there is at least one punctuation mark:

>>> is_punctuation(', ')
True
>>> is_punctuation('..!')
True
>>> is_punctuation('x')
False
>>> is_punctuation(' ')
False
>>> is_punctuation('')
False
pymorphy2.shapes.restore_word_case(word, example)[source]

Make the word have the same case as the example:

>>> restore_word_case('bye', 'Hello')
'Bye'
>>> restore_word_case('half-an-hour', 'Minute')
'Half-An-Hour'
>>> restore_word_case('usa', 'IEEE')
'USA'
>>> restore_word_case('pre-world', 'anti-World')
'pre-World'
>>> restore_word_case('123-do', 'anti-IEEE')
'123-DO'
>>> restore_word_case('123--do', 'anti--IEEE')
'123--DO'

If the alignment fails, the remainder is lower-cased:

>>> restore_word_case('foo-BAR-BAZ', 'Baz-Baz')
'Foo-Bar-baz'
>>> restore_word_case('foo', 'foo-bar')
'foo'
pymorphy2.utils.combinations_of_all_lengths(it)[source]

Return an iterable with all possible combinations of items from it:

>>> for comb in combinations_of_all_lengths('ABC'):
...     print("".join(comb))
A
B
C
AB
AC
BC
ABC
pymorphy2.utils.download_bz2(url, out_fp, chunk_size=262144, on_chunk=<lambda>)[source]

Download a bz2-encoded file from url and write it to out_fp file.

pymorphy2.utils.json_read(filename, **json_options)[source]

Read an object from the JSON file filename.

pymorphy2.utils.json_write(filename, obj, **json_options)[source]

Create the file filename with obj serialized to JSON.

pymorphy2.utils.largest_group(iterable, key)[source]

Find a group of largest elements (according to key).

>>> s = [-4, 3, 5, 7, 4, -7]
>>> largest_group(s, abs)
[7, -7]
pymorphy2.utils.longest_common_substring(data)[source]

Return a longest common substring of a list of strings:

>>> longest_common_substring(["apricot", "rice", "cricket"])
'ric'
>>> longest_common_substring(["apricot", "banana"])
'a'
>>> longest_common_substring(["foo", "bar", "baz"])
''

See http://stackoverflow.com/questions/2892931/.

pymorphy2.utils.word_splits(word, min_reminder=3, max_prefix_length=5)[source]

Return all splits of the word (taking into account min_reminder and max_prefix_length).
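
A hedged sketch of the assumed return format (a list of (prefix, remainder) pairs; treat the exact shape as an assumption rather than documented behaviour):

>>> word_splits('псевдокошка')           # assumed: [('п', 'севдокошка'), ('пс', 'евдокошка'), ...]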