Contains lexers for Pygments.


ArgparseLexer(*args, **kwds)

A Pygments lexer for argparse help text.


EuporiePygmentsStyle

Version of Pygments' "native" style which works better on light backgrounds.

RegexLexer(*args, **kwds)

Base for simple stateful regular expression-based lexers.


class euporie.core.pygments.ArgparseLexer(*args, **kwds)

Bases: RegexLexer

A Pygments lexer for argparse help text.

add_filter(filter_, **options)

Add a new stream filter to this lexer.
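As a brief usage sketch, a filter can be attached by name with keyword options. This uses Pygments' built-in "keywordcase" filter and Python lexer for self-containment; the same add_filter() interface applies to ArgparseLexer.

```python
from pygments.lexers import PythonLexer

# Attach Pygments' built-in "keywordcase" filter by name, passing its
# "case" option. (Illustrative only; nothing here is euporie-specific.)
lexer = PythonLexer()
lexer.add_filter("keywordcase", case="upper")

# Keywords now come out upper-cased in the filtered token stream.
tokens = list(lexer.get_tokens("pass\n"))
assert any(value == "PASS" for _, value in tokens)
```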

alias_filenames = []

A list of fnmatch patterns that match filenames which may or may not contain content for this lexer. This list is used by the guess_lexer_for_filename() function, to determine which lexers are then included in guessing the correct one. That means that e.g. every lexer for HTML and a template language should include *.html in this list.

aliases: ClassVar[list[str]] = ['argparse']

A list of short, unique identifiers that can be used to look up the lexer from a list, e.g., using get_lexer_by_name().

static analyse_text(text)

A static method which is called for lexer guessing.

It should analyse the text and return a float in the range from 0.0 to 1.0. If it returns 0.0, the lexer will not be selected as the most probable one, if it returns 1.0, it will be selected immediately. This is used by guess_lexer.

The LexerMeta metaclass automatically wraps this function so that it works like a static method (no self or cls parameter) and the return value is automatically converted to float. If the return value is an object that is boolean False, it’s the same as if the return value was 0.0.
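The contract can be illustrated with a toy heuristic. This is an assumption for illustration only, not ArgparseLexer's actual implementation:

```python
import re

def analyse_text(text: str) -> float:
    """Toy lexer-guessing heuristic for argparse-style help text.

    Returns a float in [0.0, 1.0]; guess_lexer() prefers the lexer
    with the highest score. (Illustrative only, not euporie's logic.)
    """
    if text.startswith("usage:"):
        return 1.0   # near-certain match, selected immediately
    if re.search(r"^options:", text, re.MULTILINE):
        return 0.3   # plausible, but other lexers may score higher
    return 0.0       # never chosen as the most probable lexer
```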

filenames: ClassVar[list[str]] = []

A list of fnmatch patterns that match filenames which contain content for this lexer. The patterns in this list should be unique among all lexers.

flags = 8

Flags for compiling the regular expressions. Defaults to re.MULTILINE (value 8).

get_tokens(text, unfiltered=False)

This method is the basic interface of a lexer. It is called by the highlight() function. It must process the text and return an iterable of (tokentype, value) pairs from text.

Normally, you don’t need to override this method. The default implementation processes the options recognized by all lexers (stripnl, stripall and so on), and then yields all tokens from get_tokens_unprocessed(), with the index dropped.

If unfiltered is set to True, the filtering mechanism is bypassed even if filters are defined.
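A brief usage sketch with Pygments' built-in Python lexer (the interface is identical for ArgparseLexer):

```python
from pygments.lexers import PythonLexer

lexer = PythonLexer()
# get_tokens() yields (tokentype, value) pairs whose values join back
# to the (newline-normalised) input text.
tokens = list(lexer.get_tokens("x = 1\n"))
assert "".join(value for _, value in tokens) == "x = 1\n"
```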

get_tokens_unprocessed(text, stack=('root',))

Split text into (tokentype, text) pairs.

stack is the initial stack (default: ['root'])

mimetypes = []

A list of MIME types for content that can be lexed with this lexer.

name = 'argparse'

Full name of the lexer, in human-readable form.

priority = 0

Priority used to break ties should multiple lexers match and no content be provided.

tokens: ClassVar[dict[str, list[tuple[str, pygments.token._TokenType] | tuple[str, pygments.token._TokenType, str]]]] = {'options': [('\\d+', Token.Literal.Number), (',', Token.Text), ('[^\\}]', Token.Literal.String), ('\\}', Token.Operator, '#pop')], 'root': [('(?<=usage: )[^\\s]+', Token.Name.Namespace), ('\\{', Token.Operator, 'options'), ('[\\[\\{\\|\\}\\]]', Token.Operator), ('((?<=\\s)|(?<=\\[))(--[a-zA-Z0-9-]+|-[a-zA-Z0-9-])', Token.Keyword), ('^(\\w+\\s)?\\w+:', Token.Generic.Heading), ('\\b(str|int|bool|UPath|loads)\\b', Token.Name.Builtin), ('\\b[A-Z]+_[A-Z]*\\b', Token.Name.Variable), ("'.*?'", Token.Literal.String), ('.', Token.Text)]}

At all times there is a stack of states. Initially, the stack contains a single state, ‘root’. The top of the stack is called “the current state”.

Dict of {'state': [(regex, tokentype, new_state), ...], ...}

new_state can be omitted to signify no state transition. If new_state is a string, it is pushed on the stack. This ensures the new current state is new_state. If new_state is a tuple of strings, all of those strings are pushed on the stack and the current state will be the last element of the list. new_state can also be combined('state1', 'state2', ...) to signify a new, anonymous state combined from the rules of two or more existing ones. Furthermore, it can be ‘#pop’ to signify going back one step in the state stack, or ‘#push’ to push the current state on the stack again. Note that if you push while in a combined state, the combined state itself is pushed, and not only the state in which the rule is defined.

The tuple can also be replaced with include('state'), in which case the rules from the state named by the string are included in the current one.
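The push/‘#pop’ mechanics can be sketched with a minimal stdlib-only state machine. This is a simplification of what pygments.lexer.RegexLexer does internally; the rule names and token strings below are illustrative, not euporie's actual grammar:

```python
import re

def tokenize(text, tokens, stack=("root",)):
    """Toy stateful regex tokenizer mimicking RegexLexer semantics."""
    stack = list(stack)
    pos = 0
    out = []
    while pos < len(text):
        # Try the rules of the current state (top of the stack) in order.
        for pattern, tokentype, *new_state in tokens[stack[-1]]:
            m = re.compile(pattern).match(text, pos)
            if m:
                out.append((tokentype, m.group()))
                if new_state:                       # optional third element
                    if new_state[0] == "#pop":
                        stack.pop()                 # go back one state
                    else:
                        stack.append(new_state[0])  # push the new state
                pos = m.end()
                break
        else:
            out.append(("Error", text[pos]))        # no rule matched
            pos += 1
    return out

# Rules in the same shape as the tokens dict above (simplified):
rules = {
    "root": [
        (r"\{", "Operator", "options"),
        (r".", "Text"),
    ],
    "options": [
        (r"\d+", "Number"),
        (r",", "Text"),
        (r"\}", "Operator", "#pop"),
    ],
}
```

Running tokenize("a{1,2}", rules) enters the ‘options’ state at ‘{’ and pops back to ‘root’ at ‘}’, just as ArgparseLexer's ‘#pop’ rule does.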

url = None

URL of the language specification/definition. Used in the Pygments documentation. Set to an empty string to disable.

version_added = None

Version of Pygments in which the lexer was added.

class euporie.core.pygments.EuporiePygmentsStyle

Bases: Style

Version of Pygments’ “native” style which works better on light backgrounds.

aliases = []

background_color = '#ffffff'

Overall background color (None means transparent).

highlight_color = '#ffffcc'

Highlight background color.

line_number_background_color = 'transparent'

Line number background color.

line_number_color = 'inherit'

Line number font color.

line_number_special_background_color = '#ffffc0'

Special line number background color.

line_number_special_color = '#000000'

Special line number font color.

name = 'unnamed'

styles: ClassVar[dict[pygments.token._TokenType, str]] = {Token: '', Token.Comment: 'italic #888888', Token.Comment.Hashbang: '', Token.Comment.Multiline: '', Token.Comment.Preproc: 'noitalic bold #cd2828', Token.Comment.PreprocFile: '', Token.Comment.Single: '', Token.Comment.Special: 'noitalic bold #e50808 bg:#520000', Token.Error: 'bold bg:#a61717 #ffffff', Token.Escape: '', Token.Generic: '', Token.Generic.Deleted: '#d22323', Token.Generic.Emph: 'italic', Token.Generic.EmphStrong: '', Token.Generic.Error: '#d22323', Token.Generic.Heading: 'bold', Token.Generic.Inserted: '#589819', Token.Generic.Output: '', Token.Generic.Prompt: '', Token.Generic.Strong: 'bold', Token.Generic.Subheading: 'underline', Token.Generic.Traceback: '#d22323', Token.Keyword: 'bold #6ebf26', Token.Keyword.Constant: 'nobold #ff3d3d', Token.Keyword.Declaration: '', Token.Keyword.Namespace: '', Token.Keyword.Pseudo: 'nobold', Token.Keyword.Reserved: '', Token.Keyword.Type: '', Token.Literal: '', Token.Literal.Date: '#2fbccd', Token.Literal.Number: '#51b2fd', Token.Literal.Number.Bin: '', Token.Literal.Number.Float: '', Token.Literal.Number.Hex: '', Token.Literal.Number.Integer: '', Token.Literal.Number.Integer.Long: '', Token.Literal.Number.Oct: '', Token.Literal.String: '#ed9d13', Token.Literal.String.Affix: '', Token.Literal.String.Backtick: '', Token.Literal.String.Char: '', Token.Literal.String.Delimiter: '', Token.Literal.String.Doc: '', Token.Literal.String.Double: '', Token.Literal.String.Escape: '', Token.Literal.String.Heredoc: '', Token.Literal.String.Interpol: '', Token.Literal.String.Other: '#ffa500', Token.Literal.String.Regex: '', Token.Literal.String.Single: '', Token.Literal.String.Symbol: '', Token.Name: '', Token.Name.Attribute: 'noinherit', Token.Name.Builtin: '#2fbccd', Token.Name.Builtin.Pseudo: '', Token.Name.Class: 'underline #71adff', Token.Name.Constant: '#40ffff', Token.Name.Decorator: '#ffa500', Token.Name.Entity: '', Token.Name.Exception: 'noinherit bold', 
Token.Name.Function: '#71adff', Token.Name.Function.Magic: '', Token.Name.Label: '', Token.Name.Namespace: 'underline #71adff', Token.Name.Other: '', Token.Name.Property: '', Token.Name.Tag: 'bold #6ebf26', Token.Name.Variable: '#40ffff', Token.Name.Variable.Class: '', Token.Name.Variable.Global: '', Token.Name.Variable.Instance: '', Token.Name.Variable.Magic: '', Token.Operator: '', Token.Operator.Word: 'bold #6ebf26', Token.Other: '', Token.Punctuation: '', Token.Punctuation.Marker: '', Token.Text: '', Token.Text.Whitespace: ''}

Style definitions for individual token types.