A parser for symbolic equations and expressions
It is both safer and more powerful than using Python’s eval, as one has complete control over what names are used (including dynamically creating variables) and how integer and floating point literals are created.
AUTHOR:

- Robert Bradshaw (2008-04): initial version
- class sage.misc.parser.LookupNameMaker

Bases: object

This class wraps a dictionary as a callable for use in creating names. It takes a dictionary of names, and an (optional) callable to use when the given name is not found in the dictionary.
EXAMPLES:
sage: # needs sage.symbolic
sage: from sage.misc.parser import LookupNameMaker
sage: maker = LookupNameMaker({'pi': pi}, var)
sage: maker('pi')
pi
sage: maker('pi') is pi
True
sage: maker('a')
a
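The pattern is easy to mirror outside Sage. Below is a minimal plain-Python sketch of the same idea (the class name `DictNameMaker` and its details are illustrative, not Sage's implementation): a dictionary wrapped as a callable, with a fallback for names missing from the dictionary.

```python
class DictNameMaker:
    """Look names up in a dict; delegate unknown names to a fallback callable."""

    def __init__(self, names, fallback=None):
        self.names = names        # known name -> object
        self.fallback = fallback  # called with the name when lookup fails

    def __call__(self, name):
        try:
            return self.names[name]
        except KeyError:
            if self.fallback is not None:
                return self.fallback(name)
            raise NameError(f"unknown name {name!r}")

maker = DictNameMaker({'pi': 3.14159}, fallback=str)
```

Known names come back as the stored objects; anything else is handed to the fallback, just as LookupNameMaker hands unknown names to var in the examples above.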
- class sage.misc.parser.Parser

Bases: object

Create a symbolic expression parser.
INPUT:
- make_int – callable object to construct integers from strings (default: int)
- make_float – callable object to construct real numbers from strings (default: float)
- make_var – callable object to construct variables from strings (default: str); this may also be a dictionary of variable names
- make_function – callable object to construct callable functions from strings; this may also be a dictionary
- implicit_multiplication – whether or not to accept implicit multiplication
OUTPUT:
The evaluated expression tree given by the string, where the above functions are used to create the leaves of this tree.
EXAMPLES:
sage: from sage.misc.parser import Parser
sage: p = Parser()
sage: p.parse("1+2")
3
sage: p.parse("1+2 == 3")
True
sage: p = Parser(make_var=var)          # needs sage.symbolic
sage: p.parse("a*b^c - 3a")             # needs sage.symbolic
a*b^c - 3*a
sage: R.<x> = QQ[]
sage: p = Parser(make_var={'x': x})
sage: p.parse("(x+1)^5-x")
x^5 + 5*x^4 + 10*x^3 + 10*x^2 + 4*x + 1
sage: p.parse("(x+1)^5-x").parent() is R
True
sage: p = Parser(make_float=RR, make_var=var,           # needs sage.symbolic
....:            make_function={'foo': (lambda x: x*x+x)})
sage: p.parse("1.5 + foo(b)")           # needs sage.symbolic
b^2 + b + 1.50000000000000
sage: p.parse("1.9").parent()           # needs sage.symbolic
Real Field with 53 bits of precision
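The constructor hooks are what make the parser reusable across rings: the grammar fixes the shape of the tree, while make_int and make_var decide which objects sit at the leaves. A plain-Python sketch of that design (supporting only +, integer literals, names, and parentheses; this is an illustration of the hook idea, not Sage's Cython implementation):

```python
import re

# Token pattern: integers, names, or any single other character.
TOKEN = re.compile(r"\s*(?:(\d+)|([A-Za-z_]\w*)|(\S))")

def parse(s, make_int=int, make_var=str):
    tokens = TOKEN.findall(s)
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def advance():
        nonlocal pos
        tok = peek()
        pos += 1
        return tok

    def p_atom():
        num, name, op = advance()
        if num:
            return make_int(num)   # integer literal -> make_int hook
        if name:
            return make_var(name)  # name -> make_var hook
        if op == '(':
            value = p_expr()
            advance()              # consume ')'
            return value
        raise SyntaxError(f"unexpected token {op!r}")

    def p_expr():
        # expr ::= atom ('+' atom)*
        value = p_atom()
        while peek() is not None and peek()[2] == '+':
            advance()
            value = value + p_atom()
        return value

    return p_expr()
```

With the default hooks, `parse("1+2")` yields the Python int 3; swapping in `make_int=float` yields 3.0, and a `make_var` that returns symbolic variables would produce symbolic sums, without touching the grammar.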
- p_arg(tokens)

Return an expr, or a (name, expr) tuple corresponding to a single function call argument.

EXAMPLES:
Parsing a normal expression:
sage: from sage.misc.parser import Parser, Tokenizer
sage: p = Parser(make_var=var)          # needs sage.symbolic
sage: p.p_arg(Tokenizer("a+b"))         # needs sage.symbolic
a + b
A keyword expression argument:
sage: from sage.misc.parser import Parser, Tokenizer
sage: p = Parser(make_var=var)          # needs sage.symbolic
sage: p.p_arg(Tokenizer("val=a+b"))     # needs sage.symbolic
('val', a + b)

A lone list:

sage: from sage.misc.parser import Parser, Tokenizer
sage: p = Parser(make_var=var)          # needs sage.symbolic
sage: p.p_arg(Tokenizer("[x]"))         # needs sage.symbolic
[x]
- p_args(tokens)

Return a (list, dict) pair.

EXAMPLES:
sage: from sage.misc.parser import Parser, Tokenizer
sage: p = Parser()
sage: p.p_args(Tokenizer("1,2,a=3"))
([1, 2], {'a': 3})
sage: p.p_args(Tokenizer("1, 2, a = 1+5^2"))
([1, 2], {'a': 26})
- p_atom(tokens)
Parse an atom. This is either a parenthesized expression, a function call, or a literal name/int/float.
EXAMPLES:
sage: # needs sage.symbolic
sage: from sage.misc.parser import Parser, Tokenizer
sage: p = Parser(make_var=var, make_function={'sin': sin})
sage: p.p_atom(Tokenizer("1"))
1
sage: p.p_atom(Tokenizer("12"))
12
sage: p.p_atom(Tokenizer("12.5"))
12.5
sage: p.p_atom(Tokenizer("(1+a)"))
a + 1
sage: p.p_atom(Tokenizer("(1+a)^2"))
a + 1
sage: p.p_atom(Tokenizer("sin(1+a)"))
sin(a + 1)
sage: p = Parser(make_var=var,
....:            make_function={'foo': sage.misc.parser.foo})
sage: p.p_atom(Tokenizer("foo(a, b, key=value)"))
((a, b), {'key': value})
sage: p.p_atom(Tokenizer("foo()"))
((), {})
- p_eqn(tokens)
Parse an equation or expression.
This is the top-level node called by the parse() function.
EXAMPLES:
sage: from sage.misc.parser import Parser, Tokenizer
sage: p = Parser(make_var=var)          # needs sage.symbolic
sage: p.p_eqn(Tokenizer("1+a"))         # needs sage.symbolic
a + 1

sage: # needs sage.symbolic
sage: p.p_eqn(Tokenizer("a == b"))
a == b
sage: p.p_eqn(Tokenizer("a < b"))
a < b
sage: p.p_eqn(Tokenizer("a > b"))
a > b
sage: p.p_eqn(Tokenizer("a <= b"))
a <= b
sage: p.p_eqn(Tokenizer("a >= b"))
a >= b
sage: p.p_eqn(Tokenizer("a != b"))
a != b
- p_expr(tokens)
Parse a list of one or more terms.
EXAMPLES:
sage: # needs sage.symbolic
sage: from sage.misc.parser import Parser, Tokenizer
sage: p = Parser(make_var=var)
sage: p.p_expr(Tokenizer("a+b"))
a + b
sage: p.p_expr(Tokenizer("a"))
a
sage: p.p_expr(Tokenizer("a - b + 4*c - d^2"))
-d^2 + a - b + 4*c
sage: p.p_expr(Tokenizer("a - -3"))
a + 3
sage: p.p_expr(Tokenizer("a + 1 == b"))
a + 1
- p_factor(tokens)
Parse a single factor, which consists of any number of unary +/- and a power.
EXAMPLES:
sage: from sage.misc.parser import Parser, Tokenizer
sage: R.<t> = ZZ[['t']]
sage: p = Parser(make_var={'t': t})
sage: p.p_factor(Tokenizer("- -t"))
t
sage: p.p_factor(Tokenizer("- + - -t^2"))
-t^2
sage: p.p_factor(Tokenizer("t^11 * x"))
t^11
- p_list(tokens)
Parse a list of items.
EXAMPLES:
sage: from sage.misc.parser import Parser, Tokenizer
sage: p = Parser(make_var=var)                  # needs sage.symbolic
sage: p.p_list(Tokenizer("[1+2, 1e3]"))         # needs sage.symbolic
[3, 1000.0]
sage: p.p_list(Tokenizer("[]"))                 # needs sage.symbolic
[]
- p_matrix(tokens)
Parse a matrix.
EXAMPLES:
sage: from sage.misc.parser import Parser, Tokenizer
sage: p = Parser(make_var=var)                  # needs sage.symbolic
sage: p.p_matrix(Tokenizer("([a,0],[0,a])"))    # needs sage.symbolic
[a 0]
[0 a]
- p_power(tokens)

Parse a power. Note that exponentiation groups right to left.
EXAMPLES:
sage: from sage.misc.parser import Parser, Tokenizer
sage: R.<t> = ZZ[['t']]
sage: p = Parser(make_var={'t': t})
sage: p.p_factor(Tokenizer("-(1+t)^-1"))
-1 + t - t^2 + t^3 - t^4 + t^5 - t^6 + t^7 - t^8 + t^9 - t^10 + t^11 - t^12 + t^13 - t^14 + t^15 - t^16 + t^17 - t^18 + t^19 + O(t^20)
sage: p.p_factor(Tokenizer("t**2"))
t^2
sage: p.p_power(Tokenizer("2^3^2")) == 2^9
True

sage: # needs sage.symbolic
sage: p = Parser(make_var=var)
sage: p.p_factor(Tokenizer('x!'))
factorial(x)
sage: p.p_factor(Tokenizer('(x^2)!'))
factorial(x^2)
sage: p.p_factor(Tokenizer('x!^2'))
factorial(x)^2
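This right-to-left grouping matches Python's own ** operator, which is also right-associative; a quick plain-Python check of the convention:

```python
# 2^3^2 parses as 2^(3^2) = 2^9 = 512, not (2^3)^2 = 64.
assert 2 ** 3 ** 2 == 2 ** (3 ** 2) == 512
assert (2 ** 3) ** 2 == 64
```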
- p_sequence(tokens)
Parse a (possibly nested) set of lists and tuples.
EXAMPLES:
sage: from sage.misc.parser import Parser, Tokenizer
sage: p = Parser(make_var=var)                  # needs sage.symbolic
sage: p.p_sequence(Tokenizer("[1+2,0]"))        # needs sage.symbolic
[[3, 0]]
sage: p.p_sequence(Tokenizer("(1,2,3) , [1+a, 2+b, (3+c), (4+d,)]"))    # needs sage.symbolic
[(1, 2, 3), [a + 1, b + 2, c + 3, (d + 4,)]]
- p_term(tokens)
Parse a single term (consisting of one or more factors).
EXAMPLES:
sage: # needs sage.symbolic
sage: from sage.misc.parser import Parser, Tokenizer
sage: p = Parser(make_var=var)
sage: p.p_term(Tokenizer("a*b"))
a*b
sage: p.p_term(Tokenizer("a * b / c * d"))
a*b*d/c
sage: p.p_term(Tokenizer("-a * b + c"))
-a*b
sage: p.p_term(Tokenizer("a*(b-c)^2"))
a*(b - c)^2
sage: p.p_term(Tokenizer("-3a"))
-3*a
- p_tuple(tokens)
Parse a tuple of items.
EXAMPLES:
sage: from sage.misc.parser import Parser, Tokenizer
sage: p = Parser(make_var=var)          # needs sage.symbolic
sage: p.p_tuple(Tokenizer("( (), (1), (1,), (1,2), (1,2,3), (1+2)^2, )"))   # needs sage.symbolic
((), 1, (1,), (1, 2), (1, 2, 3), 9)
- parse(s, accept_eqn=True)
Parse the given string.
EXAMPLES:
sage: from sage.misc.parser import Parser
sage: p = Parser(make_var=var)          # needs sage.symbolic
sage: p.parse("E = m c^2")              # needs sage.symbolic
E == c^2*m
- parse_expression(s)
Parse an expression.
EXAMPLES:
sage: from sage.misc.parser import Parser
sage: p = Parser(make_var=var)          # needs sage.symbolic
sage: p.parse_expression('a-3b^2')      # needs sage.symbolic
-3*b^2 + a
- parse_sequence(s)
Parse a (possibly nested) set of lists and tuples.
EXAMPLES:
sage: # needs sage.symbolic
sage: from sage.misc.parser import Parser
sage: p = Parser(make_var=var)
sage: p.parse_sequence("1,2,3")
[1, 2, 3]
sage: p.parse_sequence("[1,2,(a,b,c+d)]")
[1, 2, (a, b, c + d)]
sage: p.parse_sequence("13")
13
- class sage.misc.parser.Tokenizer

Bases: object

This class takes a string and turns it into a list of tokens for use by the parser.

The tokenizer wraps a string object; to tokenize a different string, create a new tokenizer.
EXAMPLES:
sage: from sage.misc.parser import Tokenizer
sage: Tokenizer("1.5+2*3^4-sin(x)").test()
['FLOAT(1.5)', '+', 'INT(2)', '*', 'INT(3)', '^', 'INT(4)', '-', 'NAME(sin)', '(', 'NAME(x)', ')']
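This token classification is simple enough to reproduce with a single regular expression. A plain-Python sketch (an illustration, not Sage's implementation; the FLOAT/INT/NAME labels and the normalizations follow the outputs of test() shown in this section):

```python
import re

# Longest-match order matters: FLOAT before INT, and two-character
# operators before their one-character prefixes.
TOKEN_SPEC = re.compile(r"""
      (?P<FLOAT>(?:\d+\.\d*|\.\d+|\d+)[eE][-+]?\d+|\d+\.\d*|\.\d+)
    | (?P<INT>\d+)
    | (?P<NAME>[A-Za-z_]\w*)
    | (?P<OP>\*\*|<=|>=|!=|==|[-+*/^(),=<>\[\]{}])
""", re.VERBOSE)

# Same normalization the docs show: ** becomes ^, comparisons get names.
TWO_CHAR = {'**': '^', '<=': 'LESS_EQ', '>=': 'GREATER_EQ',
            '!=': 'NOT_EQ', '==': '='}

def tokenize(s):
    out = []
    for m in TOKEN_SPEC.finditer(s):
        if m.lastgroup == 'OP':
            out.append(TWO_CHAR.get(m.group(), m.group()))
        else:
            out.append(f"{m.lastgroup}({m.group()})")
    return out
```

The real tokenizer additionally special-cases matrix and emits ERROR for unrecognized characters; this sketch simply skips anything it cannot match.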
The single character tokens are given by:
sage: Tokenizer("+-*/^(),=<>[]{}").test()
['+', '-', '*', '/', '^', '(', ')', ',', '=', '<', '>', '[', ']', '{', '}']
Two-character comparisons accepted are:
sage: Tokenizer("<= >= != == **").test()
['LESS_EQ', 'GREATER_EQ', 'NOT_EQ', '=', '^']
Integers are strings of 0-9:
sage: Tokenizer("1 123 9879834759873452908375013").test()
['INT(1)', 'INT(123)', 'INT(9879834759873452908375013)']
Floating point numbers can contain a single decimal point and possibly exponential notation:
sage: Tokenizer("1. .01 1e3 1.e-3").test()
['FLOAT(1.)', 'FLOAT(.01)', 'FLOAT(1e3)', 'FLOAT(1.e-3)']
Note that negative signs are not attached to the token:
sage: Tokenizer("-1 -1.2").test()
['-', 'INT(1)', '-', 'FLOAT(1.2)']
Names are alphanumeric sequences not starting with a digit:
sage: Tokenizer("a a1 _a_24").test()
['NAME(a)', 'NAME(a1)', 'NAME(_a_24)']
There is special handling for matrices:
sage: Tokenizer("matrix(a)").test()
['MATRIX', '(', 'NAME(a)', ')']
Anything else is an error:
sage: Tokenizer("&@~").test()
['ERROR', 'ERROR', 'ERROR']
No attempt at correctness is made at this stage:
sage: Tokenizer(") )( 5e5e5").test()
[')', ')', '(', 'FLOAT(5e5)', 'NAME(e5)']
sage: Tokenizer("?$%").test()
['ERROR', 'ERROR', 'ERROR']
- backtrack()

Put self in such a state that the subsequent call to next() will return the same as if next() had not been called.

Currently, one can only backtrack once.
EXAMPLES:
sage: from sage.misc.parser import Tokenizer, token_to_str
sage: t = Tokenizer("a+b")
sage: token_to_str(t.next())
'NAME'
sage: token_to_str(t.next())
'+'
sage: t.backtrack()   # the return type is bint for performance reasons
False
sage: token_to_str(t.next())
'+'
- last()
Return the last token seen.
EXAMPLES:
sage: from sage.misc.parser import Tokenizer, token_to_str
sage: t = Tokenizer("3a")
sage: token_to_str(t.next())
'INT'
sage: token_to_str(t.last())
'INT'
sage: token_to_str(t.next())
'NAME'
sage: token_to_str(t.last())
'NAME'
- last_token_string()
Return the actual contents of the last token.
EXAMPLES:
sage: from sage.misc.parser import Tokenizer, token_to_str
sage: t = Tokenizer("a - 1e5")
sage: token_to_str(t.next())
'NAME'
sage: t.last_token_string()
'a'
sage: token_to_str(t.next())
'-'
sage: token_to_str(t.next())
'FLOAT'
sage: t.last_token_string()
'1e5'
- next()
Return the next token in the string.
EXAMPLES:
sage: from sage.misc.parser import Tokenizer, token_to_str
sage: t = Tokenizer("a+3")
sage: token_to_str(t.next())
'NAME'
sage: token_to_str(t.next())
'+'
sage: token_to_str(t.next())
'INT'
sage: token_to_str(t.next())
'EOS'
- peek()

Return the next token that will be encountered, without changing the state of self.

EXAMPLES:
sage: from sage.misc.parser import Tokenizer, token_to_str
sage: t = Tokenizer("a+b")
sage: token_to_str(t.peek())
'NAME'
sage: token_to_str(t.next())
'NAME'
sage: token_to_str(t.peek())
'+'
sage: token_to_str(t.peek())
'+'
sage: token_to_str(t.next())
'+'
- reset(pos=0)
Reset the tokenizer to a given position.
EXAMPLES:
sage: from sage.misc.parser import Tokenizer
sage: t = Tokenizer("a+b*c")
sage: t.test()
['NAME(a)', '+', 'NAME(b)', '*', 'NAME(c)']
sage: t.test()
[]
sage: t.reset()
sage: t.test()
['NAME(a)', '+', 'NAME(b)', '*', 'NAME(c)']
sage: t.reset(3)
sage: t.test()
['*', 'NAME(c)']
No care is taken to make sure we don’t jump in the middle of a token:
sage: t = Tokenizer("12345+a")
sage: t.test()
['INT(12345)', '+', 'NAME(a)']
sage: t.reset(2)
sage: t.test()
['INT(345)', '+', 'NAME(a)']
- test()
This is a utility function for easy testing of the tokenizer.
Destructively read off the tokens in self, returning a list of string representations of the tokens.
EXAMPLES:
sage: from sage.misc.parser import Tokenizer
sage: t = Tokenizer("a b 3")
sage: t.test()
['NAME(a)', 'NAME(b)', 'INT(3)']
sage: t.test()
[]
- sage.misc.parser.foo(*args, **kwds)
This is a function for testing that simply returns the arguments and keywords passed into it.
EXAMPLES:
sage: from sage.misc.parser import foo
sage: foo(1, 2, a=3)
((1, 2), {'a': 3})
- sage.misc.parser.token_to_str(token)
For speed reasons, tokens are integers. This function returns a string representation of a given token.
EXAMPLES:
sage: from sage.misc.parser import Tokenizer, token_to_str
sage: t = Tokenizer("+ 2")
sage: token_to_str(t.next())
'+'
sage: token_to_str(t.next())
'INT'