<div class="section" id="module-tokenize">
|
||
|
<span id="tokenize-tokenizer-for-python-source"></span><h1><a class="reference internal" href="#module-tokenize" title="tokenize: Lexical scanner for Python source code."><code class="xref py py-mod docutils literal notranslate"><span class="pre">tokenize</span></code></a> — Tokenizer for Python source<a class="headerlink" href="#module-tokenize" title="Permalink to this headline">¶</a></h1>
|
||
|
<p><strong>Source code:</strong> <a class="reference external" href="https://github.com/python/cpython/tree/3.7/Lib/tokenize.py">Lib/tokenize.py</a></p>
|
||
|
<hr class="docutils" />
|
||
|
<p>The <a class="reference internal" href="#module-tokenize" title="tokenize: Lexical scanner for Python source code."><code class="xref py py-mod docutils literal notranslate"><span class="pre">tokenize</span></code></a> module provides a lexical scanner for Python source code,
|
||
|
implemented in Python. The scanner in this module returns comments as tokens
|
||
|
as well, making it useful for implementing “pretty-printers,” including
|
||
|
colorizers for on-screen displays.</p>
|
||
|
<p>To simplify token stream handling, all <a class="reference internal" href="../reference/lexical_analysis.html#operators"><span class="std std-ref">operator</span></a> and
|
||
|
<a class="reference internal" href="../reference/lexical_analysis.html#delimiters"><span class="std std-ref">delimiter</span></a> tokens and <a class="reference internal" href="constants.html#Ellipsis" title="Ellipsis"><code class="xref py py-data docutils literal notranslate"><span class="pre">Ellipsis</span></code></a> are returned using
|
||
|
the generic <a class="reference internal" href="token.html#token.OP" title="token.OP"><code class="xref py py-data docutils literal notranslate"><span class="pre">OP</span></code></a> token type. The exact
|
||
|
type can be determined by checking the <code class="docutils literal notranslate"><span class="pre">exact_type</span></code> property on the
|
||
|
<a class="reference internal" href="../glossary.html#term-named-tuple"><span class="xref std std-term">named tuple</span></a> returned from <a class="reference internal" href="#tokenize.tokenize" title="tokenize.tokenize"><code class="xref py py-func docutils literal notranslate"><span class="pre">tokenize.tokenize()</span></code></a>.</p>
## Tokenizing Input

The primary entry point is a generator:

**tokenize.tokenize(readline)**
The `tokenize()` generator requires one argument, *readline*, which must be a callable object that provides the same interface as the `io.IOBase.readline()` method of file objects. Each call to the function should return one line of input as bytes.

The generator produces 5-tuples with these members: the token type; the token string; a 2-tuple `(srow, scol)` of ints specifying the row and column where the token begins in the source; a 2-tuple `(erow, ecol)` of ints specifying the row and column where the token ends in the source; and the line on which the token was found. The line passed (the last tuple item) is the *logical* line; continuation lines are included. The 5-tuple is returned as a named tuple with the field names `type string start end line`.
The returned named tuple has an additional property named `exact_type` that contains the exact operator type for `OP` tokens. For all other token types `exact_type` equals the named tuple `type` field.

*Changed in version 3.1:* Added support for named tuples.

*Changed in version 3.3:* Added support for `exact_type`.

`tokenize()` determines the source encoding of the file by looking for a UTF-8 BOM or encoding cookie, according to [PEP 263](https://www.python.org/dev/peps/pep-0263).
All constants from the `token` module are also exported from `tokenize`.

Another function is provided to reverse the tokenization process. This is useful for creating tools that tokenize a script, modify the token stream, and write back the modified script.
**tokenize.untokenize(iterable)**

Converts tokens back into Python source code. The *iterable* must return sequences with at least two elements, the token type and the token string. Any additional sequence elements are ignored.

The reconstructed script is returned as a single string. The result is guaranteed to tokenize back to match the input, so the conversion is lossless and round-trips are assured. The guarantee applies only to the token type and token string; the spacing between tokens (column positions) may change.

It returns bytes, encoded using the `ENCODING` token, which is the first token sequence output by `tokenize()`; if the input contains no `ENCODING` token, the result is a string instead.
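A minimal round-trip sketch (the source string here is only an example): tokenize an in-memory buffer and feed the tokens straight back to `untokenize()`:

```python
import tokenize
from io import BytesIO

source = b"spam = 1 + 2\n"
tokens = list(tokenize.tokenize(BytesIO(source).readline))
# The stream starts with an ENCODING token, so untokenize() returns bytes.
restored = tokenize.untokenize(tokens)
assert isinstance(restored, bytes)
print(restored.decode("utf-8"))
```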
`tokenize()` needs to detect the encoding of source files it tokenizes. The function it uses to do this is available:
**tokenize.detect_encoding(readline)**

The `detect_encoding()` function is used to detect the encoding that should be used to decode a Python source file. It requires one argument, *readline*, in the same way as the `tokenize()` generator.

It will call *readline* a maximum of twice, and return the encoding used (as a string) and a list of any lines (not decoded from bytes) it has read in.

It detects the encoding from the presence of a UTF-8 BOM or an encoding cookie as specified in [PEP 263](https://www.python.org/dev/peps/pep-0263). If both a BOM and a cookie are present, but disagree, a `SyntaxError` will be raised. Note that if the BOM is found, `'utf-8-sig'` will be returned as the encoding.

If no encoding is specified, then the default of `'utf-8'` will be returned.

Use `open()` to open Python source files: it uses `detect_encoding()` to detect the file encoding.
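For instance (a small sketch over an in-memory buffer), an encoding cookie on the first line is picked up like this:

```python
import tokenize
from io import BytesIO

source = b"# -*- coding: latin-1 -*-\nx = 1\n"
encoding, lines = tokenize.detect_encoding(BytesIO(source).readline)
print(encoding)  # 'iso-8859-1', the normalized name for latin-1
print(lines)     # the raw byte line(s) consumed while detecting
```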
**tokenize.open(filename)**

Open a file in read-only mode using the encoding detected by `detect_encoding()`.

*New in version 3.2.*
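A hedged usage sketch ("example.py" is a placeholder path): `tokenize.open()` behaves like the built-in `open()`, except that the text encoding is chosen by `detect_encoding()` rather than the locale default:

```python
import tokenize

with tokenize.open("example.py") as f:  # any Python source file works
    print(f.encoding)  # the encoding detect_encoding() settled on
    print(f.read())
```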
**exception tokenize.TokenError**

Raised when either a docstring or expression that may be split over several lines is not completed anywhere in the file, for example:

```python
"""Beginning of
docstring
```

or:

```python
[1,
 2,
 3
```

Note that unclosed single-quoted strings do not cause an error to be raised. They are tokenized as `ERRORTOKEN`, followed by the tokenization of their contents.
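To make the two failure modes concrete (a small sketch, not from the original page): an unterminated multi-line construct raises `TokenError`, while an unclosed single-quoted string merely yields an `ERRORTOKEN`:

```python
import tokenize
from io import BytesIO

try:
    list(tokenize.tokenize(BytesIO(b"[1,\n2,\n3").readline))
except tokenize.TokenError as exc:
    print("TokenError:", exc)

toks = list(tokenize.tokenize(BytesIO(b"x = 'unclosed\n").readline))
print([tokenize.tok_name[t.exact_type] for t in toks])
# ERRORTOKEN marks the stray quote; its contents are tokenized normally.
```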
## Command-Line Usage

*New in version 3.3.*

The `tokenize` module can be executed as a script from the command line. It is as simple as:

```sh
python -m tokenize [-e] [filename.py]
```
The following options are accepted:

**-h, --help**

show this help message and exit

**-e, --exact**

display token names using the exact type

If `filename.py` is specified, its contents are tokenized to stdout. Otherwise, tokenization is performed on stdin.
## Examples

Example of a script rewriter that transforms float literals into Decimal objects:
```python
from tokenize import tokenize, untokenize, NUMBER, STRING, NAME, OP
from io import BytesIO

def decistmt(s):
    """Substitute Decimals for floats in a string of statements.

    >>> from decimal import Decimal
    >>> s = 'print(+21.3e-5*-.1234/81.7)'
    >>> decistmt(s)
    "print (+Decimal ('21.3e-5')*-Decimal ('.1234')/Decimal ('81.7'))"

    The format of the exponent is inherited from the platform C library.
    Known cases are "e-007" (Windows) and "e-07" (not Windows).  Since
    we're only showing 12 digits, and the 13th isn't close to 5, the
    rest of the output should be platform-independent.

    >>> exec(s)  #doctest: +ELLIPSIS
    -3.21716034272e-0...7

    Output from calculations with Decimal should be identical across all
    platforms.

    >>> exec(decistmt(s))
    -3.217160342717258261933904529E-7
    """
    result = []
    g = tokenize(BytesIO(s.encode('utf-8')).readline)  # tokenize the string
    for toknum, tokval, _, _, _ in g:
        if toknum == NUMBER and '.' in tokval:  # replace NUMBER tokens
            result.extend([
                (NAME, 'Decimal'),
                (OP, '('),
                (STRING, repr(tokval)),
                (OP, ')')
            ])
        else:
            result.append((toknum, tokval))
    return untokenize(result).decode('utf-8')
```
Example of tokenizing from the command line. The script:

```python
def say_hello():
    print("Hello, World!")

say_hello()
```
will be tokenized to the following output, where the first column is the range of line/column coordinates where the token is found, the second column is the name of the token, and the final column is the value of the token (if any):
```console
$ python -m tokenize hello.py
0,0-0,0: ENCODING 'utf-8'
1,0-1,3: NAME 'def'
1,4-1,13: NAME 'say_hello'
1,13-1,14: OP '('
1,14-1,15: OP ')'
1,15-1,16: OP ':'
1,16-1,17: NEWLINE '\n'
2,0-2,4: INDENT '    '
2,4-2,9: NAME 'print'
2,9-2,10: OP '('
2,10-2,25: STRING '"Hello, World!"'
2,25-2,26: OP ')'
2,26-2,27: NEWLINE '\n'
3,0-3,1: NL '\n'
4,0-4,0: DEDENT ''
4,0-4,9: NAME 'say_hello'
4,9-4,10: OP '('
4,10-4,11: OP ')'
4,11-4,12: NEWLINE '\n'
5,0-5,0: ENDMARKER ''
```
The exact token type names can be displayed using the `-e` option:
```console
$ python -m tokenize -e hello.py
0,0-0,0: ENCODING 'utf-8'
1,0-1,3: NAME 'def'
1,4-1,13: NAME 'say_hello'
1,13-1,14: LPAR '('
1,14-1,15: RPAR ')'
1,15-1,16: COLON ':'
1,16-1,17: NEWLINE '\n'
2,0-2,4: INDENT '    '
2,4-2,9: NAME 'print'
2,9-2,10: LPAR '('
2,10-2,25: STRING '"Hello, World!"'
2,25-2,26: RPAR ')'
2,26-2,27: NEWLINE '\n'
3,0-3,1: NL '\n'
4,0-4,0: DEDENT ''
4,0-4,9: NAME 'say_hello'
4,9-4,10: LPAR '('
4,10-4,11: RPAR ')'
4,11-4,12: NEWLINE '\n'
5,0-5,0: ENDMARKER ''
```