tiktoken (OpenAI) Tokenizer

tiktoken is a fast BPE tokenizer created by OpenAI.

We can use it to estimate the number of tokens a piece of text will use. For OpenAI models, this estimate is likely to be more accurate than a simple character count.

  1. How the text is split: by the characters passed in.
  2. How the chunk size is measured: by the tiktoken tokenizer.
#!pip install tiktoken
from langchain.text_splitter import CharacterTextSplitter

# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()

# chunk_size is measured in tiktoken tokens, not characters.
text_splitter = CharacterTextSplitter.from_tiktoken_encoder(chunk_size=100, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
print(texts[0])
    Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.  
    
    Last year COVID-19 kept us apart. This year we are finally together again. 
    
    Tonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. 
    
    With a duty to one another to the American people to the Constitution.
Contributors: 刘强