huggingface/tokenizers
{ "createdAt": "2019-11-01T17:52:20Z", "defaultBranch": "main", "description": "💥 Fast State-of-the-Art Tokenizers optimized for Research and Production", "fullName": "huggingface/tokenizers", "homepage": "https://huggingface.co/docs/tokenizers", "language": "Rust", "name": "tokenizers", "pushedAt": "2025-10-16T09:22:48Z", "stargazersCount": 10248, "topics": [ "bert", "gpt", "language-model", "natural-language-processing", "natural-language-understanding", "nlp", "transformers" ], "updatedAt": "2025-11-25T19:27:02Z", "url": "https://github.com/huggingface/tokenizers"}
Provides an implementation of today’s most used tokenizers, with a focus on performance and versatility.
Main features:
- Train new vocabularies and tokenize, using today’s most used tokenizers.
- Extremely fast (both training and tokenization), thanks to the Rust implementation. Takes less than 20 seconds to tokenize a GB of text on a server’s CPU.
- Easy to use, but also extremely versatile.
- Designed for research and production.
- Normalization comes with alignment tracking: it’s always possible to get the part of the original sentence that corresponds to a given token.
- Does all the pre-processing: truncate, pad, and add the special tokens your model needs (a short sketch follows this list).
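To make the last two items concrete, here is a minimal sketch of the pre-processing knobs, assuming a tokenizer loaded from the Hugging Face Hub (`bert-base-uncased` is just an example checkpoint):

```python
from tokenizers import Tokenizer

# Assumption: network access to the Hub; any serialized tokenizer works here.
tokenizer = Tokenizer.from_pretrained("bert-base-uncased")

# Truncate anything longer than 16 tokens, and pad anything shorter up to 16.
tokenizer.enable_truncation(max_length=16)
tokenizer.enable_padding(pad_id=0, pad_token="[PAD]", length=16)

output = tokenizer.encode("Hello, y'all!")
print(output.tokens)
# roughly: ['[CLS]', 'hello', ',', 'y', "'", 'all', '!', '[SEP]', '[PAD]', ...] padded to length 16
```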
Performances
Performance varies with hardware, but running [~/bindings/python/benches/test_tiktoken.py](bindings/python/benches/test_tiktoken.py) on a g6 AWS instance gives a representative picture of the library’s throughput.
Bindings
We provide bindings to the following languages (more to come!):
- Rust (the original implementation)
- Python
- Node.js
- Ruby (contributed by @ankane, external repo)
Installation
Section titled “Installation”You can install from source using:
```bash
pip install git+https://github.com/huggingface/tokenizers.git#subdirectory=bindings/python
```
or install the released versions with:
```bash
pip install tokenizers
```
Quick example using Python:
Choose your model between Byte-Pair Encoding, WordPiece or Unigram and instantiate a tokenizer:
```python
from tokenizers import Tokenizer
from tokenizers.models import BPE

tokenizer = Tokenizer(BPE())
```
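The other models instantiate the same way; a minimal sketch (constructor arguments beyond the unknown token are left at their defaults):

```python
from tokenizers import Tokenizer
from tokenizers.models import Unigram, WordPiece

# A WordPiece model needs to know its unknown token; a Unigram model can start empty.
wordpiece_tokenizer = Tokenizer(WordPiece(unk_token="[UNK]"))
unigram_tokenizer = Tokenizer(Unigram())
```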
You can customize how pre-tokenization (e.g., splitting into words) is done:
```python
from tokenizers.pre_tokenizers import Whitespace

tokenizer.pre_tokenizer = Whitespace()
```
Then training your tokenizer on a set of files just takes two lines of code:
```python
from tokenizers.trainers import BpeTrainer

trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
tokenizer.train(files=["wiki.train.raw", "wiki.valid.raw", "wiki.test.raw"], trainer=trainer)
```
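A trained tokenizer can also be serialized to a single JSON file and reloaded later without retraining (a small sketch; the file name is arbitrary):

```python
# Persist the full pipeline (normalizer, pre-tokenizer, model, post-processor) to one file.
tokenizer.save("tokenizer.json")

# Reload it later in one call.
tokenizer = Tokenizer.from_file("tokenizer.json")
```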
output = tokenizer.encode("Hello, y'all! How are you 😁 ?")print(output.tokens)# ["Hello", ",", "y", "'", "all", "!", "How", "are", "you", "[UNK]", "?"]Check the documentation or the quicktour to learn more!