Nano-vLLM
A lightweight vLLM implementation built from scratch.
Key Features
- 🚀 Fast offline inference - Comparable inference speeds to vLLM
- 📖 Readable codebase - Clean implementation in ~1,200 lines of Python code
- ⚡ Optimization Suite - Prefix caching, Tensor Parallelism, Torch compilation, CUDA graph, etc.
Installation
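Install directly from the GitHub repository (a standard pip-from-git command; this assumes no separate PyPI package is published):

```bash
pip install git+https://github.com/GeeeekExplorer/nano-vllm.git
```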
Model Download
To download the model weights manually, use the following command:
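For instance, the Qwen3-0.6B weights used in the benchmark below can be fetched with the Hugging Face CLI (huggingface-cli ships with the huggingface_hub package; the local directory is illustrative):

```bash
huggingface-cli download --resume-download Qwen/Qwen3-0.6B \
  --local-dir ~/huggingface/Qwen3-0.6B/
```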
Quick Start
See example.py for usage. The API mirrors vLLM's interface, with minor differences in the LLM.generate method:
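A minimal sketch in the spirit of example.py (the model path is a placeholder and the sampling values are illustrative):

```python
from nanovllm import LLM, SamplingParams

# Point to a locally downloaded model, e.g. the Qwen3-0.6B directory above
llm = LLM("/YOUR/MODEL/PATH", enforce_eager=True, tensor_parallel_size=1)

sampling_params = SamplingParams(temperature=0.6, max_tokens=256)
prompts = ["Hello, Nano-vLLM."]

# generate takes a list of prompts plus a SamplingParams object and
# returns one result per prompt; "text" holds the decoded completion
outputs = llm.generate(prompts, sampling_params)
print(outputs[0]["text"])
```

Flipping enforce_eager to False enables the CUDA graph path, and tensor_parallel_size controls the Tensor Parallelism listed under Key Features.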
Benchmark
See bench.py for the benchmark script.
Test Configuration:
- Hardware: RTX 4070 Laptop (8GB)
- Model: Qwen3-0.6B
- Total Requests: 256 sequences
- Input Length: Randomly sampled between 100–1024 tokens
- Output Length: Randomly sampled between 100–1024 tokens (sampling sketch below)
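The random request lengths can be reproduced with a sketch like the following (illustrative only; see bench.py for the actual sampling logic):

```python
import random

random.seed(0)   # fixed seed (an assumption) so both engines see the same workload
num_seqs = 256   # Total Requests from the configuration above

# Input and output lengths drawn uniformly from 100-1024 tokens
input_lens = [random.randint(100, 1024) for _ in range(num_seqs)]
output_lens = [random.randint(100, 1024) for _ in range(num_seqs)]
```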
Performance Results:
| Inference Engine | Output Tokens | Time (s) | Throughput (tokens/s) |
|---|---|---|---|
| vLLM | 133,966 | 98.37 | 1361.84 |
| Nano-vLLM | 133,966 | 93.41 | 1434.13 |