Easy, fast, and cheap LLM serving for everyone

---

*Latest News* 🔥

- [2025/08] We hosted [vLLM Shenzhen Meetup](https://mp.weixin.qq.com/s/k8ZBO1u2_2odgiKWH_GVTQ) focusing on the ecosystem around vLLM! Please find the meetup slides [here](https://drive.google.com/drive/folders/1Ua2SVKVSu-wp5vou_6ElraDt2bnKhiEA).
- [2025/08] We hosted [vLLM Singapore Meetup](https://www.sginnovate.com/event/vllm-sg-meet). We shared V1 updates, disaggregated serving, and MLLM speedups with speakers from Embedded LLM, AMD, WekaIO, and A*STAR. Please find the meetup slides [here](https://drive.google.com/drive/folders/1ncf3GyqLdqFaB6IeB834E5TZJPLAOiXZ?usp=sharing).
- [2025/08] We hosted [vLLM Shanghai Meetup](https://mp.weixin.qq.com/s/pDmAXHcN7Iqc8sUKgJgGtg) focusing on building, developing, and integrating with vLLM! Please find the meetup slides [here](https://drive.google.com/drive/folders/1OvLx39wnCGy_WKq8SiVKf7YcxxYI3WCH).
- [2025/05] vLLM is now a hosted project under the PyTorch Foundation! Please find the announcement [here](https://pytorch.org/blog/pytorch-foundation-welcomes-vllm/).
- [2025/01] We are excited to announce the alpha release of vLLM V1: a major architectural upgrade with a 1.7x speedup! Clean code, an optimized execution loop, zero-overhead prefix caching, enhanced multimodal support, and more. Please check out our blog post [here](https://blog.vllm.ai/2025/01/27/v1-alpha-release.html).

<details>
<summary>Previous News</summary>

- [2025/08] We hosted [vLLM Korea Meetup](https://luma.com/cgcgprmh) with Red Hat and Rebellions! We shared the latest advancements in vLLM along with project spotlights from the vLLM Korea community. Please find the meetup slides [here](https://drive.google.com/file/d/1bcrrAE1rxUgx0mjIeOWT6hNe2RefC5Hm/view).
- [2025/08] We hosted [vLLM Beijing Meetup](https://mp.weixin.qq.com/s/dgkWg1WFpWGO2jCdTqQHxA) focusing on large-scale LLM deployment! Please find the meetup slides [here](https://drive.google.com/drive/folders/1Pid6NSFLU43DZRi0EaTcPgXsAzDvbBqF) and the recording [here](https://www.chaspark.com/#/live/1166916873711665152).
- [2025/05] We hosted the [NYC vLLM Meetup](https://lu.ma/c1rqyf1f)! Please find the meetup slides [here](https://docs.google.com/presentation/d/1_q_aW_ioMJWUImf1s1YM-ZhjXz8cUeL0IJvaquOYBeA/edit?usp=sharing).
- [2025/04] We hosted [Asia Developer Day](https://www.sginnovate.com/event/limited-availability-morning-evening-slots-remaining-inaugural-vllm-asia-developer-day)! Please find the meetup slides from the vLLM team [here](https://docs.google.com/presentation/d/19cp6Qu8u48ihB91A064XfaXruNYiBOUKrBxAmDOllOo/edit?usp=sharing).
- [2025/03] We hosted the [vLLM x Ollama Inference Night](https://lu.ma/vllm-ollama)! Please find the meetup slides from the vLLM team [here](https://docs.google.com/presentation/d/16T2PDD1YwRnZ4Tu8Q5r6n53c5Lr5c73UV9Vd2_eBo4U/edit?usp=sharing).
- [2025/03] We hosted [the first vLLM China Meetup](https://mp.weixin.qq.com/s/n77GibL2corAtQHtVEAzfg)! Please find the meetup slides from the vLLM team [here](https://docs.google.com/presentation/d/1REHvfQMKGnvz6p3Fd23HhSO4c8j5WPGZV0bKYLwnHyQ/edit?usp=sharing).
- [2025/03] We hosted [the East Coast vLLM Meetup](https://lu.ma/7mu4k4xx)! Please find the meetup slides [here](https://docs.google.com/presentation/d/1NHiv8EUFF1NLd3fEYODm56nDmL26lEeXCaDgyDlTsRs/edit#slide=id.g31441846c39_0_0).
- [2025/02] We hosted [the ninth vLLM meetup](https://lu.ma/h7g3kuj9) with Meta! Please find the meetup slides from the vLLM team [here](https://docs.google.com/presentation/d/1jzC_PZVXrVNSFVCW-V4cFXb6pn7zZ2CyP_Flwo05aqg/edit?usp=sharing) and from AMD [here](https://drive.google.com/file/d/1Zk5qEJIkTmlQ2eQcXQZlljAx3m9s7nwn/view?usp=sharing). The slides from Meta will not be posted.
- [2025/01] We hosted [the eighth vLLM meetup](https://lu.ma/zep56hui) with Google Cloud! Please find the meetup slides from the vLLM team [here](https://docs.google.com/presentation/d/1epVkt4Zu8Jz_S5OhEHPc798emsYh2BwYfRuDDVEF7u4/edit?usp=sharing) and from the Google Cloud team [here](https://drive.google.com/file/d/1h24pHewANyRL11xy5dXUbvRC9F9Kkjix/view?usp=sharing).
- [2024/12] vLLM joins the [PyTorch ecosystem](https://pytorch.org/blog/vllm-joins-pytorch)! Easy, fast, and cheap LLM serving for everyone!
- [2024/11] We hosted [the seventh vLLM meetup](https://lu.ma/h0qvrajz) with Snowflake! Please find the meetup slides from the vLLM team [here](https://docs.google.com/presentation/d/1e3CxQBV3JsfGp30SwyvS3eM_tW-ghOhJ9PAJGK6KR54/edit?usp=sharing) and from the Snowflake team [here](https://docs.google.com/presentation/d/1qF3RkDAbOULwz9WK5TOltt2fE9t6uIc_hVNLFAaQX6A/edit?usp=sharing).
- [2024/10] We have just created a developer Slack ([slack.vllm.ai](https://slack.vllm.ai)) focused on coordinating contributions and discussing features. Please feel free to join us there!
- [2024/10] Ray Summit 2024 held a special track for vLLM! Please find the opening talk slides from the vLLM team [here](https://docs.google.com/presentation/d/1B_KQxpHBTRa_mDF-tR6i8rWdOU5QoTZNcEg2MKZxEHM/edit?usp=sharing). Learn more from the [talks](https://www.youtube.com/playlist?list=PLzTswPQNepXl6AQwifuwUImLPFRVpksjR) from other vLLM contributors and users!
- [2024/09] We hosted [the sixth vLLM meetup](https://lu.ma/87q3nvnh) with NVIDIA! Please find the meetup slides [here](https://docs.google.com/presentation/d/1wrLGwytQfaOTd5wCGSPNhoaW3nq0E-9wqyP7ny93xRs/edit?usp=sharing).
- [2024/07] We hosted [the fifth vLLM meetup](https://lu.ma/lp0gyjqr) with AWS! Please find the meetup slides [here](https://docs.google.com/presentation/d/1RgUD8aCfcHocghoP3zmXzck9vX3RCI9yfUAB2Bbcl4Y/edit?usp=sharing).
- [2024/07] In partnership with Meta, vLLM officially supports Llama 3.1 with FP8 quantization and pipeline parallelism! Please check out our blog post [here](https://blog.vllm.ai/2024/07/23/llama31.html).
- [2024/06] We hosted [the fourth vLLM meetup](https://lu.ma/agivllm) with Cloudflare and BentoML! Please find the meetup slides [here](https://docs.google.com/presentation/d/1iJ8o7V2bQEi0BFEljLTwc5G1S10_Rhv3beed5oB0NJ4/edit?usp=sharing).
- [2024/04] We hosted [the third vLLM meetup](https://robloxandvllmmeetup2024.splashthat.com/) with Roblox! Please find the meetup slides [here](https://docs.google.com/presentation/d/1A--47JAK4BJ39t954HyTkvtfwn0fkqtsL8NGFuslReM/edit?usp=sharing).
- [2024/01] We hosted [the second vLLM meetup](https://lu.ma/ygxbpzhl) with IBM! Please find the meetup slides [here](https://docs.google.com/presentation/d/12mI2sKABnUw5RBWXDYY-HtHth4iMSNcEoQ10jDQbxgA/edit?usp=sharing).
- [2023/10] We hosted [the first vLLM meetup](https://lu.ma/first-vllm-meetup) with a16z! Please find the meetup slides [here](https://docs.google.com/presentation/d/1QL-XPFXiFpDBh86DbEegFXBXFXjix4v032GhShbKf3s/edit?usp=sharing).
- [2023/08] We would like to express our sincere gratitude to [Andreessen Horowitz](https://a16z.com/2023/08/30/supporting-the-open-source-ai-community/) (a16z) for providing a generous grant to support the open-source development and research of vLLM.
- [2023/06] We officially released vLLM! FastChat-vLLM integration has powered [LMSYS Vicuna and Chatbot Arena](https://chat.lmsys.org) since mid-April. Check out our [blog post](https://vllm.ai).

</details>

---

## About
vLLM is a fast and easy-to-use library for LLM inference and serving.
Originally developed in the [Sky Computing Lab](https://sky.cs.berkeley.edu) at UC Berkeley, vLLM has evolved into a community-driven project with contributions from both academia and industry.
vLLM is fast with:
- State-of-the-art serving throughput
- Efficient management of attention key and value memory with [**PagedAttention**](https://blog.vllm.ai/2023/06/20/vllm.html)
- Continuous batching of incoming requests
- Fast model execution with CUDA/HIP graphs
- Quantizations: [GPTQ](https://arxiv.org/abs/2210.17323), [AWQ](https://arxiv.org/abs/2306.00978), [AutoRound](https://arxiv.org/abs/2309.05516), INT4, INT8, and FP8 (see the configuration sketch after this list)
- Optimized CUDA kernels, including integration with FlashAttention and FlashInfer
- Speculative decoding
- Chunked prefill

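Several of the features above are exposed as plain engine options. The snippet below is a minimal configuration sketch, assuming a recent vLLM release: the model ID is a small placeholder, and exact option names and defaults can vary between versions, so treat it as illustrative rather than canonical.

```python
from vllm import LLM

# Minimal configuration sketch (assumptions: option names as in recent vLLM
# releases; "facebook/opt-125m" is only a small placeholder model).
llm = LLM(
    model="facebook/opt-125m",       # any supported Hugging Face model ID
    enable_prefix_caching=True,      # prefix caching
    enable_chunked_prefill=True,     # chunked prefill
    gpu_memory_utilization=0.9,      # fraction of GPU memory vLLM may use
    # quantization="awq",            # only for checkpoints quantized with a matching method
)
```
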
vLLM is flexible and easy to use with:
- Seamless integration with popular Hugging Face models
- High-throughput serving with various decoding algorithms, including *parallel sampling*, *beam search*, and more
- Tensor, pipeline, data, and expert parallelism support for distributed inference
- Streaming outputs
- OpenAI-compatible API server (see the client sketch after this list)
- Support for NVIDIA GPUs, AMD CPUs and GPUs, Intel CPUs and GPUs, PowerPC CPUs, TPU, and AWS Neuron
- Prefix caching support
- Multi-LoRA support

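As a sketch of the OpenAI-compatible server in action, the example below assumes a server has already been launched in another terminal (for example with `vllm serve facebook/opt-125m`, where the model ID is a placeholder) and queries it with the official `openai` Python client; the local address shown is the default and may differ in your deployment.

```python
from openai import OpenAI

# Assumes an OpenAI-compatible vLLM server is already running locally, e.g.
# started with `vllm serve facebook/opt-125m` (placeholder model, default port).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.completions.create(
    model="facebook/opt-125m",   # must match the model the server was started with
    prompt="vLLM is",
    max_tokens=32,
)
print(response.choices[0].text)
```
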
vLLM seamlessly supports most popular open-source models on Hugging Face, including:
- Transformer-like LLMs (e.g., Llama)
- Mixture-of-Experts LLMs (e.g., Mixtral, DeepSeek-V2 and V3)
- Embedding models (e.g., E5-Mistral)
- Multi-modal LLMs (e.g., LLaVA)

Find the full list of supported models [here](https://docs.vllm.ai/en/latest/models/supported_models.html).
## Getting Started
Install vLLM with `pip` or [from source](https://docs.vllm.ai/en/latest/getting_started/installation/gpu/index.html#build-wheel-from-source):
```bash
pip install vllm
```
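
For a quick sanity check after installation, a minimal offline-inference sketch looks like the following; the model ID is a placeholder for any supported Hugging Face model, and the prompts and sampling parameters are arbitrary.

```python
from vllm import LLM, SamplingParams

# Placeholder prompts and sampling settings for a smoke test.
prompts = ["Hello, my name is", "The future of AI is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95)

# "facebook/opt-125m" is a small placeholder; substitute any supported model.
llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(f"Prompt: {output.prompt!r}, Generated: {output.outputs[0].text!r}")
```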
Visit our [documentation](https://docs.vllm.ai/en/latest/) to learn more.

- [List of Supported Models](https://docs.vllm.ai/en/latest/models/supported_models.html)
## Contributing
We welcome and value any contributions and collaborations.
Please check out [Contributing to vLLM](https://docs.vllm.ai/en/latest/contributing/index.html) for how to get involved.
## Sponsors
vLLM is a community project. Our compute resources for development and testing are supported by the following organizations. Thank you for your support!
<!-- Note: Please sort them in alphabetical order. -->
<!-- Note: Please keep these consistent with docs/community/sponsors.md -->
Cash Donations:
- a16z
- Dropbox
- Sequoia Capital
- Skywork AI
- ZhenFund

Compute Resources:
- Alibaba Cloud
- AMD
- Anyscale
- AWS
- Crusoe Cloud
- Databricks
- DeepInfra
- Google Cloud
- Intel
- Lambda Lab
- Nebius
- Novita AI
- NVIDIA
- Replicate
- Roblox
- RunPod
- Trainy
- UC Berkeley
- UC San Diego

Slack Sponsor: Anyscale
We also have an official fundraising venue through [OpenCollective](https://opencollective.com/vllm). We plan to use the fund to support the development, maintenance, and adoption of vLLM.
## Citation
If you use vLLM for your research, please cite our [paper](https://arxiv.org/abs/2309.06180):
```bibtex
@inproceedings{kwon2023efficient,
  title={Efficient Memory Management for Large Language Model Serving with PagedAttention},
  author={Woosuk Kwon and Zhuohan Li and Siyuan Zhuang and Ying Sheng and Lianmin Zheng and Cody Hao Yu and Joseph E. Gonzalez and Hao Zhang and Ion Stoica},
  booktitle={Proceedings of the ACM SIGOPS 29th Symposium on Operating Systems Principles},
  year={2023}
}
```
## Contact Us
<!-- --8<-- [start:contact-us] -->
- For technical questions and feature requests, please use GitHub [Issues](https://github.com/vllm-project/vllm/issues)
- For discussions with fellow users, please use the [vLLM Forum](https://discuss.vllm.ai)
- For coordinating contributions and development, please use [Slack](https://slack.vllm.ai)
- For security disclosures, please use GitHub's [Security Advisories](https://github.com/vllm-project/vllm/security/advisories) feature