
Qdrant Memory Usage Optimization Guide

v20260420
qdrant-memory-usage-optimization
This guide helps users diagnose and optimize Qdrant memory usage. Refer to it when you encounter excessive memory consumption, memory leaks, or node crashes. It explains Qdrant's memory structure in detail and covers a range of advanced optimization strategies, from quantization and float16/int8 datatypes to moving indexes and vector components to disk, to keep the system stable and scaling efficiently.
Overview

Understanding memory usage

Qdrant operates with two types of memory:

  • Resident memory (a.k.a. RssAnon) - memory used for internal data structures such as the ID tracker, plus components that must stay in RAM, for example quantized vectors with always_ram=true and payload indexes.

  • OS page cache - memory used for caching disk reads, which can be released when needed. Original vectors are normally stored in page cache, so the service won't crash if RAM is full, but performance may degrade.

It is normal for the OS page cache to occupy all available RAM, but if resident memory is above 80% of total RAM, it is a sign of a problem.
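As a rough illustration (not part of Qdrant itself), the 80% guideline above can be checked against the RssAnon and MemTotal values the kernel reports in /proc; the function name and threshold here are illustrative assumptions:

```python
def resident_memory_warning(rss_anon_kb: int, mem_total_kb: int,
                            threshold: float = 0.8) -> bool:
    """Return True if resident (anonymous) memory exceeds the given share of RAM.

    Page-cache usage is deliberately ignored: the OS reclaims it on demand,
    so only high resident memory signals a real capacity problem.
    """
    return rss_anon_kb / mem_total_kb > threshold

# Example: 26 GiB resident on a 32 GiB node -> above the 80% guideline
print(resident_memory_warning(26 * 1024 * 1024, 32 * 1024 * 1024))  # True
```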

Memory usage monitoring

  • Qdrant exposes memory usage through the /metrics endpoint. See Monitoring docs.
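The /metrics endpoint returns Prometheus text format. A minimal sketch of extracting one gauge from such output; the metric name memory_resident_bytes is an assumption for illustration, as exact metric names depend on the Qdrant version:

```python
import re
from typing import Optional

def parse_prometheus_metric(text: str, name: str) -> Optional[float]:
    """Extract a single gauge value from Prometheus-format /metrics output."""
    match = re.search(
        rf"^{re.escape(name)}(?:{{[^}}]*}})?\s+([0-9.eE+-]+)\s*$",
        text, re.MULTILINE)
    return float(match.group(1)) if match else None

sample = """\
# TYPE memory_resident_bytes gauge
memory_resident_bytes 1073741824
"""
print(parse_prometheus_metric(sample, "memory_resident_bytes"))  # 1073741824.0
```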

How much memory is needed for Qdrant?

Optimal memory usage depends on the use case.

For a detailed breakdown of memory usage at large scale, see Large scale memory usage example.

Payload indexes and HNSW graph also require memory, along with vectors themselves, so it's important to consider them in calculations.

Additionally, Qdrant requires some extra memory for optimizations. During optimization, optimized segments are fully loaded into RAM, so it is important to leave enough headroom. The larger max_segment_size is, the more headroom is needed.
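The considerations above can be folded into a back-of-the-envelope estimate. The 1.5x overhead factor below is a common rule of thumb covering the HNSW graph, payload indexes, and optimizer headroom, not an exact figure; the function is a hypothetical sketch:

```python
def estimated_ram_bytes(num_vectors: int, dim: int,
                        bytes_per_dim: int = 4, overhead: float = 1.5) -> int:
    """Rough RAM estimate: raw vector size times an overhead factor
    (rule of thumb, not an exact measurement)."""
    return int(num_vectors * dim * bytes_per_dim * overhead)

# 1M vectors of 768 float32 dims -> roughly 4.3 GiB
print(estimated_ram_bytes(1_000_000, 768) / 2**30)
```

Switching bytes_per_dim to 2 (float16) or 1 (int8) shows the corresponding savings directly.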

When to put HNSW index on disk

Putting frequently used components (such as HNSW index) on disk might cause significant performance degradation. There are some scenarios, however, when it can be a good option:

  • Deployments with low latency disks - local NVMe or similar.
  • Multi-tenant deployments, where only a subset of tenants is frequently accessed, so that only a fraction of data & index is loaded in RAM at a time.
  • Deployments with inline storage enabled.
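For one of those scenarios, a sketch of the REST request body for creating a collection with the HNSW graph kept on disk. The field names follow Qdrant's REST schema (hnsw_config.on_disk) as commonly documented, but verify them against the docs for your version:

```python
import json

# Hypothetical collection settings for illustration only.
create_collection_body = {
    "vectors": {"size": 768, "distance": "Cosine"},
    "hnsw_config": {"on_disk": True},  # keep the HNSW graph on disk, page-cached
}
print(json.dumps(create_collection_body, indent=2))
```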

How to minimize memory footprint

The main challenge is to keep on disk the parts of the data that are rarely accessed. Here are the main techniques for achieving that:

  • Use quantization to store only compressed vectors in RAM. See Quantization docs

  • Use float16 or int8 datatypes to reduce the memory usage of vectors by 2x or 4x respectively, with some precision tradeoff. Read more about vector datatypes in the documentation

  • Leverage Matryoshka Representation Learning (MRL) to store only small vectors in RAM while keeping large vectors on disk. Examples of how to use MRL with Qdrant Cloud inference: MRL docs

  • For multi-tenant deployments with small tenants, vectors can be stored on disk efficiently, since each tenant's data is stored together. See Multitenancy docs

  • For deployments with fast local storage and relatively low search-throughput requirements, it may be possible to store all components of the vector store on disk. Read more about the performance implications of on-disk storage in the article

  • For low-RAM environments, consider the async_scorer config, which enables io_uring-based parallel disk access and can significantly improve the performance of on-disk storage (only available on Linux with kernel 5.11+). Read more about async_scorer in the article

  • Consider storing Sparse Vectors and text payload on disk, as they are usually more disk-friendly than dense vectors.

  • Configure payload indexes to be stored on disk docs

  • Configure sparse vectors to be stored on disk docs
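Several of the techniques above can be combined in one collection configuration. A sketch of such a REST request body follows; the field names (datatype, on_disk, quantization_config, sparse_vectors) follow Qdrant's REST schema as of recent versions, but treat them as assumptions and check the docs for your release. Note that async_scorer is a server-side setting in the Qdrant config file, not part of the collection body:

```python
import json

# Hypothetical low-footprint collection configuration for illustration.
body = {
    "vectors": {
        "size": 768,
        "distance": "Cosine",
        "datatype": "float16",   # halve vector memory vs float32
        "on_disk": True,         # originals live in page cache, not resident RAM
    },
    "quantization_config": {
        # the compressed int8 copy stays resident for fast scoring
        "scalar": {"type": "int8", "always_ram": True}
    },
    "sparse_vectors": {
        "text": {"index": {"on_disk": True}}  # sparse index on disk
    },
}
print(json.dumps(body, indent=2))
```

The general pattern: keep a small, always-hot representation (quantized vectors) in RAM and push everything bulky and rarely touched (original vectors, sparse indexes, payload indexes) to disk.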

Info
Category Programming & Development
Name qdrant-memory-usage-optimization
Version v20260420
Size 4.75KB
Updated 2026-04-28
Language