Qdrant operates with two types of memory:
- Resident memory (aka RssAnon): memory used for internal data structures such as the ID tracker, plus components that must stay in RAM, for example quantized vectors with always_ram=true and payload indexes.
- OS page cache: memory used for caching disk reads, which the OS can reclaim when needed. Original vectors are normally served from the page cache, so the service won't crash if RAM fills up, but search performance may degrade.
It is normal for the OS page cache to occupy all available RAM, but if resident memory is above 80% of total RAM, it is a sign of a problem.
You can monitor memory usage via the /metrics endpoint; see the Monitoring docs. Optimal memory usage depends on the use case.
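As a quick sanity check for the 80% rule above, resident memory can also be read directly from Linux procfs. The sketch below is illustrative only: it assumes a Linux host, and qdrant_pid is a placeholder for the actual Qdrant process id.

```python
# Sketch: check whether Qdrant's resident memory exceeds 80% of total RAM.
# Assumes Linux procfs; qdrant_pid is a placeholder for the real process id.

def parse_kib(text: str, key: str) -> int:
    """Extract a 'Key:   12345 kB' value (in KiB) from procfs-style text."""
    for line in text.splitlines():
        if line.startswith(key + ":"):
            return int(line.split()[1])
    raise KeyError(key)

def resident_fraction(qdrant_pid: int) -> float:
    """RSS of the process divided by total system RAM."""
    with open(f"/proc/{qdrant_pid}/status") as f:
        rss_kib = parse_kib(f.read(), "VmRSS")
    with open("/proc/meminfo") as f:
        total_kib = parse_kib(f.read(), "MemTotal")
    return rss_kib / total_kib

# Usage (placeholder pid):
# if resident_fraction(qdrant_pid) > 0.8:
#     print("resident memory above 80% of RAM; investigate")
```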
For a detailed breakdown of memory usage at large scale, see Large scale memory usage example.
In addition to the vectors themselves, payload indexes and the HNSW graph also require memory, so it is important to include them in capacity calculations.
Additionally, Qdrant requires extra memory for optimizations: during optimization, the segments being optimized are fully loaded into RAM, so it is important to leave enough headroom.
The larger max_segment_size is, the more headroom is needed.
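As a rough illustration of such a calculation, here is a back-of-envelope sketch with made-up numbers (1M 768-dimensional float32 vectors, HNSW with m=16, and an example max_segment_size_kb value). The HNSW link estimate is a simplification, and real deployments also spend memory on payload indexes, the ID tracker, and other overhead.

```python
# Back-of-envelope RAM estimate: raw vectors + HNSW graph
# + optimization headroom for one fully loaded segment.
# All numbers are illustrative, not exact.

num_vectors = 1_000_000
dim = 768
bytes_per_value = 4          # float32; float16 would halve this
hnsw_m = 16                  # HNSW links per node (m)

vectors_bytes = num_vectors * dim * bytes_per_value

# Crude HNSW estimate: layer 0 dominates with up to 2*m links
# per point, each link being a 4-byte id.
hnsw_bytes = num_vectors * hnsw_m * 2 * 4

max_segment_size_kb = 200_000                # example optimizer setting
headroom_bytes = max_segment_size_kb * 1024  # one segment loaded fully into RAM

total = vectors_bytes + hnsw_bytes + headroom_bytes
print(f"vectors:  {vectors_bytes / 2**30:.2f} GiB")
print(f"hnsw:     {hnsw_bytes / 2**30:.2f} GiB")
print(f"headroom: {headroom_bytes / 2**30:.2f} GiB")
```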
Putting frequently used components (such as the HNSW index) on disk can cause significant performance degradation. There are some scenarios, however, in which it can be a good option:
The main challenge is to put on disk those parts of the data that are rarely accessed. The main techniques to achieve that are:
- Use quantization to store only compressed vectors in RAM (see the Quantization docs).
- Use float16 or int8 datatypes to reduce the memory usage of vectors by 2x or 4x respectively, at some cost in precision (see the vector datatypes documentation).
- Leverage Matryoshka Representation Learning (MRL) to keep only small vectors in RAM while storing large vectors on disk; the MRL docs include examples of using MRL with Qdrant Cloud inference.
- In multi-tenant deployments with small tenants, vectors can be stored on disk, because each tenant's data is stored together and reads stay localized (see the Multitenancy docs).
- For deployments with fast local storage and relatively low search-throughput requirements, it may be possible to store all components of the vector store on disk; read more about the performance implications of on-disk storage in the article.
- In low-RAM environments, consider the async_scorer setting, which enables io_uring for parallel disk access and can significantly improve the performance of on-disk storage (available only on Linux with kernel 5.11+); read more about async_scorer in the article.
- Consider storing sparse vectors and text payloads on disk, as they are usually more disk-friendly than dense vectors.
- Configure payload indexes to be stored on disk (docs).
- Configure sparse vectors to be stored on disk (docs).
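Several of the techniques above can be combined in a single collection. The sketch below shows a create-collection request body as a Python dict: original vectors on disk in float16, an int8-quantized copy kept in RAM, and sparse vectors and a payload index on disk. Field names follow Qdrant's REST API; the sizes, field names, and values are placeholders, so check the current API reference before using them.

```python
# Sketch of a Qdrant create-collection body combining on-disk techniques.
# Field names follow Qdrant's REST API; values are placeholders.

collection_config = {
    "vectors": {
        "size": 768,
        "distance": "Cosine",
        "on_disk": True,        # original vectors live on disk (page cache)
        "datatype": "float16",  # halves vector memory vs float32
    },
    "quantization_config": {
        # compressed copy stays resident for fast scoring
        "scalar": {"type": "int8", "always_ram": True},
    },
    "sparse_vectors": {
        # sparse index on disk; usually more disk-friendly than dense vectors
        "text": {"index": {"on_disk": True}},
    },
}

# A payload index is configured per field when the index is created:
payload_index = {
    "field_name": "text",
    "field_schema": {"type": "text", "on_disk": True},
}
```

These bodies would be sent to the collections and index endpoints respectively; the same options are exposed as typed models in the official clients.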