
Diagnosing Vector Database Search Quality

v20260420
qdrant-search-quality-diagnosis
Provides a systematic guide for diagnosing and improving poor search quality in vector databases like Qdrant. It covers troubleshooting low recall, degraded approximate search, selecting embedding models, and tuning HNSW parameters, quantization, and filtering strategies. Essential for optimizing RAG and semantic search pipelines.
Overview

How to Diagnose Bad Search Quality

Before tuning, establish baselines. Use exact KNN search as ground truth and compare it against approximate HNSW results. Target >95% recall@K for production.
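The baseline comparison above can be sketched in a few lines of plain Python. The IDs below are illustrative; in practice they would come from the same query run twice, once with exact search and once with HNSW:

```python
def recall_at_k(exact_ids, approx_ids, k):
    """Fraction of the exact top-k results that approximate search also returned."""
    exact_top = set(exact_ids[:k])
    approx_top = set(approx_ids[:k])
    return len(exact_top & approx_top) / k

# Example: result IDs for one query, exact KNN vs. HNSW
exact = [7, 2, 9, 4, 1, 8, 3, 6, 5, 0]
approx = [7, 2, 4, 9, 8, 1, 3, 5, 11, 12]
print(recall_at_k(exact, approx, 10))  # 0.8 -> below the 0.95 target
```

Average this over a representative query set (hundreds of queries, not one) before drawing conclusions.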

Don't Know What's Wrong Yet

Use when: results are irrelevant or missing expected matches and you need to isolate the cause.

  • Test with exact=true to bypass HNSW approximation (Search API)
  • If exact search is also bad, suspect the model or search pipeline; if exact is good but approximate is bad, tune HNSW.
  • Check whether quantization degrades quality (compare results with and without it)
  • Check whether filters are too restrictive (you may need ACORN)
  • If chunked documents produce duplicate results, deduplicate with the Grouping API

Payload filtering and sparse vector search are different things. Metadata (dates, categories, tags) goes in payload for filtering. Text content goes in sparse vectors for search.
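The exact=true check from the first diagnostic step is a search parameter in Qdrant's REST Search API. A minimal request body sketch, with a placeholder collection name and a truncated example vector:

```json
POST /collections/my_collection/points/search
{
  "vector": [0.12, -0.41, 0.33],
  "limit": 10,
  "params": { "exact": true }
}
```

Run the same query again without "exact": true and diff the returned IDs; the gap between the two result sets is exactly the HNSW approximation error.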

Approximate Search Worse Than Exact

Use when: exact search returns good results but HNSW approximation misses them.

Binary quantization requires rescore. Without it, quality loss is severe. Use oversampling (3-5x minimum for binary) to recover recall. Always test quantization impact on your data before production. Quantization

Wrong Embedding Model

Use when: exact search also returns bad results.

Test top 3 MTEB models on 100-1000 sample queries, measure recall@10. Domain-specific models often outperform general models. Hosted inference
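A minimal harness for that comparison, assuming you have relevance judgments per query. The search function here is a hypothetical stand-in for "embed the query with candidate model X, then search the corresponding collection":

```python
# relevant[q] holds the known-good document IDs for query q.
def mean_recall_at_10(queries, relevant, search_fn):
    scores = []
    for q in queries:
        hits = search_fn(q)[:10]                    # top-10 IDs for this query
        found = len(set(hits) & relevant[q])
        scores.append(found / min(10, len(relevant[q])))
    return sum(scores) / len(scores)

# Toy data: two queries, canned results per candidate model
relevant = {"q1": {1, 2, 3}, "q2": {4, 5}}
model_a = {"q1": [1, 2, 9, 8], "q2": [4, 5, 7]}
model_b = {"q1": [9, 8, 7, 1], "q2": [6, 7, 8]}
for name, results in [("model_a", model_a), ("model_b", model_b)]:
    print(name, mean_recall_at_10(list(relevant), relevant, results.__getitem__))
```

With real data, swap the canned dicts for a function that embeds the query and hits a collection indexed with that model's vectors; run the same queries against each candidate and keep the model with the best mean recall.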

Unoptimized Search Pipeline

Use when: exact search also returns bad results and the embedding model has already been confirmed as a good fit.

Optimize the search pipeline following the advanced search-strategies skill.

What NOT to Do

  • Tune Qdrant before verifying the model is right for the task (most quality issues are model issues)
  • Use binary quantization without rescore (severe quality loss)
  • Set hnsw_ef lower than results requested (guaranteed bad recall)
  • Skip payload indexes on filtered fields then blame quality (HNSW can't traverse filtered-out nodes, and filterable HNSW is built only if payload indexes were set up prior)
  • Deploy without baseline recall or other search relevance metrics (no way to measure regressions)
  • Confuse payload filtering with sparse vector search (different things, different config)
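The payload-index pitfall above is worth making concrete: filterable HNSW links are built only for fields that had a payload index when the graph was constructed. A sketch of creating one via Qdrant's REST API (collection and field names are placeholders):

```json
PUT /collections/my_collection/index
{
  "field_name": "category",
  "field_schema": "keyword"
}
```

Create the index before (or re-index after) bulk ingestion, then re-measure filtered recall; without it, heavily filtered queries fall back to slow or low-recall paths.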
Info
Category Data Science
Name qdrant-search-quality-diagnosis
Version v20260420
Size 3.62KB
Updated At 2026-04-28