{"id":40963,"date":"2025-10-12T14:17:54","date_gmt":"2025-10-12T12:17:54","guid":{"rendered":"https:\/\/www.dbi-services.com\/blog\/?p=40963"},"modified":"2025-10-12T14:17:56","modified_gmt":"2025-10-12T12:17:56","slug":"rag-series-adaptive-rag-understanding-confidence-precision-ndcg","status":"publish","type":"post","link":"https:\/\/www.dbi-services.com\/blog\/rag-series-adaptive-rag-understanding-confidence-precision-ndcg\/","title":{"rendered":"RAG Series \u2013 Adaptive RAG, understanding Confidence, Precision &amp; nDCG"},"content":{"rendered":"\n<h1 class=\"wp-block-heading\" id=\"h-introduction\">Introduction<\/h1>\n\n\n\n<p>In this RAG series we tried so far to introduce new concepts of the RAG workflow each time. This new article is going to introduce also new key concepts at the heart of Retrieval. Adaptive RAG will allow us to talk about measuring the quality of the retrieved data and how we can leverage it to push our optimizations further.<br>A now <a href=\"https:\/\/mlq.ai\/media\/quarterly_decks\/v0.1_State_of_AI_in_Business_2025_Report.pdf\">famous study from MIT<\/a> is stating how 95% of organizations fail to get ROI within the 6 months of their &#8220;AI projects&#8221;. Although we could argue about the relevancy of the study and what it actually measured,  one of the key element to have a successful implementation is measurement. <br>An old BI principle is to know your KPI, what it really measures but also when it fails to measure.  For example if you would use the speedometer on your dashboard&#8217;s car to measure the speed at which you are going, you&#8217;d be right as long as the wheels are touching the ground. So with that in mind, let&#8217;s see how we can create smart and reliable retrieval. 
<br><\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-from-hybrid-to-adaptive\">From Hybrid to Adaptive<\/h2>\n\n\n\n<p>Hybrid search significantly improves retrieval quality by combining dense semantic vectors with sparse lexical signals. However, real-world queries vary:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Some are <strong>factual<\/strong>, asking for specific names, numbers, or entities.<\/li>\n\n\n\n<li>Others are <strong>conceptual<\/strong>, exploring ideas, reasons, or relationships.<\/li>\n<\/ul>\n\n\n\n<p>A single static weighting between dense and sparse methods cannot perform optimally across all query types.<\/p>\n\n\n\n<p><strong>Adaptive RAG<\/strong> introduces a lightweight classifier that analyzes each query to determine its type and dynamically adjusts the hybrid weights before searching.<br>For example:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>Query Type<\/th><th>Example<\/th><th>Dense Weight<\/th><th>Sparse Weight<\/th><\/tr><\/thead><tbody><tr><td>Factual<\/td><td>\u201cWho founded PostgreSQL?\u201d<\/td><td>0.3<\/td><td>0.7<\/td><\/tr><tr><td>Conceptual<\/td><td>\u201cHow does PostgreSQL handle concurrency?\u201d<\/td><td>0.7<\/td><td>0.3<\/td><\/tr><tr><td>Exploratory<\/td><td>\u201cTell me about Postgres performance tuning\u201d<\/td><td>0.5<\/td><td>0.5<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>This dynamic weighting ensures that each search leverages the right signals:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Sparse when <strong>exact matching<\/strong> matters.<\/li>\n\n\n\n<li>Dense when <strong>semantic similarity<\/strong> matters.<\/li>\n<\/ul>\n\n\n\n<p>Under the hood, our <code>AdaptiveSearchEngine<\/code> wraps dense and sparse retrieval modules. 
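The weight selection driven by the table above can be sketched as a simple lookup. This is an illustrative sketch only, not the lab's actual `AdaptiveSearchEngine` code; the `WEIGHTS` mapping and the `hybrid_weights` function name are assumptions:

```python
# Hypothetical mapping from classified query type to hybrid weights.
# The values mirror the table above; names are illustrative only.
WEIGHTS = {
    "factual":     (0.3, 0.7),   # (dense_weight, sparse_weight)
    "conceptual":  (0.7, 0.3),
    "exploratory": (0.5, 0.5),
}

def hybrid_weights(query_type):
    # Unknown classifications fall back to a balanced 50/50 split
    return WEIGHTS.get(query_type, (0.5, 0.5))

print(hybrid_weights("factual"))      # (0.3, 0.7)
print(hybrid_weights("navigational")) # (0.5, 0.5)
```

A real classifier would produce the `query_type` label (and a confidence) from the query text; the lookup itself stays this simple.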
Before executing, it classifies the query, assigns weights, and fuses the results via a <strong>weighted Reciprocal Rank Fusion (RRF)<\/strong>, giving us the best of both worlds \u2014 adaptivity without complexity.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-confidence-driven-retrieval\">Confidence-Driven Retrieval<\/h2>\n\n\n\n<p>Once we make retrieval adaptive, the next challenge is <strong>trust<\/strong>. How confident are we in the results we just returned?<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-confidence-from-classification\">Confidence from Classification<\/h3>\n\n\n\n<p>Each query classification includes a <strong>confidence score<\/strong> (e.g., 0.92 \u201cfactual\u201d vs 0.58 \u201cconceptual\u201d).<br>When classification confidence is low, Adaptive RAG defaults to a balanced retrieval (dense 0.5, sparse 0.5) \u2014 avoiding extreme weighting that might miss relevant content.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\" id=\"h-confidence-from-retrieval\">Confidence from Retrieval<\/h3>\n\n\n\n<p>We also compute confidence based on retrieval statistics:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The similarity gap between the first and second ranked results (large gap = high confidence).<\/li>\n\n\n\n<li>Average similarity score of the top-k results.<\/li>\n\n\n\n<li>Ratio of sparse vs dense agreement (when both find the same document, confidence increases).<\/li>\n<\/ul>\n\n\n\n<p>These metrics are aggregated into a <strong>normalized confidence score<\/strong> between 0 and 1:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\ndef compute_confidence(top_scores, overlap_ratio):\n    # Mean similarity of the top 3 results, capped at 1.0\n    sim_conf = min(1.0, sum(top_scores&#x5B;:3]) \/ 3)\n    # Rescale dense\/sparse overlap into the range 0.3 to 1.0\n    overlap_conf = 0.3 + 0.7 * overlap_ratio\n    # Final confidence: rounded mean of both signals\n    return round((sim_conf + overlap_conf) \/ 2, 2)\n\n<\/pre><\/div>\n\n\n<p>If confidence &lt; 0.5, the system
triggers a <strong>fallback strategy<\/strong>:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Expands <code>top_k<\/code> results (e.g., from 10 \u2192 30).<\/li>\n\n\n\n<li>Broadens search to both dense and sparse equally.<\/li>\n\n\n\n<li>Logs the event for later evaluation.<\/li>\n<\/ul>\n\n\n\n<p>The retrieval API now returns a structured response:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\n{\n  &quot;query&quot;: &quot;When was PostgreSQL 1.0 released?&quot;,\n  &quot;query_type&quot;: &quot;factual&quot;,\n  &quot;confidence&quot;: 0.87,\n  &quot;precision@10&quot;: 0.8,\n  &quot;recall@10&quot;: 0.75\n}\n\n<\/pre><\/div>\n\n\n<p>This allows monitoring not just <em>what<\/em> was retrieved, but <em>how sure<\/em> the system is, enabling alerting, adaptive reruns, or downstream LLM prompt adjustments (e.g., &#8220;Answer cautiously&#8221; when confidence &lt; 0.6).<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-evaluating-quality-with-ndcg\">Evaluating Quality with nDCG<\/h2>\n\n\n\n<p>Precision and recall are fundamental metrics for retrieval systems, but they don\u2019t consider <strong>the order<\/strong> of results.
If a relevant document appears at rank 10 instead of rank 1, the user experience is still poor even if recall is high.<\/p>\n\n\n\n<p>That\u2019s why we now add <strong>nDCG@k (normalized Discounted Cumulative Gain)<\/strong> \u2014 a ranking-aware measure that rewards systems for ordering relevant results near the top.<\/p>\n\n\n\n<p>The idea:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>DCG@k<\/strong> evaluates gain by position:<\/li>\n<\/ul>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"424\" height=\"129\" src=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/10\/image-5.png\" alt=\"\" class=\"wp-image-40969\" srcset=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/10\/image-5.png 424w, https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/10\/image-5-300x91.png 300w\" sizes=\"auto, (max-width: 424px) 100vw, 424px\" \/><\/figure>\n<\/div>\n\n\n<ul class=\"wp-block-list\">\n<li><strong>nDCG@k<\/strong> normalizes this against the ideal order (IDCG):<\/li>\n<\/ul>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"408\" height=\"116\" src=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/10\/image-6.png\" alt=\"\" class=\"wp-image-40970\" srcset=\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/10\/image-6.png 408w, https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/10\/image-6-300x85.png 300w\" sizes=\"auto, (max-width: 408px) 100vw, 408px\" \/><\/figure>\n<\/div>\n\n\n<p>A perfect ranking yields nDCG = 1.0. 
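To make the formulas concrete, here is a small worked example. It is a sketch only: the gain uses (2^rel - 1) discounted by log2(position + 1), as in the evaluation code later in this post, and the relevance grades are invented for illustration:

```python
import math

def dcg_at_k(relevances, k):
    # Gain (2**rel - 1), discounted by log2(position + 1) for 1-based positions
    return sum((2**rel - 1) / math.log2(i + 2)
               for i, rel in enumerate(relevances[:k]))

# Relevance grades of the retrieved list, in ranked order (invented example):
# the highly relevant document (grade 2) was returned at rank 2, not rank 1.
retrieved = [0, 2, 1]
ideal = sorted(retrieved, reverse=True)   # [2, 1, 0]

ndcg = dcg_at_k(retrieved, 3) / dcg_at_k(ideal, 3)
print(round(ndcg, 3))  # 0.659: same documents, imperfect order
```

With the same three documents, recall is identical in both orders; only nDCG exposes the ranking difference.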
Poorly ordered but complete results may still have high recall, but lower nDCG.<\/p>\n\n\n\n<p>In practice, we calculate nDCG@10 for each query and average it over the dataset.<br>Our evaluation script (<code>lab\/04_evaluate\/metrics.py<\/code>) integrates this directly:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nfrom evaluation import ndcg_at_k\n\nscore = ndcg_at_k(actual=relevant_docs, predicted=retrieved_docs, k=10)\nprint(f&quot;nDCG@10: {score:.3f}&quot;)\n\n<\/pre><\/div>\n\n\n<h3 class=\"wp-block-heading\" id=\"h-results-on-the-wikipedia-dataset-25k-articles\">Results on the Wikipedia dataset (25K articles)<\/h3>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>Method<\/th><th>Precision@10<\/th><th>Recall@10<\/th><th>nDCG@10<\/th><\/tr><\/thead><tbody><tr><td>Dense only<\/td><td>0.61<\/td><td>0.54<\/td><td>0.63<\/td><\/tr><tr><td>Hybrid fixed weights<\/td><td>0.72<\/td><td>0.68<\/td><td>0.75<\/td><\/tr><tr><td><strong>Adaptive (dynamic)<\/strong><\/td><td><strong>0.78<\/strong><\/td><td><strong>0.74<\/strong><\/td><td><strong>0.82<\/strong><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>These results confirm that <strong>adaptive weighting not only improves raw accuracy but also produces better-ranked results<\/strong>, giving users relevant documents earlier in the list.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-implementation-in-our-lab\">Implementation in our LAB<\/h2>\n\n\n\n<p>You can explore the implementation in the GitHub repository:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\ngit clone https:\/\/github.com\/boutaga\/pgvector_RAG_search_lab\ncd pgvector_RAG_search_lab\n\n<\/pre><\/div>\n\n\n<p>Key components:<\/p>\n\n\n\n<ul 
class=\"wp-block-list\">\n<li><code>lab\/04_search\/adaptive_search.py<\/code> \u2014 query classification, adaptive weights, confidence scoring.<\/li>\n\n\n\n<li><code>lab\/04_evaluate\/metrics.py<\/code> \u2014 precision, recall, and nDCG evaluation.<\/li>\n\n\n\n<li>Streamlit UI (<code>streamlit run streamlit_demo.py<\/code>) \u2014 visualize retrieved chunks, scores, and confidence in real time.<\/li>\n<\/ul>\n\n\n\n<p>Example usage:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\npython lab\/04_search\/adaptive_search.py --query &quot;Who invented SQL?&quot;\n\n<\/pre><\/div>\n\n\n<p>Output:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nQuery type: factual (0.91 confidence)\nDense weight: 0.3 | Sparse weight: 0.7\nPrecision@10: 0.82 | Recall@10: 0.77 | nDCG@10: 0.84\n\n<\/pre><\/div>\n\n\n<p>This feedback loop closes the gap between research and production \u2014 making RAG not only smarter but measurable.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-what-is-relevance\">What is \u201cRelevance\u201d?<\/h2>\n\n\n\n<p>When we talk about <strong>precision<\/strong>, <strong>recall<\/strong>, or <strong>nDCG<\/strong>, all three depend on one hidden thing:<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p class=\"has-text-align-center\"><strong>a <em>ground truth<\/em> of which documents are relevant for each query.<\/strong><\/p>\n<\/blockquote>\n\n\n\n<p>There are <strong>two main ways<\/strong> to establish that ground truth:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>Approach<\/th><th>Who decides relevance<\/th><th>Pros<\/th><th>Cons<\/th><\/tr><\/thead><tbody><tr><td><strong>Human labeling<\/strong><\/td><td>Experts mark which documents correctly answer each query<\/td><td>Most accurate; useful for 
benchmarks<\/td><td>Expensive and slow<\/td><\/tr><tr><td><strong>Automated or LLM-assisted labeling<\/strong><\/td><td>An LLM (or rules) judges if a retrieved doc contains the correct answer<\/td><td>Scalable and repeatable<\/td><td>Risk of bias \/ noise<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>In some business domains you are almost forced to use human labeling, because the business technicalities are so deep that automating them is hard. Labeling can be slow and expensive, but I learned that it is also a way to introduce change management towards AI workflows: it lets key employees of the company participate and build a solution with their expertise, instead of going through the harder project of asking an external organization to build specific business logic into software that was never made to handle it in the first place. As a DBA, I witnessed business logic move away from databases towards ORMs and application code; this time, business logic is moving towards AI workflows. Starting such a human labeling project may be the first step in that direction and guarantees solid foundations.<br>Managers need to keep in mind that AI workflows are not just a technical solution; they are a socio-technical framework that enables organizational growth. You can&#8217;t just ship an AI chatbot into an app and expect 10x returns with minimal effort; that simplistic mindset has already cost billions, according to the MIT study.<br><br>In a research setup (like your <code>pgvector_RAG_search_lab<\/code>), you can <strong>mix both<\/strong> approaches:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Start with a <strong>seed dataset<\/strong> of <code>(query, relevant_doc_ids)<\/code> pairs (e.g. a
small set labeled manually).<\/li>\n\n\n\n<li>Use the LLM to <strong>extend or validate<\/strong> relevance judgments automatically.<\/li>\n<\/ul>\n\n\n\n<p>For example:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: plain; title: ; notranslate\" title=\"\">\nprompt = f&quot;&quot;&quot;\nQuery: {query}\nDocument: {doc_text&#x5B;:2000]}\nIs this document relevant to answering the query? (yes\/no)\n&quot;&quot;&quot;\nllm_response = openai.ChatCompletion.create(...)\nlabel = llm_response&#x5B;&#039;choices&#039;]&#x5B;0]&#x5B;&#039;message&#039;]&#x5B;&#039;content&#039;].strip().lower() == &#039;yes&#039;\n\n<\/pre><\/div>\n\n\n<p>Then you store that in a simple table or CSV:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>query_id<\/th><th>doc_id<\/th><th>relevant<\/th><\/tr><\/thead><tbody><tr><td>1<\/td><td>101<\/td><td>true<\/td><\/tr><tr><td>1<\/td><td>102<\/td><td>false<\/td><\/tr><tr><td>2<\/td><td>104<\/td><td>true<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-precision-amp-recall-in-practice\">Precision &amp; Recall in Practice<\/h2>\n\n\n\n<p>Once you have that table of true relevances, you can compute:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Precision@k<\/strong> \u2192 \u201cOf the top <em>k<\/em> documents I retrieved, how many were actually relevant?\u201d <\/li>\n\n\n\n<li><strong>Recall@k<\/strong> \u2192 \u201cOf all truly relevant documents, how many did I retrieve in my top <em>k<\/em>?\u201d <\/li>\n<\/ul>\n\n\n\n<p>They\u2019re correlated but not the same:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>High precision<\/strong> \u2192 few false positives.<\/li>\n\n\n\n<li><strong>High recall<\/strong> \u2192 few false negatives.<\/li>\n<\/ul>\n\n\n\n<p>For example:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table 
class=\"has-fixed-layout\"><thead><tr><th>Query<\/th><th>Retrieved docs (top 5)<\/th><th>True relevant<\/th><th>Precision@5<\/th><th>Recall@5<\/th><\/tr><\/thead><tbody><tr><td>\u201cWho founded PostgreSQL?\u201d<\/td><td>[d3, d7, d9, d1, d4]<\/td><td>[d1, d4]<\/td><td>0.4<\/td><td>1.0<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>You got both relevant docs (good recall = 1.0), but only 2 of the 5 retrieved were correct (precision = 0.4).<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-why-ndcg-is-needed\">Why nDCG is Needed<\/h2>\n\n\n\n<p>Precision and recall only measure <em>which<\/em> docs were retrieved, not <em>where they appeared in the ranking<\/em>.<\/p>\n\n\n\n<p><strong>nDCG@k<\/strong> adds <em>ranking quality<\/em>:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Each relevant document gets a <strong>relevance grade<\/strong> (commonly 0, 1, 2 \u2014 irrelevant, relevant, highly relevant).<\/li>\n\n\n\n<li>The higher it appears in the ranked list, the higher the gain.<\/li>\n<\/ul>\n\n\n\n<p>So if a highly relevant doc is ranked 1st, you get more credit than if it\u2019s ranked 10th.<\/p>\n\n\n\n<p><strong>In your database<\/strong>, you can store relevance grades in a table like:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>query_id<\/th><th>doc_id<\/th><th>rel_grade<\/th><\/tr><\/thead><tbody><tr><td>1<\/td><td>101<\/td><td>2<\/td><\/tr><tr><td>1<\/td><td>102<\/td><td>1<\/td><\/tr><tr><td>1<\/td><td>103<\/td><td>0<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>Then your evaluator computes:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: python; title: ; notranslate\" title=\"\">\nimport math\n\ndef dcg_at_k(relevances, k):\n    return sum((2**rel - 1) \/ math.log2(i+2) for i, rel in enumerate(relevances&#x5B;:k]))\n\ndef ndcg_at_k(actual_relevances, k):\n    ideal = sorted(actual_relevances, 
reverse=True)\n    ideal_dcg = dcg_at_k(ideal, k)\n    # Guard against queries with no relevant documents (ideal DCG of 0)\n    return dcg_at_k(actual_relevances, k) \/ ideal_dcg if ideal_dcg &gt; 0 else 0.0\n\n<\/pre><\/div>\n\n\n<p><strong>You do need to keep track of rank<\/strong> (the order in which docs were returned).<br>In PostgreSQL, you could log that like:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-fixed-layout\"><thead><tr><th>query_id<\/th><th>doc_id<\/th><th>rank<\/th><th>score<\/th><th>rel_grade<\/th><\/tr><\/thead><tbody><tr><td>1<\/td><td>101<\/td><td>1<\/td><td>0.92<\/td><td>2<\/td><\/tr><tr><td>1<\/td><td>102<\/td><td>2<\/td><td>0.87<\/td><td>1<\/td><\/tr><tr><td>1<\/td><td>103<\/td><td>3<\/td><td>0.54<\/td><td>0<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>Then it\u2019s easy to run SQL to evaluate:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: sql; title: ; notranslate\" title=\"\">\nSELECT query_id,\n       SUM((POWER(2, rel_grade) - 1) \/ LOG(2, rank + 1)) AS dcg\nFROM eval_results\nWHERE rank &lt;= 10\nGROUP BY query_id;\n\n<\/pre><\/div>\n\n\n<p>In a real system (like your Streamlit or API demo), you can:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Log <strong>each retrieval attempt<\/strong> (query, timestamp, ranking list, scores, confidence).<\/li>\n\n\n\n<li>Periodically <strong>recompute metrics<\/strong> (precision, recall, nDCG) using a fixed ground-truth set.<\/li>\n<\/ul>\n\n\n\n<p>This lets you track whether tuning (e.g., changing dense\/sparse weights) is improving performance.<\/p>\n\n\n\n<p>The structure of your evaluation log table could be:<\/p>\n\n\n\n<figure class=\"wp-block-table\"><table
class=\"has-fixed-layout\"><thead><tr><th>run_id<\/th><th>query_id<\/th><th>method<\/th><th>rank<\/th><th>doc_id<\/th><th>score<\/th><th>confidence<\/th><th>rel_grade<\/th><\/tr><\/thead><tbody><tr><td>2025-10-12_01<\/td><td>1<\/td><td>adaptive_rrf<\/td><td>1<\/td><td>101<\/td><td>0.92<\/td><td>0.87<\/td><td>2<\/td><\/tr><tr><td>2025-10-12_01<\/td><td>1<\/td><td>adaptive_rrf<\/td><td>2<\/td><td>102<\/td><td>0.85<\/td><td>0.87<\/td><td>1<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p>From there, you can generate:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>nDCG@10 trend over runs<\/strong> (e.g., in Prometheus or Streamlit chart)<\/li>\n\n\n\n<li><strong>Precision vs Confidence correlation<\/strong><\/li>\n\n\n\n<li><strong>Recall improvements per query type<\/strong><\/li>\n<\/ul>\n\n\n\n<p><em>\u26a0\ufe0f Note: While nDCG is a strong metric for ranking quality, it\u2019s not free from bias. Because it normalizes per query, easier questions (with few relevant documents) can inflate the average score. In our lab, we mitigate this by logging both raw DCG and nDCG, and by comparing results across query categories (factual vs conceptual vs exploratory). 
This helps ensure improvements reflect true retrieval quality rather than statistical artifacts.<\/em><\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-human-llm-hybrid-evaluation-practical-middle-ground\">Human + LLM Hybrid Evaluation (Practical Middle Ground)<\/h2>\n\n\n\n<p>For your PostgreSQL lab setup:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Label a <strong>small gold set<\/strong> manually (e.g., 20\u201350 queries \u00d7 3\u20135 relevant docs each).<\/li>\n\n\n\n<li>For larger coverage, use the <strong>LLM as an auto-grader<\/strong>.<br>You can even use <em>self-consistency<\/em>: ask the LLM to re-evaluate relevance twice and keep consistent labels only.<\/li>\n<\/ul>\n\n\n\n<p>This gives you a <strong>semi-automated evaluation dataset<\/strong>, good enough to monitor:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Precision@10<\/li>\n\n\n\n<li>Recall@10<\/li>\n\n\n\n<li>nDCG@10 over time<\/li>\n<\/ul>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-lessons-learned\">Lessons Learned<\/h2>\n\n\n\n<p>Through Adaptive RAG, we\u2019ve transformed retrieval from a static process into a self-aware one.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Precision increased by ~6\u20137%<\/strong>, especially for conceptual queries.<\/li>\n\n\n\n<li><strong>Recall improved by ~8%<\/strong> for factual questions thanks to better keyword anchoring.<\/li>\n\n\n\n<li><strong>nDCG@10 rose from 0.75 \u2192 0.82<\/strong>, confirming that relevant results are appearing earlier.<\/li>\n\n\n\n<li><strong>Confidence scoring<\/strong> provides operational visibility: we now know when the system is uncertain, enabling safe fallbacks and trust signals.<\/li>\n<\/ul>\n\n\n\n<p>The combination of adaptive routing, confidence estimation, and nDCG evaluation makes this pipeline suitable for enterprise-grade RAG use cases \u2014 where 
explainability, reliability, and observability are as important as accuracy.<\/p>\n\n\n\n<hr class=\"wp-block-separator has-alpha-channel-opacity\" \/>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-conclusion-and-next-steps\">Conclusion and Next Steps<\/h2>\n\n\n\n<p>Adaptive RAG is the bridge between smart retrieval and <strong>reliable retrieval<\/strong>.<br>By classifying queries, tuning dense\/sparse balance dynamically, and measuring ranking quality with nDCG, we now have a system that understands <em>what kind of question it\u2019s facing<\/em> and <em>how well it performed<\/em> in answering it.<\/p>\n\n\n\n<p>This version of the lab introduces the first metrics-driven feedback loop for RAG in PostgreSQL:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Retrieve adaptively,<\/li>\n\n\n\n<li>Measure precisely,<\/li>\n\n\n\n<li>Adjust intelligently.<\/li>\n<\/ul>\n\n\n\n<p>In <strong>the next part<\/strong>, we\u2019ll push even further \u2014 introducing <strong>Agentic RAG<\/strong>, and how it plans and executes multi-step reasoning chains to improve retrieval and answer quality even more.<\/p>\n\n\n\n<p>Try Adaptive RAG in the <a href=\"https:\/\/github.com\/boutaga\/pgvector_RAG_search_lab\">pgvector_RAG_search_lab<\/a> repository, explore your own datasets, and start measuring nDCG@10 to see how adaptive retrieval changes the game.<\/p>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Introduction In this RAG series we tried so far to introduce new concepts of the RAG workflow each time. This new article is going to introduce also new key concepts at the heart of Retrieval. 
Adaptive RAG will allow us to talk about measuring the quality of the retrieved data and how we can leverage [&hellip;]<\/p>\n","protected":false},"author":153,"featured_media":37679,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[83],"tags":[2810,3685,77],"type_dbi":[2749],"class_list":["post-40963","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-postgresql","tag-ai","tag-ai-llm","tag-postgresql","type-postgresql"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.2 (Yoast SEO v27.2) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>RAG Series \u2013 Adaptive RAG, understanding Confidence, Precision &amp; nDCG - dbi Blog<\/title>\n<meta name=\"description\" content=\"Explore how Adaptive RAG uses confidence scoring, dynamic retrieval weighting, and nDCG evaluation to improve precision and recall in PostgreSQL-based retrieval systems.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.dbi-services.com\/blog\/rag-series-adaptive-rag-understanding-confidence-precision-ndcg\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"RAG Series \u2013 Adaptive RAG, understanding Confidence, Precision &amp; nDCG\" \/>\n<meta property=\"og:description\" content=\"Explore how Adaptive RAG uses confidence scoring, dynamic retrieval weighting, and nDCG evaluation to improve precision and recall in PostgreSQL-based retrieval systems.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.dbi-services.com\/blog\/rag-series-adaptive-rag-understanding-confidence-precision-ndcg\/\" \/>\n<meta property=\"og:site_name\" content=\"dbi Blog\" \/>\n<meta 
property=\"article:published_time\" content=\"2025-10-12T12:17:54+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-10-12T12:17:56+00:00\" \/>\n<meta property=\"og:image\" content=\"http:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/03\/pixlr-image-generator-5f64d780-c578-477a-9419-7ddcdb807c83.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1024\" \/>\n\t<meta property=\"og:image:height\" content=\"1024\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Adrien Obernesser\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Adrien Obernesser\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"9 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/rag-series-adaptive-rag-understanding-confidence-precision-ndcg\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/rag-series-adaptive-rag-understanding-confidence-precision-ndcg\/\"},\"author\":{\"name\":\"Adrien Obernesser\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/fd2ab917212ce0200c7618afaa7fdbcd\"},\"headline\":\"RAG Series \u2013 Adaptive RAG, understanding Confidence, Precision &amp; 
nDCG\",\"datePublished\":\"2025-10-12T12:17:54+00:00\",\"dateModified\":\"2025-10-12T12:17:56+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/rag-series-adaptive-rag-understanding-confidence-precision-ndcg\/\"},\"wordCount\":1779,\"commentCount\":0,\"image\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/rag-series-adaptive-rag-understanding-confidence-precision-ndcg\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.dbi-services.com\/blog\/wp-content\/uploads\/sites\/2\/2025\/03\/pixlr-image-generator-5f64d780-c578-477a-9419-7ddcdb807c83.png\",\"keywords\":[\"ai\",\"AI\/LLM\",\"PostgreSQL\"],\"articleSection\":[\"PostgreSQL\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.dbi-services.com\/blog\/rag-series-adaptive-rag-understanding-confidence-precision-ndcg\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/rag-series-adaptive-rag-understanding-confidence-precision-ndcg\/\",\"url\":\"https:\/\/www.dbi-services.com\/blog\/rag-series-adaptive-rag-understanding-confidence-precision-ndcg\/\",\"name\":\"RAG Series \u2013 Adaptive RAG, understanding Confidence, Precision &amp; nDCG - dbi 