{"id":18733,"date":"2022-08-29T08:00:00","date_gmt":"2022-08-29T06:00:00","guid":{"rendered":"https:\/\/www.dbi-services.com\/blog\/?p=18733"},"modified":"2023-08-07T16:34:40","modified_gmt":"2023-08-07T14:34:40","slug":"kubernetes-deployment-autoscaling-using-memory-cpu","status":"publish","type":"post","link":"https:\/\/www.dbi-services.com\/blog\/kubernetes-deployment-autoscaling-using-memory-cpu\/","title":{"rendered":"Kubernetes Deployment Autoscaling using Memory &amp; CPU"},"content":{"rendered":"\n<p>If you would like to implement autoscaling in your Kubernetes cluster then you are at the right place to get started, so read on.<\/p>\n\n\n\n<p>I&#8217;ve explored the implementation of the Kubernetes object called HorizontalPodAutoscaler (HPA for short) in order to autoscale (up or down) a deployment according to the memory usage of its pods. The idea is to have a deployment with 10 replicas, but only deploy the number of pods required according to their memory usage. So, at some point during the day, we could deploy up to 10 pods when there is a lot of traffic to process but during the night, go down to 4 for example when it is quieter. I&#8217;ll only have to set a few parameters and HPA will take care of the rest based on the metrics it collects. The algorithm used for this autoscaling is described in the Kubernetes documentation <a href=\"https:\/\/kubernetes.io\/docs\/tasks\/run-application\/horizontal-pod-autoscale\/\">here<\/a>.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-metrics-prerequisite\">Metrics prerequisite<\/h2>\n\n\n\n<p>Before diving into HPA, you must ensure metrics are installed in your cluster as the HPA algorithm is using them for autoscaling. 
A quick check is to use the <strong>kubectl top<\/strong> command and observe the results:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n$ kubectl top pod\nNAME                       CPU(cores)   MEMORY(bytes)\nbusybox-5cfd866f57-4l95f   0m           0Mi\nbusybox-5cfd866f57-b9h45   0m           0Mi\n<\/pre><\/div>\n\n\n<p>If the output displays the CPU and MEMORY for each of your pods then metrics are installed and you can move forward. If, like me, you are using Minikube first to test this, installing the metrics-server is as easy as the command below:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n$ minikube addons enable metrics-server\n<\/pre><\/div>\n\n\n<h2 class=\"wp-block-heading\" id=\"h-api-autoscaling-v1\">API autoscaling v1<\/h2>\n\n\n\n<p>Let&#8217;s move on to HPA. I&#8217;ve configured HorizontalPodAutoscaler through a yaml file to experiment with it. Below I&#8217;ll describe what you need to know to make it work. The first step is to check which API version your cluster provides for the hpa object:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n$ kubectl api-resources|grep hpa\nhorizontalpodautoscalers          hpa                                        autoscaling\/v1                          true         HorizontalPodAutoscaler\n<\/pre><\/div>\n\n\n<p>On this old cluster I&#8217;ve got, the API version of hpa is <strong><em>autoscaling\/v1<\/em><\/strong>. Here is the bad news: in this version it is not possible to use pod memory metrics to do autoscaling. 
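<\/p>\n\n\n\n<p>For reference, an <strong><em>autoscaling\/v1<\/em><\/strong> manifest can only target CPU; a minimal sketch (reusing the busybox names of this post) would look like this:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: yaml; title: ; notranslate\" title=\"\">\napiVersion: autoscaling\/v1\nkind: HorizontalPodAutoscaler\nmetadata:\n  name: hpa-busybox\nspec:\n  maxReplicas: 2\n  minReplicas: 1\n  scaleTargetRef:\n    apiVersion: apps\/v1\n    kind: Deployment\n    name: busybox\n  targetCPUUtilizationPercentage: 60\n<\/pre><\/div>\n\n\n<p>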
There is just one parameter available and it can only use the CPU metrics of the pods:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; highlight: [27]; title: ; notranslate\" title=\"\">\n$ kubectl explain hpa.spec\nKIND:     HorizontalPodAutoscaler\nVERSION:  autoscaling\/v1\n\nRESOURCE: spec &lt;Object&gt;\n\nDESCRIPTION:\n     behaviour of autoscaler. More info:\n     https:\/\/git.k8s.io\/community\/contributors\/devel\/api-conventions.md#spec-and-status.\n\n     specification of a horizontal pod autoscaler.\n\nFIELDS:\n   maxReplicas  &lt;integer&gt; -required-\n     upper limit for the number of pods that can be set by the autoscaler;\n     cannot be smaller than MinReplicas.\n\n   minReplicas  &lt;integer&gt;\n     lower limit for the number of pods that can be set by the autoscaler,\n     default 1.\n\n   scaleTargetRef       &lt;Object&gt; -required-\n     reference to scaled resource; horizontal pod autoscaler will learn the\n     current resource consumption and will set the desired number of pods by\n     using its Scale subresource.\n\n   targetCPUUtilizationPercentage       &lt;integer&gt;\n     target average CPU utilization (represented as a percentage of requested\n     CPU) over all the pods; if not specified the default autoscaling policy\n     will be used.\n\n<\/pre><\/div>\n\n\n<p>In this API version, the possibilities are thus limited to the <strong><em>targetCPUUtilizationPercentage<\/em><\/strong> parameter, which evaluates the average CPU utilization of the pods.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-api-autoscaling-v2\">API autoscaling v2<\/h2>\n\n\n\n<p>To have more options, your cluster needs to use <strong>autoscaling\/v2<\/strong> for HPA (this is what a recent version of Minikube uses automatically). 
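<\/p>\n\n\n\n<p>You can verify which autoscaling API versions your cluster serves with the command below (the output is what a recent cluster would typically return):<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n$ kubectl api-versions|grep autoscaling\nautoscaling\/v1\nautoscaling\/v2\n<\/pre><\/div>\n\n\n<p>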
An hpa yaml file could then look like this:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: yaml; highlight: [6,7,12,13,14,15,16,17,18]; title: ; notranslate\" title=\"\">\napiVersion: autoscaling\/v2\nkind: HorizontalPodAutoscaler\nmetadata:\n  name: hpa-busybox\nspec:\n  maxReplicas: 2\n  minReplicas: 1\n  scaleTargetRef:\n    apiVersion: apps\/v1\n    kind: Deployment\n    name: busybox\n  metrics:\n  - type: Resource\n    resource:\n      name: memory\n      target:\n        type: Utilization\n        averageUtilization: 60\n<\/pre><\/div>\n\n\n<p>In version 2 the syntax has changed: <strong><em>targetCPUUtilizationPercentage<\/em><\/strong> has been replaced by the <em><strong>metrics<\/strong><\/em> list, which allows a more flexible configuration.<\/p>\n\n\n\n<p>To leverage HPA with a Utilization target, the <strong>resources<\/strong> section has to be configured on the deployment&#8217;s containers, for example:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: yaml; title: ; notranslate\" title=\"\">\nresources:\n          limits:\n            memory: &quot;1Gi&quot;\n          requests:\n            memory: &quot;1Ki&quot;\n<\/pre><\/div>\n\n\n<p>When the deployment is set and your hpa configuration is running, you&#8217;ll then be able to check its status:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n$ kubectl get hpa\nNAME          REFERENCE            TARGETS      MINPODS   MAXPODS   REPLICAS   AGE\nhpa-busybox   Deployment\/busybox   16200%\/60%   1         2         2          121m\n<\/pre><\/div>\n\n\n<p>The TARGETS column shows the average memory utilization of the pods as a percentage of their memory request (the huge figure comes from the tiny 1Ki request; my pods are not running anything and just sleep) \/ the 60% average utilization we&#8217;ve configured for our hpa metrics. That works and I now have a solution to adapt and deploy in the wild. 
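<\/p>\n\n\n\n<p>As a side note, these numbers feed the formula from the Kubernetes documentation: desiredReplicas = ceil(currentReplicas * currentUtilization \/ targetUtilization), clamped between minReplicas and maxReplicas. A quick sketch of the calculation with the figures above (plain shell arithmetic, not a command against the cluster):<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n# ceil(2 * 16200 \/ 60) with integer arithmetic\n$ echo $(( (2 * 16200 + 59) \/ 60 ))\n540\n<\/pre><\/div>\n\n\n<p>The raw result of 540 is then clamped to maxReplicas, which is why the deployment stays at 2 pods. 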
If the memory metrics were not collected properly I would have seen &lt;unknown&gt;\/60% instead.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"h-add-cpu-autoscaling\">Add CPU Autoscaling<\/h2>\n\n\n\n<p>If you also want to use HPA with the CPU metrics of your pods, here is how to proceed. Update the hpa yaml file as follows:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: yaml; highlight: [13,14,15,16,17,18]; title: ; notranslate\" title=\"\">\napiVersion: autoscaling\/v2\nkind: HorizontalPodAutoscaler\nmetadata:\n  name: hpa-busybox\nspec:\n  maxReplicas: 2\n  minReplicas: 1\n  scaleTargetRef:\n    apiVersion: apps\/v1\n    kind: Deployment\n    name: busybox\n  metrics:\n  - type: Resource\n    resource:\n      name: cpu\n      target:\n        type: Utilization\n        averageUtilization: 60\n  - type: Resource\n    resource:\n      name: memory\n      target:\n        type: Utilization\n        averageUtilization: 60\n<\/pre><\/div>\n\n\n<p>We just added a second <strong>Resource<\/strong> metric block for cpu with its own target value. 
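<\/p>\n\n\n\n<p>Note that when several metrics are configured, HPA computes a desired replica count for each metric and scales to the highest result (still clamped between minReplicas and maxReplicas). With 2 replicas, a memory utilization of 16000%, a CPU utilization of 0% and a 60% target on both, a quick sketch of that rule gives:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n# ceil() per metric with integer arithmetic, then take the max\n$ mem=$(( (2 * 16000 + 59) \/ 60 )); cpu=$(( (2 * 0 + 59) \/ 60 )); echo $(( mem &gt; cpu ? mem : cpu ))\n534\n<\/pre><\/div>\n\n\n<p>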
You can see how flexible this new API version 2 is compared to the single cpu parameter of version 1.<\/p>\n\n\n\n<p>Then add a cpu resource to the deployment&#8217;s containers:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: yaml; highlight: [3,6]; title: ; notranslate\" title=\"\">\nresources:\n          limits:\n            cpu: &quot;1&quot;\n            memory: &quot;1Gi&quot;\n          requests:\n            cpu: &quot;0.5&quot;\n            memory: &quot;1Ki&quot;\n<\/pre><\/div>\n\n\n<p>When all is set, you&#8217;ll now have the following output:<\/p>\n\n\n<div class=\"wp-block-syntaxhighlighter-code \"><pre class=\"brush: bash; title: ; notranslate\" title=\"\">\n$ kubectl get hpa\nNAME          REFERENCE            TARGETS              MINPODS   MAXPODS   REPLICAS   AGE\nhpa-busybox   Deployment\/busybox   16000%\/60%, 0%\/60%   1         2         2          6m49s\n<\/pre><\/div>\n\n\n<p>In addition to the memory, hpa is now also monitoring the CPU (0%\/60%) and uses both metrics to autoscale the pods of our deployment.<\/p>\n\n\n\n<p>I hope this post will help you quickly configure HPA in your Kubernetes cluster. If you want to learn more about Kubernetes, check out our <a href=\"https:\/\/www.dbi-services.com\/en\/courses\/docker-and-kubernetes-essential-skills\/\">Training course<\/a> given by our Kubernetes wizard!<\/p>\n\n\n\n<p>If you need to design HPA and understand in detail how the targets calculation above is done, then check out my <a href=\"https:\/\/www.dbi-services.com\/blog\/kubernetes-design-horizontalpodautoscaler-using-memory-cpu\/\" target=\"_blank\" rel=\"noreferrer noopener\">blog<\/a> on this topic.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>If you would like to implement autoscaling in your Kubernetes cluster then you are at the right place to get started, so read on. 
I&#8217;ve explored the implementation of the Kubernetes object called HorizontalPodAutoscaler (HPA for short) in order to autoscale (up or down) a deployment according to the memory usage of its pods. The [&hellip;]<\/p>\n","protected":false},"author":109,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1320,1522],"tags":[2398,2667,2686,2634,2434],"type_dbi":[],"class_list":["post-18733","post","type-post","status-publish","format-standard","hentry","category-devops","category-kubernetes","tag-autoscaling","tag-devops-2","tag-hpa","tag-kubernetes-2","tag-minikube"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.2 (Yoast SEO v27.2) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Kubernetes Deployment Autoscaling using Memory &amp; CPU - dbi Blog<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.dbi-services.com\/blog\/kubernetes-deployment-autoscaling-using-memory-cpu\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Kubernetes Deployment Autoscaling using Memory &amp; CPU\" \/>\n<meta property=\"og:description\" content=\"If you would like to implement autoscaling in your Kubernetes cluster then you are at the right place to get started, so read on. I&#8217;ve explored the implementation of the Kubernetes object called HorizontalPodAutoscaler (HPA for short) in order to autoscale (up or down) a deployment according to the memory usage of its pods. 
The [&hellip;]\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.dbi-services.com\/blog\/kubernetes-deployment-autoscaling-using-memory-cpu\/\" \/>\n<meta property=\"og:site_name\" content=\"dbi Blog\" \/>\n<meta property=\"article:published_time\" content=\"2022-08-29T06:00:00+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-08-07T14:34:40+00:00\" \/>\n<meta name=\"author\" content=\"DevOps\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"DevOps\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"3 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/kubernetes-deployment-autoscaling-using-memory-cpu\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/kubernetes-deployment-autoscaling-using-memory-cpu\/\"},\"author\":{\"name\":\"DevOps\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/4cd1b5f8a3de93f05a16ab8d7d2b7735\"},\"headline\":\"Kubernetes Deployment Autoscaling using Memory &amp; 
CPU\",\"datePublished\":\"2022-08-29T06:00:00+00:00\",\"dateModified\":\"2023-08-07T14:34:40+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/kubernetes-deployment-autoscaling-using-memory-cpu\/\"},\"wordCount\":693,\"commentCount\":0,\"keywords\":[\"autoscaling\",\"devops\",\"hpa\",\"kubernetes\",\"minikube\"],\"articleSection\":[\"DevOps\",\"Kubernetes\"],\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"CommentAction\",\"name\":\"Comment\",\"target\":[\"https:\/\/www.dbi-services.com\/blog\/kubernetes-deployment-autoscaling-using-memory-cpu\/#respond\"]}]},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/kubernetes-deployment-autoscaling-using-memory-cpu\/\",\"url\":\"https:\/\/www.dbi-services.com\/blog\/kubernetes-deployment-autoscaling-using-memory-cpu\/\",\"name\":\"Kubernetes Deployment Autoscaling using Memory &amp; CPU - dbi Blog\",\"isPartOf\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#website\"},\"datePublished\":\"2022-08-29T06:00:00+00:00\",\"dateModified\":\"2023-08-07T14:34:40+00:00\",\"author\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/4cd1b5f8a3de93f05a16ab8d7d2b7735\"},\"breadcrumb\":{\"@id\":\"https:\/\/www.dbi-services.com\/blog\/kubernetes-deployment-autoscaling-using-memory-cpu\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.dbi-services.com\/blog\/kubernetes-deployment-autoscaling-using-memory-cpu\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/kubernetes-deployment-autoscaling-using-memory-cpu\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Accueil\",\"item\":\"https:\/\/www.dbi-services.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Kubernetes Deployment Autoscaling using Memory &amp; 
CPU\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#website\",\"url\":\"https:\/\/www.dbi-services.com\/blog\/\",\"name\":\"dbi Blog\",\"description\":\"\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.dbi-services.com\/blog\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/4cd1b5f8a3de93f05a16ab8d7d2b7735\",\"name\":\"DevOps\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/secure.gravatar.com\/avatar\/cdd2dd7441774355062c0f0f68612296b059cd1e2ff6c7af0b15dba0ed64a85f?s=96&d=mm&r=g\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/cdd2dd7441774355062c0f0f68612296b059cd1e2ff6c7af0b15dba0ed64a85f?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/cdd2dd7441774355062c0f0f68612296b059cd1e2ff6c7af0b15dba0ed64a85f?s=96&d=mm&r=g\",\"caption\":\"DevOps\"},\"url\":\"https:\/\/www.dbi-services.com\/blog\/author\/devops\/\"}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. -->","yoast_head_json":{"title":"Kubernetes Deployment Autoscaling using Memory &amp; CPU - dbi Blog","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.dbi-services.com\/blog\/kubernetes-deployment-autoscaling-using-memory-cpu\/","og_locale":"en_US","og_type":"article","og_title":"Kubernetes Deployment Autoscaling using Memory &amp; CPU","og_description":"If you would like to implement autoscaling in your Kubernetes cluster then you are at the right place to get started, so read on. 
I&#8217;ve explored the implementation of the Kubernetes object called HorizontalPodAutoscaler (HPA for short) in order to autoscale (up or down) a deployment according to the memory usage of its pods. The [&hellip;]","og_url":"https:\/\/www.dbi-services.com\/blog\/kubernetes-deployment-autoscaling-using-memory-cpu\/","og_site_name":"dbi Blog","article_published_time":"2022-08-29T06:00:00+00:00","article_modified_time":"2023-08-07T14:34:40+00:00","author":"DevOps","twitter_card":"summary_large_image","twitter_misc":{"Written by":"DevOps","Est. reading time":"3 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.dbi-services.com\/blog\/kubernetes-deployment-autoscaling-using-memory-cpu\/#article","isPartOf":{"@id":"https:\/\/www.dbi-services.com\/blog\/kubernetes-deployment-autoscaling-using-memory-cpu\/"},"author":{"name":"DevOps","@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/4cd1b5f8a3de93f05a16ab8d7d2b7735"},"headline":"Kubernetes Deployment Autoscaling using Memory &amp; CPU","datePublished":"2022-08-29T06:00:00+00:00","dateModified":"2023-08-07T14:34:40+00:00","mainEntityOfPage":{"@id":"https:\/\/www.dbi-services.com\/blog\/kubernetes-deployment-autoscaling-using-memory-cpu\/"},"wordCount":693,"commentCount":0,"keywords":["autoscaling","devops","hpa","kubernetes","minikube"],"articleSection":["DevOps","Kubernetes"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/www.dbi-services.com\/blog\/kubernetes-deployment-autoscaling-using-memory-cpu\/#respond"]}]},{"@type":"WebPage","@id":"https:\/\/www.dbi-services.com\/blog\/kubernetes-deployment-autoscaling-using-memory-cpu\/","url":"https:\/\/www.dbi-services.com\/blog\/kubernetes-deployment-autoscaling-using-memory-cpu\/","name":"Kubernetes Deployment Autoscaling using Memory &amp; CPU - dbi 
Blog","isPartOf":{"@id":"https:\/\/www.dbi-services.com\/blog\/#website"},"datePublished":"2022-08-29T06:00:00+00:00","dateModified":"2023-08-07T14:34:40+00:00","author":{"@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/4cd1b5f8a3de93f05a16ab8d7d2b7735"},"breadcrumb":{"@id":"https:\/\/www.dbi-services.com\/blog\/kubernetes-deployment-autoscaling-using-memory-cpu\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.dbi-services.com\/blog\/kubernetes-deployment-autoscaling-using-memory-cpu\/"]}]},{"@type":"BreadcrumbList","@id":"https:\/\/www.dbi-services.com\/blog\/kubernetes-deployment-autoscaling-using-memory-cpu\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Accueil","item":"https:\/\/www.dbi-services.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Kubernetes Deployment Autoscaling using Memory &amp; CPU"}]},{"@type":"WebSite","@id":"https:\/\/www.dbi-services.com\/blog\/#website","url":"https:\/\/www.dbi-services.com\/blog\/","name":"dbi 
Blog","description":"","potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.dbi-services.com\/blog\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Person","@id":"https:\/\/www.dbi-services.com\/blog\/#\/schema\/person\/4cd1b5f8a3de93f05a16ab8d7d2b7735","name":"DevOps","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/secure.gravatar.com\/avatar\/cdd2dd7441774355062c0f0f68612296b059cd1e2ff6c7af0b15dba0ed64a85f?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/cdd2dd7441774355062c0f0f68612296b059cd1e2ff6c7af0b15dba0ed64a85f?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/cdd2dd7441774355062c0f0f68612296b059cd1e2ff6c7af0b15dba0ed64a85f?s=96&d=mm&r=g","caption":"DevOps"},"url":"https:\/\/www.dbi-services.com\/blog\/author\/devops\/"}]}},"_links":{"self":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/18733","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/users\/109"}],"replies":[{"embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/comments?post=18733"}],"version-history":[{"count":47,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/18733\/revisions"}],"predecessor-version":[{"id":27152,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/posts\/18733\/revisions\/27152"}],"wp:attachment":[{"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/media?parent=18733"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/categories?post=18733"},{"taxonomy":"po
st_tag","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/tags?post=18733"},{"taxonomy":"type","embeddable":true,"href":"https:\/\/www.dbi-services.com\/blog\/wp-json\/wp\/v2\/type_dbi?post=18733"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}