<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Archives des DevOps - dbi Blog</title>
	<atom:link href="https://www.dbi-services.com/blog/category/devops/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.dbi-services.com/blog/category/devops/</link>
	<description></description>
	<lastBuildDate>Tue, 03 Feb 2026 09:00:51 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/cropped-favicon_512x512px-min-32x32.png</url>
	<title>Archives des DevOps - dbi Blog</title>
	<link>https://www.dbi-services.com/blog/category/devops/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Deploying Azure Terraform code with Azure DevOps and a storage account as remote backend</title>
		<link>https://www.dbi-services.com/blog/deploying-azure-terraform-code-with-azure-devops-and-a-storage-account-as-remote-backend/</link>
					<comments>https://www.dbi-services.com/blog/deploying-azure-terraform-code-with-azure-devops-and-a-storage-account-as-remote-backend/#respond</comments>
		
		<dc:creator><![CDATA[Adrien Devaux]]></dc:creator>
		<pubDate>Tue, 03 Feb 2026 09:00:48 +0000</pubDate>
				<category><![CDATA[Azure]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[devops]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=42496</guid>

					<description><![CDATA[<p>Why this blog? While I was working for a customer, I was tasked to create an Azure infrastructure, using Terraform and Azure DevOps (ADO). I thought about doing it like I usually do with GitLab but it wasn&#8217;t possible with ADO as it doesn&#8217;t store the state file itself. Instead I have to use an [&#8230;]</p>
<p>L’article <a href="https://www.dbi-services.com/blog/deploying-azure-terraform-code-with-azure-devops-and-a-storage-account-as-remote-backend/">Deploying Azure Terraform code with Azure DevOps and a storage account as remote backend</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading" id="h-why-this-blog">Why this blog? </h2>



<p>While I was working for a customer, I was tasked with creating an Azure infrastructure using Terraform and Azure DevOps (ADO). I thought about doing it the way I usually do with GitLab, but that wasn&#8217;t possible with ADO, as it doesn&#8217;t store the state file itself. Instead, I had to use an Azure Storage Account. I configured it, blocked public network access, and then realized that my pipeline couldn&#8217;t push the state into the Storage Account.</p>



<p>In fact, ADO isn&#8217;t supported as a &#8220;Trusted Microsoft Service&#8221;, so it can&#8217;t bypass firewall rules using that option on Storage Accounts. For this to work, I had to create a self-hosted agent that runs on an Azure VM Scale Set, and that will be the topic of this blog.</p>



<h2 class="wp-block-heading" id="h-azure-resources-creation">Azure resources creation</h2>



<h3 class="wp-block-heading" id="h-agent-creation">Agent Creation </h3>



<p>First, we create an Azure VM Scale Set. I kept most parameters at their default values, but they can be customized. I chose Linux as the operating system, as it was what I needed. One important thing is to set the &#8220;Orchestration mode&#8221; to &#8220;Uniform&#8221;, otherwise ADO pipelines won&#8217;t work.</p>



<figure class="wp-block-image size-full"><img fetchpriority="high" decoding="async" width="467" height="221" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-13.png" alt="" class="wp-image-42519" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-13.png 467w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-13-300x142.png 300w" sizes="(max-width: 467px) 100vw, 467px" /></figure>



<h3 class="wp-block-heading" id="h-storage-account">Storage account </h3>



<p>For the storage account that will store our state, any storage account should work. Just note that you also need to create a <strong>container</strong> inside it, which you will reference in your Terraform backend configuration. For the network preferences, we will go with &#8220;Public access&#8221; and &#8220;Enable from selected networks&#8221;. This will allow public access only from restricted networks. I do this to avoid creating a private endpoint to connect to a fully private storage account.</p>



<figure class="wp-block-image size-large"><img decoding="async" width="1024" height="487" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-15-1024x487.png" alt="" class="wp-image-42521" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-15-1024x487.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-15-300x143.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-15-768x365.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-15.png 1352w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<h3 class="wp-block-heading" id="h-entra-id-identity-for-the-pipeline">Entra ID identity for the pipeline</h3>



<p>We also need to create an Entra ID Enterprise Application that we will provide to the pipeline. This identity must have the <strong>Contributor</strong> role (or an equivalent) over the scope you target. It must also have at least <strong>Storage Blob Data Contributor</strong> on the Storage Account to be able to write to it.</p>
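<p>As an illustration only (the IDs and scopes below are placeholders, not values from this setup), these two role assignments could be granted with the Azure CLI along these lines:</p>



<pre class="wp-block-code"><code># Placeholder IDs and scopes -- replace with your own values
# Contributor over the target scope (here: a subscription)
az role assignment create \
  --assignee &lt;service-principal-object-id&gt; \
  --role "Contributor" \
  --scope "/subscriptions/&lt;subscription-id&gt;"

# Storage Blob Data Contributor on the state Storage Account
az role assignment create \
  --assignee &lt;service-principal-object-id&gt; \
  --role "Storage Blob Data Contributor" \
  --scope "/subscriptions/&lt;subscription-id&gt;/resourceGroups/&lt;rg-name&gt;/providers/Microsoft.Storage/storageAccounts/&lt;account-name&gt;"</code></pre>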



<h2 class="wp-block-heading" id="h-azure-devops-setup">Azure DevOps setup</h2>



<h3 class="wp-block-heading" id="h-terraform-code">Terraform code</h3>



<p>You can use any Terraform code you want; for my example, I use a simple one that creates a Resource Group and a Virtual Network. Just note that your provider configuration should look like this:</p>



<figure class="wp-block-image size-full"><img decoding="async" width="483" height="322" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-16.png" alt="" class="wp-image-42522" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-16.png 483w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-16-300x200.png 300w" sizes="(max-width: 483px) 100vw, 483px" /></figure>
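<p>In text form, a minimal sketch of such a configuration with an azurerm remote backend could look like this (all names below are placeholders to adapt to your environment):</p>



<pre class="wp-block-code"><code>terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
    }
  }

  # Remote state stored in the Storage Account container created earlier
  backend "azurerm" {
    resource_group_name  = "rg-terraform-state"   # placeholder
    storage_account_name = "sttfstate"            # placeholder
    container_name       = "tfstate"              # placeholder
    key                  = "infra.tfstate"
  }
}

provider "azurerm" {
  features {}
}</code></pre>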



<h3 class="wp-block-heading" id="h-pipeline-code">Pipeline code</h3>



<p>I usually split my pipeline into two files: plan.yml is given to the ADO pipeline, and it calls the template to run its code. What the pipeline does is pretty simple: it installs Terraform on the VM Scale Set instance, then runs the Terraform commands. The same block of code can be reused for the &#8220;apply&#8221;.</p>



<p>A few things to note: in my plan.yml I reference a Variable Group named &#8220;Terraform_SPN&#8221;, which I will show you just after. That&#8217;s where we will find the information about our previously created Entra ID Enterprise Application.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="697" height="441" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-19.png" alt="" class="wp-image-42525" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-19.png 697w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-19-300x190.png 300w" sizes="auto, (max-width: 697px) 100vw, 697px" /></figure>
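<p>As a rough, hedged sketch of what such a plan.yml could look like (the file and template names are assumptions; only the Variable Group name matches the one described here):</p>



<pre class="wp-block-code"><code>trigger: none

variables:
  - group: Terraform_SPN   # Variable Group holding the Entra ID application credentials

stages:
  - stage: Plan
    jobs:
      - template: template.yml   # shared steps: install Terraform, init, plan
        parameters:
          command: plan</code></pre>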



<p>In template.yml, what is important to note is the pool definition. Here I reference just a name, which corresponds to the ADO Agent Pool that I created. I&#8217;ll also show this step a bit further down.</p>






<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="711" height="557" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-20.png" alt="" class="wp-image-42526" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-20.png 711w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-20-300x235.png 300w" sizes="auto, (max-width: 711px) 100vw, 711px" /></figure>
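<p>As an illustrative sketch of such a template (the pool and variable names are assumptions, and the TerraformInstaller task assumes the Terraform marketplace extension is installed; the ARM_* environment variables are the standard way the azurerm provider picks up Service Principal credentials):</p>



<pre class="wp-block-code"><code>parameters:
  - name: command
    default: plan

jobs:
  - job: terraform_${{ parameters.command }}
    pool:
      name: vmss-agent-pool   # the ADO Agent Pool backed by the VM Scale Set
    steps:
      - task: TerraformInstaller@1
        inputs:
          terraformVersion: 'latest'
      - script: |
          terraform init
          terraform ${{ parameters.command }}
        env:
          ARM_CLIENT_ID: $(clientId)           # from the Terraform_SPN Variable Group
          ARM_CLIENT_SECRET: $(clientSecret)
          ARM_TENANT_ID: $(tenantId)
          ARM_SUBSCRIPTION_ID: $(subscriptionId)</code></pre>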



<p id="h-">For the pipeline creation itself, we will go to <strong>Pipelines</strong> -&gt; Create a new pipeline -&gt; Azure Repos Git.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1020" height="1024" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-21-1020x1024.png" alt="" class="wp-image-42527" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-21-1020x1024.png 1020w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-21-300x300.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-21-150x150.png 150w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-21-768x771.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-21.png 1042w" sizes="auto, (max-width: 1020px) 100vw, 1020px" /></figure>



<p>Then select &#8220;Existing Azure Pipelines YAML file&#8221; and pick our file from the repo.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="509" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-22-1024x509.png" alt="" class="wp-image-42528" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-22-1024x509.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-22-300x149.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-22-768x382.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-22-1536x763.png 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-22-2048x1018.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>We will also create a <strong>Variable Group</strong>. The name doesn&#8217;t matter, just remember to use the same one in your YAML code. Here you create 4 variables containing information from your tenant and your enterprise application. They will be used during the pipeline run to deploy your resources.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="724" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-25-1024x724.png" alt="" class="wp-image-42531" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-25-1024x724.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-25-300x212.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-25-768x543.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-25.png 1375w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<h3 class="wp-block-heading" id="h-ado-agent-pool">ADO Agent Pool</h3>



<p>In the <strong>Project Settings</strong>, look for <strong>Agent Pools</strong>. Then create a new one and fill it in as follows:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="505" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-23-1024x505.png" alt="" class="wp-image-42529" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-23-1024x505.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-23-300x148.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-23-768x378.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-23-1536x757.png 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-23-2048x1009.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>The <strong>Authorize</strong> button will appear after you select the subscription you want. To accept this, your user must have the Owner role, as it grants rights. This will allow ADO to communicate with Azure by creating a Service Principal. Then you can fill in the rest as follows:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="475" height="797" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-24.png" alt="" class="wp-image-42530" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-24.png 475w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-24-179x300.png 179w" sizes="auto, (max-width: 475px) 100vw, 475px" /></figure>



<h3 class="wp-block-heading" id="h-ado-pipeline-run">ADO pipeline run</h3>



<p>When you first run your pipeline you must authorize it to use the <strong>Variable Group</strong> and the <strong>Agent pool</strong>.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="189" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-27-1024x189.png" alt="" class="wp-image-42533" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-27-1024x189.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-27-300x55.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-27-768x141.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-27-1536x283.png 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-27.png 1917w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="445" height="307" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-26.png" alt="" class="wp-image-42532" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-26.png 445w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-26-300x207.png 300w" sizes="auto, (max-width: 445px) 100vw, 445px" /></figure>



<p>Once this is done, everything should go smoothly and end like this.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="42" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-28-1024x42.png" alt="" class="wp-image-42534" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-28-1024x42.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-28-300x12.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-28-768x31.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-28-1536x63.png 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/image-28.png 1592w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>I hope this blog was useful and helps you troubleshoot this kind of problem between Azure and Azure DevOps.</p>
<p>L’article <a href="https://www.dbi-services.com/blog/deploying-azure-terraform-code-with-azure-devops-and-a-storage-account-as-remote-backend/">Deploying Azure Terraform code with Azure DevOps and a storage account as remote backend</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/deploying-azure-terraform-code-with-azure-devops-and-a-storage-account-as-remote-backend/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>How to properly containerize a Node.js application</title>
		<link>https://www.dbi-services.com/blog/containerize-node-js/</link>
					<comments>https://www.dbi-services.com/blog/containerize-node-js/#respond</comments>
		
		<dc:creator><![CDATA[Nicolas Meunier]]></dc:creator>
		<pubDate>Mon, 19 Jan 2026 13:16:45 +0000</pubDate>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[container]]></category>
		<category><![CDATA[Dockerfile]]></category>
		<category><![CDATA[Node.js]]></category>
		<category><![CDATA[Web Application]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=42566</guid>

					<description><![CDATA[<p>Containerizing a Node.js application is easy, but doing it well is another story. To create images that are efficient, secure, and consistent across environments, you need to make the right choices in your build strategy, dependency management, and base image selection. In this article, I’ll walk you through practical tips to containerize your Node.js applications [&#8230;]</p>
<p>L’article <a href="https://www.dbi-services.com/blog/containerize-node-js/">How to properly containerize a Node.js application</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Containerizing a <a href="https://nodejs.org/">Node.js</a> application is easy, but doing it well is another story. To create images that are efficient, secure, and consistent across environments, you need to make the right choices in your build strategy, dependency management, and base image selection. In this article, I’ll walk you through practical tips to containerize your Node.js applications properly.</p>



<h2 class="wp-block-heading" id="h-multi-stage-build-or-not"><strong>Multi-stage build or not?</strong></h2>



<p>Multi-stage builds are mainly useful when a project requires a build step to produce the final JavaScript files, such as with TypeScript. Generally, when the package.json contains a build script, it&#8217;s a good idea to use a multi-stage build.</p>



<p>The advantage is that the final image only contains the necessary files to run the application, which reduces the image size and improves security by excluding build tools and dependencies.</p>



<pre class="wp-block-code"><code># -------- Build --------
FROM node:24-slim AS builder
WORKDIR /app

COPY . .
RUN npm ci

RUN npm run build

# -------- Runtime --------
FROM node:24-slim
WORKDIR /app

ENV NODE_ENV=production

COPY package*.json ./
RUN npm ci --omit=dev

COPY --from=builder /app/dist .

USER node
EXPOSE 80
CMD &#091;"node", "index.js"]</code></pre>



<p>The main idea behind a multi-stage build is to separate the build container from the execution container.</p>



<p>The build stage contains the source code and its own dependencies and tools, while the runtime stage only includes what is required to run the application. During the build process, we copy only the necessary files from the build stage to the runtime stage.</p>



<p>However, if your project is pure JavaScript without any build step, a single-stage build is sufficient and simpler.</p>
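<p>For that case, a minimal single-stage sketch could look like this (same base image and conventions as above; the entrypoint file name is an assumption):</p>



<pre class="wp-block-code"><code>FROM node:24-slim
WORKDIR /app

ENV NODE_ENV=production

# Install the exact locked dependencies, without dev packages
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application source
COPY . .

USER node
EXPOSE 80
CMD &#091;"node", "index.js"]</code></pre>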



<h2 class="wp-block-heading" id="h-run-the-same-packages-in-development-and-production"><strong>Run the same packages in development and production</strong></h2>



<p>Your developers are the first users of your application. To avoid the &#8216;it works on my machine&#8217; problem, it is important to ensure consistency between development and production environments.</p>



<p>That means using the same package versions in both environments.</p>



<p>To achieve this, use the command &#8216;npm ci&#8217; instead of &#8216;npm install&#8217; in your Dockerfile.</p>



<p>The &#8216;npm ci&#8217; command installs the exact versions of the packages specified in the package-lock.json file, ensuring that both development and production environments use the same dependencies.</p>



<p><strong>Note:</strong> add the option `--omit=dev` in the runtime stage to avoid installing dev dependencies.</p>



<h2 class="wp-block-heading" id="h-carefully-choose-the-base-image"><strong>Carefully choose the base image</strong></h2>



<p>The base image is the foundation of your Docker image. Choosing the right base image can have a significant impact on the size, security, and performance of your application.</p>



<p>First, choosing an official Node.js image is good practice. The most important thing is to select the right variant of the image. The *-alpine variant is a popular choice because of its small size, but you must understand the implications.</p>



<p>Alpine images use musl as their C library instead of glibc. This is usually not a problem for pure JavaScript applications. However, if your application relies on native modules or certain npm packages that expect glibc, you may encounter compatibility issues.</p>



<p>Another alternative is to use slim images, which are based on Debian with glibc. They are larger than alpine images but provide better compatibility with native modules.</p>



<p>When should you choose slim instead of alpine?</p>



<p>When your application depends on native modules or npm packages that require glibc, it&#8217;s safer to use a slim image to avoid compatibility issues. With the popularity of alpine images, the ecosystem has improved its support for musl, but some exotic packages may still cause incompatibilities (or may require compiling binaries from source).</p>



<p>Be aware of this when you choose your base image.</p>



<h2 class="wp-block-heading" id="h-conclusion"><strong>Conclusion</strong></h2>



<p>Containerizing a Node.js application requires careful consideration of build strategies, dependency management, and base image selection. All these choices depend on your application requirements. Therefore, evaluating your needs and choosing the best approach for your specific use case is the key to containerizing your application effectively.</p>



<p></p>
<p>L’article <a href="https://www.dbi-services.com/blog/containerize-node-js/">How to properly containerize a Node.js application</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/containerize-node-js/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Access your Kubernetes pods via Tailscale using a Sidecar container</title>
		<link>https://www.dbi-services.com/blog/access-your-kubernetes-pods-via-tailscale-using-a-sidecar-container/</link>
					<comments>https://www.dbi-services.com/blog/access-your-kubernetes-pods-via-tailscale-using-a-sidecar-container/#respond</comments>
		
		<dc:creator><![CDATA[Rémy Gaudey]]></dc:creator>
		<pubDate>Tue, 13 Jan 2026 09:00:00 +0000</pubDate>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[devops]]></category>
		<category><![CDATA[kubernetes]]></category>
		<category><![CDATA[speedtest-tracker]]></category>
		<category><![CDATA[tailscale]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=42364</guid>

					<description><![CDATA[<p>Tailscale is a mesh VPN (Virtual Private Network) service that streamlines connecting devices and services securely across different networks. It enables encrypted point-to-point connections using the open source WireGuard protocol, which means only devices on your private network can communicate with each other. (source: https://tailscale.com/kb/1151/what-is-tailscale) I’ve been using Tailscale to connect my personal devices for [&#8230;]</p>
<p>L’article <a href="https://www.dbi-services.com/blog/access-your-kubernetes-pods-via-tailscale-using-a-sidecar-container/">Access your Kubernetes pods via Tailscale using a Sidecar container</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Tailscale is a mesh VPN (Virtual Private Network) service that streamlines connecting devices and services securely across different networks. It enables encrypted point-to-point connections using the open source WireGuard protocol, which means only devices on your private network can communicate with each other. <br>(source: <a href="https://tailscale.com/kb/1151/what-is-tailscale">https://tailscale.com/kb/1151/what-is-tailscale</a>)</p>



<p>I’ve been using Tailscale to connect my personal devices for a while. I have it installed almost everywhere: on my laptop, my phone, my Synology NAS, etc. It is very convenient, as it helps me connect to any device, from anywhere. Tailscale adds a virtual interface to your device and manages its own IP address (you’ll understand why this is important in a minute).</p>



<p>Tailscale automatically assigns a unique IP address to each device in your Tailscale network (known as a tailnet). This IP address is known as a Tailscale IP address and comes from the shared address space defined in RFC6598, known as Carrier-Grade NAT (CGNAT). <br>(source: <a href="https://tailscale.com/kb/1015/100.x-addresses">https://tailscale.com/kb/1015/100.x-addresses</a>)</p>



<p>Today, I’m taking it to the next level: I’d like to install Tailscale alongside one of my application pods and access the web interface my pod exposes, directly from my Tailscale network (aka tailnet).</p>



<h2 class="wp-block-heading" id="h-the-challenge"><strong>The challenge:</strong></h2>



<p>I’ve installed Tailscale on the VM hosting my Kubernetes cluster (it’s a 1-node cluster, just for playing). Cool, I can access the VM from any other device. However, what about the web app my pod provides? How can I access it from my tailnet?<br><br>As mentioned before, Tailscale has its own IP addressing, using 100.x.y.z addresses: your devices are assigned an IP from this address space.</p>



<p>Moreover, the network interface Tailscale creates (tailscale0) is not a standard interface, and Kubernetes cannot simply expose services through it as it would through any other NodePort. To do so, you need to deploy Tailscale in your Kubernetes cluster.</p>



<p>Let’s do that.</p>



<h2 class="wp-block-heading" id="h-the-options"><strong>The options:</strong></h2>



<p id="h-the-options-tailscale-offers-several-options-to-connect-your-cluster-to-your-tailnet">Tailscale offers several options to connect your cluster to your tailnet:</p>



<ul class="wp-block-list">
<li><strong>Proxy</strong>: Tailscale proxies traffic to one of your Kubernetes services. Your tailnet devices can communicate with the service but not with any other Kubernetes resources. Tailscale users can reach the service using the proxy&#8217;s name.</li>



<li><strong>Sidecar</strong>: Tailscale runs as a sidecar next to a specific pod in your cluster. It lets you expose that pod on your tailnet without allowing access to any others. Tailscale users can connect to the pod using its name.</li>



<li><strong>Subnet router</strong>: A subnet router deployment exposes your entire cluster network in your tailnet. Your Tailscale devices can connect to any pod or service in your cluster, provided that applicable Kubernetes network policies and Tailscale access controls allow it.</li>
</ul>



<p>My use case is to expose a specific pod to my tailnet (my speedtest-tracker app frontend), so the “sidecar” option is enough for my needs.<br>Let’s see how to configure that together.<br><br>I invite you to read <a href="https://www.dbi-services.com/blog/monitor-your-isps-performance-with-speedtest-tracker/">my other blog about speedtest-tracker</a>. This is the app we are going to work with today.<br>I’ve been using speedtest-tracker for a while, but until now the app has only been available from within my local network. Let’s see how to adapt my app’s deployment definition to add the Tailscale sidecar container.</p>



<h2 class="wp-block-heading" id="h-what-we-need"><strong>What we need:</strong></h2>



<ol class="wp-block-list">
<li>An application (that’s my speedtest-tracker app that already exists)</li>



<li>To generate an auth key that will be used by the Tailscale service deployed into the cluster</li>



<li>A secret with this auth key value in my cluster, for my pod to authenticate to my Tailscale account.</li>



<li>A service account, role and role binding to configure RBAC for my deployment (my pod will use this service account and RBAC permissions to interact with the cluster)</li>



<li>Finally, I will add the sidecar container running Tailscale alongside my speedtest-tracker app container</li>
</ol>



<h2 class="wp-block-heading" id="h-generate-an-auth-key">Generate an auth key</h2>



<p>First, let’s generate the auth key from my Tailscale account web interface. <br>This is done under Settings &#8211;&gt; Keys &#8211;&gt; Generate auth key…</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="645" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/BLOG-auth-ky-1024x645.png" alt="" class="wp-image-42368" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/BLOG-auth-ky-1024x645.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/BLOG-auth-ky-300x189.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/BLOG-auth-ky-768x483.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/BLOG-auth-ky-1536x967.png 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/BLOG-auth-ky-2048x1289.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Fill out the form and make the key reusable. Then configure the device this key applies to as ephemeral (as your pod will be).<br>Copy the key value somewhere, as we will need it in a moment.</p>



<figure class="wp-block-image size-full is-resized"><img loading="lazy" decoding="async" width="508" height="708" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/BLOG-auth-ky2.png" alt="" class="wp-image-42369" style="width:418px;height:auto" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/BLOG-auth-ky2.png 508w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/BLOG-auth-ky2-215x300.png 215w" sizes="auto, (max-width: 508px) 100vw, 508px" /></figure>



<h2 class="wp-block-heading" id="h-create-a-secret">Create a secret</h2>



<p>I create my secret; here is my tailscale-secret.yaml file:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: yaml; title: ; notranslate">
apiVersion: v1
kind: Secret
metadata:
  name: tailscale-auth
stringData:
  TS_AUTHKEY: &lt;my key value from previous step&gt;
</pre></div>


<p>I apply the configuration to my speedtest namespace:</p>



<pre class="wp-block-code"><code>kubectl apply -f tailscale-secret.yaml -n speedtest</code></pre>



<h2 class="wp-block-heading">Service account, role and role binding</h2>



<p>The next step is to configure RBAC for my Tailscale deployment. I need a service account, a role and a role binding. Lucky me, the Tailscale doc is well written; all I need to do is follow their instructions.</p>



<p>I create a manifest called tailscale-rbac.yaml:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: yaml; title: ; notranslate">
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tailscale

---

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tailscale
rules:
  - apiGroups: &#x5B;&quot;&quot;]
    resourceNames: &#x5B;&quot;tailscale-auth&quot;]
    resources: &#x5B;&quot;secrets&quot;]
    verbs: &#x5B;&quot;get&quot;, &quot;update&quot;, &quot;patch&quot;]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tailscale
subjects:
  - kind: ServiceAccount
    name: tailscale
roleRef:
  kind: Role
  name: tailscale
  apiGroup: rbac.authorization.k8s.io
</pre></div>


<p>I apply the configuration to my speedtest namespace:</p>



<pre class="wp-block-code"><code>kubectl apply -f tailscale-rbac.yaml -n speedtest</code></pre>
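<p>Optionally, you can verify that the role binding works as intended by impersonating the service account with <code>kubectl auth can-i</code> (namespace and names as in the manifests above); if the RBAC is correct, it should answer <code>yes</code>:</p>

<pre class="wp-block-code"><code>kubectl auth can-i get secret/tailscale-auth \
  --as=system:serviceaccount:speedtest:tailscale -n speedtest</code></pre>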



<h2 class="wp-block-heading" id="h-add-the-sidecar-container-to-my-deployment">Add the sidecar container to my deployment</h2>



<p>The last step is to adapt my existing deployment to add the Tailscale sidecar container.</p>



<p>Under the spec section, we need to assign the previously created service account to the pod:</p>



<pre class="wp-block-code"><code>serviceAccountName: tailscale</code></pre>



<p>Then I create the sidecar container as per the Tailscale documentation:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: yaml; title: ; notranslate">
apiVersion: apps/v1
kind: Deployment
metadata:
  name: speedtest-tracker
spec:
  replicas: 1
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      app: speedtest-tracker
  template:
    metadata:
      labels:
        app: speedtest-tracker
    spec:
      serviceAccountName: tailscale  ## &lt;-- Add the Service Account Name
      containers:
        ##### Tailscale sidecar container definition #####
        - name: tailscale-sidecar
          image: ghcr.io/tailscale/tailscale:latest
          env:
            - name: TS_KUBE_SECRET
              value: tailscale-auth
            - name: TS_AUTHKEY
              valueFrom:
                secretKeyRef:
                  name: tailscale-auth
                  key: TS_AUTHKEY
            - name: TS_USERSPACE
              value: &quot;false&quot;
          securityContext:
            capabilities:
              add:
               - NET_ADMIN
        ######################

        - name: speedtest-tracker
          image: lscr.io/linuxserver/speedtest-tracker:latest
          ports:
            - containerPort: 80
          env:
            - name: PUID
              value: &quot;1000&quot;
            - name: PGID
              value: &quot;1000&quot;
            - name: DB_CONNECTION
              value: pgsql
            - name: DB_HOST
              value: postgres
            - name: DB_PORT
              value: &quot;5432&quot;
            - name: DB_DATABASE
              value: speedtest_tracker
            - name: DB_USERNAME
              value: speedy
            - name: DB_PASSWORD
              value: password

          volumeMounts:
            - mountPath: /config
              name: speedtest-tracker
      volumes:
        - name: speedtest-tracker
          persistentVolumeClaim:
            claimName: speedtest-tracker

</pre></div>


<p>I apply the configuration to my speedtest namespace:</p>



<pre class="wp-block-code"><code>kubectl apply -f speedtest-tracker.yaml -n speedtest</code></pre>



<p>Quick check, my speedtest-tracker pod is now running with 2 containers inside:</p>



<pre class="wp-block-code"><code>Rancher:~/syno/speedtest # kubectl get pods -n speedtest
NAME                                READY   STATUS    RESTARTS   AGE
postgres-7958dd877c-f4d2l           1/1     Running   0          22h
speedtest-tracker-8975967cd-s2fmc   2/2     Running   0          105m
</code></pre>
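<p>To see the sidecar&#8217;s view of the tailnet, you can also run the <code>tailscale</code> CLI inside that container (deployment and container names as defined above):</p>

<pre class="wp-block-code"><code>kubectl exec -n speedtest deploy/speedtest-tracker \
  -c tailscale-sidecar -- tailscale status</code></pre>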



<p>And that’s it!</p>



<p>I can now access my app from both networks: my local network and my tailnet.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="499" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/Picture-1-1024x499.png" alt="" class="wp-image-42377" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/Picture-1-1024x499.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/Picture-1-300x146.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/Picture-1-768x375.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/Picture-1.png 1384w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>My pod is now seen as a device in my Tailscale network and can communicate with my other machines.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="904" height="534" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/Picture-2.png" alt="" class="wp-image-42378" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/Picture-2.png 904w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/Picture-2-300x177.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/Picture-2-768x454.png 768w" sizes="auto, (max-width: 904px) 100vw, 904px" /></figure>



<h2 class="wp-block-heading" id="h-conclusion">Conclusion</h2>



<p>What we&#8217;ve done is turn our pod into a Tailscale node by injecting a WireGuard interface into the pod&#8217;s shared network namespace, with the help of a Tailscale sidecar container. This allows encrypted traffic to flow directly to the app container without any Kubernetes Service or Ingress.</p>



<p>This is it, I hope you enjoyed reading this blog and that you learned something new.</p>



<p> If so, drop a like, it&#8217;s always appreciated 😉</p>



<p>To go further, please visit the Tailscale official documentation that will take you through all the steps and options to configure your tailnet on Kubernetes:<br><a href="https://tailscale.com/learn/managing-access-to-kubernetes-with-tailscale#sidecar-deployments">https://tailscale.com/learn/managing-access-to-kubernetes-with-tailscale#sidecar-deployments</a></p>



<p></p>
<p>The article <a href="https://www.dbi-services.com/blog/access-your-kubernetes-pods-via-tailscale-using-a-sidecar-container/">Access your Kubernetes pods via Tailscale using a Sidecar container</a> first appeared on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/access-your-kubernetes-pods-via-tailscale-using-a-sidecar-container/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Containerize Vue 3 application with GitLab CI/CD</title>
		<link>https://www.dbi-services.com/blog/containerize-vue-3-application-with-gitlab-ci-cd/</link>
					<comments>https://www.dbi-services.com/blog/containerize-vue-3-application-with-gitlab-ci-cd/#respond</comments>
		
		<dc:creator><![CDATA[Nicolas Meunier]]></dc:creator>
		<pubDate>Fri, 09 Jan 2026 17:01:46 +0000</pubDate>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[CI/CD]]></category>
		<category><![CDATA[GitLab]]></category>
		<category><![CDATA[Pipeline]]></category>
		<category><![CDATA[Vue3]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=42311</guid>

					<description><![CDATA[<p>Introduction In this article, I will explain how I containerized my Vuetify + Vue 3 application with GitLab CI/CD. My goal was to deploy my application on Kubernetes. To achieve this, I need to: Quick overview: overall, the full project uses a micro-service architecture and runs on Kubernetes. The Vue 3 project only contains the [&#8230;]</p>
<p>The article <a href="https://www.dbi-services.com/blog/containerize-vue-3-application-with-gitlab-ci-cd/">Containerize Vue 3 application with GitLab CI/CD</a> first appeared on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading" id="h-introduction"><strong>Introduction</strong></h2>



<p>In this article, I will explain how I containerized my Vuetify + Vue 3 application with GitLab CI/CD.</p>



<p>My goal was to deploy my application on Kubernetes. To achieve this, I need to:</p>



<ul class="wp-block-list">
<li>Build the Vue 3 application</li>



<li>Build the Docker image</li>



<li>Push the image to the GitLab registry</li>
</ul>



<p>Quick overview: overall, the full project uses a micro-service architecture and runs on Kubernetes. The Vue 3 project only contains the UI, and we containerize it and serve it with an Nginx image. The backend is a REST API built with NestJS, and we containerize it separately.</p>



<h2 class="wp-block-heading" id="h-add-a-dockerfile-to-the-project"><strong>Add a Dockerfile to the project</strong></h2>



<p>First, I need a Dockerfile to build the image for my application. In this Dockerfile, I use a two-stage build; as a result, the final image contains only what is strictly necessary.</p>



<pre class="wp-block-code"><code># Build the Vue.js application
FROM node:current-alpine AS build
COPY . ./app
WORKDIR /app
RUN npm install
RUN npm run build 

# Final Nginx container
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html</code></pre>
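<p>Before wiring this into a pipeline, it can be worth building and running the image locally to confirm that Nginx really serves the build output (the image name and host port below are arbitrary choices for this test):</p>

<pre class="wp-block-code"><code>docker build -t vue-app:local .
docker run --rm -p 8080:80 vue-app:local
# then browse to http://localhost:8080</code></pre>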



<p>The important part of this Dockerfile is the multi-stage build.<br>The first stage, with the Node container, builds the application, but production does not require all the tools used during this step.<br>As a result, the second stage copies only the <code>dist</code> folder, the result of the build, and embeds it into an Nginx container to serve the generated files.</p>



<h2 class="wp-block-heading" id="h-add-the-ci-cd-pipeline-configuration"><strong>Add the CI/CD pipeline configuration</strong></h2>



<p>In the second step, I add the .gitlab-ci.yml file to the project root directory.</p>



<p>This file configures the pipeline. I use the docker-in-docker service to build the image. First, I log in to the registry of my project. Next, I build and push the image.</p>



<pre class="wp-block-code"><code>stages:
- build

build:
  # Use the official docker image.
  image: docker:latest
  stage: build
  services:
    - docker:dind
  before_script:
    # Login to the gitlab registry
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    # Build and push the image
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"

  # Run this job where a Dockerfile exists
  rules:
    - if: $CI_COMMIT_BRANCH
      exists:
        - Dockerfile</code></pre>
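<p>As a possible variation, you may also want a stable tag in addition to the commit SHA, for example <code>latest</code> on the default branch. A sketch of an adjusted script section, using the same predefined GitLab variables plus <code>$CI_DEFAULT_BRANCH</code>:</p>

<pre class="wp-block-code"><code>  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"
    - |
      if [ "$CI_COMMIT_BRANCH" = "$CI_DEFAULT_BRANCH" ]; then
        docker tag "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" "$CI_REGISTRY_IMAGE:latest"
        docker push "$CI_REGISTRY_IMAGE:latest"
      fi</code></pre>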



<p><strong>Note:</strong> all the variables ($CI_REGISTRY_IMAGE, $CI_COMMIT_SHA&#8230;) used in the .gitlab-ci.yml are predefined variables provided by GitLab CI/CD.</p>



<h2 class="wp-block-heading" id="h-build-the-container"><strong>Build the container</strong></h2>



<p>Once I push the <code>.gitlab-ci.yml</code> file, GitLab automatically triggers the pipeline following the rules definition.</p>



<p>After completion, the pipeline is green and its status is &#8220;passed&#8221;.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="299" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/Pipelines-1024x299.png" alt="" class="wp-image-42314" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/Pipelines-1024x299.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/Pipelines-300x88.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/Pipelines-768x224.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/Pipelines-1536x449.png 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/Pipelines-2048x598.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>As expected, the image is available in the registry of the project.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="188" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/Container_Registry-1024x188.png" alt="" class="wp-image-42315" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/Container_Registry-1024x188.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/Container_Registry-300x55.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/Container_Registry-768x141.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/Container_Registry-1536x282.png 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2026/01/Container_Registry-2048x376.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<h2 class="wp-block-heading" id="h-conclusion">Conclusion</h2>



<p>In summary, properly containerizing a Vue application is easy, but it requires separating the build from the execution. A multi-stage build with an Nginx container produces a lightweight, production-ready image.</p>



<p></p>
<p>The article <a href="https://www.dbi-services.com/blog/containerize-vue-3-application-with-gitlab-ci-cd/">Containerize Vue 3 application with GitLab CI/CD</a> first appeared on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/containerize-vue-3-application-with-gitlab-ci-cd/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Forgejo: Organizations, Repositories and Actions</title>
		<link>https://www.dbi-services.com/blog/forgejo-organizations-repositories-and-actions/</link>
					<comments>https://www.dbi-services.com/blog/forgejo-organizations-repositories-and-actions/#respond</comments>
		
		<dc:creator><![CDATA[Daniel Westermann]]></dc:creator>
		<pubDate>Mon, 15 Dec 2025 13:13:09 +0000</pubDate>
				<category><![CDATA[Development & Performance]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Forgejo]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=42007</guid>

					<description><![CDATA[<p>In the last post we&#8217;ve deployed Forgejo on FreeBSD 15. In this post we&#8217;re going to do something with it and that is: We&#8217;ll create a new organization, a new repository, and finally we want to create a simple action. An &#8220;Action&#8221; is what GitLab calls a pipeline. Creating a new organization is just a [&#8230;]</p>
<p>The article <a href="https://www.dbi-services.com/blog/forgejo-organizations-repositories-and-actions/">Forgejo: Organizations, Repositories and Actions</a> first appeared on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>In the <a href="https://www.dbi-services.com/blog/what-is-forgejo-and-getting-it-up-and-running-on-freebsd-15/" target="_blank" rel="noreferrer noopener">last post </a>we&#8217;ve deployed Forgejo on FreeBSD 15. In this post we&#8217;re going to do something with it and that is: We&#8217;ll create a new organization, a new repository, and finally we want to create a simple action. An &#8220;Action&#8221; is what GitLab calls a pipeline.</p>



<p>Creating a new organization is just a matter of a few clicks:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="303" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo1-1024x303.png" alt="" class="wp-image-42027" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo1-1024x303.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo1-300x89.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo1-768x227.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo1-1536x455.png 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo1.png 1904w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="303" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo2-1024x303.png" alt="" class="wp-image-42028" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo2-1024x303.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo2-300x89.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo2-768x227.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo2-1536x455.png 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo2.png 1904w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>The only change to the default settings is the visibility, which is changed to private. The interface switches directly to the new organization once it is created:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="303" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo3-1024x303.png" alt="" class="wp-image-42029" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo3-1024x303.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo3-300x89.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo3-768x227.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo3-1536x455.png 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo3.png 1904w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>The next step is to create and initialize a new repository, which is also just a matter of a few clicks:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="303" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo4-1024x303.png" alt="" class="wp-image-42032" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo4-1024x303.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo4-300x89.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo4-768x227.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo4-1536x455.png 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo4.png 1904w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="756" height="933" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo5.png" alt="" class="wp-image-42033" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo5.png 756w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo5-243x300.png 243w" sizes="auto, (max-width: 756px) 100vw, 756px" /></figure>



<p>All the defaults, except for the &#8220;private&#8221; flag.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="472" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo6-1024x472.png" alt="" class="wp-image-42034" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo6-1024x472.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo6-300x138.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo6-768x354.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo6.png 1399w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>To clone this repository locally you&#8217;ll need to add your public SSH key to your user&#8217;s profile:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="396" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo7-1024x396.png" alt="" class="wp-image-42040" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo7-1024x396.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo7-300x116.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo7-768x297.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo7-1536x594.png 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo7.png 1669w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="396" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo8-1024x396.png" alt="" class="wp-image-42041" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo8-1024x396.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo8-300x116.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo8-768x297.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo8-1536x594.png 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo8.png 1669w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="396" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo9-1024x396.png" alt="" class="wp-image-42042" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo9-1024x396.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo9-300x116.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo9-768x297.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo9-1536x594.png 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo9.png 1669w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Once you have that, the repository can be cloned as usual:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1,7]; title: ; notranslate">
dwe@ltdwe:~/Downloads$ git clone ssh://git@192.168.122.66/dwe/myrepo.git
Cloning into &#039;myrepo&#039;...
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0 (from 0)
Receiving objects: 100% (3/3), done.
dwe@ltdwe:~/Downloads$ ls -la myrepo/
total 4
drwxr-xr-x 1 dwe dwe  26 Dec 15 09:41 .
drwxr-xr-x 1 dwe dwe 910 Dec 15 09:41 ..
drwxr-xr-x 1 dwe dwe 122 Dec 15 09:41 .git
-rw-r--r-- 1 dwe dwe  16 Dec 15 09:41 README.md

</pre></div>


<p>So far so good, let&#8217;s create a new &#8220;Action&#8221;. Before we do that, we need to check that actions are enabled for the repository:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="478" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo10-1024x478.png" alt="" class="wp-image-42045" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo10-1024x478.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo10-300x140.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo10-768x359.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo10.png 1381w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="478" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo11-1024x478.png" alt="" class="wp-image-42047" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo11-1024x478.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo11-300x140.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo11-768x359.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo11.png 1381w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>What we need now is a so-called &#8220;Runner&#8221;. A &#8220;Runner&#8221; is a daemon that fetches work from a Forgejo instance, executes it, and returns the result. For the &#8220;Runner&#8221; we&#8217;ll use a minimal Debian 13 setup:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1]; title: ; notranslate">
root@debian13:~$ cat /etc/os-release 
PRETTY_NAME=&quot;Debian GNU/Linux 13 (trixie)&quot;
NAME=&quot;Debian GNU/Linux&quot;
VERSION_ID=&quot;13&quot;
VERSION=&quot;13 (trixie)&quot;
VERSION_CODENAME=trixie
DEBIAN_VERSION_FULL=13.2
ID=debian
HOME_URL=&quot;https://www.debian.org/&quot;
SUPPORT_URL=&quot;https://www.debian.org/support&quot;
BUG_REPORT_URL=&quot;https://bugs.debian.org/&quot;
</pre></div>


<p>The only requirement is to have Git, curl and jq installed, so:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: plain; title: ; notranslate">
root@debian13:~$ apt install -y git curl jq
root@debian13:~$ git --version
git version 2.47.3
</pre></div>


<p>Downloading and installing the runner (this is a copy/paste from the official <a href="https://forgejo.org/docs/latest/admin/actions/runner-installation/" target="_blank" rel="noreferrer noopener">documentation</a>):</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1,2,3,4,5,6,7,8,9,10,11,18]; title: ; notranslate">
root@debian13:~$ export ARCH=$(uname -m | sed &#039;s/x86_64/amd64/;s/aarch64/arm64/&#039;)
root@debian13:~$ echo $ARCH
amd64
root@debian13:~$ export RUNNER_VERSION=$(curl -X &#039;GET&#039; https://data.forgejo.org/api/v1/repos/forgejo/runner/releases/latest | jq .name -r | cut -c 2-)
root@debian13:~$ echo $RUNNER_VERSION
12.1.2
root@debian13:~$ export FORGEJO_URL=&quot;https://code.forgejo.org/forgejo/runner/releases/download/v${RUNNER_VERSION}/forgejo-runner-${RUNNER_VERSION}-linux-${ARCH}&quot;
root@debian13:~$ wget -O forgejo-runner ${FORGEJO_URL}
root@debian13:~$ chmod +x forgejo-runner
root@debian13:~$ wget -O forgejo-runner.asc ${FORGEJO_URL}.asc
root@debian13:~$ gpg --keyserver hkps://keys.openpgp.org --recv EB114F5E6C0DC2BCDD183550A4B61A2DC5923710
gpg: directory &#039;/root/.gnupg&#039; created
gpg: keybox &#039;/root/.gnupg/pubring.kbx&#039; created
gpg: /root/.gnupg/trustdb.gpg: trustdb created
gpg: key A4B61A2DC5923710: public key &quot;Forgejo &lt;contact@forgejo.org&gt;&quot; imported
gpg: Total number processed: 1
gpg:               imported: 1
root@debian13:~$ gpg --verify forgejo-runner.asc forgejo-runner &amp;&amp; echo &quot;✓ Verified&quot; || echo &quot;✗ Failed&quot;
gpg: Signature made Sat 06 Dec 2025 11:10:50 PM CET
gpg:                using EDDSA key 0F527CF93A3D0D0925D3C55ED0A820050E1609E5
gpg: Good signature from &quot;Forgejo &lt;contact@forgejo.org&gt;&quot; &#x5B;unknown]
gpg:                 aka &quot;Forgejo Releases &lt;release@forgejo.org&gt;&quot; &#x5B;unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: EB11 4F5E 6C0D C2BC DD18  3550 A4B6 1A2D C592 3710
     Subkey fingerprint: 0F52 7CF9 3A3D 0D09 25D3  C55E D0A8 2005 0E16 09E5
✓ Verified
</pre></div>


<p>Move that to a location which is in the PATH:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: plain; highlight: [1,2]; title: ; notranslate">
root@debian13:~$ mv forgejo-runner /usr/local/bin/forgejo-runner
root@debian13:~$ forgejo-runner -v
forgejo-runner version v12.1.2
</pre></div>


<p>As usual, a separate user should be created to run a service:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1,2]; title: ; notranslate">
root@debian13:~$ groupadd runner
root@debian13:~$ useradd -g runner -m -s /bin/bash runner
</pre></div>


<p>As the runner will use Docker, Podman or LXC to execute the Actions, we&#8217;ll need to install Podman as well:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1,2,4,5,7]; title: ; notranslate">
root@debian13:~$ apt install -y podman podman-docker
root@debian13:~$ podman --version
podman version 5.4.2
root@debian13:~$ systemctl enable --now podman.socket
root@debian13:~$ machinectl shell runner@
Connected to the local host. Press ^] three times within 1s to exit session.
runner@debian13:~$ systemctl --user enable --now podman.socket
Created symlink &#039;/home/runner/.config/systemd/user/sockets.target.wants/podman.socket&#039; → &#039;/usr/lib/systemd/user/podman.socket&#039;.
</pre></div>


<p>Now we need to register the runner with the Forgejo instance. Before we can do that, we need to fetch the registration token:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="604" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo12-1024x604.png" alt="" class="wp-image-42055" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo12-1024x604.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo12-300x177.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo12-768x453.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo12.png 1325w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="604" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo13-1024x604.png" alt="" class="wp-image-42056" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo13-1024x604.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo13-300x177.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo13-768x453.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo13.png 1325w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Back on the runner, register it:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1,2]; title: ; notranslate">
root@debian13:~$ su - runner
runner@debian13:~$ forgejo-runner register
INFO Registering runner, arch=amd64, os=linux, version=v12.1.2. 
WARN Runner in user-mode.                         
INFO Enter the Forgejo instance URL (for example, https://next.forgejo.org/): 
http://192.168.122.66:3000/
INFO Enter the runner token:                      
BBE3MbNuTl0Wl52bayiRltJS8ciagRqghe7bXIXE
INFO Enter the runner name (if set empty, use hostname: debian13): 
runner1
INFO Enter the runner labels, leave blank to use the default labels (comma-separated, for example, ubuntu-20.04:docker://node:20-bookworm,ubuntu-18.04:docker://node:20-bookworm): 

INFO Registering runner, name=runner1, instance=http://192.168.122.66:3000/, labels=&#x5B;docker:docker://data.forgejo.org/oci/node:20-bullseye]. 
DEBU Successfully pinged the Forgejo instance server 
INFO Runner registered successfully.              
runner@debian13:~$ 
</pre></div>
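<p>As far as I know, the registration stores its state in a hidden <code>.runner</code> file in the working directory (here <code>/home/runner</code>), which the daemon reads on startup, so it should stay where the service later runs. You can check that it was written:</p>

<pre class="wp-block-code"><code>runner@debian13:~$ ls -la ~/.runner</code></pre>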


<p>This will make the new runner visible in the interface, but it is in &#8220;offline&#8221; state:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="604" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo14-1024x604.png" alt="" class="wp-image-42058" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo14-1024x604.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo14-300x177.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo14-768x453.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo14.png 1325w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Time to start up the runner:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1,21,22,23]; title: ; notranslate">
root@debian13:~$ cat /etc/systemd/system/forgejo-runner.service
&#x5B;Unit]
Description=Forgejo Runner
Documentation=https://forgejo.org/docs/latest/admin/actions/
After=docker.service

&#x5B;Service]
ExecStart=/usr/local/bin/forgejo-runner daemon
ExecReload=/bin/kill -s HUP $MAINPID

# This user and working directory must already exist
User=runner 
WorkingDirectory=/home/runner
Restart=on-failure
TimeoutSec=0
RestartSec=10

&#x5B;Install]
WantedBy=multi-user.target

root@debian13:~$ systemctl daemon-reload
root@debian13:~$ systemctl enable forgejo-runner
root@debian13:~$ systemctl start forgejo-runner
</pre></div>


<p>Once the runner is running, the status in the interface will switch to &#8220;Idle&#8221;:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="604" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo15-1024x604.png" alt="" class="wp-image-42059" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo15-1024x604.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo15-300x177.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo15-768x453.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo15.png 1325w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Ready for our first &#8220;Action&#8221;. Actions are defined as YAML files in a specific directory of the repository:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1,2,10,11,15]; title: ; notranslate">
dwe@ltdwe:~/Downloads/myrepo$ mkdir -p .forgejo/workflows/
dwe@ltdwe:~/Downloads/myrepo$ cat .forgejo/workflows/demo.yaml
on: &#x5B;push]
jobs:
  test:
    runs-on: docker
    steps:
      - run: echo All good!

dwe@ltdwe:~/Downloads/myrepo$ git add .forgejo/
dwe@ltdwe:~/Downloads/myrepo$ git commit -m &quot;my first action&quot;
&#x5B;main f9aa487] my first action
 1 file changed, 6 insertions(+)
 create mode 100644 .forgejo/workflows/demo.yaml
dwe@ltdwe:~/Downloads/myrepo$ git push
</pre></div>


<p>What this does: whenever there is a &#8220;push&#8221; to the repository, a job is executed on the runner with the label &#8220;docker&#8221;, and it does nothing more than print &#8220;All good!&#8221;. If everything went fine you should see the result under the &#8220;Actions&#8221; section of the repository:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="199" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo16-1024x199.png" alt="" class="wp-image-42070" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo16-1024x199.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo16-300x58.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo16-768x149.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo16.png 1392w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="341" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo17-1024x341.png" alt="" class="wp-image-42071" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo17-1024x341.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo17-300x100.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo17-768x256.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/forgejo17.png 1489w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>
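<p>Building on that minimal workflow, additional steps can be added. A sketch (the file name is illustrative, and it assumes the GitHub-compatible <code>actions/checkout</code> action is reachable from the runner):</p>

```yaml
# .forgejo/workflows/demo-extended.yaml -- illustrative sketch
on: [push]
jobs:
  test:
    runs-on: docker
    steps:
      # Fetch the repository content into the job container
      - uses: actions/checkout@v4
      # Run several shell commands in a single step
      - run: |
          echo "Running on $(uname -m)"
          ls -la
```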



<p>Nice, now we&#8217;re ready to do some real work, but that is the topic for the next post.</p>
<p>L’article <a href="https://www.dbi-services.com/blog/forgejo-organizations-repositories-and-actions/">Forgejo: Organizations, Repositories and Actions</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/forgejo-organizations-repositories-and-actions/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>What is Forgejo and getting it up and running on FreeBSD 15</title>
		<link>https://www.dbi-services.com/blog/what-is-forgejo-and-getting-it-up-and-running-on-freebsd-15/</link>
					<comments>https://www.dbi-services.com/blog/what-is-forgejo-and-getting-it-up-and-running-on-freebsd-15/#respond</comments>
		
		<dc:creator><![CDATA[Daniel Westermann]]></dc:creator>
		<pubDate>Fri, 12 Dec 2025 14:11:27 +0000</pubDate>
				<category><![CDATA[Database Administration & Monitoring]]></category>
		<category><![CDATA[Database management]]></category>
		<category><![CDATA[Development & Performance]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Forgejo]]></category>
		<category><![CDATA[FreeBSD]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=41979</guid>

					<description><![CDATA[<p>In recent customer projects I had less to do with PostgreSQL and more with reviewing infrastructures and giving recommendations about what and how to improve. In all of those projects GitLab is used in one way or the other. Some only use it for managing their code in Git and work on issues, others use [&#8230;]</p>
<p>L’article <a href="https://www.dbi-services.com/blog/what-is-forgejo-and-getting-it-up-and-running-on-freebsd-15/">What is Forgejo and getting it up and running on FreeBSD 15</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>In recent customer projects I had less to do with PostgreSQL and more with reviewing infrastructures and giving recommendations about what and how to improve. In all of those projects <a href="https://about.gitlab.com/" target="_blank" rel="noreferrer noopener">GitLab</a> is used in one way or the other. Some only use it for managing their code in Git and working on issues, others use pipelines to build their artifacts, and others use almost the full set of features. GitLab is a great product, but sometimes you do not need all of its features, so I started to look for alternatives, mostly out of my own interest. One of the more popular choices seemed to be <a href="https://about.gitea.com/">Gitea</a>, but after a <a href="https://blog.gitea.com/a-message-from-lunny-on-gitea-ltd.-and-the-gitea-project/" target="_blank" rel="noreferrer noopener">company was created</a> around it, the project was forked, and that fork is <a href="https://forgejo.org" target="_blank" rel="noreferrer noopener">Forgejo</a>. The <a href="https://forgejo.org/faq/" target="_blank" rel="noreferrer noopener">FAQ</a> summarizes the most important topics around the project pretty well, so please read it.</p>



<p>As <a href="https://www.freebsd.org/releases/15.0R/announce/" target="_blank" rel="noreferrer noopener">FreeBSD 15</a> was released on December 2nd, that&#8217;s the perfect chance to get Forgejo up and running there and see how it feels. I am not going into the installation of FreeBSD 15, as this really is straightforward. I just want to mention that I opted for the &#8220;packaged base system&#8221; instead of the distribution sets, which is currently in tech preview. This means that the whole system is installed and managed with packages and you don&#8217;t need <a href="https://man.freebsd.org/cgi/man.cgi?freebsd-update" target="_blank" rel="noreferrer noopener">freebsd-update</a> anymore. Although it is still available, it will not work if you try to use it:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1,11]; title: ; notranslate">
root@forgejo:~ $ cat /etc/os-release 
NAME=FreeBSD
VERSION=&quot;15.0-RELEASE&quot;
VERSION_ID=&quot;15.0&quot;
ID=freebsd
ANSI_COLOR=&quot;0;31&quot;
PRETTY_NAME=&quot;FreeBSD 15.0-RELEASE&quot;
CPE_NAME=&quot;cpe:/o:freebsd:freebsd:15.0&quot;
HOME_URL=&quot;https://FreeBSD.org/&quot;
BUG_REPORT_URL=&quot;https://bugs.FreeBSD.org/&quot;
root@forgejo:~ $ freebsd-update fetch
freebsd-update is incompatible with the use of packaged base.  Please see
https://wiki.freebsd.org/PkgBase for more information.

</pre></div>


<p>Coming back to Forgejo: On FreeBSD this is available as a package, so you can just go ahead and install it:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1,6,25]; title: ; notranslate">
root@forgejo:~$ pkg search forgejo
forgejo-13.0.2_1               Compact self-hosted Git forge
forgejo-act_runner-9.1.0_2     Act runner is a runner for Forgejo based on the Gitea Act runner
forgejo-lts-11.0.7_1           Compact self-hosted Git forge
forgejo7-7.0.14_3              Compact self-hosted Git service
root@forgejo:~ $ pkg install forgejo
Updating FreeBSD-ports repository catalogue...
FreeBSD-ports repository is up to date.
Updating FreeBSD-ports-kmods repository catalogue...
FreeBSD-ports-kmods repository is up to date.
Updating FreeBSD-base repository catalogue...
FreeBSD-base repository is up to date.
All repositories are up to date.
The following 32 package(s) will be affected (of 0 checked):

New packages to be INSTALLED:
        FreeBSD-clibs-lib32: 15.0 &#x5B;FreeBSD-base]
        brotli: 1.1.0,1 &#x5B;FreeBSD-ports]
...
Number of packages to be installed: 32

The process will require 472 MiB more space.
100 MiB to be downloaded.

Proceed with this action? &#x5B;y/N]: y
Message from python311-3.11.13_1:

--
Note that some standard Python modules are provided as separate ports
as they require additional dependencies. They are available as:

py311-gdbm       databases/py-gdbm@py311
py311-sqlite3    databases/py-sqlite3@py311
py311-tkinter    x11-toolkits/py-tkinter@py311
=====
Message from git-2.51.0:

--
If you installed the GITWEB option please follow these instructions:

In the directory /usr/local/share/examples/git/gitweb you can find all files to
make gitweb work as a public repository on the web.

All you have to do to make gitweb work is:
1) Please be sure you&#039;re able to execute CGI scripts in
   /usr/local/share/examples/git/gitweb.
2) Set the GITWEB_CONFIG variable in your webserver&#039;s config to
   /usr/local/etc/git/gitweb.conf. This variable is passed to gitweb.cgi.
3) Restart server.


If you installed the CONTRIB option please note that the scripts are
installed in /usr/local/share/git-core/contrib. Some of them require
other ports to be installed (perl, python, etc), which you may need to
install manually.
=====
Message from git-lfs-3.6.1_8:

--
To get started with Git LFS, the following commands can be used:

  1. Setup Git LFS on your system. You only have to do this once per
     repository per machine:

     $ git lfs install

  2. Choose the type of files you want to track, for examples all ISO
     images, with git lfs track:

     $ git lfs track &quot;*.iso&quot;

  3. The above stores this information in gitattributes(5) files, so
     that file needs to be added to the repository:

     $ git add .gitattributes

  4. Commit, push and work with the files normally:

     $ git add file.iso
     $ git commit -m &quot;Add disk image&quot;
     $ git push
=====
Message from forgejo-13.0.2_1:

--
Before starting forgejo for the first time, you must set a number of
secrets in the configuration file. For your convenience, a sample file
has been copied to /usr/local/etc/forgejo/conf/app.ini.

You need to replace every occurence of CHANGE_ME in the file with
sensible values. Please refer to the official documentation at
https://forgejo.org for details.

You will also likely need to create directories for persistent storage.
Run
    su -m git -c &#039;forgejo doctor check&#039;
to check if all prerequisites have been met.
</pre></div>


<p>What I really like about the FreeBSD packages is that they usually give clear instructions on what to do next. We&#8217;ll go with the web-based installer, so:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1,2,3,5,7]; title: ; notranslate">
root@forgejo:~ $ chown git:git /usr/local/etc/forgejo/conf
root@forgejo:~ $ rm /usr/local/etc/forgejo/conf/app.ini
root@forgejo:~ $ service -l | grep for
forgejo
root@forgejo:~ $ service forgejo enable
forgejo enabled in /etc/rc.conf
root@forgejo:~ $ service forgejo start
2025/12/12 14:16:42 ...etting/repository.go:318:loadRepositoryFrom() &#x5B;W] SCRIPT_TYPE &quot;bash&quot; is not on the current PATH. Are you sure that this is the correct SCRIPT_TYPE?

&#x5B;1] Check paths and basic configuration
 - &#x5B;E] Failed to find configuration file at &#039;/usr/local/etc/forgejo/conf/app.ini&#039;.
 - &#x5B;E] If you&#039;ve never ran Forgejo yet, this is normal and &#039;/usr/local/etc/forgejo/conf/app.ini&#039; will be created for you on first run.
 - &#x5B;E] Otherwise check that you are running this command from the correct path and/or provide a `--config` parameter.
 - &#x5B;E] Cannot proceed without a configuration file
FAIL
Command error: stat /usr/local/etc/forgejo/conf/app.ini: no such file or directory

2025/12/12 14:16:42 ...etting/repository.go:318:loadRepositoryFrom() &#x5B;W] SCRIPT_TYPE &quot;bash&quot; is not on the current PATH. Are you sure that this is the correct SCRIPT_TYPE?

&#x5B;1] Check paths and basic configuration
 - &#x5B;E] Failed to find configuration file at &#039;/usr/local/etc/forgejo/conf/app.ini&#039;.
 - &#x5B;E] If you&#039;ve never ran Forgejo yet, this is normal and &#039;/usr/local/etc/forgejo/conf/app.ini&#039; will be created for you on first run.
 - &#x5B;E] Otherwise check that you are running this command from the correct path and/or provide a `--config` parameter.
 - &#x5B;E] Cannot proceed without a configuration file
FAIL
Command error: stat /usr/local/etc/forgejo/conf/app.ini: no such file or directory
</pre></div>


<p>It seems bash is expected, but it is not installed yet:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1]; title: ; notranslate">
root@forgejo:~ $ which bash
root@forgejo:~ $ 
</pre></div>


<p>Once more:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1,2,4,13]; title: ; notranslate">
root@forgejo:~ $ pkg install bash
root@forgejo:~ $ service forgejo stop
Stopping forgejo.
root@forgejo:~ $ service forgejo start

&#x5B;1] Check paths and basic configuration
 - &#x5B;E] Failed to find configuration file at &#039;/usr/local/etc/forgejo/conf/app.ini&#039;.
 - &#x5B;E] If you&#039;ve never ran Forgejo yet, this is normal and &#039;/usr/local/etc/forgejo/conf/app.ini&#039; will be created for you on first run.
 - &#x5B;E] Otherwise check that you are running this command from the correct path and/or provide a `--config` parameter.
 - &#x5B;E] Cannot proceed without a configuration file
FAIL
Command error: stat /usr/local/etc/forgejo/conf/app.ini: no such file or directory
root@forgejo:~ $ service forgejo status
forgejo is running as pid 3448.
</pre></div>


<p>The web installer is available on port 3000 and you can choose between the usual database backends:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="433" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/Screenshot_20251212_143805-1024x433.png" alt="" class="wp-image-41996" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/Screenshot_20251212_143805-1024x433.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/Screenshot_20251212_143805-300x127.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/Screenshot_20251212_143805-768x325.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/Screenshot_20251212_143805.png 1332w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>To keep it simple I went with SQLite3, kept everything at the defaults, and provided the administrator information further down the screen. Before the installer succeeded I had to create these two directories:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; highlight: [1,2,3,4]; title: ; notranslate">
root@forgejo:~ $ mkdir /usr/local/share/forgejo/data/
root@forgejo:~ $ chown git:git /usr/local/share/forgejo/data/
root@forgejo:~ $ mkdir /usr/local/share/forgejo/log
root@forgejo:~ $ chown git:git /usr/local/share/forgejo/log
</pre></div>


<p>Once that was done it went fine and this is the welcome screen:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="233" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/Screenshot_20251212_144416-1024x233.png" alt="" class="wp-image-41998" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/Screenshot_20251212_144416-1024x233.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/Screenshot_20251212_144416-300x68.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/Screenshot_20251212_144416-768x174.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/Screenshot_20251212_144416-1536x349.png 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/12/Screenshot_20251212_144416.png 1902w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>As with the other tools in that area there are the common sections like &#8220;Issues&#8221;, &#8220;Pull requests&#8221;, and &#8220;Milestones&#8221;. </p>



<p>In the next post we&#8217;re going to create an organization and a repository, and try to create a simple pipeline, as GitLab would call it.</p>
<p>L’article <a href="https://www.dbi-services.com/blog/what-is-forgejo-and-getting-it-up-and-running-on-freebsd-15/">What is Forgejo and getting it up and running on FreeBSD 15</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/what-is-forgejo-and-getting-it-up-and-running-on-freebsd-15/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Monitor your ISP&#8217;s performance with Speedtest Tracker</title>
		<link>https://www.dbi-services.com/blog/monitor-your-isps-performance-with-speedtest-tracker/</link>
					<comments>https://www.dbi-services.com/blog/monitor-your-isps-performance-with-speedtest-tracker/#respond</comments>
		
		<dc:creator><![CDATA[Rémy Gaudey]]></dc:creator>
		<pubDate>Tue, 05 Aug 2025 07:30:00 +0000</pubDate>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[devops]]></category>
		<category><![CDATA[rke2]]></category>
		<category><![CDATA[speedtest]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=39861</guid>

					<description><![CDATA[<p>I recently changed ISP, and I wanted to monitor its performance and make sure I get what I&#8217;m paying for. I initially started writing a bash script that I was running in a crontab, then writing the results in an md file. But that&#8217;s not very sexy. I wanted something graphical with a nice UI. [&#8230;]</p>
<p>L’article <a href="https://www.dbi-services.com/blog/monitor-your-isps-performance-with-speedtest-tracker/">Monitor your ISP&#8217;s performance with Speedtest Tracker</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>I recently changed ISP, and I wanted to monitor its performance and make sure I get what I&#8217;m paying for. I initially started writing a bash script that I was running in a crontab, then writing the results in an md file. <br>But that&#8217;s not very sexy. I wanted something graphical with a nice UI.</p>



<p>It turns out that there is a project called <a href="https://github.com/alexjustesen/speedtest-tracker">Speedtest-tracker</a>, written and maintained by <a href="https://www.linkedin.com/in/alexander-justesen/">Alex Justesen</a> on GitHub that does just what I was looking for. Behind the scenes, speedtest-tracker uses the <a href="https://www.speedtest.net/apps/cli">official Ookla CLI</a>.</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>Speedtest Tracker is a self-hosted application that monitors the performance and uptime of your internet connection. Built using Laravel and Speedtest CLI from Ookla®, deployable with Docker.</p>
</blockquote>



<p>The cool thing is that Speedtest Tracker is containerized; you can run it anywhere you want! At first, I had installed it as a Docker container on my Synology, but the NAS I own only has 1Gbps Ethernet ports, and my ISP advertises DL/UL speeds of 5 Gbps / 900 Mbps.</p>



<p>I have a mini PC that I use for my <a href="https://www.dbi-services.com/blog/install-a-single-node-kubernetes-cluster-with-suse-rke2-and-deploy-your-own-yak-instance/">YaK projects</a>, which has a 2.5Gbps Ethernet card. I&#8217;d rather use this machine than the NAS. Even though I will not be able to test the full speed my ISP provides, at least I will be able to see if I get near 2.5Gbps, which is already a great download speed.</p>



<p>OK, enough talking, let&#8217;s get our hands dirty and let&#8217;s deploy Speedtest-tracker!</p>



<h2 class="wp-block-heading" id="h-your-list-of-ingredients">Your list of ingredients</h2>



<p>Here is what you need to add to your recipe:</p>



<ul class="wp-block-list">
<li>A hypervisor, in my case I&#8217;m using Proxmox</li>



<li>A virtual machine (on which I&#8217;m using SUSE Linux, but any distro will work just fine)</li>



<li>A Kubernetes cluster. Keep it simple, <a href="https://docs.rke2.io/install/quickstart">install a single node RKE2</a></li>



<li>A persistent volume: <a href="https://github.com/rancher/local-path-provisioner">local-path provisioner</a> does the job</li>
</ul>



<p>I&#8217;m skipping these steps here, but if you need them, you can find them in <a href="https://www.dbi-services.com/blog/install-a-single-node-kubernetes-cluster-with-suse-rke2-and-deploy-your-own-yak-instance/">my other blog post</a> dedicated to installing the YaK.</p>



<p>Everything is well documented on the <a href="https://docs.speedtest-tracker.dev/">Speedtest Tracker web page</a>.</p>



<p>There is already a <a href="https://github.com/maximemoreillon/kubernetes-manifests/tree/master/speedtest-tracker">community manifest</a> available for Kubernetes, written and maintained by <a href="https://github.com/maximemoreillon">Maxime Moreillon</a>.</p>



<h2 class="wp-block-heading" id="h-how-to-install-speedtest-tracker">How to install speedtest-tracker?</h2>



<p>Installing Speedtest Tracker is as simple as deploying two YAML files:</p>



<ul class="wp-block-list">
<li>One for the PostgreSQL database</li>



<li>One for the frontend application</li>
</ul>



<p>The PG database and the application manifests are available <a href="https://github.com/maximemoreillon/kubernetes-manifests/tree/master/speedtest-tracker">here</a>.<br>All the credit goes to <a href="https://github.com/maximemoreillon">Maxime Moreillon</a>, who wrote these manifests and made them available to the community.<br>All I had to do was save the files to my &#8220;speedtest&#8221; folder and adjust them to my context.</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
localhost:~ # cd speedtest
localhost:~/speedtest # ls -ltrh
total 8.0K
-rw-r--r-- 1 root root 1.1K Jul 29 16:29 postgres.yaml
-rw-r--r-- 1 root root 2.2K Aug  3 18:48 speedtest-tracker.yaml
</pre></div>


<h3 class="wp-block-heading" id="h-my-postgres-manifest">My Postgres manifest </h3>



<p>I haven&#8217;t changed a single line from Maxime Moreillon&#8217;s code. The manifest includes the PVC definition, the PostgreSQL deployment itself, and a service:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: yaml; title: ; notranslate">
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:15.1
          env:
            - name: POSTGRES_PASSWORD
              value: password
            - name: POSTGRES_DB
              value: speedtest_tracker
            - name: POSTGRES_USER
              value: speedy
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgres
      volumes:
        - name: postgres
          persistentVolumeClaim:
            claimName: postgres

---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  ports:
    - port: 5432
  selector:
    app: postgres
  type: ClusterIP
</pre></div>
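<p>A note on the plain-text credentials above: for anything beyond a home lab you would typically move them into a Kubernetes Secret and reference it from the Deployment. A minimal sketch (the Secret name and values are illustrative):</p>

```yaml
# Illustrative sketch: store the database credentials in a Secret ...
apiVersion: v1
kind: Secret
metadata:
  name: postgres-credentials
type: Opaque
stringData:
  POSTGRES_USER: speedy
  POSTGRES_PASSWORD: password    # replace with a real secret value
---
# ... and reference it from the container spec instead of hard-coding:
#           env:
#             - name: POSTGRES_PASSWORD
#               valueFrom:
#                 secretKeyRef:
#                   name: postgres-credentials
#                   key: POSTGRES_PASSWORD
```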


<h3 class="wp-block-heading" id="h-my-speedtest-tracker-manifest">My speedtest-tracker manifest </h3>



<p>I made a few adjustments to Maxime&#8217;s code to fit my needs. It comes with the PVC definition, the application deployment, and the service:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: yaml; title: ; notranslate">
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: speedtest-tracker
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: local-path  # Adjust if you&#039;re using a different StorageClass

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: speedtest-tracker
spec:
  replicas: 1
  revisionHistoryLimit: 0
  selector:
    matchLabels:
      app: speedtest-tracker
  template:
    metadata:
      labels:
        app: speedtest-tracker
    spec:
      containers:
        - name: speedtest-tracker
          image: lscr.io/linuxserver/speedtest-tracker:latest
          ports:
            - containerPort: 80
          env:
            - name: PUID
              value: &quot;1000&quot;
            - name: PGID
              value: &quot;1000&quot;
            - name: DB_CONNECTION
              value: pgsql
            - name: DB_HOST
              value: postgres
            - name: DB_PORT
              value: &quot;5432&quot;
            - name: DB_DATABASE
              value: speedtest_tracker
            - name: DB_USERNAME
              value: speedy
            - name: DB_PASSWORD
              value: password

########MY PERSONAL ENV VARIABLES########
            - name: APP_NAME
              value: home-speedtest-k8s
            - name: APP_KEY
              value: &lt;generate your own app key&gt;
            - name: DISPLAY_TIMEZONE
              value: Europe/Paris
            - name: SPEEDTEST_SERVERS
              value: &quot;62493&quot;
            - name: SPEEDTEST_SCHEDULE
              value: &#039;*/30 * * * *&#039;
            - name: PUBLIC_DASHBOARD
              value: &quot;true&quot;
#########################################

          volumeMounts:
            - mountPath: /config
              name: speedtest-tracker
      volumes:
        - name: speedtest-tracker
          persistentVolumeClaim:
            claimName: speedtest-tracker

---
apiVersion: v1
kind: Service
metadata:
  name: speedtest-tracker
  labels:
    app: speedtest-tracker
spec:
  selector:
    app: speedtest-tracker
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080  # You can change this to any port in the 30000-32767 range
</pre></div>


<p>Now generate your own APP_KEY and paste the value into the placeholder in the code above (including the <code>base64:</code> prefix). Here is how:</p>



<pre class="wp-block-code"><code>
echo -n 'base64:'; openssl rand -base64 32;</code></pre>
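<p>If you want to verify what you generated, the same key can be produced and sanity-checked in one go (a sketch using only <code>openssl</code>, <code>base64</code> and <code>wc</code>; Laravel expects the decoded key to be exactly 32 bytes):</p>

```shell
# Generate a Laravel-style APP_KEY: the literal "base64:" prefix
# followed by 32 random bytes encoded in base64.
APP_KEY="base64:$(openssl rand -base64 32)"
echo "$APP_KEY"

# Sanity check: strip the prefix, decode, count the bytes (must be 32).
printf '%s' "${APP_KEY#base64:}" | base64 -d | wc -c
```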



<p>And that&#8217;s it!<br>Once your manifests are ready and you are happy with the environment variables (the full list is <a href="https://docs.speedtest-tracker.dev/getting-started/environment-variables">available here</a>), you just need to create your namespace on your cluster and apply the configuration:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: plain; title: ; notranslate">
kubectl create ns speedtest
kubectl apply -f speedtest/postgres.yaml -n speedtest
kubectl apply -f speedtest/speedtest-tracker.yaml -n speedtest
</pre></div>


<p>After a few seconds, your pods will come up:</p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
localhost:~/speedtest # kubectl get pods -n speedtest
NAME                                 READY   STATUS    RESTARTS        AGE
postgres-6c8499b968-rbwlw            1/1     Running   2 (4d18h ago)   5d21h
speedtest-tracker-7997cbdc8f-64n7c   1/1     Running   0               19h
</pre></div>


<h2 class="wp-block-heading" id="h-enjoy">Enjoy !</h2>



<p>If you did things right, you should be able to monitor your internet speed and display the results on a neat UI. In my case, I fire a speedtest every 30 minutes (I know, that&#8217;s overkill, but I just wanted to play a bit. I will reduce the frequency to something more reasonable, I promise <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f609.png" alt="😉" class="wp-smiley" style="height: 1em; max-height: 1em;" /> )<br></p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="601" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/08/image-1-1024x601.png" alt="Speedtest-tracker UI" class="wp-image-39874" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/08/image-1-1024x601.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/08/image-1-300x176.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/08/image-1-768x451.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/08/image-1-1536x901.png 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/08/image-1-2048x1201.png 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Cool, no?</p>
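<p>As a side note, the <code>*/30 * * * *</code> cron schedule used above fires two tests per hour; a quick check of the resulting daily volume:</p>

```shell
# A "*/30 * * * *" schedule fires twice per hour; over a day that is:
echo $(( 24 * 60 / 30 ))   # prints 48
```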



<h2 class="wp-block-heading" id="h-to-go-further">To go further</h2>



<p>I&#8217;d love to monitor the full bandwidth my ISP advertises, but I&#8217;m limited by my hardware: my router does not support link aggregation, and it only comes with one 10G fiber-optic WAN interface + one 2.5 Gbps and two 1Gbps LAN interfaces. There is no chance I can test the full fiber-optic capacity with this hardware.</p>



<p>In the future, I might buy a switch that supports LACP and configure my router in bridge mode to be able to reach the full WAN bandwidth, or invest in a router that provides more high-speed interfaces. But to be honest, the investment is not really worth it.</p>



<p>One thing I could do, however, is enable HTTPS and add a Let&#8217;s Encrypt certificate to secure the connections to my frontend. That&#8217;s an improvement I could make soon.</p>
<p>L’article <a href="https://www.dbi-services.com/blog/monitor-your-isps-performance-with-speedtest-tracker/">Monitor your ISP&#8217;s performance with Speedtest Tracker</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/monitor-your-isps-performance-with-speedtest-tracker/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Parallel execution of Ansible roles</title>
		<link>https://www.dbi-services.com/blog/parallel-execution-of-ansible-roles/</link>
					<comments>https://www.dbi-services.com/blog/parallel-execution-of-ansible-roles/#respond</comments>
		
		<dc:creator><![CDATA[Martin Bracher]]></dc:creator>
		<pubDate>Tue, 10 Jun 2025 18:56:18 +0000</pubDate>
				<category><![CDATA[Ansible]]></category>
		<category><![CDATA[DevOps]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=31860</guid>

					<description><![CDATA[<p>Introduction You can run a playbook for specific host(s), a group of hosts, or &#8220;all&#8221; (all hosts of the inventory). Ansible will then run the tasks in parallel on the specified hosts. To avoid an overload, the parallelism &#8211; called &#8220;forks&#8221; &#8211; is limited to 5 per default. A task with a loop (e.g. with_items:) [&#8230;]</p>
<p>L’article <a href="https://www.dbi-services.com/blog/parallel-execution-of-ansible-roles/">Parallel execution of Ansible roles</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading" id="h-introduction">Introduction</h2>



<p>You can run a playbook for specific host(s), a group of hosts, or &#8220;all&#8221; (all hosts of the inventory). </p>



<p>Ansible will then run the tasks in parallel on the specified hosts. To avoid an overload, the parallelism &#8211; called &#8220;forks&#8221; &#8211; is limited to 5 by default.</p>



<p>A task with a loop (e.g. <code>with_items:</code>) is executed serially by default. To run it in parallel, you can use the &#8220;async&#8221; mode. </p>



<p>But unfortunately, this async mode does not work when including roles or other playbooks in the loop. In this blog post we will see a workaround to run roles in parallel (on the same host).</p>



<h2 class="wp-block-heading" id="h-parallelization-over-the-ansible-hosts">Parallelization over the Ansible hosts</h2>



<p>In this example, we have 3 hosts (dbhost1, dbhost2, dbhost3) in the dbservers group<br>(use <code>ansible-inventory --graph</code> to see all your groups), and we run the following sleep1.yml playbook:</p>



<pre class="wp-block-code"><code>- name: PLAY1
  hosts: &#091;dbservers]
  gather_facts: no
  tasks:
    - ansible.builtin.wait_for: timeout=10</code></pre>



<p>The tasks of the playbook will run in parallel on all hosts of the <code>dbservers</code> group, but on no more hosts at a time than specified by the &#8220;forks&#8221; parameter (set in ansible.cfg, the ANSIBLE_FORKS environment variable, or the <code>--forks</code> command-line option).<br><a href="https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_strategies.html">https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_strategies.html</a></p>


<div class="wp-block-syntaxhighlighter-code "><pre class="brush: bash; title: ; notranslate">
$ time ansible-playbook sleep1.yml --forks 2
...
ok: &#x5B;dbhost1]  #appears after 10sec
ok: &#x5B;dbhost2]  #appears after 10sec
ok: &#x5B;dbhost3]  #appears after 20sec
...
real    0m22.384s
</pre></div>


<p>With forks=2, the results of dbhost1 and dbhost2 are both returned after 10 seconds (sleep 10 in parallel). dbhost3 has to wait until one of the running tasks completes, so the playbook finishes after approx. 20 seconds. With forks=1 it takes 30 seconds, and with forks=3 it takes 10 seconds (plus overhead).</p>
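

<p>The forks limit can also be set persistently instead of per run. A minimal sketch of the two configuration alternatives mentioned above (the value 3 is just an example):</p>



<pre class="wp-block-code"><code># ansible.cfg
&#091;defaults]
forks = 3

# or, equivalently, as an environment variable:
# export ANSIBLE_FORKS=3</code></pre>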



<h2 class="wp-block-heading" id="h-parallelization-of-loops">Parallelization of loops</h2>



<p>By default, a loop does not run in parallel:</p>



<pre class="wp-block-code"><code>- name: PLAY2A
  hosts: localhost
  tasks:
    - set_fact:
        sleepsec: &#091; 1, 2, 3, 4, 5, 6, 7 ]

    - name: nonparallel loop
      ansible.builtin.wait_for: "timeout={{item}} "
      with_items: "{{sleepsec}}"
      register: loop_result
</code></pre>



<p>This sequential run will take at least 28 seconds (the sum of all sleeps). </p>



<p>To run the same loop in parallel, use &#8220;async&#8221;<br><a href="https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_async.html">https://docs.ansible.com/ansible/latest/playbook_guide/playbooks_async.html</a></p>



<pre class="wp-block-code"><code>- name: PLAY2B
  hosts: localhost
  gather_facts: no
  tasks:
    - name: parallel loop
      ansible.builtin.wait_for: "timeout={{item}}"
      with_items: "{{sleepsec}}"
      register: loop_result
      async: 600  # Maximum runtime in seconds. Adjust as needed.
      poll: 0     # Fire and continue (never poll here)

    # in the meantime, you can run other tasks

    - name: Wait for parallel loop to finish (poll)
      async_status:
        jid: "{{ item.ansible_job_id }}"
      register: loop_check
      until: loop_check.finished
      delay: 1      # Check every second
      retries: 600  # Retry up to 600 times.
                    # delay*retries should match "async:" above
      with_items: "{{ loop_result.results }}"
</code></pre>



<p>In the first task we start all sleeps in parallel; they will time out after 600 seconds. We do not wait for the result (poll: 0). A later task polls the background jobs until all parallel loops are finished. This execution only takes a little more than 7 seconds (the longest sleep plus some overhead). Between the loop and the poll you can add other tasks to use the waiting time for something more productive. Or, if you know your loop takes at least one minute, you can add an <code>ansible.builtin.wait_for: "timeout=60"</code> before the polling task to reduce the overhead of the polling loop.</p>
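

<p>As a sketch of that last optimization (assuming the <code>loop_result</code> register from the play above, and a loop known to take at least one minute), the polling section could look like this:</p>



<pre class="wp-block-code"><code>    # wait out the known minimum runtime before polling at all
    - ansible.builtin.wait_for: "timeout=60"

    - name: Wait for parallel loop to finish (poll)
      async_status:
        jid: "{{ item.ansible_job_id }}"
      register: loop_check
      until: loop_check.finished
      delay: 5       # poll less aggressively
      retries: 108   # 60 + 5*108 = 600 = "async:" timeout above
      with_items: "{{ loop_result.results }}"</code></pre>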



<p>For example, we have an existing role to create and configure a new user account with many, sometimes long-running steps: add it to LDAP, create an NFS share, create a certificate, send a welcome mail, &#8230; Most of these tasks are not bound to a specific host and run on &#8220;localhost&#8221;, calling a REST API.</p>



<p>The following code example is a dummy role for copy/paste to see how it works with parallel execution.</p>



<pre class="wp-block-code"><code># roles/create_user/tasks/main.yml    
    - debug: var=user
    - ansible.builtin.wait_for: timeout=10</code></pre>



<p>Now we have to create many user accounts and would like to do that in parallel. We take the code above and adapt it:</p>



<pre class="wp-block-code"><code>- name: PLAY3A
  hosts: localhost
  gather_facts: no
  tasks:
    - set_fact:
        users: &#091; 'Dave', 'Eva', 'Hans' ]

    - name: parallel user creation
      ansible.builtin.include_role: name=create_user
      with_items: "{{users}}"
      loop_control:
        loop_var: user
      register: loop_result
      async: 600
      poll: 0</code></pre>



<p>But unfortunately, Ansible will not accept include_role: <br><code>ERROR! 'poll' is not a valid attribute for a IncludeRole</code></p>



<p>The only solution is to rewrite the role and run every task in async mode. </p>



<p>But is there no better solution to re-use existing roles? Let&#8217;s see&#8230;</p>



<h2 class="wp-block-heading" id="h-parallel-execution-of-roles-in-a-loop">Parallel execution of roles in a loop</h2>



<p>As we already know</p>



<ul class="wp-block-list">
<li>Ansible can run playbooks/tasks in parallel over different hosts (hosts parameter of the play).</li>



<li>Ansible can run tasks with a loop in parallel with the async option, but</li>



<li>Ansible can NOT run tasks with a loop in parallel for include_role or include_tasks</li>
</ul>



<p>So, the trick is to run the roles on &#8220;different&#8221; hosts. Localhost has a special property: the well-known loopback IP is 127.0.0.1, but 127.0.0.2 to 127.255.255.254 also refer to localhost (check it with &#8216;ping&#8217;). For our create-user role, we will run it on &#8220;different&#8221; localhosts in parallel. For that, we create a host group at runtime from localhost addresses. The number of these localhost IPs is equal to the number of users to create.</p>



<p>users[0] is Dave. It will be created on 127.0.0.1<br>users[1] is Eva. It will be created on 127.0.0.2<br>users[2] is Hans. It will be created on 127.0.0.3<br>&#8230;</p>



<pre class="wp-block-code"><code>- name: create dynamic localhosts group
  hosts: localhost
  gather_facts: no
  vars:
    users: &#091; 'Dave', 'Eva', 'Hans' ]
  tasks:
    # Create a group of localhost IP's; 
    # Ansible will treat it as "different" hosts.
    # To know which localhost IP should create which user:
    # the last 2 octets of the IP map to the index in the {{users}} list:
    # 127.0.1.12 -&gt; (1*256 + 12)-1 = 267 -&gt; users&#091;267]
    # -1: first Array-Element is 0, but localhost-IP starts at 127.0.0.1
    - name: create parallel execution localhosts group
      add_host:
        name: "127.0.{{item|int // 256}}.{{ item|int % 256 }}"
        group: localhosts
      with_sequence:  start=1  end="{{users|length}}" 

- name: create useraccounts
  hosts: &#091;localhosts]  # &#091; 127.0.0.1, 127.0.0.2, ... ]
  connection: local
  gather_facts: no
  vars:
    users: &#091; 'Dave', 'Eva', 'Hans' ]
  # this play runs in parallel over the &#091;localhosts] 
  tasks:
    - set_fact:
        ip_nr: "{{ inventory_hostname.split('.') }}"

    - name: parallel user creation
      ansible.builtin.include_role:
        name: create_user
      vars:
        user: "{{ users&#091; (ip_nr&#091;2]|int*256 + ip_nr&#091;3]|int-1) ] }}"
</code></pre>



<p>In this example: with forks=3 it runs in 11 seconds; with forks=1 (no parallelism) it takes 32 seconds.</p>



<p>The degree of parallelism (forks) depends on your use case and your infrastructure. If you have to restore files, the limit is probably the network bandwidth, the disk I/O, or the number of tape slots. Choose a value of forks that does not overload your infrastructure.</p>



<p>If some tasks or the whole role has to be run on another host than localhost (e.g. create a local useraccount on a server), then you can use <code>delegate_to: "{{remote_host}}"</code>.</p>
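

<p>As a sketch, such a delegation can be added to a single task inside the role (here <code>remote_host</code> is a hypothetical variable you would set per user; the role otherwise keeps running from localhost):</p>



<pre class="wp-block-code"><code># roles/create_user/tasks/main.yml (excerpt)
- name: create the local OS account on the target server
  ansible.builtin.user:
    name: "{{ user | lower }}"
  delegate_to: "{{ remote_host }}"</code></pre>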



<p>This principle is ideally suited for plays that are not bound to a specific host, typically tasks that run from localhost and call a REST API without logging in to a server via SSH. </p>



<h2 class="wp-block-heading" id="h-summary">Summary</h2>



<p>Ansible is optimized to run playbooks on different hosts in parallel. The degree of parallelism can be limited by the &#8220;forks&#8221; parameter (default 5).</p>



<p>Ansible can run loops in parallel with the async mode. Unfortunately that does not work if we include a role or tasks.</p>



<p>The workaround to run roles in parallel on the same host is to assign every loop item to a different host and then run the role on those different hosts. For the different hosts we can use the localhost IPs between 127.0.0.1 and 127.255.255.254 to build a dynamic host group; the number of hosts corresponds to the number of loop items.</p>
<p>L’article <a href="https://www.dbi-services.com/blog/parallel-execution-of-ansible-roles/">Parallel execution of Ansible roles</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/parallel-execution-of-ansible-roles/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>SQLDay 2025 &#8211; Wrocław &#8211; Sessions</title>
		<link>https://www.dbi-services.com/blog/sqlday-2025-wroclaw-sessions/</link>
					<comments>https://www.dbi-services.com/blog/sqlday-2025-wroclaw-sessions/#respond</comments>
		
		<dc:creator><![CDATA[Amine Haloui]]></dc:creator>
		<pubDate>Mon, 19 May 2025 18:59:35 +0000</pubDate>
				<category><![CDATA[Azure]]></category>
		<category><![CDATA[Business Intelligence]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Database Administration & Monitoring]]></category>
		<category><![CDATA[Database management]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[SQL Server]]></category>
		<category><![CDATA[Artificial inteligence]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=38380</guid>

					<description><![CDATA[<p>After a packed workshop day, the SQLDay conference officially kicked off on Tuesday with a series of sessions covering cloud, DevOps, Microsoft Fabric, AI, and more. Here is a short overview of the sessions I attended on the first day of the main conference. Morning Kick-Off: Sponsors and Opening The day started with a short [&#8230;]</p>
<p>L’article <a href="https://www.dbi-services.com/blog/sqlday-2025-wroclaw-sessions/">SQLDay 2025 &#8211; Wrocław &#8211; Sessions</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>After a packed workshop day, the SQLDay conference officially kicked off on Tuesday with a series of sessions covering cloud, DevOps, Microsoft Fabric, AI, and more. Here is a short overview of the sessions I attended on the first day of the main conference.</p>



<p><strong>Morning Kick-Off: Sponsors and Opening</strong></p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="768" height="1024" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/IMG_0639-768x1024.jpg" alt="" class="wp-image-38387" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/IMG_0639-768x1024.jpg 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/IMG_0639-225x300.jpg 225w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/IMG_0639-1152x1536.jpg 1152w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/IMG_0639-1536x2048.jpg 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/IMG_0639-scaled.jpg 1920w" sizes="auto, (max-width: 768px) 100vw, 768px" /></figure>



<p>The day started with a short introduction and a presentation of the sponsors. A good opportunity to acknowledge the partners who made this event possible.</p>



<p><strong>Session 1: Composable AI and Its Impact on Enterprise Architecture</strong></p>



<p>This session (by Felix Mutzl) provided a strategic view of how AI is becoming a core part of enterprise architecture.</p>



<p><strong>Session 2: Migrate Your On-Premises SQL Server Databases to Microsoft Azure</strong></p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="768" height="1024" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/IMG_0646-768x1024.jpg" alt="" class="wp-image-38386" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/IMG_0646-768x1024.jpg 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/IMG_0646-225x300.jpg 225w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/IMG_0646-1152x1536.jpg 1152w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/IMG_0646-1536x2048.jpg 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/IMG_0646-scaled.jpg 1920w" sizes="auto, (max-width: 768px) 100vw, 768px" /></figure>



<p>A session (by Edwin M Sarmiento) that addressed one of the most common challenges for many DBAs and IT departments: how to migrate your SQL Server workloads to Azure. The speaker shared a well-structured approach, highlighting the key elements to consider before launching a migration project:</p>



<ul class="wp-block-list">
<li><strong>Team involvement</strong>: Ensure all stakeholders are aligned.</li>



<li><strong>Planning</strong>: Migration isn’t just about moving data, dependencies must be mapped.</li>



<li><strong>Cost</strong>: Evaluate Azure pricing models and estimate consumption.</li>



<li><strong>Testing</strong>: Validate each stage in a non-production environment.</li>



<li><strong>Monitoring</strong>: Post-migration monitoring is essential for stability.</li>
</ul>



<p><strong>Session 3: Fabric Monitoring Made Simple: Built-In Tools and Custom Solutions</strong></p>



<p>In this session, Just Blindbaek talked about how Microsoft Fabric is quickly gaining traction, and with it comes the need for robust monitoring. The session explored native tools like Monitoring Hub, the Admin Monitoring workspace, and Workspace Monitoring. In addition, the speaker introduced FUAM (Fabric Unified Admin Monitoring), an open-source solution supported by Microsoft that complements the built-in options.</p>



<p><strong>Session 4: Database DevOps&#8230;CJ/CD: Continuous Journey or Continuous Disaster?</strong></p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="714" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/image-1-1024x714.jpeg" alt="" class="wp-image-38381" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/image-1-1024x714.jpeg 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/image-1-300x209.jpeg 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/image-1-768x536.jpeg 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/image-1-1536x1072.jpeg 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/image-1-2048x1429.jpeg 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>A hands-on session (by Tonie Huizer) about introducing DevOps practices in a legacy team that originally used SVN and had no automation. The speaker shared lessons learned from introducing:</p>



<ul class="wp-block-list">
<li>Sprint-based development cycles</li>



<li>Git branching strategies</li>



<li>Build and release pipelines</li>



<li>Manual vs Pull Request releases</li>



<li>Versioned databases and IDPs</li>
</ul>



<p>It was a realistic look at the challenges and practical steps involved when modernizing a database development process.</p>



<p><strong>Session 5: (Developer) Productivity, Data Intelligence, and Building an AI Application</strong></p>



<p>This session (from Felix Mutzl) shifted the focus from general AI to productivity-enhancing solutions. Built on Databricks, the use case demonstrated how to combine AI models with structured data to deliver real-time insights to knowledge workers. The practical Databricks examples were especially helpful to visualize the architecture behind these kinds of applications.</p>



<p><strong>Session 6: Azure SQL Managed Instance Demo Party</strong></p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="768" height="1024" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/image-3-768x1024.png" alt="" class="wp-image-38384" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/image-3-768x1024.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/image-3-225x300.png 225w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/image-3-1152x1536.png 1152w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/image-3-1536x2048.png 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/image-3-scaled.png 1920w" sizes="auto, (max-width: 768px) 100vw, 768px" /></figure>



<p>The final session of the day was given by Dani Ljepava and Sasa Popovic and was more interactive and focused on showcasing the latest Azure SQL Managed Instance features. Demos covered:</p>



<ul class="wp-block-list">
<li>Performance and scaling improvements</li>



<li>Compatibility for hybrid scenarios</li>



<li>Built-in support for high availability and disaster recovery</li>
</ul>



<p>The session served as a great update on where Azure SQL MI is heading and what tools are now available for operational DBAs and cloud architects.</p>



<p>Thank you, Amine Haloui.</p>
<p>L’article <a href="https://www.dbi-services.com/blog/sqlday-2025-wroclaw-sessions/">SQLDay 2025 &#8211; Wrocław &#8211; Sessions</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/sqlday-2025-wroclaw-sessions/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>SQLDay 2025 &#8211; Wrocław &#8211; Workshops</title>
		<link>https://www.dbi-services.com/blog/sqlday-2025-wroclaw-workshops/</link>
					<comments>https://www.dbi-services.com/blog/sqlday-2025-wroclaw-workshops/#respond</comments>
		
		<dc:creator><![CDATA[Amine Haloui]]></dc:creator>
		<pubDate>Mon, 19 May 2025 18:58:56 +0000</pubDate>
				<category><![CDATA[Azure]]></category>
		<category><![CDATA[Business Intelligence]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Database Administration & Monitoring]]></category>
		<category><![CDATA[Database management]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[SQL Server]]></category>
		<category><![CDATA[Artificial inteligence]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=38376</guid>

					<description><![CDATA[<p>I had the chance to attend SQLDay 2025 in Wrocław, one of the largest Microsoft Data Platform conferences in Central Europe. The event gathers a wide range of professionals, from database administrators to data engineers and Power BI developers. The first day was fully dedicated to pre-conference workshops. The general sessions are scheduled for the [&#8230;]</p>
<p>L’article <a href="https://www.dbi-services.com/blog/sqlday-2025-wroclaw-workshops/">SQLDay 2025 &#8211; Wrocław &#8211; Workshops</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>I had the chance to attend SQLDay 2025 in Wrocław, one of the largest Microsoft Data Platform conferences in Central Europe. The event gathers a wide range of professionals, from database administrators to data engineers and Power BI developers. The first day was fully dedicated to pre-conference workshops. The general sessions are scheduled for the following two days.</p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="189" height="83" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/image.png" alt="" class="wp-image-38377" /></figure>
</div>


<p>In this first post, I’ll focus on Monday’s workshops.</p>



<p><strong>Day 1 – Workshop Sessions</strong></p>



<p>The workshop day at SQLDay is always a strong start. It gives attendees the opportunity to focus on a specific topic for a full day. This year, several tracks were available in parallel, covering various aspects of the Microsoft data stack: from Power BI and SQL Server to Azure and Microsoft Fabric.</p>


<div class="wp-block-image">
<figure class="aligncenter size-large"><img loading="lazy" decoding="async" width="1024" height="768" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/image-1024x768.jpeg" alt="" class="wp-image-38378" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/image-1024x768.jpeg 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/image-300x225.jpeg 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/image-768x576.jpeg 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/image-1536x1152.jpeg 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/image-2048x1536.jpeg 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>
</div>


<p>Here are the sessions that were available:</p>



<p><strong>Advanced DAX</strong></p>



<p>This session was clearly targeted at experienced Power BI users. Alberto Ferrari delivered an in-depth look into evaluation context, expanded tables, and advanced usage of CALCULATE. One focus area was the correct use of ALLEXCEPT and how it interacts with complex relationships.</p>



<p><strong>Execution Plans in Depth</strong></p>



<p>For SQL Server professionals interested in performance tuning, this workshop provided a detailed walkthrough of execution plans. Hugo Kornelis covered a large number of operators, explained how they work internally, and showed how to analyze problematic queries. The content was dense but well-structured.</p>



<p><strong>Becoming an Azure SQL DBA</strong></p>



<p>This workshop was led by members of the Azure SQL product team. It focused on the evolution of the DBA role in cloud environments. The agenda included topics such as high availability in Azure SQL, backup and restore, cost optimization, and integration with Microsoft Fabric. It was designed to understand the shared responsibility model and how traditional DBA tasks are shifting in cloud scenarios.</p>



<p><strong>Enterprise Databots</strong></p>



<p>This workshop explored how to build intelligent DataBots using Azure and Databricks. The session combined theoretical content with practical labs. The goal was to implement chatbots capable of interacting with SQL data and leveraging AI models. Participants had the opportunity to create bots from scratch.</p>



<p><strong>Analytics Engineering with dbt</strong></p>



<p>This session was focused on dbt (data build tool) and its role in ELT pipelines. It was well-suited for data analysts and engineers looking to standardize and scale their workflows.</p>



<p><strong>Build a Real-time Intelligence Solution in One Day</strong></p>



<p>This workshop showed how to implement real-time analytics solutions using Microsoft Fabric. It covered Real-Time Hub, Eventstream, Data Activator, and Copilot.</p>



<p><strong>From Power BI Developer to Fabric Engineer</strong></p>



<p>This workshop addressed Power BI developers looking to go beyond the limitations of Power Query and Premium refresh schedules. The session focused on transforming reports into scalable Fabric-based solutions using Lakehouse, Notebooks, Dataflows, and semantic models. A good starting point for anyone looking to shift from report building to full data engineering within the Microsoft ecosystem.</p>



<p>Thank you, Amine Haloui.</p>
<p>L’article <a href="https://www.dbi-services.com/blog/sqlday-2025-wroclaw-workshops/">SQLDay 2025 &#8211; Wrocław &#8211; Workshops</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/sqlday-2025-wroclaw-workshops/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
