<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>MySQL Archives - dbi Blog</title>
	<atom:link href="https://www.dbi-services.com/blog/category/mysql/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.dbi-services.com/blog/category/mysql/</link>
	<description></description>
	<lastBuildDate>Mon, 23 Mar 2026 10:18:57 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	

<image>
	<url>https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/05/cropped-favicon_512x512px-min-32x32.png</url>
	<title>MySQL Archives - dbi Blog</title>
	<link>https://www.dbi-services.com/blog/category/mysql/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Oracle Technology Roundtable for Digital Natives &#8211; Let&#8217;s have a look at AI, Cloud and HeatWave</title>
		<link>https://www.dbi-services.com/blog/oracle-technology-roundtable-for-digital-natives-lets-have-a-look-at-ai-cloud-and-heatwave/</link>
					<comments>https://www.dbi-services.com/blog/oracle-technology-roundtable-for-digital-natives-lets-have-a-look-at-ai-cloud-and-heatwave/#respond</comments>
		
		<dc:creator><![CDATA[Elisa Usai]]></dc:creator>
		<pubDate>Fri, 07 Mar 2025 08:16:19 +0000</pubDate>
				<category><![CDATA[Big Data]]></category>
		<category><![CDATA[Business Intelligence]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Cloud Native]]></category>
		<category><![CDATA[Development & Performance]]></category>
		<category><![CDATA[MySQL]]></category>
		<category><![CDATA[OCI]]></category>
		<category><![CDATA[Oracle]]></category>
		<category><![CDATA[ai]]></category>
		<category><![CDATA[data]]></category>
		<category><![CDATA[ETL]]></category>
		<category><![CDATA[genai]]></category>
		<category><![CDATA[HeatWave]]></category>
		<category><![CDATA[innovation]]></category>
		<category><![CDATA[lakehouse]]></category>
		<category><![CDATA[machinelearning]]></category>
		<category><![CDATA[ML]]></category>
		<category><![CDATA[objectstorage]]></category>
		<category><![CDATA[objectstore]]></category>
		<category><![CDATA[OLAP]]></category>
		<category><![CDATA[OLTP]]></category>
		<category><![CDATA[Performances]]></category>
		<category><![CDATA[Security]]></category>
		<category><![CDATA[vectors]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=37514</guid>

					<description><![CDATA[<p>Yesterday I participated in the Oracle Technology Roundtable for Digital Natives in Zurich. It was a good opportunity to learn more about AI, Cloud and HeatWave, with a focus on very trendy features of this product: generative AI, machine learning, vector processing, and analytics and transaction processing across data in the Data Lake and MySQL databases. It [&#8230;]</p>
<p>The article <a href="https://www.dbi-services.com/blog/oracle-technology-roundtable-for-digital-natives-lets-have-a-look-at-ai-cloud-and-heatwave/">Oracle Technology Roundtable for Digital Natives &#8211; Let&#8217;s have a look at AI, Cloud and HeatWave</a> first appeared on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[



<p>Yesterday I participated in the Oracle Technology Roundtable for Digital Natives in Zurich.</p>



<figure class="wp-block-image size-large"><img fetchpriority="high" decoding="async" width="1024" height="994" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_095332-1024x994.jpg" alt="" class="wp-image-37541" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_095332-1024x994.jpg 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_095332-300x291.jpg 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_095332-768x745.jpg 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_095332-1536x1491.jpg 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_095332-2048x1988.jpg 2048w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>It was a good opportunity to learn more about AI, Cloud and <a href="https://www.oracle.com/heatwave/">HeatWave</a>, with a focus on very trendy features of this product: generative AI, machine learning, vector processing, and analytics and transaction processing across data in the Data Lake and <a href="https://www.mysql.com/">MySQL</a> databases.</p>



<figure class="wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-1 is-layout-flex wp-block-gallery-is-layout-flex">
<figure class="wp-block-image size-large"><img decoding="async" width="1024" height="1024" data-id="37578" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250307_153441-1024x1024.jpg" alt="" class="wp-image-37578" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250307_153441-1024x1024.jpg 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250307_153441-300x300.jpg 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250307_153441-150x150.jpg 150w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250307_153441-768x768.jpg 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250307_153441-1536x1536.jpg 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250307_153441-2048x2048.jpg 2048w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>
</figure>



<p>It was also great to share moments with the Oracle and MySQL teams and to meet customers who shared feedback and tips about the solutions they already have in place in this area.</p>



<figure class="wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-2 is-layout-flex wp-block-gallery-is-layout-flex">
<figure class="wp-block-image size-large"><img decoding="async" width="1024" height="1024" data-id="37579" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250307_153347-1024x1024.jpg" alt="" class="wp-image-37579" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250307_153347-1024x1024.jpg 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250307_153347-300x300.jpg 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250307_153347-150x150.jpg 150w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250307_153347-768x768.jpg 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250307_153347-1536x1536.jpg 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250307_153347-2048x2048.jpg 2048w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>
</figure>



<p>I’ll try to summarize below some key takeaways from each session.</p>



<p><strong>Unlocking Innovation: How Oracle AI is Shaping the Future of Business</strong> (by <a href="https://www.linkedin.com/in/jwirtgen/">Jürgen Wirtgen</a>)</p>



<p>AI is not a new topic. But how do we use it today, and where are we in the process: early or advanced?</p>



<p>To answer this question, you can look at the stages of adoption:</p>



<ol class="wp-block-list">
<li>Consume (AI embedded in your applications) -&gt; SaaS applications</li>



<li>Extend (models via Data Retrieval, RAG) -&gt; AI services</li>



<li>Fine tune -&gt; Data</li>



<li>Build models from scratch -&gt; Infrastructure</li>
</ol>



<p><em>AI is not AI.</em> The best AI starts with the best data, securely managed. This can be translated into a simple equation: Best Data + Best Technology = Best AI.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="768" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_100552-1024x768.jpg" alt="" class="wp-image-37542" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_100552-1024x768.jpg 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_100552-300x225.jpg 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_100552-768x576.jpg 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_100552-1536x1152.jpg 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_100552-2048x1536.jpg 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p><strong>Innovations in HeatWave &amp; MySQL &#8211; The Present and the Future</strong> (by <a href="https://www.linkedin.com/in/cagribalkesen/">Cagri Balkesen</a>)</p>



<p><a href="https://www.oracle.com/heatwave/">HeatWave</a> is an in-memory query processing accelerator for data in the <a href="https://www.mysql.com/">MySQL</a> transactional RDBMS or for data in the Object Store in different formats.</p>



<p>Normally, you need to put in place and maintain ETL processes to produce data that can be used effectively for analytics. This brings several drawbacks:</p>



<ul class="wp-block-list">
<li>Complexity</li>



<li>You have to maintain different systems, each with its own security issues to handle and costs to assume.</li>
</ul>



<p>Using HeatWave, you don’t need that anymore: it’s a single platform that lets you manage all your OLTP, OLAP, Machine Learning and GenAI workloads together.</p>



<p>What are the advantages of using HeatWave?</p>



<ul class="wp-block-list">
<li>Your current SQL syntax doesn’t need to change</li>



<li>Changes to data are automatically propagated to HeatWave</li>



<li>Best performance for your queries</li>



<li>Efficient processing for Data Lake</li>



<li>Best platform for MySQL workloads</li>



<li>Built-in GenAI &amp; Vector Store</li>



<li>Available in multiple clouds (native on OCI, it can run inside AWS, you can set up a private interconnection with Microsoft Azure, and work is in progress for Google Cloud).</li>
</ul>
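
<p>As an illustration of the unchanged-SQL point: loading an existing InnoDB table into the HeatWave cluster is a single DDL statement, and offload can be verified in the query plan. A minimal sketch (the table and column names here are hypothetical):</p>

<pre class="wp-block-code"><code>-- Load an existing InnoDB table into the HeatWave secondary engine.
ALTER TABLE orders SECONDARY_LOAD;

-- The query itself does not change; EXPLAIN shows whether HeatWave is used:
-- "Using secondary engine RAPID" in the plan indicates offload.
EXPLAIN SELECT o_orderpriority, COUNT(*)
FROM orders
GROUP BY o_orderpriority;</code></pre>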



<p>HeatWave is based on a massively parallel architecture which uses partitioning of data: each CPU core within a node processes the partitioned data in parallel.</p>



<p>Driven by Machine Learning algorithms, <a href="https://www.oracle.com/heatwave/features/#autopilot">HeatWave Autopilot</a> offers several features such as:</p>



<ul class="wp-block-list">
<li>Improvements in terms of performance and scalability</li>



<li>Automation of provisioning, data loading, query execution and fault management, to reduce human errors.</li>
</ul>



<p>Finally, according to Oracle, HeatWave delivers better performance at lower cost than its competitors: Snowflake, Amazon Redshift, Google BigQuery and Databricks.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="768" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_104101-1024x768.jpg" alt="" class="wp-image-37543" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_104101-1024x768.jpg 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_104101-300x225.jpg 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_104101-768x576.jpg 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_104101-1536x1152.jpg 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_104101-2048x1536.jpg 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p><strong>Building Next-Gen Applications with Generative AI &amp; Vector Store</strong> (by <a href="https://www.linkedin.com/in/adihochmann/">Adi Hochmann</a>)</p>



<p>As mentioned above, <a href="https://www.oracle.com/heatwave/">Oracle HeatWave</a> lets you manage all your OLTP, OLAP, Machine Learning and GenAI workloads together.</p>



<p>The steps to build a <a href="https://www.oracle.com/heatwave/genai/">GenAI</a> application are:</p>



<ol class="wp-block-list">
<li>Create a vector store</li>



<li>Use vector store with LLMs</li>
</ol>



<p>And this is performed using the following routines:</p>



<p><code>CALL sys.HEATWAVE_LOAD(…);</code></p>



<p><code>call sys.ML_RAG(@query,@output,@options);</code></p>
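
<p>Put together, a minimal RAG round-trip could look like the sketch below (the schema name and the <code>vector_store</code> option shown here are illustrative; check the exact option names in the HeatWave GenAI documentation for your version):</p>

<pre class="wp-block-code"><code>-- Ask a question against a previously loaded vector store (names hypothetical).
SET @options = JSON_OBJECT('vector_store', JSON_ARRAY('genai_db.doc_embeddings'));
SET @query = 'How do I enable Lakehouse on my DB system?';
CALL sys.ML_RAG(@query, @output, @options);
SELECT @output;  -- generated answer plus retrieved context</code></pre>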



<p>But how do you train data for Machine Learning? The process and tasks performed by a data analyst can be complex, and here they are replaced by <a href="https://www.oracle.com/heatwave/automl/">AutoML</a>:</p>



<p><code>CALL sys.ML_TRAIN('data_source', 'model_type', JSON_OBJECT('task', 'classification'), @result_model);</code></p>



<p>This is useful for use-cases such as classification, anomaly detection, recommendations, predictive maintenance, …</p>
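
<p>For a classification use-case, for example, the trained model can then be loaded and applied to new rows. A hedged sketch following the routine signature shown above (the table, column and variable names are hypothetical):</p>

<pre class="wp-block-code"><code>-- Train a classifier on labelled data, then score a test table.
CALL sys.ML_TRAIN('demo_db.census_train', 'revenue',
                  JSON_OBJECT('task', 'classification'), @census_model);
CALL sys.ML_MODEL_LOAD(@census_model, NULL);
CALL sys.ML_PREDICT_TABLE('demo_db.census_test', @census_model,
                          'demo_db.census_predictions', NULL);</code></pre>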



<p>Additional tip: Adi used <a href="https://dev.mysql.com/doc/mysql-shell-gui/en/">MySQL Shell for VS Code</a> to run his demo. This extension enables interactive editing and execution of SQL for MySQL databases and MySQL Database Service. It integrates the MySQL shell directly into VS Code development workflows and it’s pretty nice!</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="768" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_110803-1024x768.jpg" alt="" class="wp-image-37544" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_110803-1024x768.jpg 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_110803-300x225.jpg 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_110803-768x576.jpg 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_110803-1536x1152.jpg 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_110803-2048x1536.jpg 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p><strong>Oracle Cloud for Digital Natives: Supporting Innovation and Growth</strong> (by <a href="https://www.linkedin.com/in/claire-binder-0a3543/">Claire Binder</a>)</p>



<p>What are the five reasons why Digital Natives picked <a href="https://www.oracle.com/ch-fr/cloud/">OCI</a>?</p>



<ol class="wp-block-list">
<li>Developer-First openness and flexibility, to speed acquisition</li>



<li>Advanced Data &amp; AI Services, to achieve innovation and agility</li>



<li>Technical and global reach, to achieve scalability</li>



<li>Security, compliance and resilience, to control risks</li>



<li>Cost efficiency and TCO, to achieve optimized spending.</li>
</ol>



<p>Linked to point 4, there are several services in OCI to avoid data breaches, in terms of prevention, monitoring, mitigation, protection, encryption and access.</p>



<p>When selecting a Cloud provider, the recommendation is to choose a solution that allows you to run converged open SQL databases instead of single-use proprietary databases.</p>



<p>And finally, Oracle brings AI to your data with its new <a href="https://www.oracle.com/database/23ai/">23ai</a> release and some of its features, such as Property Graphs and AI Vector Search.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="768" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_124252-1024x768.jpg" alt="" class="wp-image-37545" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_124252-1024x768.jpg 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_124252-300x225.jpg 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_124252-768x576.jpg 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_124252-1536x1152.jpg 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_124252-2048x1536.jpg 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p><strong>Analytics at the speed of thoughts with HeatWave Lakehouse</strong> (by <a href="https://www.linkedin.com/in/kunalnitin/">Nitin Kunal</a>)</p>



<p>What is a Data Lake? It’s cost-efficient, scalable, online storage of data as files (for instance, the Object Store). The data is unstructured and non-transactional, and can be sourced from Big Data frameworks.</p>



<p>Again, you could have 4 different platforms to maintain:</p>



<ul class="wp-block-list">
<li>Your RDBMS</li>



<li>A DWH system for analytics processing</li>



<li>A Data Lake</li>



<li>An ML &amp; GenAI system.</li>
</ul>



<p>Instead of that, you can merge everything into a single platform: <a href="https://www.oracle.com/heatwave/">HeatWave</a>. And you can query near-real-time data with <a href="https://www.oracle.com/heatwave/lakehouse/">HeatWave Lakehouse</a> because new data is available in seconds: that&#8217;s great!</p>
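
<p>As a sketch of how a Lakehouse table maps onto files in the Object Store (the bucket URL, file format and schema below are hypothetical; the exact <code>ENGINE_ATTRIBUTE</code> layout is described in the HeatWave Lakehouse documentation):</p>

<pre class="wp-block-code"><code>-- External table over CSV files in Object Storage, queried through HeatWave.
CREATE TABLE lake_db.trips (
  pickup_time DATETIME,
  fare        DECIMAL(8,2)
) ENGINE=lakehouse SECONDARY_ENGINE=rapid
  ENGINE_ATTRIBUTE='{"file": [{"par": "https://objectstorage.example.com/p/.../trips.csv"}],
                     "dialect": {"format": "csv"}}';

-- Load the schema into the HeatWave cluster.
SET @db_list = JSON_ARRAY('lake_db');
CALL sys.HEATWAVE_LOAD(@db_list, NULL);</code></pre>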



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="768" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_130126-1024x768.jpg" alt="" class="wp-image-37546" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_130126-1024x768.jpg 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_130126-300x225.jpg 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_130126-768x576.jpg 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_130126-1536x1152.jpg 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2025/03/20250306_130126-2048x1536.jpg 2048w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p><strong>Conclusion</strong></p>



<ol class="wp-block-list">
<li>If you have mixed workloads, if you are starting to work with AI, and if you want to improve your performance, it is really worth taking a look at <a href="https://www.oracle.com/heatwave/">Oracle HeatWave</a>. You can try it <a href="https://www.oracle.com/heatwave/free/?source=:ow:o:p:nav:092321MySQLHero&amp;intcmp=:ow:o:p:nav:092321MySQLHero">here</a> for free.</li>



<li>We all know that AI is the future. In the coming years, we&#8217;ll be more and more challenged on GenAI, ML, vector processing and so on. With all this innovation, we must not lose sight of topics that remain crucial (and perhaps become even more important), such as security, reliability, availability and best practices. With <a href="https://www.dbi-services.com/">dbi services</a> and <a href="https://www.sequotech.com/">Sequotech</a> we can certainly help you with this transition.</li>
</ol>



<p>The article <a href="https://www.dbi-services.com/blog/oracle-technology-roundtable-for-digital-natives-lets-have-a-look-at-ai-cloud-and-heatwave/">Oracle Technology Roundtable for Digital Natives &#8211; Let&#8217;s have a look at AI, Cloud and HeatWave</a> first appeared on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/oracle-technology-roundtable-for-digital-natives-lets-have-a-look-at-ai-cloud-and-heatwave/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>The 1st Day at KubeCon &#038; CloudNativeCon 2023 in Amsterdam</title>
		<link>https://www.dbi-services.com/blog/the-1st-day-at-kubecon-cloudnativecon-in-amsterdam/</link>
					<comments>https://www.dbi-services.com/blog/the-1st-day-at-kubecon-cloudnativecon-in-amsterdam/#respond</comments>
		
		<dc:creator><![CDATA[Arnaud Berbier]]></dc:creator>
		<pubDate>Wed, 19 Apr 2023 18:16:13 +0000</pubDate>
				<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[MySQL]]></category>
		<category><![CDATA[argoCD]]></category>
		<category><![CDATA[cloudnative]]></category>
		<category><![CDATA[GitLab]]></category>
		<category><![CDATA[kubecon]]></category>
		<category><![CDATA[kubernetes]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=24685</guid>

					<description><![CDATA[<p>The KubeCon, around the Kubernetes technology, was one of the events I had dreamed of attending since I started focusing on cloud-native solutions. I had the great opportunity to attend KubeCon &#38; CloudNativeCon in Amsterdam with my colleague Benoît Entzmann. There are now CNCF-hosted co-located events adding more topics and interesting sessions. These are community-driven, [&#8230;]</p>
<p>The article <a href="https://www.dbi-services.com/blog/the-1st-day-at-kubecon-cloudnativecon-in-amsterdam/">The 1st Day at KubeCon &amp; CloudNativeCon 2023 in Amsterdam</a> first appeared on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>The KubeCon, around the Kubernetes technology, was one of the events I had dreamed of attending since I started focusing on cloud-native solutions. I had the great opportunity to attend KubeCon &amp; CloudNativeCon in Amsterdam with my colleague <strong>Benoît Entzmann</strong>.</p>



<p>There are now CNCF-hosted co-located events adding more topics and interesting sessions. These are community-driven, vendor-neutral events hosted and managed by the CNCF. Argo CD, Cilium and Linkerd now have their own conferences. Here is the list of CNCF-hosted co-located events.</p>



<figure class="wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-3 is-layout-flex wp-block-gallery-is-layout-flex">
<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="484" data-id="24686" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2023/04/Screenshot-2023-04-19-at-14.14.01-1024x484.png" alt="" class="wp-image-24686" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2023/04/Screenshot-2023-04-19-at-14.14.01-1024x484.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2023/04/Screenshot-2023-04-19-at-14.14.01-300x142.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2023/04/Screenshot-2023-04-19-at-14.14.01-768x363.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2023/04/Screenshot-2023-04-19-at-14.14.01.png 1370w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>
</figure>



<p>Unfortunately, all the above events were already sold out when we decided to come to KubeCon.</p>



<p>During the 1<sup>st</sup> day keynote, Chris Aniszczyk, CTO of the CNCF, mentioned that this year&#8217;s growth in the number of participants was just amazing, reaching more than 10&#8217;000 people. The number of CNCF projects has also increased a lot. I feel the main message of this keynote is that there is a real need for contributors to help maintainers &#8211; more projects but, it seems, fewer contributors.</p>



<figure class="wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-4 is-layout-flex wp-block-gallery-is-layout-flex">
<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="648" data-id="24698" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2023/04/Screenshot-2023-04-19-at-16.27.19-1024x648.png" alt="" class="wp-image-24698" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2023/04/Screenshot-2023-04-19-at-16.27.19-1024x648.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2023/04/Screenshot-2023-04-19-at-16.27.19-300x190.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2023/04/Screenshot-2023-04-19-at-16.27.19-768x486.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2023/04/Screenshot-2023-04-19-at-16.27.19.png 1280w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>
</figure>



<p>The keynote was interesting; I learned that there are new certifications available in the CNCF certification path.</p>



<figure class="wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-5 is-layout-flex wp-block-gallery-is-layout-flex">
<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="605" data-id="24697" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2023/04/Screenshot-2023-04-19-at-16.25.55-1024x605.png" alt="" class="wp-image-24697" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2023/04/Screenshot-2023-04-19-at-16.25.55-1024x605.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2023/04/Screenshot-2023-04-19-at-16.25.55-300x177.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2023/04/Screenshot-2023-04-19-at-16.25.55-768x454.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2023/04/Screenshot-2023-04-19-at-16.25.55.png 1218w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>
</figure>



<p>I also learned that GitLab chose Flux for its GitOps capabilities. It&#8217;s already covered in the GitLab documentation <a href="https://docs.gitlab.com/ee/user/clusters/agent/gitops.html">here</a>. See as well <a href="https://about.gitlab.com/blog/2023/02/08/why-did-we-choose-to-integrate-fluxcd-with-gitlab/">this blog announcement</a>.</p>



<p>I noticed a project that I would like to follow: <strong>the scalable, reliable, MySQL-compatible, cloud-native database</strong>. I’m quite sure my database colleagues Elisa Usai &amp; Saïd Mendi would love to present it during our next dbi xChange event. I would encourage them to have a look at <a href="https://vitess.io/">vitess.io</a>.</p>



<p>After the keynotes, I participated in the following sessions:</p>



<ul class="wp-block-list">
<li>Learn the Helm Code Base and PR Review Process</li>



<li>Emissary-Ingress: Self-Service APIs and the Kubernetes Gateway API</li>



<li>Argo CD Core &#8211; A Pure GitOps Agent for Kubernetes</li>



<li>How to Turn Release Management from Duty to Fun: Lessons Learned Building the Cluster API Release Team</li>



<li>Highly Available Routing with Multi Cluster Gateways</li>
</ul>



<p>Here is some information about these sessions.</p>



<p><strong>Learn the Helm Code Base and PR Review Process</strong>. </p>



<p>Scott Rigby, Andrew Block &amp; Karena Angell held the session and quickly explained what Helm is. They focused the session on drawing attention to the need for contributors: anybody can help, not only by developing but also by reviewing PRs, writing documentation, triaging bugs and doing other tasks that may not require a high level of expertise. We explored the git repository, with each directory explained, and saw how the source code of the Helm tool is organized. It was interesting but, to be honest, I was expecting a description of which new features are in the pipeline or other cool stuff around the future of Helm. It seems the maintainers are already overloaded and, beyond maintaining the code, there are no real new features &#8211; or at least none were covered during this session.</p>



<figure class="wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-6 is-layout-flex wp-block-gallery-is-layout-flex">
<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="740" data-id="24699" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2023/04/Screenshot-2023-04-19-at-16.28.36-1024x740.png" alt="" class="wp-image-24699" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2023/04/Screenshot-2023-04-19-at-16.28.36-1024x740.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2023/04/Screenshot-2023-04-19-at-16.28.36-300x217.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2023/04/Screenshot-2023-04-19-at-16.28.36-768x555.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2023/04/Screenshot-2023-04-19-at-16.28.36.png 1188w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>
</figure>



<p><strong>Emissary-Ingress: Self-Service APIs and the Kubernetes Gateway API </strong></p>



<p>Lance Austin and Flynn (Buoyant) provided an update on Emissary-ingress, including the new features added since it was introduced in Detroit. They also explained the need for self-service, developer-centric configuration tools for APIs.</p>



<figure class="wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-7 is-layout-flex wp-block-gallery-is-layout-flex">
<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="946" height="560" data-id="24700" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2023/04/Screenshot-2023-04-19-at-16.30.14.png" alt="" class="wp-image-24700" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2023/04/Screenshot-2023-04-19-at-16.30.14.png 946w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2023/04/Screenshot-2023-04-19-at-16.30.14-300x178.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2023/04/Screenshot-2023-04-19-at-16.30.14-768x455.png 768w" sizes="auto, (max-width: 946px) 100vw, 946px" /></figure>
</figure>



<p><strong>Argo CD Core &#8211; A Pure GitOps Agent for Kubernetes</strong></p>



<p>The speakers, in particular the co-creator of Argo CD, described the Argo CD component architecture, showing the internal layers and explaining that the core is decoupled from the UI. The core can easily be used by Kubernetes administrators to deploy several applications across multiple clusters. A demonstration showed how a Kubernetes admin could use Argo CD as a pure GitOps agent.</p>



<figure class="wp-block-gallery has-nested-images columns-default is-cropped wp-block-gallery-8 is-layout-flex wp-block-gallery-is-layout-flex">
<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="653" data-id="24712" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2023/04/Screenshot-2023-04-19-at-19.39.12-1024x653.png" alt="" class="wp-image-24712" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2023/04/Screenshot-2023-04-19-at-19.39.12-1024x653.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2023/04/Screenshot-2023-04-19-at-19.39.12-300x191.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2023/04/Screenshot-2023-04-19-at-19.39.12-768x490.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2023/04/Screenshot-2023-04-19-at-19.39.12.png 1230w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>
</figure>



<p>I think you now have an overview of the sessions I&#8217;ve followed today; it&#8217;s a packed program during KubeCon &amp; CloudNativeCon. I had to choose between several streams, as there were other interesting sessions running in parallel. I&#8217;m really happy to be part of this event, but now it&#8217;s time to grab a beer with my colleague Benoît and see what we&#8217;ll do tonight &#8211; most probably visiting Amsterdam.</p>
<p>The article <a href="https://www.dbi-services.com/blog/the-1st-day-at-kubecon-cloudnativecon-in-amsterdam/">The 1st Day at KubeCon &amp; CloudNativeCon 2023 in Amsterdam</a> first appeared on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/the-1st-day-at-kubecon-cloudnativecon-in-amsterdam/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>ODA : Do You Know The MOVE Table In MySQL DB Repository</title>
		<link>https://www.dbi-services.com/blog/oda-do-you-know-the-move-table-in-mysql-db-repository/</link>
					<comments>https://www.dbi-services.com/blog/oda-do-you-know-the-move-table-in-mysql-db-repository/#comments</comments>
		
		<dc:creator><![CDATA[Oracle Team]]></dc:creator>
		<pubDate>Tue, 08 Nov 2022 15:58:18 +0000</pubDate>
				<category><![CDATA[Database Administration & Monitoring]]></category>
		<category><![CDATA[MySQL]]></category>
		<category><![CDATA[Operating systems]]></category>
		<category><![CDATA[Oracle]]></category>
		<category><![CDATA[MOVE GHSUSER23]]></category>
		<category><![CDATA[ODA]]></category>
		<category><![CDATA[ODA HA]]></category>
		<category><![CDATA[ODA MYSQL]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=20415</guid>

					<description><![CDATA[<p>By Mouhamadou Diaw During a consulting engagement at a customer, we faced the following issue when trying to delete a dbhome. Message:&#160;DCS-10001:Internal error encountered: PRGO-2470 : Working copy “OraDB19000_home1” is involved in an incomplete move or upgrade operation. The result of the job is shown below. After researching on the net, we found the following Oracle [&#8230;]</p>
<p>The article <a href="https://www.dbi-services.com/blog/oda-do-you-know-the-move-table-in-mysql-db-repository/">ODA : Do You Know The MOVE Table In MySQL DB Repository</a> appeared first on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p><strong>By Mouhamadou Diaw</strong></p>






<p>During a consulting engagement at a customer, we faced the following issue when trying to delete a dbhome:</p>



<p><em>Message:&nbsp; DCS-10001:Internal error encountered: PRGO-2470 : Working copy “OraDB19000_home1” is involved in an incomplete move or upgrade operation</em></p>



<p>The result of the job is shown below:</p>



<pre class="wp-block-code"><code>&#091;2022-11-08 10:53:35 root@odaserverb]# odacli describe-job -i 671b5899-02ac-45ff-b5af-ee254ef0bc72

Job details
----------------------------------------------------------------
                     ID:  671b5899-02ac-45ff-b5af-ee254ef0bc72
            Description:  Database Home OraDB19000_home1 Deletion with id a1bfe23e-2569-407b-8b87-7af9f9f586bf
                 Status:  Failure
                Created:  October 26, 2022 6:45:49 AM CEST
                Message:  DCS-10001:Internal error encountered: PRGO-2470 : Working copy "OraDB19000_home1" is involved in an incomplete move or upgrade operation..

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
DbHome service deletion for a1bfe23e-2569-407b-8b87-7af9f9f586bf October 26, 2022 6:45:49 AM CEST    October 26, 2022 6:46:02 AM CEST    Failure
DbHome service deletion for a1bfe23e-2569-407b-8b87-7af9f9f586bf October 26, 2022 6:45:49 AM CEST    October 26, 2022 6:46:02 AM CEST    Failure
Validate dbhome a1bfe23e-2569-407b-8b87-7af9f9f586bf for deletion October 26, 2022 6:45:50 AM CEST    October 26, 2022 6:45:50 AM CEST    Success
Setting up ssh equivalance               October 26, 2022 6:45:52 AM CEST    October 26, 2022 6:45:55 AM CEST    Success
Setting up ssh equivalance               October 26, 2022 6:45:55 AM CEST    October 26, 2022 6:45:58 AM CEST    Success
Deleting DbHome by RHP                   October 26, 2022 6:45:58 AM CEST    October 26, 2022 6:46:02 AM CEST    Failure
</code></pre>



<p>After researching on the net, we found the following Oracle document:</p>



<p><em>ODA: Create-prepatchreport fails with PRGP-1005 &#8211; cannot specify databases (Doc ID 2900263.1)</em></p>



<p>This document is not about the same issue, but it mentions that the MOVE table in the MySQL repository database should be empty in a normal state.</p>



<p>According to this document, if this table contains rows, it indicates that a move has failed in the past. Indeed, we had performed a dbhome upgrade which failed.</p>



<p>So we decided to follow the steps described in this document and then retry the dbhome deletion.</p>



<p><strong>Please note that we performed these steps at our own risk: it is up to you to decide whether or not to execute the following commands without Oracle Support guidance.</strong></p>



<p>As we are using an HA ODA, we performed the following tasks on both nodes.</p>



<p>We first connect to the MySQL metadata repository:</p>



<pre class="wp-block-code"><code>&#091;2022-11-08 10:54:07 root@odaserverb]# cd /opt/oracle/dcs/mysql/bin
&#091;2022-11-08 11:31:58 root@odaserverb]# ./mysql -u root --socket=/opt/oracle/dcs/mysql/log/mysqldb.sock
</code></pre>



<p>We then locate the database containing the MOVE table:</p>



<pre class="wp-block-code"><code>mysql&gt; select table_schema as database_name, table_name
    -&gt; from information_schema.tables
    -&gt; where table_type = 'BASE TABLE'
    -&gt;     and lower(table_name) like '%move%'
    -&gt; order by table_schema,
    -&gt;     table_name;
+---------------+------------+
| database_name | TABLE_NAME |
+---------------+------------+
| GHSUSER23     | MOVE       |
+---------------+------------+
1 row in set (0.00 sec)
</code></pre>



<p>Let’s connect to the GHSUSER23 database and query the MOVE table. We can see that the table is not empty:</p>



<pre class="wp-block-code"><code>mysql&gt;use GHSUSER23;

mysql&gt; select NAME,SRCWC,SRCHOME,DSTWC,DSTHOME from MOVE;
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------+------------------------------------------------------+------------------+------------------------------------------------------+
| NAME                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         | SRCWC            | SRCHOME                                              | DSTWC            | DSTHOME                                              |
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------+------------------------------------------------------+------------------+------------------------------------------------------+
| 0xACED0005737200116A6176612E7574696C2E48617368536574BA44859596B8B7340300007870770C000000103F400000000000017400084F4643434D50524578                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           | OraDB19000_home1 | /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1 | OraDB19000_home2 | /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_2 |
| 0xACED0005737200116A6176612E7574696C2E48617368536574BA44859596B8B7340300007870770C000000023F400000000000017401354F4643434D5052453B4F7261444231393030305F686F6D65313B2F7530312F6170702F6F64616F7261686F6D652F6F7261636C652F70726F647563742F31392E302E302E302F6462686F6D655F313B4F7261444231393030305F686F6D65323B2F7530312F6170702F6F64616F7261686F6D652F6F7261636C652F70726F647563742F31392E302E302E302F6462686F6D655F323B6F7261636C653B31392E302E302E302E303B5241433B747275653B4E4F545F5350454349464945443B4E4F545F5350454349464945443B66616C73653B4E4F545F5350454349464945443B66616C73653B66616C73653B69726973646576707265613A44425F504F53545F55534552414354494F4E535F535543434553532B69726973646576707265623A53544152545F534552564943455F5355434345535378 | OraDB19000_home1 | /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1 | OraDB19000_home2 | /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_2 |
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------+------------------------------------------------------+------------------+------------------------------------------------------+
2 rows in set (0.00 sec)
</code></pre>
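<p>As a side note, the NAME values are hex strings starting with 0xACED0005, the magic bytes of the Java serialization format, so each row stores a serialized Java object. The following Python sketch (a hypothetical inspection helper, not part of any ODA tooling) extracts the readable fragments:</p>

```python
import re

def printable_strings(hex_blob, min_len=4):
    """Decode a hex-encoded blob and pull out runs of printable ASCII."""
    data = bytes.fromhex(hex_blob.removeprefix("0x"))
    return re.findall(rb"[ -~]{%d,}" % min_len, data)

# First NAME value from the MOVE table above
name = ("0xACED0005737200116A6176612E7574696C2E48617368536574"
        "BA44859596B8B7340300007870770C000000103F40000000000001"
        "7400084F4643434D50524578")
print(printable_strings(name))  # includes b'java.util.HashSet'
```

<p>Run against the second, longer row, the same helper reveals the source and destination home names and paths embedded in the serialized object.</p>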



<p>We decided to empty the MOVE table, but first we made a copy of it:</p>



<pre class="wp-block-code"><code>mysql&gt; create table MOVE_BCK as select * from MOVE;

mysql&gt; select NAME,SRCWC,SRCHOME,DSTWC,DSTHOME from MOVE_BCK;
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------+------------------------------------------------------+------------------+------------------------------------------------------+
| NAME                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         | SRCWC            | SRCHOME                                              | DSTWC            | DSTHOME                                              |
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------+------------------------------------------------------+------------------+------------------------------------------------------+
| 0xACED0005737200116A6176612E7574696C2E48617368536574BA44859596B8B7340300007870770C000000103F400000000000017400084F4643434D50524578                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                           | OraDB19000_home1 | /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1 | OraDB19000_home2 | /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_2 |
| 0xACED0005737200116A6176612E7574696C2E48617368536574BA44859596B8B7340300007870770C000000023F400000000000017401354F4643434D5052453B4F7261444231393030305F686F6D65313B2F7530312F6170702F6F64616F7261686F6D652F6F7261636C652F70726F647563742F31392E302E302E302F6462686F6D655F313B4F7261444231393030305F686F6D65323B2F7530312F6170702F6F64616F7261686F6D652F6F7261636C652F70726F647563742F31392E302E302E302F6462686F6D655F323B6F7261636C653B31392E302E302E302E303B5241433B747275653B4E4F545F5350454349464945443B4E4F545F5350454349464945443B66616C73653B4E4F545F5350454349464945443B66616C73653B66616C73653B69726973646576707265613A44425F504F53545F55534552414354494F4E535F535543434553532B69726973646576707265623A53544152545F534552564943455F5355434345535378 | OraDB19000_home1 | /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1 | OraDB19000_home2 | /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_2 |
+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+------------------+------------------------------------------------------+------------------+------------------------------------------------------+
2 rows in set (0.00 sec)
</code></pre>



<p>And then we deleted the rows:</p>



<pre class="wp-block-code"><code>mysql&gt; delete from MOVE;
Query OK, 2 rows affected (0.00 sec)

mysql&gt; commit;
Query OK, 0 rows affected (0.00 sec)
</code></pre>



<p>We then retried the dbhome deletion. First we list the existing dbhomes:</p>



<pre class="wp-block-code"><code>&#091;2022-11-08 12:05:19 root@odaservera]# odacli list-dbhomes;

ID                                       Name                 DB Version                               Home Location                                 Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
a1bfe23e-2569-407b-8b87-7af9f9f586bf     OraDB19000_home1     19.14.0.0.220118                         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_1 FAILED
06d87fc9-7ebb-4944-aefa-a18f82100506     OraDB19000_home2     19.16.0.0.220719                         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_2 CONFIGURED
</code></pre>



<p>Then we run the delete-dbhome command on one node:</p>



<pre class="wp-block-code"><code>&#091;2022-11-08 12:05:25 root@odaservera]# odacli delete-dbhome -i a1bfe23e-2569-407b-8b87-7af9f9f586bfAnd the</code></pre>



<p>A few minutes later, the job finished successfully:</p>



<pre class="wp-block-code"><code>&#091;2022-11-08 12:07:33 root@odaservera]# odacli describe-job -i 79ec29f2-24c8-4bc1-a674-695e81127f0a

Job details
----------------------------------------------------------------
                     ID:  79ec29f2-24c8-4bc1-a674-695e81127f0a
            Description:  Database Home OraDB19000_home1 Deletion with id a1bfe23e-2569-407b-8b87-7af9f9f586bf
                 Status:  Success
                Created:  November 8, 2022 12:05:41 PM CET
                Message:

Task Name                                Start Time                          End Time                            Status
---------------------------------------- ----------------------------------- ----------------------------------- ----------
Validate dbhome a1bfe23e-2569-407b-8b87-7af9f9f586bf for deletion November 8, 2022 12:05:41 PM CET    November 8, 2022 12:05:41 PM CET    Success
Setting up ssh equivalance               November 8, 2022 12:05:43 PM CET    November 8, 2022 12:05:46 PM CET    Success
Setting up ssh equivalance               November 8, 2022 12:05:46 PM CET    November 8, 2022 12:05:49 PM CET    Success
Deleting DbHome by RHP                   November 8, 2022 12:05:49 PM CET    November 8, 2022 12:07:53 PM CET    Success
</code></pre>



<p>And we validate with list-dbhomes:</p>



<pre class="wp-block-code"><code>&#091;2022-11-08 12:07:57 root@odaservera]# odacli list-dbhomes;

ID                                       Name                 DB Version                               Home Location                                 Status
---------------------------------------- -------------------- ---------------------------------------- --------------------------------------------- ----------
06d87fc9-7ebb-4944-aefa-a18f82100506     OraDB19000_home2     19.16.0.0.220719                         /u01/app/odaorahome/oracle/product/19.0.0.0/dbhome_2 CONFIGURED
</code></pre>



<h2 class="wp-block-heading">Conclusion</h2>



<p>We hope this blog helps if you face the same PRGO-2470 error when deleting a dbhome on your ODA.</p>
<p>The article <a href="https://www.dbi-services.com/blog/oda-do-you-know-the-move-table-in-mysql-db-repository/">ODA : Do You Know The MOVE Table In MySQL DB Repository</a> appeared first on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/oda-do-you-know-the-move-table-in-mysql-db-repository/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>MySQL Server on Microsoft Azure 3rd part (backup and recovery)</title>
		<link>https://www.dbi-services.com/blog/mysql-server-on-microsoft-azure-3rd-part-backup-and-recovery/</link>
					<comments>https://www.dbi-services.com/blog/mysql-server-on-microsoft-azure-3rd-part-backup-and-recovery/#respond</comments>
		
		<dc:creator><![CDATA[Grégory Steulet]]></dc:creator>
		<pubDate>Mon, 22 Aug 2022 11:48:33 +0000</pubDate>
				<category><![CDATA[Azure]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[MySQL]]></category>
		<category><![CDATA[Azure backup]]></category>
		<category><![CDATA[PITR]]></category>
		<category><![CDATA[Recovery process]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=18514</guid>

					<description><![CDATA[<p>Presentation of the Backup Recovery solution provided by Microsoft Azure for Azure Database for MySQL Server</p>
<p>The article <a href="https://www.dbi-services.com/blog/mysql-server-on-microsoft-azure-3rd-part-backup-and-recovery/">MySQL Server on Microsoft Azure 3rd part (backup and recovery)</a> appeared first on <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="293" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-001-2-1024x293.png" alt="" class="wp-image-18471" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-001-2-1024x293.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-001-2-300x86.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-001-2-768x220.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-001-2.png 1345w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">Azure Database for MySQL</figcaption></figure>



<h2 class="wp-block-heading">Introduction</h2>



<p>This blog is the third chapter in a series about deploying a MySQL infrastructure on the Azure cloud. In <a href="https://www.dbi-services.com/blog/mysql-server-on-microsoft-azure-2nd-part-performance-tests/">addition to performance</a>, we should also consider backup and restore capabilities. The objective of this blog is to present the main backup and restore possibilities offered by Azure through a simple example, and to show a second backup/restore approach using the MySQL Shell dump utilities.</p>



<h2 class="wp-block-heading">Backup window and mechanism</h2>



<p>Azure Database for MySQL Flexible Server has a default server backup retention period of 7 days. This retention period can be extended up to 35 days or shortened to 1 day. In addition, we can decide whether we want geo-redundant backup storage. By default, the backups are locally redundant.</p>



<p>It&#8217;s important to understand that Azure backs up the whole server, rather than taking logical backups of the MySQL Server with mysqldump, MySQL Enterprise Backup or any other solution. These backups can only be used to restore MySQL Server into another Azure Database for MySQL Server, which means they cannot be exported to rebuild a database on an on-premises server, for instance. If we want to extract part of our database in order to export data, we can use <a href="https://dev.mysql.com/doc/refman/8.0/en/mysqldump.html" target="_blank" rel="noreferrer noopener">mysqldump</a>, <a href="https://dev.mysql.com/doc/mysql-shell/8.0/en/mysql-shell-utilities-dump-instance-schema.html" target="_blank" rel="noreferrer noopener">MySQL Shell&#8217;s instance dump utility</a> or the set of tools provided by <a href="https://dev.mysql.com/doc/mysql-shell/8.0/en/mysql-shell-utilities-dump-instance-schema.html" target="_blank" rel="noreferrer noopener">MySQL Shell</a>.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="725" height="182" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-031.png" alt="" class="wp-image-18515" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-031.png 725w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-031-300x75.png 300w" sizes="auto, (max-width: 725px) 100vw, 725px" /><figcaption class="wp-element-caption">Backup Retention period configuration and Redundancy options</figcaption></figure>



<p>The backups provided by Azure can be used to perform a Point In Time Recovery of the server with a granularity of 5 minutes, since system snapshots are taken automatically every 5 minutes. As specified in the documentation, the backups are encrypted using 256-bit AES.</p>
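<p>To make that granularity concrete: for a requested restore time, the most recent usable snapshot is at most 5 minutes older. Here is a minimal Python sketch of the arithmetic, assuming for illustration only that snapshots land exactly on 5-minute wall-clock boundaries (Azure does not guarantee this):</p>

```python
from datetime import datetime

def previous_snapshot(target, interval_min=5):
    """Round a requested restore time down to the previous snapshot boundary."""
    return target.replace(minute=(target.minute // interval_min) * interval_min,
                          second=0, microsecond=0)

# A table dropped at 20:48:44 can be recovered from a snapshot no older than:
print(previous_snapshot(datetime(2022, 8, 18, 20, 48, 44)))  # 2022-08-18 20:45:00
```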



<h3 class="wp-block-heading">Backup and restore costs</h3>



<p>As explained on the Microsoft website:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p><em>&#8220;Backup storage is the storage associated with automated backups of your server.&nbsp;<strong>Increasing your backup retention period increases the backup storage that is consumed by your server</strong>. There is no additional charge for backup storage for up to 100% of your total provisioned server storage. Additional consumption of backup storage will be charged in GB/month.&#8221;</em> &#8211; https://azure.microsoft.com/en-us/pricing/details/mysql/flexible-server/</p>
</blockquote>



<p>Increasing the backup retention may have an impact on pricing. We can get an idea of the overall costs related to Azure Database for MySQL at the following URL: <a href="https://azure.microsoft.com/en-us/pricing/details/mysql/flexible-server/" target="_blank" rel="noreferrer noopener">https://azure.microsoft.com/en-us/pricing/details/mysql/flexible-server/</a></p>
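<p>To illustrate the billing rule quoted above, here is a minimal Python sketch. The price per GB/month is a made-up placeholder; the actual rates are on the Azure pricing page:</p>

```python
def backup_storage_cost(backup_gb, provisioned_gb, price_per_gb_month):
    """Backup storage is free up to 100% of the provisioned server storage;
    only the excess is billed per GB/month."""
    billable_gb = max(0.0, backup_gb - provisioned_gb)
    return billable_gb * price_per_gb_month

# 150 GB of retained backups on a 100 GB server at an illustrative $0.10/GB/month
print(backup_storage_cost(150, 100, 0.10))  # 5.0
```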



<h2 class="wp-block-heading">Recover a database from Azure Backup Restore interface</h2>



<p>In this first test, we will simply use the recovery functionalities provided by Azure Database for MySQL flexible server. We will simulate a user error by dropping a table, and we will restore the entire server (as it is not possible to recover just a single database or table using Azure features).</p>



<ol class="wp-block-list">
<li><strong>Dropping a table by mistake</strong></li>
</ol>



<pre class="wp-block-code"><code> MySQL  albatroz.mysql.database.azure.com:3306 ssl  SQL &gt; SELECT CURRENT_TIMESTAMP ;
+---------------------+
| CURRENT_TIMESTAMP   |
+---------------------+
| 2022-08-18 20:48:44 |
+---------------------+
1 row in set (0.1002 sec)
 MySQL  albatroz.mysql.database.azure.com:3306 ssl  SQL &gt; drop table sysbench.sbtest1;
Query OK, 0 rows affected (0.1884 sec)</code></pre>



<ol class="wp-block-list" start="2">
<li><strong>Restoring the MySQL Server to a time before the mistake</strong></li>
</ol>



<p>We first have to go to the &#8220;backup/restore&#8221; menu of our Azure Database for MySQL flexible server and select the backup set that we want to restore. As we can see, a backup set is taken every day. In our current context, we want to use the most recent backup set (Automated backup #5).</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="914" height="226" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-032.png" alt="" class="wp-image-18597" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-032.png 914w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-032-300x74.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-032-768x190.png 768w" sizes="auto, (max-width: 914px) 100vw, 914px" /><figcaption class="wp-element-caption">Azure Database for MySQL flexible server backupset</figcaption></figure>



<p>Once the backup set is selected, a screen appears showing the server restore options. It gives us the possibility to perform a Point In Time Restore (PITR) of our server by choosing between 3 options:</p>



<ul class="wp-block-list">
<li><em>Latest restore point (Now)</em></li>



<li><em>Select a custom restore point</em></li>



<li><em>Select fastest restore point (Restore using full backup)</em></li>
</ul>



<p>In our case, we will use the &#8220;<em>Select a custom restore point</em>&#8221; option, as shown in the screenshot below. We define the custom restore time to just before the mistake and specify a name for the restored server.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="743" height="523" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-033.png" alt="" class="wp-image-18598" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-033.png 743w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-033-300x211.png 300w" sizes="auto, (max-width: 743px) 100vw, 743px" /><figcaption class="wp-element-caption">Restore of a Server using a custom restore point</figcaption></figure>



<p>Once the restore was requested, it took approximately 5 minutes to deploy the new server.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="840" height="301" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-034.png" alt="" class="wp-image-18600" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-034.png 840w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-034-300x108.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-034-768x275.png 768w" sizes="auto, (max-width: 840px) 100vw, 840px" /></figure>



<ol class="wp-block-list" start="3">
<li><strong>Let&#8217;s check if my table is back on the new deployed server</strong></li>
</ol>



<p>Finally, we simply have to connect to the newly restored server and check whether the dropped table is back. Of course, we can also export this table from the restored server and import it into the original server using mysqldump.</p>



<pre class="wp-block-code"><code> MySQL  albatrozrestored.mysql.database.azure.com:3306 ssl  sysbench  SQL &gt; show tables from sysbench like '%1';
+-------------------------+
| Tables_in_sysbench (%1) |
+-------------------------+
| sbtest1                 |
+-------------------------+
1 row in set (0.1048 sec)</code></pre>



<ol class="wp-block-list" start="4">
<li><strong>Export/import the table from the restored server</strong></li>
</ol>



<p>Now that the server is restored, we can export the table that was dropped by mistake using <code><em>util.dumpTables()</em></code> and import it into the albatroz server using <code><em>util.loadDump()</em></code>. The process is rather simple, as you can see below:</p>



<p><strong>Export from the recovered server</strong> <strong>(albatrozrestored)</strong></p>



<pre class="wp-block-code"><code>MySQL  albatrozrestored.mysql.database.azure.com:3306 ssl  sysbench  JS &gt; util.dumpTables("sysbench", &#091; "sbtest1"], "C:/Users/grs/Albatroz-Sysbench-sbtest1");
NOTE: Backup lock is not available to the account 'grs'@'%' and DDL changes will not be blocked. The dump may fail with an error if schema changes are made while dumping.
Acquiring global read lock
Global read lock acquired
Initializing - done
...
...
109% (15.29K rows / ~13.98K rows), 0.00 rows/s, 0.00 B/s uncompressed, 0.00 B/s compressed
Dump duration: 00:00:01s
Total duration: 00:00:06s
Schemas dumped: 1
Tables dumped: 1
Uncompressed data size: 2.93 MB
Compressed data size: 1.33 MB
Compression ratio: 2.2
Rows written: 15294
Bytes written: 1.33 MB
Average uncompressed throughput: 2.35 MB/s
Average compressed throughput: 1.07 MB/s</code></pre>



<p><strong>Import on Albatroz server</strong></p>



<pre class="wp-block-code"><code>MySQL  albatroz.mysql.database.azure.com:3306 ssl  JS &gt; util.loadDump("C:/Users/grs/Albatroz-Sysbench-sbtest1", {schema: "sysbench"});
Loading DDL and Data from 'C:/Users/grs/Albatroz-Sysbench-sbtest1' using 4 threads.
Opening dump...
Target is MySQL 8.0.28. Dump was produced from MySQL 8.0.28
Scanning metadata - done
Checking for pre-existing objects...
Executing common preamble SQL
Executing DDL - done
Executing view DDL - done
Starting data load
1 thds loading | 100% (2.93 MB / 2.93 MB), 1.94 MB/s, 0 / 1 tables done
Executing common postamble SQL
Recreating indexes - done
1 chunks (15.29K rows, 2.93 MB) for 1 tables in 1 schemas were loaded in 6 sec (avg throughput 1.94 MB/s)
0 warnings were reported during the load.</code></pre>



<h2 class="wp-block-heading"><strong>Recover a database from your own backups</strong></h2>



<p>As stated in the introduction, I will present in this chapter a complementary backup/restore solution. Of course, Azure does not prevent us from making our own backups by connecting to the Azure Database for MySQL flexible server and using mysqldump, MySQL Enterprise Backup or any other MySQL backup solution. I decided to take the opportunity of this blog to use the backup tools provided by MySQL Shell. Indeed, MySQL Shell&#8217;s dump utilities (<code><em>util.dumpInstance()</em></code>, <code><em>util.dumpSchemas()</em></code> and <code><em>util.dumpTables()</em></code>), introduced in MySQL Shell 8.0.22, provide interesting functionality. These export tools alone would deserve several dedicated blog posts. </p>



<p>Before starting let&#8217;s illustrate what will be demonstrated in the next few lines:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="331" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-035-1024x331.png" alt="" class="wp-image-18633" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-035-1024x331.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-035-300x97.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-035-768x249.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-035.png 1168w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /><figcaption class="wp-element-caption">Recovering a MySQL table after a human error</figcaption></figure>



<ol class="wp-block-list">
<li>The very first step consists of taking a dump of the MySQL instance</li>



<li>Secondly, we will insert a row into a table named sbtest1</li>



<li>Thirdly, we will simulate a human error and drop a table</li>



<li>Then we will restore the database to its state at the time of the backup</li>



<li>After having restored the backup, we will replay the binary logs up to the point just before the human error</li>



<li>Finally, we will check that the last insert we did is back in sbtest1 </li>
</ol>






<ol class="wp-block-list" start="1">
<li><strong>Dump of MySQL Instance using util.dumpInstance()</strong></li>
</ol>



<p>As explained above, the very first step consists of taking a dump of the entire instance. Without this backup we won&#8217;t be able to restore the database. We will use <code><em>util.dumpInstance()</em></code> as presented below:</p>



<pre class="wp-block-code"><code>MySQL  albatroz.mysql.database.azure.com:3306 ssl  JS &gt; util.dumpInstance("C:/Users/grs/AlbatrozDump", {dryRun: false, showProgress: true, threads: 2})
NOTE: Backup lock is not available to the account 'grs'@'%' and DDL changes will not be blocked. The dump may fail with an error if schema changes are made while dumping.
Acquiring global read lock
Global read lock acquired
Initializing - done
2 out of 6 schemas will be dumped and within them 9 tables, 0 views.
4 out of 7 users will be dumped.
...
...
107% (137.28K rows / ~128.10K rows), 14.42K rows/s, 2.61 MB/s uncompressed, 1.20 MB/s compressed
Dump duration: 00:00:15s
Total duration: 00:00:21s
Schemas dumped: 2
Tables dumped: 9
Uncompressed data size: 26.26 MB
Compressed data size: 11.97 MB
Compression ratio: 2.2
Rows written: 137284
Bytes written: 11.97 MB
Average uncompressed throughput: 1.72 MB/s
Average compressed throughput: 784.25 KB/s</code></pre>



<ol class="wp-block-list" start="2">
<li><strong>Inserting row in our table</strong></li>
</ol>



<p>Now we simulate some activity in the database by inserting a row into the table <em>sbtest1</em>. </p>



<pre class="wp-block-code"><code>MySQL  albatroz.mysql.database.azure.com:3306 ssl  sysbench  SQL &gt; insert into sbtest1 values(999999999,1,1,"my row before drop table");
Query OK, 1 row affected (0.1134 sec)
MySQL  albatroz.mysql.database.azure.com:3306 ssl  sysbench  SQL &gt; SELECT CURRENT_TIMESTAMP ;
+---------------------+
| CURRENT_TIMESTAMP   |
+---------------------+
| 2022-08-19 16:42:12 |
+---------------------+</code></pre>



<ol class="wp-block-list" start="3">
<li><strong>Dropping a table by mistake</strong></li>
</ol>



<p>Thirdly, we simulate the human error by dropping the table <em>sbtest1</em>. </p>



<pre class="wp-block-code"><code> MySQL  albatroz.mysql.database.azure.com:3306 ssl  sysbench  SQL &gt; SELECT CURRENT_TIMESTAMP ;
+---------------------+
| CURRENT_TIMESTAMP   |
+---------------------+
| 2022-08-19 16:46:47 |
+---------------------+
1 row in set (0.1095 sec)
 MySQL  albatroz.mysql.database.azure.com:3306 ssl  sysbench  SQL &gt; drop table sysbench.sbtest1;
Query OK, 0 rows affected (0.1823 sec)</code></pre>



<ol class="wp-block-list" start="4">
<li><strong>Restoring the MySQL Server using the backup </strong></li>
</ol>



<p>Now, we have to restore the database using the last backup we have. We will use <code><em>util.loadDump()</em></code> in order to restore our table. To recover only this table, we can simply use the &#8220;<em>includeTables</em>&#8221; option. </p>



<pre class="wp-block-code"><code> MySQL  albatroz.mysql.database.azure.com:3306 ssl  sysbench  JS &gt; util.loadDump("C:/Users/grs/AlbatrozDump", { includeTables: &#091;"sysbench.sbtest1"], loadDdl: true, loadData: true, threads: 2})
Loading DDL and Data from 'C:/Users/grs/AlbatrozDump' using 2 threads.
Opening dump...
Target is MySQL 8.0.28. Dump was produced from MySQL 8.0.28
Scanning metadata - done
Checking for pre-existing objects...
Executing common preamble SQL
Executing DDL - done
Executing view DDL - done
Starting data load
1 thds loading \ 100% (2.93 MB / 2.93 MB), 1.16 MB/s, 0 / 1 tables done
Executing common postamble SQL
Recreating indexes - done
1 chunks (15.29K rows, 2.93 MB) for 1 tables in 1 schemas were loaded in 9 sec (avg throughput 1.16 MB/s)
0 warnings were reported during the load.</code></pre>



<p>If the restore worked properly, the table <em>sbtest1</em> should be back. We have now recovered the table <em>sbtest1</em>, but without the transactions that were executed after the backup (and before the drop table).  </p>



<pre class="wp-block-code"><code> MySQL  albatroz.mysql.database.azure.com:3306 ssl  sysbench  SQL &gt; show tables;
+--------------------+
| Tables_in_sysbench |
+--------------------+
| sbtest1            |
| sbtest10           |
| sbtest2            |
| sbtest3            |
| sbtest4            |
| sbtest5            |
| sbtest6            |
| sbtest7            |
| sbtest8            |
| sbtest9            |
+--------------------+
10 rows in set (0.1169 sec)

MySQL  albatroz.mysql.database.azure.com:3306 ssl  sysbench  SQL &gt; select * from sbtest1 where pad like 'my%';
Empty set (0.1284 sec)</code></pre>



<ol class="wp-block-list" start="5">
<li><strong>Execution of the binary logs</strong></li>
</ol>



<p>Before replaying the binary logs, we need to determine what we call the &#8220;<em>start-position</em>&#8221; and the &#8220;<em>stop-position</em>&#8221;. To find these two numbers, we have to locate the log position recorded right after the backup and the log position of the &#8220;drop table&#8221;. The first (<em>start-position</em>) can be found in the metadata of the dump (the .json file). For the second, we have to find the exact position using mysqlbinlog, as demonstrated below (the drop starts at position 14329648). </p>



<pre class="wp-block-code"><code>mysqlbinlog --verify-binlog-checksum --host=albatroz.mysql.database.azure.com --port=3306 --user=grs -p --read-from-remote-server --verbose --start-datetime="2022-08-19 18:40:40" --stop-datetime="2022-08-19 18:50:47" mysql-bin.000006 | grep -C 15 "DROP TABLE"

# at 14329648
#220819 18:47:12 server id 3691359094  end_log_pos 14329725 CRC32 0xa81066cf    Anonymous_GTID  last_committed=6652     sequence_number=6653    rbr_only=no     original_committed_timestamp=1660927632152016   immediate_commit_timestamp=1660927632152016     transaction_length=217
# original_commit_timestamp=1660927632152016 (2022-08-19 18:47:12.152016 CEST)
# immediate_commit_timestamp=1660927632152016 (2022-08-19 18:47:12.152016 CEST)
/*!80001 SET @@session.original_commit_timestamp=1660927632152016*//*!*/;
/*!80014 SET @@session.original_server_version=80028*//*!*/;
/*!80014 SET @@session.immediate_server_version=80028*//*!*/;
SET @@SESSION.GTID_NEXT= 'ANONYMOUS'/*!*/;
# at 14329725
#220819 18:47:12 server id 3691359094  end_log_pos 14329865 CRC32 0x9f7865b0    Query   thread_id=480   exec_time=0     error_code=0    Xid = 412469
use `sysbench`/*!*/;
SET TIMESTAMP=1660927632/*!*/;
DROP TABLE `sbtest1` /* generated by server */
/*!*/;

</code></pre>
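
<p>For reference, the start-position can also be extracted from the dump metadata programmatically. The sketch below simulates this with a sample metadata file (the <code>@.json</code> field names match what MySQL Shell&#8217;s dump utilities write; the position value here is made up for illustration, not taken from the real dump):</p>

```python
import json
import tempfile
from pathlib import Path

# Simulated dump directory: MySQL Shell's dump utilities write a "@.json"
# metadata file at the root of the dump. binlogFile/binlogPosition record
# the consistent point at which the dump was taken.
dump_dir = Path(tempfile.mkdtemp())
(dump_dir / "@.json").write_text(json.dumps({
    "binlogFile": "mysql-bin.000006",
    "binlogPosition": 14300000,   # sample value, not from the real dump
}))

meta = json.loads((dump_dir / "@.json").read_text())
start_file, start_pos = meta["binlogFile"], meta["binlogPosition"]
# These two values give the binary log file and --start-position to replay from.
print(start_file, start_pos)
```

<p>With a real dump, pointing <code>dump_dir</code> at the dump directory yields the file name and position to pass to mysqlbinlog.</p>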



<p>Now that we have the start and stop positions in the binary log, we can apply the events in the binary log file to the server. For my part, I prefer to go through an intermediate step consisting in creating a file containing all the events. This way I can look at what is inside before simply running the file, which allows me to see whether I have made an error. As shown below, I am for instance able to find my &#8220;<em>insert</em>&#8221; statement: </p>



<pre class="wp-block-code"><code>osboxes@osboxes:~$  mysqlbinlog --verify-binlog-checksum --host=albatroz.mysql.database.azure.com --port=3306 --user=grs -p --read-from-remote-server --start-datetime="2022-08-19 16:40:40" --stop-datetime="2022-08-19 16:46:47" mysql-bin.000006 &gt;/tmp/restore.sql
Enter password:
osboxes@osboxes:~$ vi /tmp/restore.sql
...
# at 14329545
#220819 18:41:18 server id 3691359094  end_log_pos 14329617 CRC32 0xd124e6a0    Write_rows: table id 913 flags: STMT_END_F

BINLOG '
Lr3/YhN2qwXcRQAAAMmm2gAAAJEDAAAAAAEACHN5c2JlbmNoAAdzYnRlc3QxAAQDA/7+BO7g/vAA
AQEAAgP8/wDqexN/
Lr3/Yh52qwXcSAAAABGn2gAAAJEDAAAAAAEAAgAE/wD/yZo7AQAAAAEAMRhteSByb3cgYmVmb3Jl
IGRyb3AgdGFibGWg5iTR
'/*!*/;
### INSERT INTO `sysbench`.`sbtest1`
### SET
###   @1=999999999
###   @2=1
###   @3='1'
###   @4='my row before drop table'
# at 14329617
...</code></pre>



<p>Finally we can execute our restore script on the database.</p>



<pre class="wp-block-code"><code>osboxes@osboxes:~$ mysql --host=albatroz.mysql.database.azure.com --port=3306 --user=grs -p &lt;/tmp/restore.sql
Enter password:</code></pre>



<ol class="wp-block-list" start="6">
<li><strong>Let&#8217;s check that our last insert has been executed</strong></li>
</ol>



<p>As we can see, the last record we inserted in the table is now present.</p>



<pre class="wp-block-code"><code> MySQL  albatroz.mysql.database.azure.com:3306 ssl  sysbench  SQL &gt; select * from sbtest1 where pad like 'my%';
+-----------+---+---+--------------------------+
| id        | k | c | pad                      |
+-----------+---+---+--------------------------+
| 999999999 | 1 | 1 | my row before drop table |
+-----------+---+---+--------------------------+</code></pre>



<h2 class="wp-block-heading">Conclusion</h2>



<p>The Azure backup and restore interface provides an easy and interesting way to back up and restore a MySQL server by deploying a restored copy of it. In addition, in the tests I did, the deployment of the new server was rather fast; however, my server did not contain gigabytes of data. Depending on your needs, the maximum backup retention window of 35 days could be seen as too short. </p>



<p>Besides this solution, I strongly encourage database administrators to keep backing up their MySQL databases with other tools, such as the ones provided by MySQL Shell or any other backup solution, in order to ensure that no transactions are lost when restoring data. Such solutions can offer more flexibility in the backup and restore process as well as longer retention.</p>
<p>L’article <a href="https://www.dbi-services.com/blog/mysql-server-on-microsoft-azure-3rd-part-backup-and-recovery/">MySQL Server on Microsoft Azure 3rd part (backup and recovery)</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/mysql-server-on-microsoft-azure-3rd-part-backup-and-recovery/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>MySQL Server on Microsoft Azure 2nd part (Performance tests)</title>
		<link>https://www.dbi-services.com/blog/mysql-server-on-microsoft-azure-2nd-part-performance-tests/</link>
					<comments>https://www.dbi-services.com/blog/mysql-server-on-microsoft-azure-2nd-part-performance-tests/#respond</comments>
		
		<dc:creator><![CDATA[Grégory Steulet]]></dc:creator>
		<pubDate>Wed, 17 Aug 2022 06:00:58 +0000</pubDate>
				<category><![CDATA[Azure]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[MySQL]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=18470</guid>

					<description><![CDATA[<p>Introduction This second blog follows the first blog about deploying MySQL Server on Microsoft Azure. In the first blog, we saw how easy it is to deploy a MySQL server in minutes on the Azure cloud and we connected on it through the MySQL Shell client. This second blog is more focused on the performance [&#8230;]</p>
<p>L’article <a href="https://www.dbi-services.com/blog/mysql-server-on-microsoft-azure-2nd-part-performance-tests/">MySQL Server on Microsoft Azure 2nd part (Performance tests)</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="293" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-001-2-1024x293.png" alt="" class="wp-image-18471" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-001-2-1024x293.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-001-2-300x86.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-001-2-768x220.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-001-2.png 1345w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /><figcaption>Azure Database for MySQL</figcaption></figure>



<h2 class="wp-block-heading">Introduction</h2>



<p>This second blog follows the first blog about <a href="https://www.dbi-services.com/blog/mysql-server-on-microsoft-azure-1st-part-deployment" target="_blank" rel="noreferrer noopener">deploying MySQL Server on Microsoft Azure</a>. In the first blog, we saw how easy it is to deploy a MySQL server in minutes on the Azure cloud and we connected on it through the MySQL Shell client.</p>



<p>This second blog is more focused on the performance of MySQL in the Azure cloud. Although I didn&#8217;t do any tuning of MySQL parameters, we will see the influence of the MySQL server&#8217;s location on latency, as well as the effect of changes in parameters such as CPU, memory or IOPS on performance, using the SysBench tool.</p>



<h2 class="wp-block-heading">SysBench stress test</h2>



<p>Of course there are many tools available for stress tests. On my side, I decided to use SysBench, simply because I know this wonderful free tool and because it is widely deployed. Rather than writing my own description of SysBench, I will simply quote part of its description available on SourceForge: </p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p><em>sysbench is a scriptable multi-threaded benchmark tool based on LuaJIT. It is most frequently used for database benchmarks, but can also be used to create arbitrarily complex workloads that do not involve a database server </em>&#8211; <a href="https://sourceforge.net/projects/sysbench.mirror/" target="_blank" rel="noreferrer noopener">https://sourceforge.net/projects/sysbench.mirror/</a>, 14.08.2022</p></blockquote>



<p>Firstly, I installed SysBench on my local Ubuntu client (osboxes). After the installation of SysBench, but before starting the stress test, we first have to prepare the tables containing the records against which the queries will be performed. In the following tests I created 10 tables with 1&#8217;000&#8217;000 rows each in the sysbench database (previously created). </p>



<pre class="wp-block-code"><code>osboxes@osboxes:/usr/bin$ sysbench --db-driver=mysql --table-size=1000000 --mysql-host=albatroz.mysql.database.azure.com --mysql-port=3306 --mysql-db=sysbench --mysql-user=grs --tables=10 --mysql-password=MyPassword --test=/usr/share/sysbench/oltp_read_write.lua prepare</code></pre>



<p>Once the tables are populated, we can run the tests. I decided to use oltp_read_write.lua, which has a default read:write ratio of 95%:5%.</p>



<pre class="wp-block-code"><code>sysbench --db-driver=mysql  --num-threads=8 --mysql-user=grs --mysql-password=MyPassword --mysql-db=sysbench --events=0 --time=100  --test=/usr/share/sysbench/oltp_read_write.lua --mysql-host=albatroz.mysql.database.azure.com --mysql-port=3306 --tables=10 --db-ps-mode=disable --table-size=1000000 --report-interval=10 run</code></pre>



<p> I varied the number of threads (num-threads parameter) from 8 to 256, limited the time to 100 seconds and changed two kinds of parameters through the Azure interface during my tests: </p>



<ul class="wp-block-list"><li>The Server location (East US, vs West Switzerland)</li><li>The Compute and Storage (Compute Tier and IOPS)</li></ul>



<p>It&#8217;s important to note that my client is also located in western Switzerland, which matters when assessing the impact of location. You can find below the four configurations I tested: </p>



<ol class="wp-block-list"><li>Server located in <strong><mark class="has-inline-color has-vivid-cyan-blue-color">US</mark></strong>, with minimal compute and storage meaning <mark class="has-inline-color has-vivid-purple-color"><strong>1vCore 2Gib Memory and 360 IOPS</strong></mark></li><li>Server located in <strong><mark class="has-inline-color has-vivid-cyan-blue-color">US</mark></strong>, with general purpose configuration meaning <mark class="has-inline-color has-vivid-green-cyan-color"><strong>2vCores, 8Gib Memory and  3200 IOPS</strong></mark></li><li>Server located in <mark class="has-inline-color has-luminous-vivid-orange-color"><strong>West Switzerland</strong>,</mark> with minimal compute and storage meaning <mark class="has-inline-color has-vivid-purple-color"><strong>1vCore 2Gib Memory and 360 IOPS</strong></mark></li><li>Server located in <mark class="has-inline-color has-luminous-vivid-orange-color"><strong>West Switzerland</strong></mark>, with general purpose configuration meaning <mark class="has-inline-color has-vivid-green-cyan-color"><strong>2vCores, 8Gib Memory and  3200 IOPS</strong></mark></li></ol>



<h2 class="wp-block-heading">Server details</h2>



<p>As it&#8217;s not possible to change the location of a server afterwards, I created two MySQL servers in the same version (8.0.28), one located in East US and the other in West Switzerland. Note that there are two Azure regions in Switzerland (North Switzerland and West Switzerland). As explained above, in order to make comparisons, I provisioned the same set of resources for tests 1 and 3 and for tests 2 and 4. As specified before, my client is located in Switzerland.</p>



<p>The basic configuration looks like the one below: </p>


<div class="wp-block-image">
<figure class="aligncenter size-large"><img loading="lazy" decoding="async" width="1024" height="632" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-009-1-1024x632.png" alt="" class="wp-image-18474" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-009-1-1024x632.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-009-1-300x185.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-009-1-768x474.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-009-1.png 1292w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /><figcaption>Basic configuration</figcaption></figure>
</div>


<p>It is worth mentioning that upgrading the server&#8217;s capacity (from Burstable to General Purpose with more IOPS) only takes about ten minutes at most. This means that once we are done with the tests and want to go to production, we only need a few minutes to upgrade the server properties. You can find below the description of the upgraded server as well as its monthly price. </p>


<div class="wp-block-image">
<figure class="aligncenter size-large"><img loading="lazy" decoding="async" width="1024" height="648" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-010-1024x648.png" alt="" class="wp-image-18473" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-010-1024x648.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-010-300x190.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-010-768x486.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-010.png 1262w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /><figcaption>General Purpose configuration</figcaption></figure>
</div>


<h2 class="wp-block-heading">Let&#8217;s run the tests</h2>



<p>I executed the same tests, varying only the number of threads (from 8 to 256), on each of the four configurations. You can find an extract of the execution as well as the output below.</p>



<pre class="wp-block-code"><code>osboxes@osboxes:/usr/bin$ sysbench --db-driver=mysql  --num-threads=8 --mysql-user=grs --mysql-password=MyPassword --mysql-db=sysbench --events=0 --time=100  --test=/usr/share/sysbench/oltp_read_write.lua --mysql-host=albatroz.mysql.database.azure.com --mysql-port=3306 --tables=10 --db-ps-mode=disable --table-size=1000000 --report-interval=10 run
WARNING: the --test option is deprecated. You can pass a script name or path on the command line without any options.
WARNING: --num-threads is deprecated, use --threads instead
sysbench 1.0.20 (using system LuaJIT 2.1.0-beta3)

Running the test with following options:
Number of threads: 8
Report intermediate results every 10 second(s)
Initializing random number generator from current time


Initializing worker threads...

Threads started!

&#091; 10s ] thds: 8 tps: 3.20 qps: 77.58 (r/w/o: 55.98/4.40/17.20) lat (ms,95%): 2082.91 err/s: 0.00 reconn/s: 0.00
&#091; 20s ] thds: 8 tps: 4.00 qps: 72.91 (r/w/o: 50.51/5.00/17.40) lat (ms,95%): 2585.31 err/s: 0.00 reconn/s: 0.00
&#091; 30s ] thds: 8 tps: 4.00 qps: 77.80 (r/w/o: 53.80/5.30/18.70) lat (ms,95%): 2120.76 err/s: 0.00 reconn/s: 0.00
&#091; 40s ] thds: 8 tps: 3.20 qps: 72.80 (r/w/o: 52.50/4.50/15.80) lat (ms,95%): 2632.28 err/s: 0.00 reconn/s: 0.00
&#091; 50s ] thds: 8 tps: 4.00 qps: 78.40 (r/w/o: 55.40/5.10/17.90) lat (ms,95%): 2082.91 err/s: 0.00 reconn/s: 0.00
&#091; 60s ] thds: 8 tps: 4.00 qps: 73.60 (r/w/o: 49.70/5.40/18.50) lat (ms,95%): 2632.28 err/s: 0.00 reconn/s: 0.00
&#091; 70s ] thds: 8 tps: 4.00 qps: 77.60 (r/w/o: 53.60/5.00/19.00) lat (ms,95%): 2082.91 err/s: 0.00 reconn/s: 0.00
&#091; 80s ] thds: 8 tps: 3.20 qps: 73.60 (r/w/o: 54.10/4.70/14.80) lat (ms,95%): 2405.65 err/s: 0.00 reconn/s: 0.00
&#091; 90s ] thds: 8 tps: 4.00 qps: 77.60 (r/w/o: 53.90/4.80/18.90) lat (ms,95%): 2082.91 err/s: 0.00 reconn/s: 0.00
&#091; 100s ] thds: 8 tps: 4.00 qps: 73.60 (r/w/o: 49.60/4.80/19.20) lat (ms,95%): 2493.86 err/s: 0.00 reconn/s: 0.00
SQL statistics:
    queries performed:
        read:                            5376
        write:                           501
        other:                           1803
        total:                           7680
    transactions:                        384    (3.78 per sec.)
    queries:                             7680   (75.60 per sec.)
    ignored errors:                      0      (0.00 per sec.)
    reconnects:                          0      (0.00 per sec.)

General statistics:
    total time:                          101.5914s
    total number of events:              384

Latency (ms):
         min:                                 2000.71
         avg:                                 2115.55
         max:                                 2644.11
         95th percentile:                     2585.31
         sum:                               812371.47

Threads fairness:
    events (avg/stddev):           48.0000/0.00
    execution time (avg/stddev):   101.5464/0.07</code></pre>



<p>We can see in this output that I asked sysbench to report intermediate results every 10 seconds. We can also observe that the expected read:write ratio between the queries is respected. Finally, it can be noted that the latency is significant in our case (server in the US, client in Switzerland): the average latency in the above example is 2&#8217;115.55 ms. If we are interested in the latency distribution, we can use the <code>--histogram</code> option. </p>
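
<p>As a quick sanity check (a small sketch of my own; the numbers are simply copied from the sysbench summary above), the reported rates can be recomputed from the raw counters:</p>

```python
# Counters copied from the sysbench summary above.
reads, writes, other = 5376, 501, 1803
transactions = 384
total_time_s = 101.5914
latency_sum_ms = 812371.47

total_queries = reads + writes + other          # 7680 queries in total
queries_per_txn = total_queries / transactions  # 20 queries per transaction
tps = transactions / total_time_s               # matches the reported 3.78/sec
avg_latency_ms = latency_sum_ms / transactions  # matches the reported 2115.55 avg
print(total_queries, queries_per_txn, round(tps, 2), round(avg_latency_ms, 2))
```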



<h2 class="wp-block-heading">Tests summary</h2>



<p>We can find below the summary of the tests I did with the described configurations. </p>



<table cellspacing="2" border="1"><tbody>
<tr>
	<td align="right"></td><td colspan="4" align="center">East US</td><td colspan="4" align="center">West Switzerland</td>
</tr>
<tr>
	<td align="right"></td>
	<td colspan="2" align="center">1.Minimal configuration</td>
	<td colspan="2" align="center">2. General purpose</td>
	<td colspan="2" align="center">3.Minimal configuration</td><td colspan="2" align="center">4. General purpose</td>
</tr>
<tr>
	<td align="right">Number of Threads</td>
	<td align="right">trans./sec</td>
	<td align="right">queries/sec</td>
	<td align="right">trans./sec</td>
	<td align="right">queries/sec</td>
	<td align="right">trans./sec</td>
	<td align="right">queries/sec</td>
	<td align="right">trans./sec</td>
	<td align="right">queries/sec</td>
</tr>
<tr>
	<td align="right">8</td>
	<td align="right">3,78</td>
	<td align="right">75,6</td>
	<td align="right">3,73</td>
	<td align="right">74,63</td>
	<td align="right">27,91</td>
	<td align="right">559,82</td>
	<td align="right">28,06</td>
	<td align="right">561,13</td>
</tr>
<tr>
	<td align="right">16</td>
	<td align="right">7,95</td>
	<td align="right">150,94</td>
	<td align="right">7,46</td>
	<td align="right">149,22</td>
	<td align="right">54,13</td>
	<td align="right">1083,58</td>
	<td align="right">56,04</td>
	<td align="right">1120,71</td>
</tr>
<tr>
	<td align="right">32</td>
	<td align="right">15,14</td>
	<td align="right">303,02</td>
	<td align="right">15,52</td>
	<td align="right">310,61</td>
	<td align="right">102,33</td>
	<td align="right">2047,31</td>
	<td align="right">105,18</td>
	<td align="right">2103,62</td>
</tr><tr>
	<td align="right">64</td>
	<td align="right">29,1</td>
	<td align="right">588,85</td>
	<td align="right">30,74</td>
	<td align="right">614,91</td>
	<td align="right">178,78</td>
	<td align="right">3576</td>
	<td align="right">167,92</td>
	<td align="right">3358,82</td>
</tr>
<tr>
	<td align="right">128</td>
	<td align="right">58,87</td>
	<td align="right">1179,82</td>
	<td align="right">61,19</td>
	<td align="right">1225,15</td>
	<td colspan="2" align="right">too many conn.</td>
	<td align="right">191,83</td>
	<td align="right">3837,11</td>
</tr>
<tr>
	<td align="right">192</td>
	<td colspan="2" align="right">too many conn.</td>
	<td align="right">86,08</td>
	<td align="right">1722,69</td>
	<td colspan="2" align="right">too many conn.</td>
	<td align="right">176,63</td>
	<td align="right">3533,02</td>
</tr>
	<tr><td align="right">256</td>
	<td colspan="2" align="right">too many conn.</td>
	<td align="right">107,73</td>
	<td align="right">2158,16</td>
	<td colspan="2" align="right">too many conn.</td>
	<td align="right">161,66</td>
	<td align="right">3233,95</td>
</tr></tbody></table>






<p>The first thing we can see from the above table is that, whereas we faced &#8220;too many connections&#8221; errors with the minimal configurations, the tests ran properly with the General Purpose configuration. </p>



<p>The second thing we can observe is that the tests performed better with the server located in West Switzerland. One of the reasons is most probably the latency: whereas we had an average latency of around 2&#8217;100 ms with the server located in East US, we measured an average of around 350 ms with the server located in West Switzerland. </p>
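
<p>This matches a simple back-of-the-envelope model (my own rough sketch, using only the figures quoted above): in a synchronous benchmark each thread runs one transaction at a time, so throughput is bounded by the number of threads divided by the per-transaction latency.</p>

```python
# Rough model: in a synchronous benchmark, each thread completes at most
# one transaction per round trip, so tps <= threads / transaction_latency.
def max_tps(threads: int, latency_s: float) -> float:
    return threads / latency_s

# 8 threads against East US, ~2.1 s average transaction latency
us_tps = max_tps(8, 2.1)    # ~3.8 tps, close to the measured 3.78
# 8 threads against West Switzerland, ~0.29 s per transaction
ch_tps = max_tps(8, 0.29)   # ~27.6 tps, close to the measured 27.91
print(round(us_tps, 1), round(ch_tps, 1))
```

<p>In other words, with these latencies the benchmark is latency-bound, which is why adding compute or IOPS barely moves the numbers.</p>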



<p>Finally, in our configuration, increasing compute and storage does not show any improvement in terms of transactions per second or queries per second. </p>



<h3 class="wp-block-heading">Latency between the application and the MySQL Server</h3>



<p>Latency is an important parameter to take into consideration. As explained on the Microsoft page named &#8220;Best practices for optimal performance of your Azure Database for MySQL server&#8221;, to improve the performance of an application we have to take into consideration the proximity between the MySQL server and the application. </p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow"><p>To improve the performance and reliability of an application in a cost optimized deployment, it&#8217;s highly recommended that the web application service and the Azure Database for MySQL resource reside in the same region and availability zone. &#8211; <a href="https://docs.microsoft.com/en-us/azure/mysql/single-server/concept-performance-best-practices" target="_blank" rel="noreferrer noopener">Microsoft</a>,  16.08.2022</p></blockquote>



<p>This page also gives some tips to optimize your MySQL performance on Azure. To check the latency, you do not need SysBench: a simple query such as the one executed in the examples below shows the latency difference between execution on localhost, on an Azure server located in Switzerland and on an Azure server located in the US. </p>



<p><strong>Latency on a Localhost</strong></p>



<pre class="wp-block-code"><code>mysql&gt; select 1;
+---+
| 1 |
+---+
| 1 |
+---+
1 row in set (<strong>0.00 sec</strong>)
</code></pre>



<p><strong>Latency on an</strong> <strong>Azure Server located in West Switzerland</strong></p>



<pre class="wp-block-code"><code> MySQL  albatrozswitzerland.mysql.database.azure.com:3306 ssl SQL &gt; select 1;
+---+
| 1 |
+---+
| 1 |
+---+
1 row in set (<strong>0.0436 sec</strong>)</code></pre>



<p><strong>Latency on an</strong> <strong>Azure Server located in East US</strong></p>



<pre class="wp-block-code"><code> MySQL  albatroz.mysql.database.azure.com:3306 ssl SQL &gt; select 1;
+---+
| 1 |
+---+
| 1 |
+---+
1 row in set (<strong>0.1690 sec</strong>)</code></pre>
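<p>Each timing above comes from a single execution; averaging a handful of runs gives a more stable picture. Below is a minimal sketch of how such timings can be averaged (the values are illustrative samples, not measurements; in practice you would collect them by repeating the query against your own server):</p>

```shell
# Hypothetical helper: average several "select 1" round-trip timings (in seconds).
# The sample values below are illustrative; collect real ones by timing the query
# in a loop with the mysql client against your own server.
timings="0.0431 0.0436 0.0449 0.0428 0.0440"
echo "$timings" | awk '{
  s = 0
  for (i = 1; i <= NF; i++) s += $i          # sum all timings on the line
  printf "avg %.4f sec over %d runs\n", s / NF, NF
}'
# prints: avg 0.0437 sec over 5 runs
```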



<p>Considering only the latency, one could think that the application must be hosted on an Azure server in Switzerland (at least in my case). Unfortunately, we will discover that not all services of the Microsoft Azure Cloud are available in Switzerland. For instance, if we want to deploy an Ubuntu Server, we will see that only the following locations are available: </p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="486" height="384" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-012.png" alt="" class="wp-image-18505" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-012.png 486w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-012-300x237.png 300w" sizes="auto, (max-width: 486px) 100vw, 486px" /><figcaption>Ubuntu Server possible locations</figcaption></figure>
</div>


<h2 class="wp-block-heading">Performance monitoring</h2>



<p>Microsoft Azure provides us with an interface showing the following default graphs (cf. the screenshot below): </p>



<ul class="wp-block-list"><li>CPU and Memory</li><li>IO Percent</li><li>DB Connections</li><li>Queries</li></ul>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="294" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-011-1024x294.png" alt="" class="wp-image-18476" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-011-1024x294.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-011-300x86.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-011-768x220.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-011-1536x441.png 1536w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-011.png 1798w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /><figcaption>default performance graphs</figcaption></figure>



<p>But you can add other information regarding storage, host network in/out, replication lag, aborted connections, and so on.</p>



<h2 class="wp-block-heading">Conclusion</h2>



<p>This short blog regarding the performance of MySQL in the Azure Cloud leads to the following conclusions: </p>



<p>The ability to upgrade a MySQL server by adding CPU, disk and memory capacity in a few minutes is really interesting. For instance, when you want to switch your test project to production, you simply need to change the server&#8217;s properties and 10 minutes later your server is adapted. Additionally, we can always reduce the CPU &amp; memory capacity as well as the IOPS; only the storage cannot be reduced. </p>



<p>At first glance, it also seems very important to use a MySQL server close to the client for performance reasons (in order to avoid latency issues). Unfortunately, depending on the application and business requirements, it could be difficult to have the MySQL Server and the application server in the same country. </p>



<p>Finally, adding CPU and memory and increasing the number of IOPS did not show any improvement in the number of transactions or queries per second in this specific configuration (of course, this strongly depends on the use case). However, this does not mean that by taking the time to configure the MySQL server correctly, one could not benefit from these additional capacities. Regarding the tuning of MySQL variables, it&#8217;s interesting to see that some variables, such as innodb_buffer_pool_size, are updated automatically when we change server resources. </p>
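<p>As a quick sanity check after such a resize, the byte value reported for innodb_buffer_pool_size can be converted to GiB. A minimal sketch with a hard-coded sample value (in practice you would read it from the server with <em>SELECT @@innodb_buffer_pool_size</em>):</p>

```shell
# Hypothetical check: convert innodb_buffer_pool_size (bytes) to GiB after a resize.
# The value below is a hard-coded sample; in practice fetch it with:
#   mysql -e "SELECT @@innodb_buffer_pool_size"
pool_bytes=12884901888   # sample value corresponding to 12 GiB
awk -v b="$pool_bytes" 'BEGIN { printf "innodb_buffer_pool_size = %.1f GiB\n", b / (1024 ^ 3) }'
# prints: innodb_buffer_pool_size = 12.0 GiB
```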



<p></p>
<p>L’article <a href="https://www.dbi-services.com/blog/mysql-server-on-microsoft-azure-2nd-part-performance-tests/">MySQL Server on Microsoft Azure 2nd part (Performance tests)</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/mysql-server-on-microsoft-azure-2nd-part-performance-tests/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>MySQL Server on Microsoft Azure 1st part (deployment)</title>
		<link>https://www.dbi-services.com/blog/mysql-server-on-microsoft-azure-1st-part-deployment/</link>
					<comments>https://www.dbi-services.com/blog/mysql-server-on-microsoft-azure-1st-part-deployment/#respond</comments>
		
		<dc:creator><![CDATA[Grégory Steulet]]></dc:creator>
		<pubDate>Mon, 15 Aug 2022 08:00:00 +0000</pubDate>
				<category><![CDATA[Azure]]></category>
		<category><![CDATA[Cloud]]></category>
		<category><![CDATA[MySQL]]></category>
		<category><![CDATA[Microsoft Azure]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/?p=18458</guid>

					<description><![CDATA[<p>Introduction Did you know that you can run MySQL on Microsoft Azure for free for 30 days with a $200 credit? In this first blog I&#8217;ll show how to create a MySQL Server and provide some information related to this service. In future blogs I&#8217;ll present insights regarding MySQL performance and backup/recovery. Let&#8217;s start by [&#8230;]</p>
<p>L’article <a href="https://www.dbi-services.com/blog/mysql-server-on-microsoft-azure-1st-part-deployment/">MySQL Server on Microsoft Azure 1st part (deployment)</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="293" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-001-1024x293.png" alt="" class="wp-image-18459" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-001-1024x293.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-001-300x86.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-001-768x220.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-001.png 1345w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /><figcaption>Let&#8217;s try Azure Database for MySQL </figcaption></figure>



<h2 class="wp-block-heading">Introduction</h2>



<p>Did you know that you can run MySQL on Microsoft Azure for free for 30 days with a $200 credit? In this first blog I&#8217;ll show how to create a MySQL Server and provide some information related to this service. In future blogs I&#8217;ll present insights regarding MySQL performance and backup/recovery. </p>



<h2 class="wp-block-heading">Let&#8217;s start by registering ourselves</h2>



<p>The registration process takes approximately 5 to 10 minutes. We simply have to enter our contact details as well as our credit card details. Don’t worry, no fees are automatically charged: once your $200 credit is used, Microsoft will ask us whether we want to continue and pay the additional fees.</p>



<p>In order to register ourselves on Microsoft Azure, we can simply go to the following URL: <a href="https://azure.microsoft.com/en-us/services/mysql/" target="_blank" rel="noreferrer noopener">https://azure.microsoft.com/en-us/services/mysql/</a>, click on &#8220;Try Azure Database for MySQL free&#8221; and then &#8220;Start free&#8221;. Then we enter our contact details such as country, name, surname, phone number, e-mail address, postal address, and so on. Once finished, we will see a screen looking like the one below. We simply have to select the time zone and whether we want to join a Q&amp;A session. </p>


<div class="wp-block-image">
<figure class="aligncenter size-full"><img loading="lazy" decoding="async" width="778" height="520" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-002.png" alt="" class="wp-image-18460" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-002.png 778w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-002-300x201.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-002-768x513.png 768w" sizes="auto, (max-width: 778px) 100vw, 778px" /><figcaption>You&#8217;re ready to start with Azure!</figcaption></figure>
</div>


<h2 class="wp-block-heading">Our first MySQL Server on Azure</h2>



<p>Once the registration process has ended, it&#8217;s time to create our first MySQL Server on Azure. Log in to Microsoft Azure and enter “mysql” in the search field as shown below, then select “Azure Database for MySQL servers”.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="273" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-003-1024x273.png" alt="" class="wp-image-18461" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-003-1024x273.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-003-300x80.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-003-768x205.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-003.png 1380w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /><figcaption>Select &#8220;Azure Database for MySQL Server&#8221;</figcaption></figure>



<p>Once &#8220;Azure Database for MySQL servers&#8221; is selected, we simply have to click on &#8220;Create Azure Database for MySQL Server&#8221; on the next screen. We then have to select the deployment option. As shown below, we have the choice between the two following options: </p>



<ul class="wp-block-list"><li>Flexible Server (Recommended)</li><li>Single Server</li></ul>



<figure class="wp-block-image size-large is-resized"><img loading="lazy" decoding="async" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-004-1024x658.png" alt="" class="wp-image-18462" width="840" height="539" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-004-1024x658.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-004-300x193.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-004-768x494.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-004.png 1237w" sizes="auto, (max-width: 840px) 100vw, 840px" /><figcaption>Choice between Flexible server and Single Server</figcaption></figure>



<p>In the context of this blog, I chose the first option (Flexible server). </p>



<p>Before deploying our first MySQL Server we have to follow a straightforward process consisting of the 4 steps described below. As we will see, depending on the workload type chosen (which will affect the CPU, memory and possible IOPS), the storage and the IOPS, the monthly fees will vary between USD 18.98/month and more than USD 10&#8217;000/month. <br><br>What is really nice is being able to see in real time the influence that each parameter has on the price estimate on the right-hand side, as we can see in the screenshot below.</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="761" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-005-1024x761.png" alt="" class="wp-image-18463" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-005-1024x761.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-005-300x223.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-005-768x571.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-005.png 1157w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /><figcaption>4 steps (in green) in order to complete your configuration</figcaption></figure>



<ol class="wp-block-list" type="1"><li>Firstly, we have to enter basic information regarding our MySQL Server such as:<ul><li>Subscription details</li><li>Server name</li><li>Region where you want to deploy your server</li><li>MySQL version (version 5.x or 8.x)</li><li>Workload type (small or medium size database, Tier 1 business critical workloads, development or hobby projects)</li><li>Compute + Storage: compute options (Burstable 1-20 vCores, General Purpose 2-64 vCores or Business Critical 2-96 vCores) and storage options (from 20GB up to 16384GB, with no scale-down possibility, and from 360 to 48000 IOPS)</li></ul><ul><li>Availability Zone (optional)</li><li>Enable High Availability (optional)</li><li>Administrator account<br><br></li></ul></li><li>Secondly, we have to fill in networking information regarding our MySQL Server such as:<ul><li>Network connectivity (connect to your server through a public IP address or a private access)</li><li>Firewall rules (you can automatically select your current IP address or allow any remote connection)<br><br></li></ul></li><li>Thirdly, we can enter tags. Tags are name/value pairs that enable you to categorize resources and view consolidated billing by applying the same tag to multiple resources and resource groups.<br><br></li><li>We can finally review and create our server. Please note that we can also download a template for automation purposes.</li></ol>



<p>Within the Compute + Storage step, you can select a few options as presented below: </p>


<div class="wp-block-image">
<figure class="aligncenter size-large"><img loading="lazy" decoding="async" width="1024" height="760" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-006-1024x760.png" alt="" class="wp-image-18464" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-006-1024x760.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-006-300x223.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-006-768x570.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-006.png 1287w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /><figcaption>Compute + Storage options</figcaption></figure>
</div>


<p>Once all the fields are completed we can create our server. It&#8217;s interesting to see that we also have the possibility to generate a template for automation purposes. </p>
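<p>For illustration, here is a heavily trimmed sketch of what the resource section of such an ARM template can look like for a flexible server. The name, location, SKU and sizes below are placeholders, not the values used in this blog:</p>

```json
{
  "type": "Microsoft.DBforMySQL/flexibleServers",
  "apiVersion": "2021-05-01",
  "name": "my-mysql-server",
  "location": "switzerlandnorth",
  "sku": { "name": "Standard_B1ms", "tier": "Burstable" },
  "properties": {
    "administratorLogin": "myadmin",
    "version": "8.0.21",
    "storage": { "storageSizeGB": 20, "iops": 360 }
  }
}
```

<p>Such a template can then be deployed repeatedly, which is exactly the automation use case the portal download targets.</p>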



<p>On the summary page, you can see that you have access to the <a href="https://azure.microsoft.com/en-us/support/legal/" target="_blank" rel="noreferrer noopener">Terms of Use</a> and <a href="https://privacy.microsoft.com/en-us/privacystatement" target="_blank" rel="noreferrer noopener">Privacy Policy</a>. In the Terms of Use we can find a link to the <a href="https://azure.microsoft.com/en-us/support/legal/sla/" target="_blank" rel="noreferrer noopener">SLA Conditions</a>, where we can find the guaranteed service time as well as the credit granted in case of non-compliance with the SLA conditions. </p>



<p>The deployment and start-up of the server takes approximately 6 minutes.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="901" height="525" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-007.png" alt="" class="wp-image-18465" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-007.png 901w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-007-300x175.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-007-768x448.png 768w" sizes="auto, (max-width: 901px) 100vw, 901px" /></figure>



<h2 class="wp-block-heading">We have a server, let&#8217;s use it now!</h2>



<p>In order to use our brand new MySQL Server, I decided to use MySQL Shell. MySQL Shell is a MySQL client that provides scripting capabilities in JavaScript and Python. You can download MySQL Shell from this <a href="http://dev.mysql.com/downloads/shell/" target="_blank" rel="noreferrer noopener">link</a> and find the MySQL Shell documentation at this <a href="https://dev.mysql.com/doc/mysql-shell/8.0/en/" target="_blank" rel="noreferrer noopener">link</a>. </p>



<p>When we run MySQL Shell on Windows, a cmd window opens with a command prompt as shown below. To connect to our MySQL Server, we can use the &#8220;<em>\connect</em>&#8221; command and then switch to SQL mode with the &#8220;<em>\sql</em>&#8221; command. </p>



<pre class="wp-block-code"><code>MySQL  JS &gt; \connect --mysql albatroz.mysql.database.azure.com
Creating a Classic session to 'grs@albatroz.mysql.database.azure.com'
Please provide the password for 'grs@albatroz.mysql.database.azure.com': *************
Save password for 'grs@albatroz.mysql.database.azure.com'? &#091;Y]es/&#091;N]o/Ne&#091;v]er (default No): yes
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 20
Server version: 8.0.28 Source distribution
No default schema selected; type \use &lt;schema&gt; to set one.
 MySQL  albatroz.mysql.database.azure.com:3306 ssl  JS &gt; \sql
Switching to SQL mode... Commands end with ;</code></pre>



<p>The <em>show variables like &#8216;%version%&#8217;;</em> command provides us with some information regarding the MySQL version as well as the operating system in use. Version 8.0.28 is not the latest version, but it is not old either: it dates from December 2021 and, at the time of writing this blog, the latest version is 8.0.30.</p>



<pre class="wp-block-code"><code>MySQL  albatroz.mysql.database.azure.com:3306 ssl  SQL &gt; show variables like '%version%';
+--------------------------+---------------------+
| Variable_name            | Value               |
+--------------------------+---------------------+
| admin_tls_version        | TLSv1.2             |
| immediate_server_version | 999999              |
| innodb_version           | 8.0.28              |
| original_server_version  | 999999              |
| protocol_version         | 10                  |
| replica_type_conversions |                     |
| slave_type_conversions   |                     |
| tls_version              | TLSv1.2             |
| version                  | 8.0.28              |
| version_comment          | Source distribution |
| version_compile_machine  | x86_64              |
| version_compile_os       | Linux               |
| version_compile_zlib     | 1.2.11              |
+--------------------------+---------------------+
</code></pre>
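<p>The version gap mentioned above can also be checked mechanically. A minimal sketch comparing two version strings with <em>sort -V</em> (both values are hard-coded here; in practice you would read the first one from <em>SELECT @@version</em> on the server):</p>

```shell
# Sketch: compare the reported server version against a target release.
# Versions are hard-coded samples; in practice read the first one from
# "SELECT @@version" on the server.
server="8.0.28"
latest="8.0.30"
# sort -V orders version strings component by component (GNU coreutils)
oldest=$(printf '%s\n%s\n' "$server" "$latest" | sort -V | head -n1)
if [ "$oldest" = "$server" ] && [ "$server" != "$latest" ]; then
  echo "$server is behind $latest"
else
  echo "$server is current"
fi
# prints: 8.0.28 is behind 8.0.30
```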



<p>After a few tests, we can see some performance graphs (CPU and Memory, IO Percent, DB Connections and Queries) through the Azure interface. </p>


<div class="wp-block-image">
<figure class="aligncenter size-large"><img loading="lazy" decoding="async" width="1024" height="311" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-008-1024x311.png" alt="" class="wp-image-18466" srcset="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-008-1024x311.png 1024w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-008-300x91.png 300w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-008-768x233.png 768w, https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/08/MySQLAzure-008.png 1280w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /><figcaption>Few graphs regarding performance</figcaption></figure>
</div>


<h2 class="wp-block-heading">Conclusion</h2>



<p>This first blog does not pretend to go into detail about the possibilities offered by Azure for the deployment of a MySQL server. It simply shows that deploying a MySQL server on the Azure Cloud is really simple. In addition, the $200 offered by Microsoft for the first 30 days of use gives you a first overview of the possibilities, but also of the costs, related to a deployment in the Azure Cloud. The interface allows anyone to easily deploy a MySQL database in minutes.</p>



<p>The next blog will discuss MySQL performance on the Azure Cloud through stress tests performed with SysBench.</p>



<p></p>
<p>L’article <a href="https://www.dbi-services.com/blog/mysql-server-on-microsoft-azure-1st-part-deployment/">MySQL Server on Microsoft Azure 1st part (deployment)</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/mysql-server-on-microsoft-azure-1st-part-deployment/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Installing MySQL InnoDB Cluster in OKE using a MySQL Operator</title>
		<link>https://www.dbi-services.com/blog/installing-mysql-innodb-cluster-in-oke-using-a-mysql-operator/</link>
					<comments>https://www.dbi-services.com/blog/installing-mysql-innodb-cluster-in-oke-using-a-mysql-operator/#respond</comments>
		
		<dc:creator><![CDATA[Elisa Usai]]></dc:creator>
		<pubDate>Tue, 10 May 2022 04:20:55 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Database Administration & Monitoring]]></category>
		<category><![CDATA[DevOps]]></category>
		<category><![CDATA[Kubernetes]]></category>
		<category><![CDATA[MySQL]]></category>
		<category><![CDATA[Oracle]]></category>
		<category><![CDATA[Containers]]></category>
		<category><![CDATA[databases]]></category>
		<category><![CDATA[k8s]]></category>
		<category><![CDATA[kubernetes]]></category>
		<category><![CDATA[MySQL InnoDB Cluster]]></category>
		<category><![CDATA[OCI]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/installing-mysql-innodb-cluster-in-oke-using-a-mysql-operator/</guid>

					<description><![CDATA[<p>During previous months, I&#8217;ve had some time to satisfy my curiosity about databases in containers and I started to test a little bit MySQL in Kubernetes. This is how it all began&#8230; In January I had the chance to be trained on Kubernetes attending the Docker and Kubernetes essentials Workshop of dbi services. So I [&#8230;]</p>
<p>L’article <a href="https://www.dbi-services.com/blog/installing-mysql-innodb-cluster-in-oke-using-a-mysql-operator/">Installing MySQL InnoDB Cluster in OKE using a MySQL Operator</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>During the previous months, I&#8217;ve had some time to satisfy my curiosity about databases in containers, and I started testing <a href="https://www.mysql.com/" target="_blank" rel="noopener">MySQL</a> in <a href="https://kubernetes.io/" target="_blank" rel="noopener">Kubernetes</a> a little bit.<br />
This is how it all began&#8230;<br />
<span id="more-17173"></span><br />
In January I had the chance to be trained on Kubernetes attending the <a href="https://www.dbi-services.com/trainings/docker-kubernetes-essentials/" target="_blank" rel="noopener">Docker and Kubernetes essentials Workshop</a> of dbi services. So I decided to prepare a session on this topic at our internal <a href="https://www.dbi-services.com/on-the-company-and-its-associates/corporate-values-company-mission/dbi-xchange/" target="_blank" rel="noopener">dbi xChange</a> event. And as if by magic, at the same time, a customer asked for our support to migrate a MySQL database to their Kubernetes cluster.</p>
<p>In general, I would like to raise two points before going into the technical details:<br />
1. Is it a good idea to move databases into containers? Here I would use a typical IT answer: &#8220;it depends&#8221;. I suggest you think about your needs and constraints: whether you have small images to deploy, about storage and persistence, performance, &#8230;<br />
2. There are various solutions for installing, orchestrating and administering MySQL in K8s: MySQL single instance vs MySQL InnoDB Cluster, using the MySQL Operator for Kubernetes or Helm charts, on-premises but also through Oracle Container Engine for Kubernetes on OCI, &#8230; I recommend you think about what your needs and skills are (again), whether you are already working with Cloud technologies, and whether you have already set up DevOps processes and, if so, which ones, &#8230;</p>
<p>Here I will show you how to install a MySQL InnoDB Cluster in OKE using a MySQL Operator.</p>
<p>The first thing is to have an account on <a href="https://cloud.oracle.com" target="_blank" rel="noopener">Oracle OCI</a> and to have deployed an <a href="https://www.oracle.com/cloud-native/container-engine-kubernetes/" target="_blank" rel="noopener">Oracle Container Engine for Kubernetes</a> in your compartment. You can do it in an easy way using the Quick Create option under &#8220;Developer Services &gt; Containers &amp; Artifacts &gt; Kubernetes Clusters (OKE)&#8221;:<br />
<a href="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/05/OKE0.png"><img loading="lazy" decoding="async" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/05/OKE0.png" alt="" width="300" height="133" class="alignnone size-medium wp-image-55581" /></a><br />
In this way all the resources you need (VCN, Internet and NAT gateways, a K8s cluster with worker nodes and a node pool) are there in one click:</p>
<pre class="brush: sql; gutter: true; first-line: 1; highlight: [1,7]">
elisa@cloudshell:~ (eu-zurich-1)$ kubectl cluster-info
Kubernetes control plane is running at https://xxx.xx.xxx.xxx:6443
CoreDNS is running at https://xxx.xx.xxx.xxx:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

elisa@cloudshell:~ (eu-zurich-1)$ kubectl get nodes -o wide
NAME         STATUS   ROLES   AGE    VERSION   INTERNAL-IP   EXTERNAL-IP       OS-IMAGE                  KERNEL-VERSION                      CONTAINER-RUNTIME
10.0.10.36   Ready    node    6m7s   v1.22.5   10.0.10.36    yyy.yyy.yyy.yyy   Oracle Linux Server 7.9   5.4.17-2136.304.4.1.el7uek.x86_64   cri-o://1.22.3-1.ci.el7
10.0.10.37   Ready    node    6m1s   v1.22.5   10.0.10.37    kkk.kkk.kkk.kk    Oracle Linux Server 7.9   5.4.17-2136.304.4.1.el7uek.x86_64   cri-o://1.22.3-1.ci.el7
10.0.10.42   Ready    node    6m     v1.22.5   10.0.10.42    jjj.jj.jjj.jj     Oracle Linux Server 7.9   5.4.17-2136.304.4.1.el7uek.x86_64   cri-o://1.22.3-1.ci.el7
</pre>
<p>As a second step, you can install the <a href="https://github.com/mysql/mysql-operator" target="_blank" rel="noopener">MySQL Operator for Kubernetes</a> using kubectl:</p>
<pre class="brush: sql; gutter: true; first-line: 1; highlight: [1,6]">
elisa@cloudshell:~ (eu-zurich-1)$ kubectl apply -f https://raw.githubusercontent.com/mysql/mysql-operator/trunk/deploy/deploy-crds.yaml
customresourcedefinition.apiextensions.k8s.io/innodbclusters.mysql.oracle.com created
customresourcedefinition.apiextensions.k8s.io/mysqlbackups.mysql.oracle.com created
customresourcedefinition.apiextensions.k8s.io/clusterkopfpeerings.zalando.org created
customresourcedefinition.apiextensions.k8s.io/kopfpeerings.zalando.org created
elisa@cloudshell:~ (eu-zurich-1)$ kubectl apply -f https://raw.githubusercontent.com/mysql/mysql-operator/trunk/deploy/deploy-operator.yaml
serviceaccount/mysql-sidecar-sa created
clusterrole.rbac.authorization.k8s.io/mysql-operator created
clusterrole.rbac.authorization.k8s.io/mysql-sidecar created
clusterrolebinding.rbac.authorization.k8s.io/mysql-operator-rolebinding created
clusterkopfpeering.zalando.org/mysql-operator created
namespace/mysql-operator created
serviceaccount/mysql-operator-sa created
deployment.apps/mysql-operator created
</pre>
<p>You can check the health of the MySQL Operator:</p>
<pre class="brush: sql; gutter: true; first-line: 1; highlight: [1,4]">
elisa@cloudshell:~ (eu-zurich-1)$ kubectl get deployment -n mysql-operator mysql-operator
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
mysql-operator   1/1     1            1           24s
elisa@cloudshell:~ (eu-zurich-1)$ kubectl get pods --show-labels -n mysql-operator
NAME                              READY   STATUS    RESTARTS   AGE    LABELS
mysql-operator-869d4b4b8d-slr4t   1/1     Running   0          113s   name=mysql-operator,pod-template-hash=869d4b4b8d
</pre>
<p>To isolate resources, you can create a dedicated namespace for the MySQL InnoDB Cluster:</p>
<pre class="brush: sql; gutter: true; first-line: 1; highlight: [1]">
elisa@cloudshell:~ (eu-zurich-1)$ kubectl create namespace mysql-cluster
namespace/mysql-cluster created
</pre>
<p>You should also create a Secret using kubectl to store the MySQL root user credentials; this Secret will be required by the pods to access the MySQL server:</p>
<pre class="brush: sql; gutter: true; first-line: 1; highlight: [1]">
elisa@cloudshell:~ (eu-zurich-1)$ kubectl create secret generic elisapwd --from-literal=rootUser=root --from-literal=rootHost=% --from-literal=rootPassword="pwd" -n mysql-cluster
secret/elisapwd created
</pre>
<p>You can check that the Secret was correctly created:</p>
<pre class="brush: sql; gutter: true; first-line: 1; highlight: [1,5]">
elisa@cloudshell:~ (eu-zurich-1)$ kubectl get secrets -n mysql-cluster
NAME                  TYPE                                  DATA   AGE
default-token-t2c47   kubernetes.io/service-account-token   3      2m
elisapwd              Opaque                                3      34s
elisa@cloudshell:~ (eu-zurich-1)$ kubectl describe secret/elisapwd -n mysql-cluster
Name:         elisapwd
Namespace:    mysql-cluster
Labels:       &lt;none&gt;
Annotations:  &lt;none&gt;

Type:  Opaque

Data
====
rootHost:      1 bytes
rootPassword:  7 bytes
rootUser:      4 bytes
</pre>
<p>Now you have to write a .yaml configuration file to define how the MySQL InnoDB Cluster should be created. Here is a simple example: </p>
<pre class="brush: sql; gutter: true; first-line: 1; highlight: [1]">
elisa@cloudshell:~ (eu-zurich-1)$ vi InnoDBCluster_config.yaml
apiVersion: mysql.oracle.com/v2alpha1
kind: InnoDBCluster
metadata:
  name: elisacluster
  namespace: mysql-cluster 
spec:
  secretName: elisapwd
  instances: 3
  router:
    instances: 1
</pre>
<p>At this point you can run a MySQL InnoDB Cluster applying the configuration that you just created:</p>
<pre class="brush: sql; gutter: true; first-line: 1; highlight: [1]">
elisa@cloudshell:~ (eu-zurich-1)$ kubectl apply -f InnoDBCluster_config.yaml
innodbcluster.mysql.oracle.com/elisacluster created
</pre>
<p>You can finally check if the MySQL InnoDB Cluster has been successfully created:</p>
<pre class="brush: sql; gutter: true; first-line: 1; highlight: [1,11]">
elisa@cloudshell:~ (eu-zurich-1)$ kubectl get innodbcluster --watch --namespace mysql-cluster
NAME           STATUS    ONLINE   INSTANCES   ROUTERS   AGE
elisacluster   PENDING   0        3           1         12s
elisacluster   PENDING   0        3           1         103s
elisacluster   INITIALIZING   0        3           1         103s
elisacluster   INITIALIZING   0        3           1         103s
elisacluster   INITIALIZING   0        3           1         103s
elisacluster   INITIALIZING   0        3           1         104s
elisacluster   INITIALIZING   0        3           1         106s
elisacluster   ONLINE         1        3           1         107s
elisa@cloudshell:~ (eu-zurich-1)$ kubectl get all -n mysql-cluster
NAME                                       READY   STATUS    RESTARTS   AGE
pod/elisacluster-0                         2/2     Running   0          4h44m
pod/elisacluster-1                         2/2     Running   0          4h42m
pod/elisacluster-2                         2/2     Running   0          4h41m
pod/elisacluster-router-7686457f5f-hwfcv   1/1     Running   0          4h42m

NAME                             TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)                               AGE
service/elisacluster             ClusterIP   10.96.9.203   &lt;none&gt;        6446/TCP,6448/TCP,6447/TCP,6449/TCP   4h44m
service/elisacluster-instances   ClusterIP   None          &lt;none&gt;        3306/TCP,33060/TCP,33061/TCP          4h44m

NAME                                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/elisacluster-router   1/1     1            1           4h44m

NAME                                             DESIRED   CURRENT   READY   AGE
replicaset.apps/elisacluster-router-7686457f5f   1         1         1       4h44m

NAME                            READY   AGE
statefulset.apps/elisacluster   3/3     4h44m
</pre>
<p>You can use port forwarding in the following way:</p>
<pre class="brush: sql; gutter: true; first-line: 1; highlight: [1]">
elisa@cloudshell:~ (eu-zurich-1)$ kubectl port-forward service/elisacluster mysql --namespace=mysql-cluster
Forwarding from 127.0.0.1:6446 -&gt; 6446
</pre>
<p>to access your MySQL InnoDB Cluster on a second terminal in order to check its health: </p>
<pre class="brush: sql; gutter: true; first-line: 1; highlight: [1,16,65,67,74]">
elisa@cloudshell:~ (eu-zurich-1)$ mysqlsh -h127.0.0.1 -P6446 -uroot -p
Please provide the password for 'root@127.0.0.1:6446': *******
Save password for 'root@127.0.0.1:6446'? [Y]es/[N]o/Ne[v]er (default No): N
MySQL Shell 8.0.28-commercial

Copyright (c) 2016, 2022, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type 'help' or '?' for help; 'quit' to exit.
Creating a session to 'root@127.0.0.1:6446'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 36651
Server version: 8.0.28 MySQL Community Server - GPL
No default schema selected; type \use &lt;schema&gt; to set one.
 MySQL  127.0.0.1:6446 ssl  JS &gt; dba.getCluster().status();
{
    "clusterName": "elisacluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "elisacluster-0.elisacluster-instances.mysql-cluster.svc.cluster.local:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "elisacluster-0.elisacluster-instances.mysql-cluster.svc.cluster.local:3306": {
                "address": "elisacluster-0.elisacluster-instances.mysql-cluster.svc.cluster.local:3306", 
                "memberRole": "PRIMARY", 
                "memberState": "(MISSING)", 
                "mode": "n/a", 
                "readReplicas": {}, 
                "role": "HA", 
                "shellConnectError": "MySQL Error 2005: Could not open connection to 'elisacluster-0.elisacluster-instances.mysql-cluster.svc.cluster.local:3306': Unknown MySQL server host 'elisacluster-0.elisacluster-instances.mysql-cluster.svc.cluster.local' (-2)", 
                "status": "ONLINE", 
                "version": "8.0.28"
            }, 
            "elisacluster-1.elisacluster-instances.mysql-cluster.svc.cluster.local:3306": {
                "address": "elisacluster-1.elisacluster-instances.mysql-cluster.svc.cluster.local:3306", 
                "memberRole": "SECONDARY", 
                "memberState": "(MISSING)", 
                "mode": "n/a", 
                "readReplicas": {}, 
                "role": "HA", 
                "shellConnectError": "MySQL Error 2005: Could not open connection to 'elisacluster-1.elisacluster-instances.mysql-cluster.svc.cluster.local:3306': Unknown MySQL server host 'elisacluster-1.elisacluster-instances.mysql-cluster.svc.cluster.local' (-2)", 
                "status": "ONLINE", 
                "version": "8.0.28"
            }, 
            "elisacluster-2.elisacluster-instances.mysql-cluster.svc.cluster.local:3306": {
                "address": "elisacluster-2.elisacluster-instances.mysql-cluster.svc.cluster.local:3306", 
                "memberRole": "SECONDARY", 
                "memberState": "(MISSING)", 
                "mode": "n/a", 
                "readReplicas": {}, 
                "role": "HA", 
                "shellConnectError": "MySQL Error 2005: Could not open connection to 'elisacluster-2.elisacluster-instances.mysql-cluster.svc.cluster.local:3306': Unknown MySQL server host 'elisacluster-2.elisacluster-instances.mysql-cluster.svc.cluster.local' (-2)", 
                "status": "ONLINE", 
                "version": "8.0.28"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "elisacluster-0.elisacluster-instances.mysql-cluster.svc.cluster.local:3306"
}

 MySQL  127.0.0.1:6446 ssl  JS &gt; sql
Switching to SQL mode... Commands end with ;
 MySQL  127.0.0.1:6446 ssl  SQL &gt; select @@hostname;
+----------------+
| @@hostname     |
+----------------+
| elisacluster-0 |
+----------------+
1 row in set (0.0018 sec)
 MySQL  127.0.0.1:6446 ssl  SQL &gt; SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-----------------------------------------------------------------------+-------------+--------------+-------------+----------------+----------------------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST                                                           | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION | MEMBER_COMMUNICATION_STACK |
+---------------------------+--------------------------------------+-----------------------------------------------------------------------+-------------+--------------+-------------+----------------+----------------------------+
| group_replication_applier | 717dbe17-ba71-11ec-8a91-3665daa9c822 | elisacluster-0.elisacluster-instances.mysql-cluster.svc.cluster.local |        3306 | ONLINE       | PRIMARY     | 8.0.28         | XCom                       |
| group_replication_applier | b02c3c9a-ba71-11ec-8b65-5a93db09dda5 | elisacluster-1.elisacluster-instances.mysql-cluster.svc.cluster.local |        3306 | ONLINE       | SECONDARY   | 8.0.28         | XCom                       |
| group_replication_applier | eb06aadd-ba71-11ec-8aac-aa31e5d7e08b | elisacluster-2.elisacluster-instances.mysql-cluster.svc.cluster.local |        3306 | ONLINE       | SECONDARY   | 8.0.28         | XCom                       |
+---------------------------+--------------------------------------+-----------------------------------------------------------------------+-------------+--------------+-------------+----------------+----------------------------+
3 rows in set (0.0036 sec)
</pre>
<p>Easy, right?<br />
Yes, but databases in containers are still a tricky subject. As we said above, many topics need to be addressed: deployment type, performance, backups, storage and persistence, &#8230; So stay tuned, more blog posts about MySQL on K8s will come soon&#8230; </p>
<p>By <a href="https://www.linkedin.com/in/elisausai/" target="_blank" rel="noopener">Elisa Usai</a></p>
<p>L’article <a href="https://www.dbi-services.com/blog/installing-mysql-innodb-cluster-in-oke-using-a-mysql-operator/">Installing MySQL InnoDB Cluster in OKE using a MySQL Operator</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/installing-mysql-innodb-cluster-in-oke-using-a-mysql-operator/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>What I really like about Percona PMM</title>
		<link>https://www.dbi-services.com/blog/what-i-really-like-about-percona-pmm/</link>
					<comments>https://www.dbi-services.com/blog/what-i-really-like-about-percona-pmm/#respond</comments>
		
		<dc:creator><![CDATA[Elisa Usai]]></dc:creator>
		<pubDate>Fri, 14 Jan 2022 06:07:33 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Database Administration & Monitoring]]></category>
		<category><![CDATA[MariaDB]]></category>
		<category><![CDATA[MySQL]]></category>
		<category><![CDATA[Oracle]]></category>
		<category><![CDATA[PostgreSQL]]></category>
		<category><![CDATA[SQL Server]]></category>
		<category><![CDATA[Microsoft SQL Server]]></category>
		<category><![CDATA[Monitoring]]></category>
		<category><![CDATA[Percona]]></category>
		<category><![CDATA[PMM]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/2022/01/14/what-i-really-like-about-percona-pmm/</guid>

					<description><![CDATA[<p>Percona Monitoring and Management tool (PMM) is an Open Source product which was developed to help DBAs and developers to monitor and manage MySQL, PostgreSQL and MongoDB performances. In this blog post, we will see that we can do much more with it! I discovered this tool 2 years ago when I started a monitoring [&#8230;]</p>
<p>L’article <a href="https://www.dbi-services.com/blog/what-i-really-like-about-percona-pmm/">What I really like about Percona PMM</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
					<content:encoded><![CDATA[<p>Percona Monitoring and Management tool (<a href="https://www.percona.com/doc/percona-monitoring-and-management/2.x/index.html" target="_blank" rel="noopener">PMM</a>) is an Open Source product which was developed to help DBAs and developers monitor and manage MySQL, PostgreSQL and MongoDB performance. In this blog post, we will see that we can do much more with it!<br />
I discovered this tool 2 years ago when I started a monitoring study for a customer, and ever since, I&#8217;ve been in love with it. I will explain why.</p>
<p><img loading="lazy" decoding="async" class="alignnone wp-image-53646" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/01/PMM-1.png" alt="" width="200" height="133" /><br />
<span id="more-519"></span></p>
<h2>It&#8217;s Open Source!</h2>
<h3>Even if money is not everything&#8230;</h3>
<p><a href="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/01/Money-5.jpg"><img loading="lazy" decoding="async" class="alignnone size-medium wp-image-53574" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/01/Money-5.jpg" alt="" width="300" height="300" /></a><br />
PMM is completely Open Source, and this is not surprising coming from <a href="https://www.percona.com/" target="_blank" rel="noopener">Percona</a>. But as we know, the advantage of Open Source is not only the fact of being free&#8230;</p>
<h3>Let&#8217;s contribute</h3>
<p><a href="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/01/Contribution-1.jpg"><img loading="lazy" decoding="async" class="alignnone size-medium wp-image-53575" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/01/Contribution-1.jpg" alt="" width="300" height="300" /></a><br />
One of the keys of Open Source is contribution. When I started using PMM, I sometimes could not find all the information and details I needed in the <a href="https://www.percona.com/doc/percona-monitoring-and-management/2.x/index.html" target="_blank" rel="noopener">official documentation</a>. But whenever I discovered a bug, had a question, or wanted a new feature implemented, a whole community was there to support me and share knowledge in an impressively short time:<br />
&#8211; A <a href="https://jira.percona.com/projects/PMM/issues/" target="_blank" rel="noopener">JIRA</a> issue tracker to submit a bug report.<br />
&#8211; A very useful <a href="https://www.percona.com/blog/" target="_blank" rel="noopener">blog</a> with plenty of posts about new features, technical step-by-step deployment information, tips and much more. And obviously the possibility to write to the author.<br />
&#8211; A <a href="https://forums.percona.com/c/percona-monitoring-and-management-pmm/percona-monitoring-and-management-pmm-v2" target="_blank" rel="noopener">Percona Community Forum</a> to exchange with Percona experts.</p>
<h3>Evolution</h3>
<p><a href="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/01/Improvements.jpg"><img loading="lazy" decoding="async" class="alignnone size-medium wp-image-53589" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/01/Improvements.jpg" alt="" width="300" height="300" /></a><br />
Open Source means also quick evolution of the product. PMM is getting better every month, with continuous improvements, new features, corrections of bugs and so on.</p>
<h2>One tool to monitor several technologies</h2>
<p><a href="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/01/Techs_SQUARE.jpg"><img loading="lazy" decoding="async" class="alignnone size-medium wp-image-53577" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/01/Techs_SQUARE.jpg" alt="" width="300" height="300" /></a><br />
If we speak about monitoring solutions, what makes the difference nowadays? Except in rare cases, each of us works with a mix of different database solutions: Open Source vs proprietary, On-Premises vs Cloud. And when we want to identify performance issues in our application, we also need system benchmarks, replication metrics, etc&#8230; PMM is a platform that we can use to centralize our monitoring data, because:<br />
&#8211; Exporters to collect metrics for <a href="https://www.mysql.com/" target="_blank" rel="noopener">MySQL</a>, <a href="https://mariadb.com/" target="_blank" rel="noopener">MariaDB</a>, <a href="https://www.postgresql.org/" target="_blank" rel="noopener">PostgreSQL</a> and <a href="https://www.mongodb.com" target="_blank" rel="noopener">MongoDB</a> instances and their replication, for <a href="https://proxysql.com/" target="_blank" rel="noopener">ProxySQL</a> and <a href="http://www.haproxy.org/" target="_blank" rel="noopener">HAProxy</a>, and for instances hosted on <a href="https://aws.amazon.com/rds/" target="_blank" rel="noopener">Amazon RDS</a> or on <a href="https://cloud.google.com/" target="_blank" rel="noopener">Google Cloud Platform</a> are embedded in the PMM tool, together with pre-configured dashboards, so the monitoring can easily be put in place.<br />
&#8211; Even proprietary technologies such as <a href="https://www.oracle.com/index.html" target="_blank" rel="noopener">Oracle</a> and <a href="https://www.microsoft.com/en-us/sql-server/" target="_blank" rel="noopener">SQL Server</a> can be integrated into PMM through external exporters. In this case we can either develop our own exporter and design our own dashboards, or use what has already been developed and designed by other contributors. <a href="https://prometheus.io/docs/instrumenting/exporters/" target="_blank" rel="noopener">Here</a> you can find links to some third-party exporters.</p>
<h2>Less maintenance tasks</h2>
<p><a href="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/01/Maintenance.jpg"><img loading="lazy" decoding="async" class="alignnone size-medium wp-image-53581" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/01/Maintenance.jpg" alt="" width="300" height="300" /></a><br />
The PMM platform is based on a client-server model. The 2 main components on the server side are the following:<br />
&#8211; <a href="https://victoriametrics.com/" target="_blank" rel="noopener">VictoriaMetrics</a>, a time series database (replacing <a href="https://prometheus.io/" target="_blank" rel="noopener">Prometheus</a> since PMM version 2.12.0) that aggregates the metrics collected by the exporters<br />
&#8211; <a href="https://grafana.com" target="_blank" rel="noopener">Grafana</a>, which visualizes the aggregated data in a web interface.<br />
But actually, we don&#8217;t have to care about that! We don&#8217;t need to install Prometheus or VictoriaMetrics, install Grafana and then configure them to talk to each other. We can just see the PMM platform as a black box: we will have only one tool to install and maintain.</p>
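<p>To give an idea of how little there is to install, here is a sketch of the Docker-based server deployment described in Percona&#8217;s documentation; the image tag, the published port and the placeholder address pmm.example.com are assumptions to adapt to your environment:</p>

```shell
# Create a persistent data container for the PMM server (one-time step)
docker create --volume /srv --name pmm-data percona/pmm-server:2 /bin/true

# Run the PMM server itself: web UI on https://localhost:443 (default admin/admin)
docker run --detach --restart always --publish 443:443 \
  --volumes-from pmm-data --name pmm-server percona/pmm-server:2

# On each monitored host, register the pmm2-client against the server
# (pmm.example.com is a placeholder for your PMM server address)
pmm-admin config --server-insecure-tls \
  --server-url=https://admin:admin@pmm.example.com:443
```

<p>After that, adding a database instance to the monitoring is a single pmm-admin add command per service.</p>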
<h2>Sexy dashboards</h2>
<p>Integrated dashboards are designed with accuracy and provide a detailed temporal analysis of our data.<br />
Here are my favorite ones:<br />
&#8211; <strong>Node Summary</strong><br />
It&#8217;s the system dashboard. Here we can visualize metrics such as CPU, memory, disk usage and performance, processes and network traffic, and get some information about our system architecture:<br />
<a href="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/01/screencapture-192-168-193-101-graph-d-node-instance-summary-node-summary-2022-01-13-19_48_56.png"><img loading="lazy" decoding="async" class="alignnone size-medium wp-image-53586" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/01/screencapture-192-168-193-101-graph-d-node-instance-summary-node-summary-2022-01-13-19_48_56.png" alt="" width="163" height="300" /></a><br />
We have also the possibility to compare several nodes:<br />
<a href="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/01/screencapture-192-168-193-101-graph-d-node-instance-compare-nodes-compare-2022-01-13-19_50_04.png"><img loading="lazy" decoding="async" class="alignnone size-medium wp-image-53587" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/01/screencapture-192-168-193-101-graph-d-node-instance-compare-nodes-compare-2022-01-13-19_50_04.png" alt="" width="131" height="300" /></a><br />
&#8211; <strong>MySQL Summary</strong><br />
It&#8217;s one of the MySQL dashboards which displays general metrics about our instance (uptime, version, InnoDB Buffer Pool size, connections, threads, table locks, traffic, and much more):<br />
<a href="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/01/screencapture-192-168-193-101-graph-d-mysql-instance-summary-mysql-instance-summary-2022-01-13-20_05_36.png"><img loading="lazy" decoding="async" class="alignnone size-medium wp-image-53588" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/01/screencapture-192-168-193-101-graph-d-mysql-instance-summary-mysql-instance-summary-2022-01-13-20_05_36.png" alt="" width="128" height="300" /></a><br />
And we still have the option to compare different instances.<br />
&#8211; <strong>MySQL Replication</strong><br />
It&#8217;s the dashboard which gives us an overview of the MySQL Master-Slave Replication:<br />
<a href="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/01/MySQL-Replication.png"><img loading="lazy" decoding="async" class="alignnone size-medium wp-image-53591" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/01/MySQL-Replication.png" alt="" width="300" height="144" /></a><br />
&#8211; <strong>Query Analytics</strong><br />
QAN is a special dashboard that allows us to analyze MySQL and PostgreSQL databases queries over time:<br />
<a href="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/01/screencapture-192-168-193-101-graph-d-pmm-qan-pmm-query-analytics-2022-01-13-19_54_41.png"><img loading="lazy" decoding="async" class="alignnone size-full wp-image-53592" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/01/screencapture-192-168-193-101-graph-d-pmm-qan-pmm-query-analytics-2022-01-13-19_54_41.png" alt="" width="1920" height="1542" /></a><br />
This can be very useful to identify performance problems.</p>
<h2>Coming soon</h2>
<p><a href="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/01/Work-in-progress-1.jpg"><img loading="lazy" decoding="async" class="alignnone size-medium wp-image-53585" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/01/Work-in-progress-1.jpg" alt="" width="300" height="300" /></a><br />
Currently the latest stable PMM release is 2.25.0. As we said just above, the tool is evolving a lot, and nice features are already available in technical preview:<br />
<a href="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/01/New-features.png"><img loading="lazy" decoding="async" class="alignnone size-medium wp-image-53596" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/01/New-features.png" alt="" width="300" height="80" /></a><br />
You can already play with them, even if for the moment it&#8217;s not recommended to use them in production environments. I tested for example the Integrated Alerting, and I can&#8217;t wait to have it available as a stable feature (then we will no longer need to configure an external Prometheus Alertmanager to be alerted when something goes wrong on our systems).</p>
<p>I hope you have enjoyed this introduction to PMM.<br />
Now it&#8217;s up to you: don&#8217;t hesitate to test it! For my part, I will surely write again about PMM, so stay tuned! <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>L’article <a href="https://www.dbi-services.com/blog/what-i-really-like-about-percona-pmm/">What I really like about Percona PMM</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/what-i-really-like-about-percona-pmm/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Foreign Keys in MySQL, SQL, NoSQL, NewSQL</title>
		<link>https://www.dbi-services.com/blog/foreign-keys-in-mysql-nosql-newsql/</link>
					<comments>https://www.dbi-services.com/blog/foreign-keys-in-mysql-nosql-newsql/#respond</comments>
		
		<dc:creator><![CDATA[Oracle Team]]></dc:creator>
		<pubDate>Thu, 18 Mar 2021 19:53:12 +0000</pubDate>
				<category><![CDATA[MySQL]]></category>
		<category><![CDATA[Oracle]]></category>
		<category><![CDATA[PostgreSQL]]></category>
		<category><![CDATA[Foreign key]]></category>
		<category><![CDATA[YugaByteDB]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/foreign-keys-in-mysql-nosql-newsql/</guid>

					<description><![CDATA[<p>By Franck Pachot . In the NoSQL times, it was common to hear things like &#8220;SQL is bad&#8221;, &#8220;joins are bad&#8221;, &#8220;foreign keys are bad&#8221;. Just because people didn&#8217;t know how to use them, or they were running on a database system with a poor implementation of it. MySQL was very popular because easy to [&#8230;]</p>
<p>L’article <a href="https://www.dbi-services.com/blog/foreign-keys-in-mysql-nosql-newsql/">Foreign Keys in MySQL, SQL, NoSQL, NewSQL</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h2>By Franck Pachot</h2>
<p>In the NoSQL times, it was common to hear things like &#8220;SQL is bad&#8221;, &#8220;joins are bad&#8221;, &#8220;foreign keys are bad&#8221;. This was just because people didn&#8217;t know how to use them, or were running on a database system with a poor implementation of them. MySQL was very popular because it was easy to install, but it lacked many optimization features that you find in other open source or commercial databases. Sometimes I even wonder if this NoSQL thing was not just a NoMySQL at its roots, born when people encountered the limitations of MySQL and thought that it was SQL that was limited.</p>
<p>The following Twitter thread, and the linked articles, mention how DML on a child table can be blocked by DML on the parent. This is not a problem in some situations (when the parent-child relationship is a composition where you work on the whole within the same transaction) but it can be a problem when the parent is shared by many unrelated transactions.</p>
<blockquote class="twitter-tweet" data-width="500" data-dnt="true">
<p lang="en" dir="ltr">Thoughts on foreign keys? The comment from <a href="https://twitter.com/ShlomiNoach?ref_src=twsrc%5Etfw">@ShlomiNoach</a> goes over some interesting points on why foreign keys should not be used:  <a href="https://t.co/b6XG1oWRmb">https://t.co/b6XG1oWRmb</a></p>
<p>&mdash; Fatih Arslan (@fatih) <a href="https://twitter.com/fatih/status/1371837422036729873?ref_src=twsrc%5Etfw">March 16, 2021</a></p></blockquote>
<p><script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></p>
<h3>Sample Data</h3>
<p>Surprised, because I had never seen that even in the earliest versions of Oracle, I had to test it, especially since MySQL and the InnoDB engine have evolved a lot since version 5. I&#8217;ll use Gerald Venzl&#8217;s sample data on countries and cities: <a href="https://github.com/gvenzl/sample-data/tree/master/countries-cities-currencies" target="_blank" rel="noopener">https://github.com/gvenzl/sample-data/tree/master/countries-cities-currencies</a>, because he provides a SQL script that works on all databases without any changes.</p>
<h3>MySQL 8.0.23</h3>
<pre><code>
{
echo "create database if not exists countries;"
echo "use countries;"
curl -s https://raw.githubusercontent.com/gvenzl/sample-data/master/countries-cities-currencies/install.sql
} | mysql</code></pre>
<p>This creates the tables and data for countries and cities. There is a foreign key from cities that references countries.</p>
<h4>Session 1: update the parent row</h4>
<pre><code>
use countries;
begin;
select * from countries where country_code='CH';
update countries set population=population+1 where country_code='CH';
</code></pre>
<p>I&#8217;ve updated some info in countries, leaving the transaction open.</p>
<pre><code>
mysql&gt; begin;
Query OK, 0 rows affected (0.00 sec)

mysql&gt; select * from countries where country_code='CH';
+------------+--------------+-------------+---------------------+------------+------------+----------+-----------+---------------+-----------+
| country_id | country_code | name        | official_name       | population | area_sq_km | latitude | longitude | timezone      | region_id |
+------------+--------------+-------------+---------------------+------------+------------+----------+-----------+---------------+-----------+
| CHE        | CH           | Switzerland | Swiss Confederation |    8293000 |   41277.00 | 47.00016 |   8.01427 | Europe/Zurich | EU        |
+------------+--------------+-------------+---------------------+------------+------------+----------+-----------+---------------+-----------+
1 row in set (0.00 sec)

mysql&gt; update countries set population=population+1 where country_code='CH';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

mysql&gt; select * from countries where country_code='CH';
+------------+--------------+-------------+---------------------+------------+------------+----------+-----------+---------------+-----------+
| country_id | country_code | name        | official_name       | population | area_sq_km | latitude | longitude | timezone      | region_id |
+------------+--------------+-------------+---------------------+------------+------------+----------+-----------+---------------+-----------+
| CHE        | CH           | Switzerland | Swiss Confederation |    8293001 |   41277.00 | 47.00016 |   8.01427 | Europe/Zurich | EU        |
+------------+--------------+-------------+---------------------+------------+------------+----------+-----------+---------------+-----------+
1 row in set (0.00 sec)

</code></pre>
<p>With this transaction still ongoing, I&#8217;ll insert, from another session, a new child row referencing this parent value: another city in Switzerland.</p>
<h4>Session 2: insert a child</h4>
<pre><code>
use countries;
begin;
select * from cities where country_id='CHE';
insert into cities values ('CHE1170', 'Aubonne', null, 2750, 'N', 46.49514, 6.39155, null, 'CHE');
</code></pre>
<p>This blocks for a while&#8230; innodb_lock_wait_timeout defaults to 50 seconds.</p>
<p>Here is the output:</p>
<pre><code>
mysql&gt; begin;
Query OK, 0 rows affected (0.00 sec)

mysql&gt; select * from cities where country_id='CHE';
+---------+------+---------------+------------+------------+----------+-----------+----------+------------+
| city_id | name | official_name | population | is_capital | latitude | longitude | timezone | country_id |
+---------+------+---------------+------------+------------+----------+-----------+----------+------------+
| CHE0001 | Bern | NULL          |     422000 | Y          | 46.94809 |   7.44744 | NULL     | CHE        |
+---------+------+---------------+------------+------------+----------+-----------+----------+------------+
1 row in set (0.00 sec)

mysql&gt; insert into cities values ('CHE1170', 'Aubonne', null, 2750, 'N', 46.49514, 6.39155, null, 'CHE');

ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction
mysql&gt;
</code></pre>
<p>This insert tries to lock the parent row in share mode, and because that row is currently being updated by session 1, the lock cannot be acquired. This is a very simple case of contention that can happen in many situations, and it is still there in MySQL 8.</p>
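<p>While session 2 is waiting, this lock conflict can be observed from a third session. A sketch, assuming MySQL 8.0 where the performance_schema.data_locks table exposes the InnoDB locks:</p>

```sql
-- Run from a third session while the insert is blocked (MySQL 8.0):
SELECT engine_transaction_id, object_name, index_name,
       lock_type, lock_mode, lock_status, lock_data
FROM performance_schema.data_locks
WHERE object_name IN ('countries', 'cities');
-- The blocked insert should appear as a RECORD lock in S mode with
-- lock_status = 'WAITING' on the parent row of countries, while
-- session 1 holds the conflicting X lock on the same record.
```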
<h3>PostgreSQL</h3>
<pre><code>
curl -s https://raw.githubusercontent.com/gvenzl/sample-data/master/countries-cities-currencies/install.sql | psql
</code></pre>
<p>This creates the tables and data for countries and cities. There is a foreign key from cities that references countries.</p>
<h4>Session 1: update the parent row</h4>
<pre><code>
begin transaction;
select * from countries where country_code='CH';
update countries set population=population+1 where country_code='CH';
</code></pre>
<p>I&#8217;ve updated some info in countries, leaving the transaction open.</p>
<pre><code>
postgres=# begin transaction;
BEGIN
postgres=# select * from countries where country_code='CH';
 country_id | country_code |    name     |    official_name    | population | area_sq_km | latitude | longitude |   timezone    | region_id
------------+--------------+-------------+---------------------+------------+------------+----------+-----------+---------------+-----------
 CHE        | CH           | Switzerland | Swiss Confederation |    8293000 |   41277.00 | 47.00016 |   8.01427 | Europe/Zurich | EU
(1 row)

postgres=# update countries set population=population+1 where country_code='CH';
UPDATE 1
postgres=# select * from countries where country_code='CH';
 country_id | country_code |    name     |    official_name    | population | area_sq_km | latitude | longitude |   timezone    | region_id
------------+--------------+-------------+---------------------+------------+------------+----------+-----------+---------------+-----------
 CHE        | CH           | Switzerland | Swiss Confederation |    8293001 |   41277.00 | 47.00016 |   8.01427 | Europe/Zurich | EU
(1 row)
</code></pre>
<p>With this transaction still ongoing, I&#8217;ll insert, from another session, a new child row referencing this parent value: another city in Switzerland.</p>
<h4>Session 2: insert a child</h4>
<pre><code>
begin transaction;
select * from cities where country_id='CHE';
insert into cities values ('CHE1170', 'Aubonne', null, 2750, 'N', 46.49514, 6.39155, null, 'CHE');
commit;
</code></pre>
<p>I am able to commit my transaction without any locking problem.</p>
<p>Here is the output:</p>
<pre><code>
postgres=# begin;
BEGIN
postgres=# select * from cities where country_id='CHE';
 city_id | name | official_name | population | is_capital | latitude | longitude | timezone | country_id
---------+------+---------------+------------+------------+----------+-----------+----------+------------
 CHE0001 | Bern |               |     422000 | Y          | 46.94809 |   7.44744 |          | CHE
(1 row)

postgres=# insert into cities values ('CHE1170', 'Aubonne', null, 2750, 'N', 46.49514, 6.39155, null, 'CHE');
INSERT 0 1
postgres=# commit;
COMMIT
</code></pre>
<p>The transaction committed without waiting on the first session.</p>
<h4>Back to session 1</h4>
<pre><code>
postgres=# select * from countries where country_code='CH';
 country_id | country_code |    name     |    official_name    | population | area_sq_km | latitude | longitude |   timezone    | region_id
------------+--------------+-------------+---------------------+------------+------------+----------+-----------+---------------+-----------
 CHE        | CH           | Switzerland | Swiss Confederation |    8293001 |   41277.00 | 47.00016 |   8.01427 | Europe/Zurich | EU
(1 row)

postgres=# select * from cities where country_id='CHE';
 city_id |  name   | official_name | population | is_capital | latitude | longitude | timezone | country_id
---------+---------+---------------+------------+------------+----------+-----------+----------+------------
 CHE0001 | Bern    |               |     422000 | Y          | 46.94809 |   7.44744 |          | CHE
 CHE1170 | Aubonne |               |       2750 | N          | 46.49514 |   6.39155 |          | CHE
(2 rows)

postgres=# commit;
COMMIT
</code></pre>
<p>In the first session, I immediately see the changes because I&#8217;m in the default READ COMMITTED isolation level. In SERIALIZABLE, I would have seen the new row only after completing my transaction, which started before this concurrent insert.</p>
<h3>Oracle</h3>
<p>Of course, Oracle can also run those transactions without any lock wait. No row-level locks are involved in those transactions except on the rows each transaction modifies. As long as the referenced key (here, the primary key of countries) is not updated, there are no locks on the other table. Only if the key were updated would the index entry on the foreign key be &#8220;locked&#8221;, to avoid inserting a row that references a parent about to be removed.</p>
<p>I&#8217;ll not paste a demo here, as many are available in this post: <a href="https://franckpachot.medium.com/oracle-table-lock-modes-83346ccf6a41" target="_blank" rel="noopener">https://franckpachot.medium.com/oracle-table-lock-modes-83346ccf6a41</a>. Oracle is very similar to PostgreSQL here. If you want to know the difference between Oracle and PostgreSQL regarding foreign keys, I&#8217;ve written about that on the CERN blog: <a href="https://db-blog.web.cern.ch/blog/franck-pachot/2018-09-unindexed-foreign-keys-oracle-and-postgresql" target="_blank" rel="noopener">https://db-blog.web.cern.ch/blog/franck-pachot/2018-09-unindexed-foreign-keys-oracle-and-postgresql</a></p>
<h3>NoSQL</h3>
<p>In NoSQL databases, you don&#8217;t lock anything and you accept inconsistencies. Actually, everything depends on your data model. In a document store, you may keep one &#8220;country&#8221; item holding the list of its cities. Key: country id. Value: JSON with the country attributes and the list of cities with their attributes. Here you lock the same way MySQL does: nobody touches the item while another session is modifying it. Or you can store them as multiple items (the &#8220;single table model&#8221; in DynamoDB, for example), and then people can modify a country and its cities concurrently. They don&#8217;t lock each other, but of course you may eventually live in a city that belongs to no country&#8230; This is the CAP theorem trade-off: scalability vs. consistency. You make this choice in your data model: either you cluster all items together in a single document, or you shard them within the datastore.</p>
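<p>The two modeling choices can be sketched with plain Python dicts (illustrative names only, not a real document-store or DynamoDB API):</p>

```python
# Choice 1: one document per country -> concurrent updates to the country
# and to its cities contend on the same item, as in the MySQL case.
document_store = {
    "CHE": {"name": "Switzerland", "population": 8293000,
            "cities": [{"city_id": "CHE0001", "name": "Bern"}]},
}

# Choice 2: "single table model" -> the country and each city are separate
# items sharing a partition key, so they can be written independently.
single_table = {
    ("CHE", "COUNTRY"): {"name": "Switzerland", "population": 8293000},
    ("CHE", "CITY#CHE0001"): {"name": "Bern"},
}
single_table[("CHE", "COUNTRY")]["population"] += 1          # writer 1
single_table[("CHE", "CITY#CHE1170")] = {"name": "Aubonne"}  # writer 2

# ...but nothing enforces referential integrity: delete the country item
# and the city items silently become orphans.
del single_table[("CHE", "COUNTRY")]
orphans = [key for key in single_table if key[1].startswith("CITY#")]
print(len(orphans))  # 2 orphaned city items
```

<p>The second layout scales writes, but the orphan check at the end is exactly the consistency work the database no longer does for you.</p>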
<h3>YugaByteDB</h3>
<p>What about distributed databases? Having a foreign key referencing a parent on another node is a bit more complex, because a compromise must be defined: wait across network latency, or raise an exception and let the application retry the operation. YugaByteDB is a NewSQL database which aims at full consistency, SQL and ACID, with the maximum scalability and availability possible.</p>
<p>Note that this is a young database, with still a lot of work in progress in this area, all documented: <a href="https://docs.yugabyte.com/latest/architecture/transactions/explicit-locking" target="_blank" rel="noopener">https://docs.yugabyte.com/latest/architecture/transactions/explicit-locking</a>, and implemented as users ask for it. So, if you read this several weeks after the publishing date&#8230; re-run the example and you may have good surprises.</p>
<p>I have a 3-node YugaByteDB cluster spread over 3 regions (I have a few Oracle Free Tier tenants, with free VMs always up).</p>
<pre><code>{
curl -s https://raw.githubusercontent.com/gvenzl/sample-data/master/countries-cities-currencies/install.sql
} | /home/opc/yugabyte-2.5.2.0/bin/ysqlsh -h localhost -U yugabyte -d yugabyte -e</code></pre>
<p>This has created and populated the table, auto-sharded into my 3 nodes:</p>
<p><img loading="lazy" decoding="async" class="aligncenter size-full wp-image-48564" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/04/Screenshot-2021-03-17-220033-scaled-1.jpg" alt="" width="2560" height="1445" /><br />
The shards are called &#8220;tablets&#8221; here (they can additionally be replicated for HA and DR, but my replication factor is 1 here) and you can see that those PostgreSQL tables (there are multiple APIs in this YugaByte database) have tablet leaders on every node (with a higher replication factor they would also have followers).</p>
<pre><code>

[opc@yb-fra-1 ~]$ /home/opc/yugabyte-2.5.2.0/bin/ysqlsh
ysqlsh (11.2-YB-2.5.2.0-b0)
Type "help" for help.

yugabyte=# begin transaction;
BEGIN

yugabyte=# select * from countries where country_code='CH';

 country_id | country_code |    name     |    official_name    | population | area_sq_km | latitude | longitude |   timezone    | region_id
------------+--------------+-------------+---------------------+------------+------------+----------+-----------+---------------+-----------
 CHE        | CH           | Switzerland | Swiss Confederation |    8293000 |   41277.00 | 47.00016 |   8.01427 | Europe/Zurich | EU
(1 row)

yugabyte=# update countries set population=population+1 where country_code='CH';
UPDATE 1

yugabyte=#
</code></pre>
<p>I have run the same scenario as before: in one session, I update the parent table.</p>
<pre><code>
[opc@yb-fra-3 ~]$ /home/opc/yugabyte-2.5.2.0/bin/ysqlsh
ysqlsh (11.2-YB-2.5.2.0-b0)
Type "help" for help.

yugabyte=# begin transaction;
BEGIN

yugabyte=# select * from cities where country_id='CHE';

 city_id | name | official_name | population | is_capital | latitude | longitude | timezone | country_id
---------+------+---------------+------------+------------+----------+-----------+----------+------------
 CHE0001 | Bern |               |     422000 | Y          | 46.94809 |   7.44744 |          | CHE
(1 row)

yugabyte=# insert into cities values ('CHE1170', 'Aubonne', null, 2750, 'N', 46.49514, 6.39155, null, 'CHE');

ERROR:  Operation failed. Try again.: d993bd6d-2f76-40ea-9475-84d09e5e0438 Conflicts with higher priority transaction: efd1aaaf-12f9-4819-9e0c-41ee9c6cfb7b

yugabyte=# commit;
ROLLBACK
</code></pre>
<p>When inserting a new child, I got a failure because optimistic locking is used here: sessions don&#8217;t wait, but serialization conflicts force one of the transactions to be canceled.</p>
<p>Note that either session&#8217;s transaction could be the one canceled. Running the same example again, I was able to insert and commit a child:</p>
<pre><code>
yugabyte=# insert into cities values ('CHE1170', 'Aubonne', null, 2750, 'N', 46.49514, 6.39155, null, 'CHE');
INSERT 0 1

yugabyte=# commit;
COMMIT
</code></pre>
<p>This is successful but now the session 1 transaction is in conflict.</p>
<pre><code>
select * from countries where country_code='CH';

 country_id | country_code |    name     |    official_name    | population | area_sq_km | latitude | longitude |   timezone    | region_id
------------+--------------+-------------+---------------------+------------+------------+----------+-----------+---------------+-----------
 CHE        | CH           | Switzerland | Swiss Confederation |    8293001 |   41277.00 | 47.00016 |   8.01427 | Europe/Zurich | EU
(1 row)

yugabyte=# commit;

ERROR:  Operation expired: Transaction expired or aborted by a conflict: 40001

yugabyte=# select * from countries where country_code='CH';

 country_id | country_code |    name     |    official_name    | population | area_sq_km | latitude | longitude |   timezone    | region_id
------------+--------------+-------------+---------------------+------------+------------+----------+-----------+---------------+-----------
 CHE        | CH           | Switzerland | Swiss Confederation |    8293000 |   41277.00 | 47.00016 |   8.01427 | Europe/Zurich | EU
(1 row)
</code></pre>
<p>The transaction started in session 1 cannot be completed and must be re-tried.</p>
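<p>In application code, such conflicts (SQLSTATE 40001) are typically handled with a retry loop. A minimal sketch, where <code>run_transaction</code> is a hypothetical stand-in for the real database call (simulated here to conflict once, then succeed):</p>

```python
# Sketch of a retry loop for serialization failures; nothing here is a
# real YugaByteDB API, the conflict is simulated.
class SerializationFailure(Exception):
    pass

def run_transaction(attempt):
    if attempt == 0:
        raise SerializationFailure("40001: aborted by a conflict")
    return "COMMIT"

def with_retries(max_attempts=5):
    for attempt in range(max_attempts):
        try:
            return run_transaction(attempt)
        except SerializationFailure:
            pass  # a real loop would sleep with jittered backoff here
    raise RuntimeError("gave up after %d attempts" % max_attempts)

print(with_retries())  # COMMIT, on the second attempt
```

<p>The whole transaction must be re-run from the beginning, which is why optimistic approaches push this responsibility to the application.</p>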
<p>This is a general idea with distributed databases, SQL or NoSQL: optimistic locking is often preferred for scalability. Better to be ready to retry in the rare case of conflict than to wait for lock acquisition on other nodes. But I mentioned that one or the other transaction gets cancelled. Which one? At random? Yes: by default, each transaction is assigned a random priority. However, when I test or demo the behavior, I want predictable results. This is possible by restricting each session&#8217;s random priority to non-overlapping bounds.</p>
<p>The defaults are between 0 and 1:</p>
<pre><code>yugabyte=# show yb_transaction_priority_upper_bound;
 yb_transaction_priority_upper_bound
-------------------------------------
 1
(1 row)

yugabyte=# show yb_transaction_priority_lower_bound;
 yb_transaction_priority_lower_bound
-------------------------------------
 0
(1 row)</code></pre>
<p>Now, if I set session 1 in the lower range (yb_transaction_priority_lower_bound=0, yb_transaction_priority_upper_bound=0.4) and session 2 in the higher range (yb_transaction_priority_lower_bound=0.6, yb_transaction_priority_upper_bound=1), I know that session 1 will have its transaction aborted on conflict (look at the timestamps, or the colors if you can see them: I ran blue first, with low priority, then green with higher priority in the bottom session, then red, back to the first session, where it fails):<br />
<img loading="lazy" decoding="async" class="aligncenter size-large wp-image-48615" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/04/Screenshot-2021-03-18-203224-parent-scaled.jpg" alt="" width="1024" height="544" /></p>
<p>But if I set session 1 in the higher range (yb_transaction_priority_lower_bound=0.6, yb_transaction_priority_upper_bound=1) and session 2 in the lower range (yb_transaction_priority_lower_bound=0, yb_transaction_priority_upper_bound=0.4), I know that session 2 will have its transaction fail on conflict (I started with green here, in the higher-priority session 1, then blue and red in the bottom session, where it failed):<br />
<img loading="lazy" decoding="async" class="aligncenter size-large wp-image-48616" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/04/Screenshot-2021-03-18-203300-child-scaled.jpg" alt="" width="1024" height="553" /></p>
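<p>The effect of these bounds can be sketched as a toy model (plain Python, not YugaByteDB&#8217;s actual implementation): each transaction draws a random priority within its session&#8217;s bounds, and the lower-priority one is aborted on conflict.</p>

```python
import random

def priority(lower_bound, upper_bound):
    # stands in for yb_transaction_priority_lower/upper_bound
    return random.uniform(lower_bound, upper_bound)

session1 = priority(0.0, 0.4)  # session 1 restricted to the lower range
session2 = priority(0.6, 1.0)  # session 2 restricted to the higher range

# non-overlapping bounds make the loser predictable
aborted = "session 1" if session1 < session2 else "session 2"
print(aborted)  # always "session 1" with these bounds
```
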
<p>Again, this is a choice: optimistic locking. Better to retry occasionally than to lock always.</p>
<h3>Data Model</h3>
<p>However, even if the database can implement locking efficiently, you should think about the data model. Forget about normal forms here. Think of tables as business entities where attributes are tightly coupled together, while coupling between entities is just a possibility for some use cases. Looking at the domain model, CITY is an entity and COUNTRY is another. However, some transformation may be required between the domain model and the implementation. There is the COUNTRY as a code, with a name, which is an aggregation of CITY rows. And it may be a different entity from the country as a place where people live, with the population number that I&#8217;ve updated. Maybe the POPULATION attribute I&#8217;ve been updating belongs in another COUNTRY_POPULATION table. Don&#8217;t worry about joins: <a href="https://www.dbi-services.com/blog/the-myth-of-nosql-vs-rdbms-joins-dont-scale/" target="_blank" rel="noopener">joins can scale</a> in an RDBMS with purpose-built join algorithms. And, anyway, maybe one day this table will have temporality added, because population changes and history is interesting to keep. With this data model, whatever the locking mechanisms, you can update the population and the cities without blocking each other. That&#8217;s how I think about normalization: business meaning, coupling, cardinalities, evolution&#8230;</p>
<p>Let me show that in my YugaByte sessions:</p>
<pre><code>
YSQL1 20:26:58 create table COUNTRY_POPULATION as select * from COUNTRIES;
SELECT 196
YSQL1 20:27:06 alter table COUNTRY_POPULATION add foreign key (COUNTRY_ID) references COUNTRIES;
ALTER TABLE
YSQL1 20:27:55
</code></pre>
<p>I have created another table for the population info, with a foreign key to COUNTRIES. Of course, I should then remove the redundant columns from COUNTRIES, keeping only the foreign key, but that doesn&#8217;t change my example, so I keep it simple.<br />
<img loading="lazy" decoding="async" class="aligncenter size-full wp-image-48622" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/04/Screenshot-2021-03-18-213930.jpg" alt="" width="2282" height="1438" /><br />
The chronology here is: green -&gt; blue -&gt; yellow. Here, data consistency is enforced by foreign key referential integrity, scalability is ensured by sharding across nodes over the internet, and there&#8217;s no lock conflict thanks to my data model.</p>
<p>There&#8217;s no reason to remove foreign keys here, thanks to correct relational data modeling and efficient handling, even in a distributed database. If you can&#8217;t rely on the database to handle that, you need a lot of additional code and complex testing for race conditions and failure scenarios, or you risk hard-to-recover data inconsistencies.</p>
<p>L’article <a href="https://www.dbi-services.com/blog/foreign-keys-in-mysql-nosql-newsql/">Foreign Keys in MySQL, SQL, NoSQL, NewSQL</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/foreign-keys-in-mysql-nosql-newsql/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Convert private key generated via OCI Console to ppk</title>
		<link>https://www.dbi-services.com/blog/convert-private-key-generated-via-oci-console-to-ppk/</link>
					<comments>https://www.dbi-services.com/blog/convert-private-key-generated-via-oci-console-to-ppk/#respond</comments>
		
		<dc:creator><![CDATA[Elisa Usai]]></dc:creator>
		<pubDate>Tue, 01 Dec 2020 14:06:38 +0000</pubDate>
				<category><![CDATA[Cloud]]></category>
		<category><![CDATA[Database Administration & Monitoring]]></category>
		<category><![CDATA[MySQL]]></category>
		<category><![CDATA[Cloud; keys]]></category>
		<guid isPermaLink="false">https://www.dbi-services.com/blog/convert-private-key-generated-via-oci-console-to-ppk/</guid>

					<description><![CDATA[<p>I am pretty new on the Oracle Cloud Infrastructure technology, so maybe I am talking about something you already know. But anyway I prefer to share this case: it can help if you encounter the same problem as me. Let&#8217;s take the risk to have too much information rather than nothing! 😉 The problem I [&#8230;]</p>
<p>L’article <a href="https://www.dbi-services.com/blog/convert-private-key-generated-via-oci-console-to-ppk/">Convert private key generated via OCI Console to ppk</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>I am pretty new on the <a href="https://www.oracle.com/cloud/" target="_blank" rel="noopener noreferrer">Oracle Cloud Infrastructure</a> technology, so maybe I am talking about something you already know. But anyway I prefer to share this case: it can help if you encounter the same problem as me. Let&#8217;s take the risk to have too much information rather than nothing! <img src="https://s.w.org/images/core/emoji/17.0.2/72x72/1f609.png" alt="😉" class="wp-smiley" style="height: 1em; max-height: 1em;" /><br />
<span id="more-15327"></span></p>
<h3>The problem</h3>
<p>I was doing some <a href="https://www.dbi-services.com/blog/installing-mysql-database-service-mds/" target="_blank" rel="noopener noreferrer">tests</a> on the new <a href="https://www.mysql.com/cloud/" target="_blank" rel="noopener noreferrer">MySQL Database Service</a> and during the setup I decided to generate my ssh keys via the OCI console:<br />
<a href="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/04/Generate-keys.png"><img loading="lazy" decoding="async" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/04/Generate-keys.png" alt="" width="300" height="71" class="alignnone size-medium wp-image-45642" /></a> </p>
<p>When I tried to connect via PuTTY or MobaXterm to my compute instance using the opc account and my private key (generated previously), I got the following error:<br />
<a href="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/04/Error.png"><img loading="lazy" decoding="async" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/04/Error.png" alt="" width="300" height="111" class="alignnone size-medium wp-image-45630" /></a></p>
<p>Looking at the keys generated via the Oracle Cloud console, I saw that they were defined in the following format:<br />
<a href="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/04/Keys.png"><img loading="lazy" decoding="async" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/04/Keys.png" alt="" width="300" height="17" class="alignnone size-medium wp-image-45631" /></a></p>
<h3>The solution</h3>
<p>Actually, I don&#8217;t work directly on a Linux system, so I need to convert my private key to make it usable with my connection tools.<br />
The first step is to convert it to RSA format, using OpenSSL:</p>
<pre class="brush: bash; gutter: true; first-line: 1">
# openssl rsa -in ssh-key-2020-11-24.key -out ssh-key-2020-11-24.rsa
</pre>
<p>The second and last step is to convert it to ppk format, using PuTTYgen.<br />
I load the private key:<br />
<a href="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/04/Load-private-key.png"><img loading="lazy" decoding="async" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/04/Load-private-key.png" alt="" width="300" height="296" class="alignnone size-medium wp-image-45632" /></a><br />
I filter on all file types:<br />
<a href="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/04/All-files.png"><img loading="lazy" decoding="async" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/04/All-files.png" alt="" width="300" height="36" class="alignnone size-medium wp-image-45634" /></a><br />
I select my RSA key and I click on Open:<br />
<a href="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/04/rsa.png"><img loading="lazy" decoding="async" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/04/rsa.png" alt="" width="300" height="36" class="alignnone size-medium wp-image-45635" /></a><br />
I click on Ok on the following message:<br />
<a href="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/04/Message.png"><img loading="lazy" decoding="async" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/04/Message.png" alt="" width="300" height="169" class="alignnone size-medium wp-image-45636" /></a><br />
and then on Save private key:<br />
<a href="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/04/Save-private-key.png"><img loading="lazy" decoding="async" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/04/Save-private-key.png" alt="" width="300" height="189" class="alignnone size-medium wp-image-45637" /></a><br />
So I save the key with a ppk format:<br />
<a href="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/04/ppk-key.png"><img loading="lazy" decoding="async" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/04/ppk-key.png" alt="" width="300" height="40" class="alignnone size-medium wp-image-45638" /></a></p>
<h3>Tests</h3>
<p>I can now use my private key to connect to my OCI compute instance via PuTTY:<br />
<a href="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/04/PuTTY.png"><img loading="lazy" decoding="async" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/04/PuTTY.png" alt="" width="300" height="293" class="alignnone size-medium wp-image-45639" /></a><br />
or MobaXterm:<br />
<a href="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/04/MobaXterm.png"><img loading="lazy" decoding="async" src="https://www.dbi-services.com/blog/wp-content/uploads/sites/2/2022/04/MobaXterm.png" alt="" width="300" height="85" class="alignnone size-medium wp-image-45640" /></a></p>
<p>Hope this can help you!</p>
<p>L’article <a href="https://www.dbi-services.com/blog/convert-private-key-generated-via-oci-console-to-ppk/">Convert private key generated via OCI Console to ppk</a> est apparu en premier sur <a href="https://www.dbi-services.com/blog">dbi Blog</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://www.dbi-services.com/blog/convert-private-key-generated-via-oci-console-to-ppk/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
