Welcome to SQLDay!

With these words, the start of the event was announced punctually at 10 am.

Now the question: what can we expect? A total of 59 speakers in 49 sessions will share their experience and skills with us.

For 7 years now, data fanatics have been meeting in Poland once a year to share their knowledge with each other. Together they form a community of over 2,000 members.

And you have to hand it to them, they know how to choose a nice location.

After a coffee during the break, the first session began: “ChatGPT is the best DBA sidekick”. Rudi Bruchez told us that we shouldn’t avoid new things, nor do we need to worry about our future: we will not be replaced by tools like ChatGPT. On the contrary, we need to learn how to use these tools efficiently, and for which tasks. Whether it’s writing T-SQL or PowerShell scripts, analyzing error logs, or pasting in screenshots and asking whether a query plan can be improved, all of this can be done with LLM queries and AI. Even if the machine does the work, we are still the client and know exactly what we need.

Now for my personal highlight of the SQLDay event: the topic “SQL Server 2022 Performance Enhancements”, presented by none other than Ola Hallengren himself.

A veteran of the MS SQL world walked through all the performance improvements of the latest SQL Server version with us, interactively, with questions and answers. What can you do if a query’s performance could be improved with a hint, but the source code cannot be changed? Right: you can set a Query Store hint.
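For illustration, setting a Query Store hint is a two-step process: look up the query ID in the Query Store, then attach the hint to it. The search text and the query ID 39 below are placeholders:

    -- Find the query_id of the affected statement in the Query Store
    SELECT q.query_id, qt.query_sql_text
    FROM sys.query_store_query AS q
    JOIN sys.query_store_query_text AS qt
      ON q.query_text_id = qt.query_text_id
    WHERE qt.query_sql_text LIKE N'%your search text%';   -- placeholder filter

    -- Attach a hint to that query without touching the application code
    EXEC sys.sp_query_store_set_hints
         @query_id = 39,                        -- placeholder query_id
         @query_hints = N'OPTION (RECOMPILE)';

From then on, the optimizer applies the hint every time that query runs, no application change required.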

The experience Ola brings to the table in person is simply incredible.

Already somewhat exhausted, I was able to discuss the topic “What can we do to protect our SQL Servers?” in a very personal exchange with everyone present at the session. David Postlethwaite and Gethyn Ellis really took the trouble to involve everyone in the discussion. The focus was on meeting the CIS benchmarks. Even though these are often seen as “too much” and can often not be fully met due to various vendor-specific requirements, they still give us a good guideline for covering the most essential things. It goes without saying that we all rename and disable the “sa” login, doesn’t it?
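A minimal sketch of that hardening step in T-SQL (the new login name is, of course, just a placeholder):

    -- Disable the well-known sa login ...
    ALTER LOGIN sa DISABLE;

    -- ... and rename it so automated attacks can no longer target it by name
    ALTER LOGIN sa WITH NAME = [not_sa_anymore];   -- placeholder name

Make sure another sysadmin login exists before disabling sa, otherwise you lock yourself out of administration.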

Of course, we don’t just want to deal with on-premises topics; how can you mention Microsoft without Azure these days?

We also had the pleasure of attending some incredible sessions. We had the honor of learning all about “Data Encryption & Integrity” from Pieter Vanhove of Microsoft himself.

I have already dealt with TDE (Transparent Data Encryption) several times on on-premises machines. It’s no different in Azure, where it is already enabled by default for Azure SQL Databases.

There are 2 options for managing the encryption keys:

CMK – Customer Managed Key

This allows the customer to manage the encryption keys themselves. One reason to choose this option is if you want to make sure that nobody else has access to the encrypted data and you want to be responsible for key renewal and storage yourself.

SMK – System Managed Key

With this option, Microsoft takes care of everything related to TDE encryption for the user (creation, renewal, storage).
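Regardless of which option you choose, the current TDE state of each database can be checked, for example, like this:

    -- is_encrypted = 1 and encryption_state = 3 mean TDE is fully active
    SELECT db.name,
           db.is_encrypted,
           dek.encryption_state,
           dek.encryptor_type
    FROM sys.databases AS db
    LEFT JOIN sys.dm_database_encryption_keys AS dek
           ON db.database_id = dek.database_id;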

However, the somewhat less popular encryption method “Always Encrypted” is also offered by Microsoft Azure.

If you want to restrict the visibility of the data for DBAs, for example, you can activate this encryption.

Unlike SMK TDE encryption, a configuration (creation of column master key and column encryption key) must be carried out at DB level after activation.

There are also 2 encryption variants here:

Deterministic – The same plaintext value always produces the same ciphertext. This makes equality searches, joins and grouping on the encrypted column possible, but it also reveals patterns in the data.

Randomized – A different ciphertext is generated each time, even for identical values. This variant is definitely more secure, but the column can no longer be used in equality searches or joins.
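As a sketch: once the column master key and column encryption key exist, the encryption is declared directly in the table definition. The table and the key name CEK_Salary are placeholders:

    CREATE TABLE dbo.Employees
    (
        EmpId  INT IDENTITY(1,1) PRIMARY KEY,
        Name   NVARCHAR(100) NOT NULL,
        -- Salary is only readable by clients that can access the column master key
        Salary MONEY ENCRYPTED WITH (
            COLUMN_ENCRYPTION_KEY = CEK_Salary,   -- placeholder key name
            ENCRYPTION_TYPE = RANDOMIZED,
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
        )
    );

Encryption and decryption then happen in the client driver, never on the server, which is exactly why the DBA stays locked out.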

After configuration, the DBA is no longer able to see sensitive data, such as employee salaries, in unencrypted form. It is good to know that performance analysis, such as reading the query plan, is not restricted in any way: the query plan is still visible, only the encrypted values can no longer be seen.

Even the best and most secure encryption cannot help if data integrity is not guaranteed. If data has been manipulated, whether by mistake or deliberately, and you want to be able to detect this, you need the “Ledger” feature in Azure. Ledger tables receive a history table in which all changes to the table are recorded, and the changes are chained together with so-called block hashes. Similar to a log backup chain, if consecutive block hashes do not match, integrity is no longer guaranteed.
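For illustration, an updatable ledger table, together with its automatically maintained history table, can be created like this (table and column names are placeholders):

    CREATE TABLE dbo.Payments
    (
        PaymentId INT   NOT NULL PRIMARY KEY,
        Amount    MONEY NOT NULL
    )
    WITH (SYSTEM_VERSIONING = ON, LEDGER = ON);

Every UPDATE or DELETE on such a table is preserved in the history table and cryptographically chained into the database ledger.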

The integrity check can be carried out with the following command:

    DECLARE @digest_locations NVARCHAR(MAX) =
        (SELECT * FROM sys.database_ledger_digest_locations
         FOR JSON AUTO, INCLUDE_NULL_VALUES);
    SELECT @digest_locations AS digest_locations;

    EXEC sys.sp_verify_database_ledger_from_digest_storage @digest_locations;

    SELECT 'Ledger verification succeeded.' AS Result;

The important thing: once integrity has been broken, there is no way around restoring the last clean backup in order to regain it.

In the end, I can say with a clear conscience that these were 2 really exciting days in Wrocław, Poland. I was incredibly pleased to have been part of this huge data community. I will end the blog with a number: 2,940. That’s how many minutes all the speakers together took to share their knowledge with us. Once again, a huge thank you.