DevOps journeys series – Vertica release pipeline with Azure DevOps – Ep. 02 – build

In a previous post, we’ve described the “from scratch” approach on the development side. When everything works well there, a push (or check-in) triggers the build engine. We must deal with two SQL Server instances (SSIS Servers hereafter), with an environment for each of them:

The build pipeline

The SSIS Servers keep Vertica's test and production mappings, as well as test and production connection strings for the SQL Server databases. So we need the right variable mapping for every scenario, but that is outside the scope of this post; we will cover it in the next posts. Anyway, here is how the build pipeline works:

Our build process

You may notice that the task “Copy vertica deploy scripts” is disabled. Well, to be honest, right now we’re waiting for the target integration environment.

Build process explained

In the beginning, the build server gets the source files from the repository and creates the target artifacts folder with a PowerShell script. This will be the path from which we will push the artifacts to the release pipeline.

The build server generates the .ispac file for the SQL Server Integration Services packages using the dedicated task. Then the copy tasks are executed:

As you can see, we've got a set of utilities and transformation tools that will be executed in the release pipeline, as well as the environment script, which contains the SSISDB variable mappings and the SSIS project configuration statements. The miscellaneous files, the environment .sql files and the .ispac file are all copied to the target artifacts folder.
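
Below is a minimal sketch of what such an environment script might look like, using the SSISDB catalog procedures. All names (folder "ETL", project "VerticaLoad", the variable and the parameter) are illustrative assumptions, not the actual ones used in our pipeline.

USE SSISDB;
GO

-- Create the environment and a connection-string variable (illustrative values)
EXEC catalog.create_environment
    @folder_name = N'ETL',
    @environment_name = N'Test';

EXEC catalog.create_environment_variable
    @folder_name = N'ETL',
    @environment_name = N'Test',
    @variable_name = N'SqlConnectionString',
    @data_type = N'String',
    @sensitive = 0,
    @value = N'Data Source=TESTSQL;Initial Catalog=Staging;Integrated Security=SSPI;';

-- Reference the environment from the project and bind the variable to a project parameter
DECLARE @reference_id BIGINT;
EXEC catalog.create_environment_reference
    @folder_name = N'ETL',
    @project_name = N'VerticaLoad',
    @environment_name = N'Test',
    @reference_type = 'R',              -- 'R' = environment lives in the same folder
    @reference_id = @reference_id OUTPUT;

EXEC catalog.set_object_parameter_value
    @object_type = 20,                  -- 20 = project-level parameter
    @folder_name = N'ETL',
    @project_name = N'VerticaLoad',
    @parameter_name = N'CM.SqlConnection.ConnectionString',
    @parameter_value = N'SqlConnectionString',
    @value_type = 'R';                  -- 'R' = take the value from the environment variable
GO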

The tasks above also copy our .nuspec template, which is used to generate the NuGet package (NuGet pack step). This is what we get using NuGet:

Then, we’re ready to publish the files to the release pipeline. We will see how the release pipeline works in the next posts.

Ehm… you missed Vertica

Yes, you're right. But it'll just be a copy of .sql files to the artifacts folder. We will see how the release manager executes them, so…

Stay tuned!

Fix Corruption Due to Table Partition Error in SQL Server 2005

Microsoft SQL Server is one of the best and most full-featured relational database management systems. It provides various features, such as table partitioning. But sometimes users face issues like corruption due to a table partition error in SQL Server 2005. In this article, we are going to discuss the reason for this problem and how to fix it. Before proceeding further, let us discuss database partitioning in SQL Server.

Database table partitioning in SQL Server is a process where large tables are divided into multiple smaller parts. It has many benefits: for example, it can speed up loading and archiving data, and it helps to reduce the overall response time of particular queries so that users can perform SQL operations more easily.
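
As a quick illustration, here is a minimal sketch of how a partitioned table is defined; the object names, the partitioning column and the boundary values are purely illustrative.

-- Rows are routed to partitions according to the OrderYear column
CREATE PARTITION FUNCTION pfOrderYear (int)
    AS RANGE RIGHT FOR VALUES (2016, 2017, 2018);

CREATE PARTITION SCHEME psOrderYear
    AS PARTITION pfOrderYear ALL TO ([PRIMARY]);   -- all partitions on PRIMARY, for simplicity

CREATE TABLE dbo.Orders
(
    OrderID   int   NOT NULL,
    OrderYear int   NOT NULL,
    Amount    money NOT NULL
) ON psOrderYear (OrderYear);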

Users can run the DBCC CHECKTABLE command to check the consistency of the tables. If the need arises to restore the data, they can use the last database backup. If no backup is available, they can take the help of SQL recovery software.
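
For reference, a consistency check on a single table can be run like this (the table name is illustrative):

-- Check the logical and physical integrity of one table, reporting every error
DBCC CHECKTABLE ('dbo.Orders') WITH NO_INFOMSGS, ALL_ERRORMSGS;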

To understand the situation, let us consider an example. Suppose you have two partitioned tables, Table A and Table B, both with the same columns and partitioned on the same column. The user created a clustered index on Table B and then dropped it. After that, the user imported data into Table B with the BULK INSERT command along with the TABLOCK option. This can corrupt the data, and the user will get an error with ID 8984 or 8988.
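
A condensed sketch of the statements involved in that sequence is shown below; the table, index and file names are hypothetical, and both tables are assumed to be partitioned on the same column with the same partition scheme.

-- Drop the clustered index on Table B
DROP INDEX IX_TableB_Clustered ON dbo.TableB;

-- Bulk load Table B with the TABLOCK option
BULK INSERT dbo.TableB
FROM 'C:\import\tableb_data.csv'
WITH (TABLOCK);

-- Switch a partition between the two tables; after this sequence,
-- DBCC CHECKTABLE may report errors 8984 or 8988 on the affected table
ALTER TABLE dbo.TableB SWITCH PARTITION 1 TO dbo.TableA PARTITION 1;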

Reason Behind This Problem

The user gets this error because of a mismatch in the metadata of the two tables. When the clustered index of a table is dropped, its metadata changes. When a partition is then switched between the tables, the metadata no longer matches and the user faces data corruption issues.

The user can try the DBCC CHECKTABLE command with repair options to correct the corruption. If the issue persists, the user may have to delete the damaged table; if there are no backups, or the backups are corrupted, the user can try SQL recovery software to recover the damaged MDF files.
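
A hedged sketch of such a repair attempt is shown below; the database and table names are illustrative, and REPAIR_ALLOW_DATA_LOSS should really be the last resort, since it can discard rows.

-- Repair options require the database to be in single-user mode
ALTER DATABASE SalesDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

DBCC CHECKTABLE ('dbo.Orders', REPAIR_REBUILD);            -- non-destructive repair first
-- DBCC CHECKTABLE ('dbo.Orders', REPAIR_ALLOW_DATA_LOSS); -- last resort, may lose data

ALTER DATABASE SalesDB SET MULTI_USER;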

SQL recovery software is one of the better tools to recover a corrupted MDF file; it is a standalone utility which can easily repair a damaged SQL database. It helps to recover functions, tables, views, stored procedures, etc. After recovering the data, the user can easily export it to SQL Server.

Final Words

In this article, we have discussed corruption due to a table partition error in SQL Server 2005 and the best possible ways to resolve the problem. The user can try the DBCC CHECKTABLE command, but if the problem persists, the automated solution can help.

Top 3 Methods to Reset SQL Sa Password Without Any Trouble

“Hello all. I am writing this post because I am in big trouble. I recently joined an organization and unfortunately, I forgot the password of SQL database. Not a great situation for a new employee, so can you help me out? Can anyone tell me how do I reset SQL sa password? I will be really grateful if you could suggest any easy solution.”

Are you also suffering from a similar problem as mentioned in the query? Do you also want to know how to reset the SQL sa password? Then this is the right blog for you. Read on.

SQL Server database administrators often find themselves in an awkward position when they forget or lose the database password. This problem can happen to anyone at any given time. If you find yourself in the same situation, do not worry. This post will elaborately discuss various methods that can be implemented to fix this problem.

3 Quick Methods to Reset SQL sa Password

If you have lost your SQL database password, do not jump to the decision of reinstalling SQL Server. Be patient and read the solutions stated in this section. Here we will learn three different methods to reset the SA password for a SQL database.

Method 1: Use Management Studio to Reset SQL SA Password

If you have lost the SA password, you can easily reset it using Management Studio by logging in with Windows Authentication mode and setting a new password for the SA account.

  • Log in to SQL Server using Windows Authentication.
  • In Object Explorer, expand the Security folder, then expand the Logins folder and right-click the SA account. From the options, click on Properties.
  • When the Properties window opens, add a new password and confirm it. Click on OK to set this as your new SA password.

Method 2: Use SQL Script to Reset SQL SA Password
To reset the SQL database password, users can also try a SQL script that sets a new password for the SA account.

  • First of all, you have to launch SQL Server Management Studio.
  • Open a new query in it.
  • Enter the scripts mentioned below for execution:
    USE [master]
    GO
    ALTER LOGIN [sa] WITH DEFAULT_DATABASE=[master]
    GO
    -- MUST_CHANGE forces a password change at the next login and requires CHECK_EXPIRATION to be ON
    ALTER LOGIN [sa] WITH PASSWORD=N'NewPassword' MUST_CHANGE, CHECK_EXPIRATION = ON
    GO

Note: Here, NewPassword will be the password you want to use for your SA account in place of the lost password.

Method 3: Use SQL Password Recovery Tool

If you find the above-mentioned methods complex, or if you are not willing to perform those processes, we have a better option for you: the SysTools SQL SA Password Reset Tool. This affordable yet effective tool will help you set a new password for a SQL database whose password you have lost.

It is really simple to regain access to your SA account using this utility. Launch the tool and add the MDF file. All of the users will be displayed on the screen, and you will see that the password value of the SA account is Unknown. Select SA and click on Reset Password, then add a new password and confirm it. Now you are all set to access the SA account with this password.

Attention: We strongly recommend taking a backup of the MDF file before proceeding with this method.

Conclusion

Forgetting the SQL database SA account password can cause immense trouble for admins, and many people really do suffer from this problem. That is why we see a lot of people asking the same question in forums: how to reset the SQL database password. For them, we have described three simple solutions to change the password. If the manual methods are not working or seem too lengthy, users can go for the tool mentioned here, which is one of the most popular tools for this purpose.

Display Reporting Services usage statistics with Grafana

Introduction

In this post, we will describe an efficient way of showing the usage statistics of the reports hosted on our SQL Server Reporting Services instance. Most of the queries involved have been covered in another article published by Steve Stedman. Even though they are really useful, that article shows their results only through SQL Server Management Studio.

The problem

One of the problems that often occurs in our organization, as well as at some of our customers, is getting immediate feedback about the usage statistics of reports. Usually, requests for new reports are out of control: some reports are executed only "that one time" and never again. In the worst-case scenario, many of them aren't executed at all, and some of them end up overlapping or duplicated.

Therefore, it is important to know the usage statistics, user by user and report by report, to make readers aware of them and let them interpret the values of the same query through multiple views and graphical layouts. While this is not possible with a tabular format (unless you export the values to an external tool such as Excel), it is simple with a dashboard.
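
To give an idea of the kind of query that can feed such a dashboard, here is a hedged example built on the ExecutionLog3 view of the ReportServer catalog database (the database name may differ in your installation, and the 90-day window is arbitrary):

-- Executions per report and per user over the last 90 days
SELECT
    ItemPath,
    UserName,
    COUNT(*)       AS Executions,
    MAX(TimeStart) AS LastExecution
FROM dbo.ExecutionLog3
WHERE TimeStart >= DATEADD(DAY, -90, GETDATE())
GROUP BY ItemPath, UserName
ORDER BY Executions DESC;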

Our solution: Grafana

We considered two factors, simplicity and efficiency, in order to build this at-a-glance dashboard. Grafana gives us both, as well as being very powerful and immediate. Even though this is not a rigorous definition, we can say that "it is a portal to create dashboards using connectors, which support the most famous tools that return data". We can find the connectors in its marketplace. For instance, tools such as PRTG and Prometheus (monitoring) and New Relic (APM) are supported, as well as SQL and NoSQL data sources:

Obviously, SQL Server is among them. We can also contribute by creating new connectors, as well as by modifying Grafana itself, since it is a completely open-source project. Examples of possible graphical representations are listed below:

Creating a dashboard is really simple. Just add each panel with a button.

Then, write the query and modify settings to get the desired type of representation.

As mentioned before, there are many connectors. Once one is selected, you can configure it with parameters:

If you would like to install and configure Grafana, you can read the official documentation, which also includes a short guide illustrating how to take your first steps.

That’s it!

Conclusions

With half a day of work (including the setup of the server), we have solved one of the most important problems of our customers, which derived from the lack of awareness of reports deployed in production environments. We did it with very little effort and the result, as you can see, is pleasant and effective. Everything is now ready to be published each time we update the dashboards, possibly through delivery software (Octopus Deploy, Jenkins or Azure DevOps), so all of this falls into the second and third ways of DevOps (according to The Phoenix Project): immediate feedback and continuous improvement.

Stay Tuned!

Two tech events in Parma, the city of food

SQL Saturday Parma, six years in a row. DevOpsHeroes, four. Parma has proved to be a great destination, also for technical events. I started organizing the first SQL Saturday in my birthplace in November 2014. Two years later, when the DevOps culture started to grow and agile became strong, I tried to create a brand-new event: DevOpsHeroes was born in 2016, a month before the SQL Saturday event, again in Parma. Why a new event? The audience has been great (more than 200 attendees) and so has the feedback; people who come here seem comfortable with everything. Thanks also to the University, which hosts both events, they are still growing.

Let's dive into the events.

SQL Saturday Parma (2014-now)

SQL Saturday, a great format by the Professional Association for SQL Server (PASS), is a well-known series of events all around the world; you can find hundreds of them here. In Parma, the event is completely free, with no pre-conference. It's located on the campus of the University of Parma, in order to make students, as well as the school itself, aware of this kind of event. Unfortunately, students don't attend as much as the professionals do, so I'm working on building a better relationship with them, too. About the audience, some numbers:

Event Attendees Tracks Speakers Feedback
SQL Sat 355 (2014) 128 3 14 4.46/5
SQL Sat 462 (2015) 157 3 18 4.20/5
SQL Sat 566 (2016) 150 3 18 4.48/5
SQL Sat 675 (2017) 210 4 24 4.47/5
SQL Sat 777 (2018) 230 4 24 4.72/5

In 2014 the sessions were driven by, let's say, classic topics like DBA, development and BI. Starting from 2016 the coverage changed a lot: more data visualization, more automation, more BI in the cloud, more of the cloud itself. 2017 was also the game-changer for NoSQL sessions (on Microsoft Azure). The latest edition of SQL Saturday Parma introduced AI, and this year we are struggling to select the right sessions from a batch of 70 proposals (from all over the world). The call for papers (available here) closes on September 30, and if you are in the area on November 23, or if you want to come to Italy for a weekend of training on the Microsoft Data Platform with friends, #sqlfamily and good food, you are welcome!

The event is strongly supported by the Italian #sqlfamily, especially my friends at UGISS. A big thanks goes to them.

DevOpsHeroes (2016-now)

Started as a one-shot event, this is now a four-year-in-a-row one. Riding the wave of enthusiasm from the SQL Saturdays in Parma, and thanks to my work experience, which has moved in the meantime from DBA skills to data DevOps and automation, this event has been a pleasant surprise, even though it doesn't gather as many people as SQL Saturday does (SQL Saturday also has its own reach and the support of PASS). The event was born to spread the DevOps culture, not just the tools. Tools are described there only to bring out the advantages of the culture, which must be absorbed before going deeper. So, the event was born for the culture, and this has been (and still is) one of our missions.

The event is typically held one month before SQL Saturday on the campus of the University of Parma. It gets more than 120 attendees, and this year we are expecting more, thanks to the great sponsors supporting this edition.

As you can see, under the hood there are two main "helpers": Engage IT Services, the company of which I'm a co-founder, and GetLatestVersion.it, a great Italian online community for DevOps and ALM technologies. The event is totally free and will have 18 sessions across 3 tracks. The topics will cover technologies, methodologies and use cases (or experience sessions). The call for papers is already closed and we're finalizing the program for that Saturday, October 26.

Wrapping up

SQL Saturday Parma website: https://www.sqlsaturday.com/895/EventHome.aspx

Registration: https://www.sqlsaturday.com/895/registernow.aspx

DevOpsHeroes website: http://www.devops-heroes.net/

Registration: https://www.eventbrite.it/e/biglietti-devopsheroes-2019-66796826105

Recover Deleted Data From SQL Server Table by Transaction Logs

The tasks of creating tables and storing data in records look quite easy to SQL Server users. But if data is deleted by mistake, or because of some other hardware or software issue, the situation becomes complex; recovery of deleted data is not child's play. Considering this issue, we have come up with this write-up, which covers various methods to answer the question: how to recover deleted data from a SQL Server table using transaction logs? Let's begin with a detailed discussion.

Techniques to Rely On For Recovering The Deleted Data From Server:

1. Manual method: using LSNs (Log Sequence Numbers); this works only if the time of deletion is known to the user.
2. Automated solution: a simple yet secure and reliable way to recover deleted data from the server using SysTools SQL MDF Database Recovery.

Know-How to Recover Deleted Data From SQL Server Table by Transaction Logs

Deleted records' recovery using SQL Server LSNs: in the SQL Server transaction log, the LSN (Log Sequence Number) is nothing but a unique identifier assigned to each log record. We can restore the deleted rows of SQL tables if we know the time when the records were deleted.

To start the recovery process, the user has to make sure the database was using the Full or Bulk-Logged recovery model when the data was actually deleted. This is the prerequisite for a successful recovery of the deleted records.

The steps below describe how to recover the deleted data in SQL Server 2016, 2014, 2012, 2008 and 2005.

Step 1: Run the following query to see the records currently in the table from which data was deleted.

Select * From Table_Name
Step 2: Next, take a transaction log backup using the query below:
USE NameOfTheDatabase
GO
BACKUP LOG NameOfTheDatabase
TO DISK = N'D:\NameOfTheDatabase\RDDTrLog.trn'
WITH NOFORMAT, NOINIT,
NAME = N'NameOfTheDatabase-Transaction Log Backup',
SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO
Step 3: Collect information about the deleted records from the transaction log. This query retrieves the Transaction ID of the delete operations.
USE NameOfTheDatabase
GO
SELECT [Current LSN], [Transaction ID], Operation, Context, AllocUnitName
FROM
fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_DELETE_ROWS'

Step 4: Execute the query below to find out exactly when the records were deleted.
USE NameOfTheDatabase
GO
SELECT
[Current LSN], Operation, [Transaction ID], [Begin Time], [Transaction Name], [Transaction SID]
FROM
fn_dblog(NULL, NULL)
WHERE
[Transaction ID] = '000:000001f3'
AND
[Operation] = 'LOP_BEGIN_XACT'

From the [Current LSN] column of this query you can now find the LSN at which the delete transaction began.
Step 5: Run a restore to bring back the deleted data from the SQL Server table.
USE NameOfTheDatabase
GO
RESTORE DATABASE NameOfTheDatabase_COPY FROM
DISK = 'D:\NameOfTheDatabase\RDDFull.bak'
WITH
MOVE 'NameOfTheDatabase' TO 'D:\RecoverDB\NameOfTheDatabase.mdf',
MOVE 'NameOfTheDatabase_log' TO 'D:\RecoverDB\NameOfTheDatabase_log.ldf',
REPLACE, NORECOVERY;
GO
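
The restore above only brings the copy database back in a restoring state; to actually use the LSN collected in step 4, the transaction log must also be restored, stopping just before the delete transaction. A minimal sketch, assuming the LSN has already been converted to the decimal format that STOPBEFOREMARK expects (the value shown is illustrative):

RESTORE LOG NameOfTheDatabase_COPY
FROM DISK = 'D:\NameOfTheDatabase\RDDTrLog.trn'
WITH STOPBEFOREMARK = 'lsn:112000000015600001',
     RECOVERY;
GO
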
Step 6: Now verify whether the deleted records have been recovered.

Efficient Way to Recover Deleted Records From SQL Server 2017 / 2016 / 2014

If you fail to recover deleted data from a SQL Server table using transaction logs, you can take the help of SQL database recovery software. This software provides an option to recover deleted records from a SQL Server table, and it lets the user preview accidentally deleted table records in red color. The user can easily recover database objects such as tables, functions and stored procedures. Moreover, the application is compatible with SQL Server 2017 and earlier versions.


Follow The Steps to Recover Deleted Records From SQL Server Table

1. Download and install the software on your machine.
2. Click on the Add File button and add the MDF file to the software.
3. Now choose the Scan option and select the SQL Server version.
4. Check the option to preview deleted SQL database records in red color.
5. Preview the SQL Server database items. The software will show the deleted SQL table records in red color.
6. Click on the Export button to export the SQL database.
7. Now, under database authentication, choose the server name and the authentication mode.
8. Now choose the destination database.
9. Check the database items you want to export.
10. Choose between the options "with only schema" and "with schema and data".
11. Mark the option Export deleted records and finally click on the Export button.

Final Words

In this article, we have discussed how to recover deleted data from a SQL Server table using transaction logs. The manual solution is quite lengthy, difficult to perform and requires strong technical knowledge, so it can be easier to take the help of a SQL database recovery tool to recover deleted records.

Best Practice in Rebuilding Index in SQL Server

For every database admin, ensuring the smooth performance of SQL Server is a headache. They need to perform various tasks and tricks to keep the SQL database productive and fast. One such common task is keeping index fragmentation in check. While this is a challenge for DBAs, index fragmentation can be controlled by reorganizing and rebuilding indexes.

Rebuilding an index in SQL Server is the method most often used when the fragmentation level gets high. If you want to improve the performance of a SQL database, you have to rebuild its indexes. But the question is when and how to perform this task. This write-up focuses on the best practices for rebuilding indexes in SQL Server, and we will also discuss when the right time is to perform this task.

Rebuilding Index in SQL Database – Know Why and When to Perform?

SQL Server users know that database performance is significantly hampered if the database becomes full of fragmented indexes. As index fragmentation keeps increasing with database usage, admins should keep an eye on the fragmentation rate. Depending on the database size, DBAs should set a schedule for checking index fragmentation using the "sys.dm_db_index_physical_stats" dynamic management function. Running it tells you the percentage of index fragmentation in the database.
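
For example, a query along these lines returns the fragmentation of every index in the current database (the LIMITED scan mode keeps it lightweight; scope and thresholds are up to you):

SELECT
    OBJECT_NAME(ips.object_id)        AS TableName,
    i.name                            AS IndexName,
    ips.avg_fragmentation_in_percent,
    ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.index_id > 0               -- skip heaps
ORDER BY ips.avg_fragmentation_in_percent DESC;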

If the percentage is lower than 10%, no additional action is needed. If the level is between 10% and 30%, reorganizing the index is usually enough to restore its performance, and only when it crosses the 30% bar does rebuilding the index come into play. However, some SQL Server experts recommend performing an index rebuild only when the fragmentation rate reaches 80% or 90%. Since rebuilding an index is a resource-consuming task, database admins should consider how much the fragmentation actually affects database performance before rebuilding the index.

Best Practices in Rebuilding Index in SQL Server

If you are interested in rebuilding indexes, it is better to follow certain basic rules, known as the best practices of index rebuilding. For example, if you are using any SQL Server edition other than Enterprise Edition, this task has to be done offline. Since online index rebuilding was introduced in SQL Server 2005 Enterprise Edition, users of earlier versions or other editions need to perform it offline. With online index rebuilding, the index never goes offline and the table remains available for use during the process.
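
As a sketch of the two operations against a hypothetical index, following the thresholds discussed above:

-- Moderate fragmentation (roughly 10%-30%): reorganize, always an online operation
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REORGANIZE;

-- Heavy fragmentation (above roughly 30%): rebuild; ONLINE = ON requires Enterprise Edition
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD WITH (ONLINE = ON);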

Even in the Enterprise Editions that support online index rebuilds, the online process takes more time than offline rebuilding. That is why offline index rebuilding is recommended if the company can afford the downtime. To minimize the impact, it should be done when the fewest people are using the database, or along with scheduled maintenance tasks; nighttime is therefore perfect for scheduling index rebuilds, and it is suggested to perform this task at least once a week. If you do not have any maintenance window for your database, you can try online rebuilding.

Rebuilding indexes in SQL Server consumes a lot of resources, so performing it too frequently will put pressure on a database with scarce resources. Database admins should consider their database capacity and resources before scheduling index rebuilds.

Note:  Get to know How to Deal with Index Corruption in SQL Server

Concluding Thoughts

Index fragmentation is a common situation in all SQL databases, and the productivity of SQL Server depends on the level of index fragmentation; keeping the fragmentation rate under control is essential for the smooth functioning of SQL Server. Among the many approaches that keep the fragmentation level in check, rebuilding indexes is a popular one.
In this process, logical index fragmentation is removed, statistics are updated and wasted page space is reclaimed. Therefore, users should include it in their maintenance window. They can also learn about the best practices for rebuilding indexes in SQL Server from this post, and should consider the situation of their own SQL database to customize the rebuilding process.

SQL Server Latest Updates (Nov. Dec. 2018)

Directly from the SQL Server Release Services blog, here are the latest updates for SQL Server 2016 SP1 and SP2, 2017 RTM, and 2014 SP2 and SP3:

Cumulative Update #1 for SQL Server 2014 SP3

Cumulative Update #15 for SQL Server 2014 SP2

Cumulative Update #12 for SQL Server 2016 SP1

Cumulative Update #4 for SQL Server 2016 SP2

Cumulative Update #13 for SQL Server 2017 RTM

and

Public Preview for SSRS 2017+ Management Pack with Power BI Reporting Server Support

…Stay Tuned, Merry Christmas and a Happy New Year! 🙂

Posting SQL Server notifications to Slack

Introduction

Automation, proactive monitoring, repeatability, reducing wasted time and technical debt: these are things you should know about when trying to do DevOps.

Why automation? Because you can reduce technical debt and the number of failures that can happen with manual interaction. You can create environments using a provisioning procedure without falling into common pitfalls like security misconfigurations, wrong settings and botched monitoring.

Talking about SQL Server, immediate and proactive notifications represent a great step forward toward automation.

We automate whenever we want to stop doing a bunch of recurring or tedious steps manually. At the same time, we are also improving the overall quality and we are reducing the amount of things that can (and will) go wrong.

We are also optimising how we use our time, because we can just ignore what the automation is doing for us and focus on the things that really need our attention.

Finally, in this modern and notification-based world, emails generate too much white noise to deal with. In this article, we will learn how to integrate SQL Server tasks’ notifications with one of the most used collaboration tools: Slack.

Keep in mind that this is not the only way to get this done. This guide would help you to better understand why we’re doing this (eventually why DevOps), and not strictly how to do it, even if we’ll see a real working example.

Minimal requirements

You need to set up an account on slack.com (on a paid plan) and a SQL Server instance; I recommend the free Developer edition here.

Note: Don't use SQL Server Express edition. This edition supports neither SQL Server Agent jobs nor Database Mail, both of which we'll need hereafter. Also, regarding Slack, you must have a paid account, because the integration described below will not work with a free plan.

In order to send emails, we will use an SMTP server. It can be an on-premises solution, such as Microsoft Exchange or Postfix, or a cloud delivery service, like SendGrid, Sendinblue, Mailjet, or Office 365.

The scenario

In a team like mine, which uses chat as a daily communication driver, centralizing every business and technical message can be a great step forward for the members of the team in terms of awareness and knowledge sharing. Business roles can use the tool as well, so we can chat with each other, switching between technical and functional discussions. It's just a matter of how Slack (in our case) is configured with channels and naming conventions. A good setup helps us to better organize our meetings, small talk and any other topic related to implementations. This is a cool subject to talk about, but a little out of the scope of this guide. We will focus on notification bots instead.

SQL Server is able to send emails with its built-in features out-of-the-box, but we’d like to centralize every notification inside Slack, gaining the following advantages:

  • Instant notifications
  • Tailored focus (a custom sound instead of the same popup for all incoming emails)
  • Opt-out
  • Quickly involve people who are not following the channel with a mention
  • Relay the problem description within the chat
  • Take action as soon as the notification is received

The proposed solution

Now, how can we send notifications from SQL Server in an easier way than using custom code or a Slack incoming webhook? Is there any integration or a Slack app?  Yes. And guess what? I think you’ll like it because you don’t need to write a single line of code, and you don’t need to choose between CLR, PowerShell or any other language. It’s ironic, but the integration is called “Email”.

Slack

The purpose of this section is just to introduce Slack as a collaboration tool; further details are provided here. As we said before, the following samples work only if you have a Slack account.

The Slack Email integration

This is the app to work with: Email. Its configuration is based on a four-step wizard:

  • Select the channel (or create a new one).


  • When added, set the name and a short description of the new contact (bot) in Slack.


  • Change the avatar (it’s important to recognize the bot at a glance)


  • After saving, copy the email address the app created for you.


A word about the “Hide this address” checkbox: this is useful if you want to hide the address from the other members of your workspace. If you check that box, you will be the only user able to read it.

Type of SQL Server notifications and setup

As DBAs, we manage the following types of notifications on a daily basis:

  • SQL Server built-in and custom Alerts
  • Job execution status
  • Integration Services custom emails (within the packages)
  • External monitoring tools (which monitor SQL Instances)

With the exception of SSIS custom emails and external monitoring tools, everything is managed by Database Mail. This is a lightweight layer that allows us to send emails directly from a SQL Server instance, connecting to an SMTP server.

To set up Database Mail you can follow this guide from the Microsoft documentation.
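
For reference, the T-SQL side of that setup boils down to a mail account, a profile and their association; the names and the SMTP host below are illustrative placeholders. Remember to also enable a mail profile for SQL Server Agent (Agent properties, Alert System page), otherwise operators will not receive anything.

USE msdb;
GO

EXEC dbo.sysmail_add_account_sp
    @account_name    = N'SQL Notifications',
    @email_address   = N'sql-notifications@yourcompany.com',
    @mailserver_name = N'smtp.yourcompany.com',
    @port            = 587,
    @enable_ssl      = 1;

EXEC dbo.sysmail_add_profile_sp
    @profile_name = N'SQL Notifications Profile';

EXEC dbo.sysmail_add_profileaccount_sp
    @profile_name    = N'SQL Notifications Profile',
    @account_name    = N'SQL Notifications',
    @sequence_number = 1;
GO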

Once this is up and running, you can manage the notifications using SQL Server Operators. An operator is an alias managed by the SQL Server Agent which you can use to send emails and other types of messages, like pagers and Net Send.

Creating an operator is simple, just invoke the following system stored procedure:

USE msdb; 
GO 

EXEC dbo.sp_add_operator 
    @name = N'<name here>',
    @enabled = 1,
    @email_address = N'<email here>';
GO

If you're wondering what email address to use, it's easy: fill the @email_address parameter with the address returned by the Email app integration for the channel you will send to (j8e4b5t2p4y8g4o2@elysteam.slack.com in the example above). But what about the @name parameter? In my opinion, the best name is the one that helps us understand where the message will be sent. Suppose we'd like to send notifications about some index maintenance jobs. We could call the operator Slack Indexes Maintenance, Slack Indexes Maintenance Operator and so on. With such names, you immediately know that what is sent to Slack is related to index maintenance.

Thus, you’ll get the following snippet:

USE msdb; 
GO 

EXEC dbo.sp_add_operator 
    @name = N'Slack Indexes Maintenance Operator',
    @enabled = 1,
    @email_address = N'j8e4b5t2p4y8g4o2@elysteam.slack.com';
GO

 

Slack channels naming considerations

I'd like to share my thoughts about channel naming conventions. The principles to follow when naming channels are:

  • Readability (clear for everyone)
  • Awareness (know what)
  • Style and Rules (know how)
  • Repeatability (keep using it from now on)

That being said, if the channel name describes a single action (like indexes maintenance in the above example), the operator which sends the notifications should be unique. The reason is simple enough: we know that the Indexes Maintenance Operator is sending messages to #sql-idx-maint-alerts (readability), and everyone knows that this is a one-to-one communication between a SQL Server operator and Slack (awareness). Everyone knows that the “sql” channel prefix indicates SQL Server-related notifications and the “alerts” suffix indicates an issue to pay attention to (style and rules). At the same time, everyone knows how to do the same with another pipeline of messages in the future (repeatability).

On the other hand, using a general-purpose channel, like #sql-maint-alerts, keeps us ready for future changes. Suppose index maintenance is not the only operation we're executing on our servers (and typically it isn't). Does it make sense to create a new operator, called for example Database Concurrency Check Operator, which sends to another single-purpose channel? Clearly not.

In the end, a general-purpose channel gives us the opportunity to hold more than one topic. All the notifications sent to that channel should be, let's say, of the same category, to avoid too much generalization.

These solutions (one channel for several operators or a one-to-one mapping) work equally well; it's just a matter of how you design your Slack channels. I suggest avoiding the “one channel to rule them all” pattern, because you'll get thousands of mixed notifications without any clear idea behind them. After all, a noisy channel with messy content will soon stop being read and will eventually be dropped.

Binding alerts

Alerts are triggers that tell an operator that something went wrong. This article by Brent Ozar offers a good list of alerts that need attention, and here you can find their descriptions, based on severity. The binding is straightforward: all you need to do is link the operator to the alert:


When one of those events occurs, the operator is alerted and sends the message using its setup, in our scenario an email. If the operator uses the Slack Email app address, the email will be delivered to the Email app, and the integration will redirect it to Slack.
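
The same binding can also be scripted; a minimal sketch, using the alert and the operator defined elsewhere in this article:

USE msdb;
GO

-- Link the alert to the operator; notification_method 1 = email
EXEC dbo.sp_add_notification
    @alert_name          = N'Severity 017',
    @operator_name       = N'Slack Indexes Maintenance Operator',
    @notification_method = 1;
GO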

 

Binding job execution statuses

Let's see how we can use the notification mechanism to monitor SQL Server Agent jobs. Each job lets you configure what to do in case of failure, success or completion of its execution. The binding is similar to the one for alerts:


Once the result is collected, based on the configurations you’ve set up, this job will send an email to the app.
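
This, too, can be scripted instead of using the job properties dialog; a sketch, assuming a hypothetical job name and the operator created earlier:

USE msdb;
GO

-- notify_level_email: 0 = never, 1 = on success, 2 = on failure, 3 = always
EXEC dbo.sp_update_job
    @job_name                   = N'Indexes Maintenance',
    @notify_level_email         = 2,
    @notify_email_operator_name = N'Slack Indexes Maintenance Operator';
GO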

 

Binding custom Integration services email

In order to send an email from a SQL Server Integration Services package (aka .dtsx) you need to configure the SMTP server within the package itself. This is a little out of scope, because it’s not really a SQL Server notification. You can leverage the power of SSIS and prepare a rich HTML-formatted message; the result is nice to read and informative like in these examples:

 

 

Cool stuff, isn’t it? It’s simply a .NET script in SSIS, which uses the System.Net namespace. Although the SSIS package is executed within a SQL Server Agent job, the default notification message that SQL generates is not easy to read. The message you always get is:

JOB RUN:<name> was run on <date/time> DURATION: x hours, y minutes, z seconds. STATUS: Failed. MESSAGES: The job failed. The Job was invoked by Schedule xyz (<name>). The last step to run was step xyz (<name>)

Decorating the package with a more detailed email will improve the readability and the accuracy of our notifications.

Setup an external monitor for notifications to Slack

SQL Server is often (hopefully) monitored with specific counters. We're using the PRTG monitoring tool to measure them, and when a baseline changes and a threshold is hit, we send a notification to Slack. How? Again, by sending an email to the Email app integration, specifying the right channel, and getting this:


The above report has been truncated. In a complete version of it, you'll find the full details of the measures, like the names of the servers, the sensor links, the grid with all the results, and everything else you can see inside the PRTG admin portal.

 

Test

Let’s see a more complete example, using a SQL Server alert. We’ll use the Severity 17 alert. Severity 17 is simple to raise and it describes a missing or insufficient resource when executing a command:

USE msdb; 
GO 

EXEC msdb.dbo.sp_add_alert @name=N'Severity 017',
    @message_id=0,
    @severity=17,
    @enabled=1,
    @delay_between_responses=60,
    @include_event_description_in=1,
    @job_id=N'00000000-0000-0000-0000-000000000000';
GO

Set the Response for the Severity 17 alert to “Notify Operator”, via email:


Run the following T-SQL script, which raises a severity 17 error:

RAISERROR(N'An error occurred Severity 17:insufficient resources!', 17, 1) 
WITH LOG; -- don't forget to use WITH LOG
GO

Go to your Slack account. If you’ve configured everything correctly, you should see the following:


Did it work? Great! If not, continue reading.

 

Troubleshooting

If you don’t see the notification try these steps:

  1. Be sure that your Slack account is confirmed (its email too)
  2. Once the Slack account is confirmed, check if the channel still exists (CTRL+K -> name of the channel)
  3. Click on “Customize Slack” in the drop down menu of your Slack client/webpage, then click on Customize App in order to check whether the Email integration is active or not:



  4. Verify the Database Mail configuration (try to send the test email)
  5. Verify the operator configuration (is it enabled?)
  6. Verify the alert configuration (did you bind the response with email to the operator? Is it enabled?)
  7. Verify the SQL Server Agent email profile configuration (is it enabled? Is it the right one?)


Conclusions

There are some disadvantages to this kind of integration. For example, you cannot customize the message unless you do it inside a .NET script. The Slack email address is publicly reachable, albeit hard to discover, so anyone could post to your private Slack channel by sending emails to that special address. Also, you cannot send the notification to more than one Slack channel, or outside of the Slack world. In reality, native SQL Server email notifications have the same limits, where the email addresses of distribution lists play a similar role to Slack channels.

For our purposes, this is a very-low-effort automation with a high return in terms of value. With a couple of clicks, you can set up an email address representing a Slack channel, and with little more, you can get notifications in a smart and comprehensive layout.

Everything is kept inside the collaboration chat tool we use heavily, every day. In the end, this example embodies one of the core DevOps principles (automation) and provides huge cross-role and cross-team value, especially when the channels also include the network and server teams.

I hope that you’ll give this a try.