This year starts with good news:
Here is the list of people in the directory.
DevOpsHeroes has been a great event, again! We didn't expect so many people, and we couldn't have imagined that the feedback would be so positive. Quick facts:
The following radar chart is about the event date, location, quality of the sessions, quality of the speakers, food, hospitality, event design, and kits:
As we can see, the overall satisfaction is really high (4/5)! The blue line represents the audience that had already attended our event (40%), and the orange one represents the first-timers (60%).
Indeed, we also received good feedback on the following questions:
We're really proud of the third edition of DevOpsHeroes. Engage IT Services and Xebir did a great job together, and, hopefully, both companies will cooperate in the future to provide new formats and events like this one. A special thanks goes to Scott Ambler, and also to the Italian Agile Movement for sponsoring and supporting the organisation.
Download sessions here.
See you next year!
Automation, proactive monitoring, repeatability, reducing wasted time and technical debt: these are things you should know about when adopting DevOps.
Why automation? Because you can reduce technical debt and the number of failures that come with manual interaction. You can create environments using a provisioning procedure without falling into common pitfalls like security misconfigurations, incorrect settings, and botched monitoring.
Talking about SQL Server, immediate and proactive notifications represent a great step forward toward automation.
We automate whenever we want to stop doing a bunch of recurring or tedious steps manually. At the same time, we are also improving the overall quality and we are reducing the amount of things that can (and will) go wrong.
We are also optimising how we use our time, because we can ignore what the automation is doing for us and focus on what really needs our attention.
Finally, in this modern and notification-based world, emails generate too much white noise to deal with. In this article, we will learn how to integrate SQL Server tasks’ notifications with one of the most used collaboration tools: Slack.
Keep in mind that this is not the only way to get this done. This guide will help you understand why we're doing this (and, more broadly, why DevOps), not strictly how to do it, even though we'll walk through a real working example.
Note: Don't use SQL Server Express Edition. That edition supports neither SQL Server Agent tasks nor Database Mail, both of which we'll need hereafter. Also, for Slack, you must have a paid account, because the integration described below will not work with a free profile.
In order to send emails, we will use an SMTP server. It can be a private Microsoft Exchange, Postfix, or any other on-premises solution, or a cloud delivery service like SendGrid, SendinBlue, Mailjet, or Office 365.
In a team like mine, which uses chat as its daily communication driver, centralising every business and technical message can be a great step forward for the team in terms of awareness and knowledge sharing. Business roles can use the tool as well, so we can chat with each other, switching between technical and functional topics. It's just a matter of how Slack (in our case) is organised with channels and naming conventions. A good setup helps us better organise our meetings, small talks and any other implementation-related topic. This is an interesting subject in itself, but a little out of the scope of this guide. We will focus on notification bots instead.
SQL Server is able to send emails with its built-in features out-of-the-box, but we’d like to centralize every notification inside Slack, gaining the following advantages:
Now, how can we send notifications from SQL Server in an easier way than writing custom code or using a Slack incoming webhook? Is there an integration or a Slack app for this? Yes. And guess what? I think you'll like it, because you don't need to write a single line of code, and you don't need to choose between CLR, PowerShell or any other language. Ironically, the integration is called "Email".
The purpose of this article is not to describe Slack as a collaboration tool; further details are provided here. As we said before, the following samples work only if you have a Slack account.
This is the app to work with: Email. Its configuration is based on a four-step wizard:
A word about the "Hide this address" checkbox: this is useful if you want to hide the address from the other members of your workspace. If you check that box, you will be the only user able to read it.
As DBAs, we manage the following types of notifications on a daily basis:
With the exception of SSIS custom emails and external monitoring tools, everything is managed by Database Mail. This is a lightweight layer that allows us to send emails directly from a SQL Server instance by connecting to an SMTP server.
To set up Database Mail, you can follow this guide from the Microsoft documentation.
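As a quick sketch, the T-SQL equivalent of that setup looks roughly like the following. The account, profile, and SMTP server names below are hypothetical placeholders; adapt them to your environment:

```sql
-- Enable the Database Mail feature (requires sysadmin rights)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Database Mail XPs', 1;
RECONFIGURE;

USE msdb;
GO
-- Hypothetical account pointing at your SMTP server
EXEC dbo.sysmail_add_account_sp
    @account_name    = N'NotificationsAccount',
    @email_address   = N'sql-alerts@example.com',
    @mailserver_name = N'smtp.example.com',
    @port            = 587;

-- A profile groups one or more accounts
EXEC dbo.sysmail_add_profile_sp
    @profile_name = N'NotificationsProfile';

-- Bind the account to the profile
EXEC dbo.sysmail_add_profileaccount_sp
    @profile_name    = N'NotificationsProfile',
    @account_name    = N'NotificationsAccount',
    @sequence_number = 1;
GO
```

The GUI wizard in SSMS performs these same steps under the hood; scripting them keeps the setup repeatable across instances.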
Once this is up and running, you can manage the notifications using SQL Server operators. An operator is an alias managed by the SQL Server Agent which you can use to send emails and other types of messages, like pager and Net Send notifications.
Creating an operator is simple, just invoke the following system stored procedure:
USE msdb;
GO

EXEC dbo.sp_add_operator
    @name = N'<name here>',
    @enabled = 1,
    @email_address = N'<email here>';
GO
If you're wondering what email address to use, it's the one returned by the Email app integration for the channel you will send to (firstname.lastname@example.org in the example above). But what about the name parameter? In my opinion, the best name is the one that helps us understand where the message will be sent. Suppose we'd like to send notifications about some index maintenance jobs. We could call the operator Slack Indexes Maintenance, Slack Indexes Maintenance Operator, and so on. With such names, you will immediately know that messages go to Slack and that the topic is index maintenance.
Thus, you’ll get the following snippet:
USE msdb;
GO

EXEC dbo.sp_add_operator
    @name = N'Slack Indexes Maintenance Operator',
    @enabled = 1,
    @email_address = N'email@example.com';
GO
I'd like to share my thoughts on channel naming conventions. The principles to follow when naming channels are:
That being said, if the channel name describes a single activity (like index maintenance in the above example), the operator which sends notifications to it should be unique. The reason is simple: we know that Indexes Maintenance Operator sends messages to #sql-idx-maint-alerts (readability), and everyone knows that this is a one-to-one communication between a SQL Server operator and Slack (awareness). Everyone knows that the "sql" channel prefix indicates SQL Server-related notifications and the "alerts" suffix indicates an issue that needs attention (style and rules). At the same time, everyone knows how to set up another pipeline of messages the same way in the future (repeatability).
On the other hand, using a general-purpose channel, like #sql-maint-alerts, makes us ready for future changes. Suppose that index maintenance will not be the only operation we're executing on our servers (and it typically isn't). Does it make sense to create a new operator, called for example Database Concurrency Check Operator, which sends to its own special-purpose channel? Clearly not.
In the end, a general-purpose channel can hold more than one topic. All the notifications sent to that channel should, however, belong to the same category, to avoid too much generalisation.
Both solutions (one channel for multiple operators, or one-to-one) work equally well; it's just a matter of how you design your Slack channels. I suggest avoiding the "one channel to rule them all" pattern, because you'll get thousands of mixed notifications without any clear idea behind them. After all, a noisy channel with messy content will soon be ignored and eventually dropped.
Alerts are triggers that tell an operator that something went wrong. This article by Brent Ozar offers a good list of alerts that need attention, with descriptions based on severity. The binding is straightforward: all you need to do is link the operator to the alert:
When one of those events occurs, the operator is alerted and sends the message using its configured method; in our scenario, an email. If the operator uses the Slack Email app address, the email will be delivered to the Email app, and the integration will redirect it to Slack.
Let's see how we can use the notification mechanism to monitor SQL Server Agent jobs. Each job lets you configure what to do on failure, success, or completion of its execution. The binding is similar to the one for alerts:
Once the result is collected, based on the configuration you've set up, the job will send an email to the app.
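The same binding can be scripted. A minimal sketch, assuming the job and the hypothetical operator from the earlier example already exist:

```sql
USE msdb;
GO
-- Notify the operator by email when the job fails
-- @notify_level_email: 0 = never, 1 = on success, 2 = on failure, 3 = on completion
EXEC dbo.sp_update_job
    @job_name = N'Indexes Maintenance',
    @notify_level_email = 2,
    @notify_email_operator_name = N'Slack Indexes Maintenance Operator';
GO
```

This is exactly what the "Notifications" page of the job properties dialog does for you in SSMS.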
In order to send an email from a SQL Server Integration Services package (aka .dtsx) you need to configure the SMTP server within the package itself. This is a little out of scope, because it’s not really a SQL Server notification. You can leverage the power of SSIS and prepare a rich HTML-formatted message; the result is nice to read and informative like in these examples:
Cool stuff, isn't it? It's simply a .NET script in SSIS, which uses the System.Net namespace. Although the SSIS package is executed within a SQL Server Agent job, the default notification message that SQL Server Agent generates is not easy to read. The message you always get is:
JOB RUN:   <name> was run on <date/time>
DURATION:  x hours, y minutes, z seconds
STATUS:    Failed
MESSAGES:  The job failed. The Job was invoked by Schedule xyz (<name>). The last step to run was step xyz (<name>).
Decorating the package with a more detailed email will improve the readability and the accuracy of our notifications.
SQL Server is often (hopefully) monitored with specific counters. We use the PRTG monitoring tool to measure them, and when a baseline changes or a threshold is hit, we send notifications to Slack. How? Again, by sending to the Email app integration, specifying the right channel, and getting this:
The above report has been truncated. In the complete version, you'll find the full details of the measures, like the server names, the sensor links, the grid with all the results, and everything else you can see inside a PRTG admin portal.
Let's see a more complete example, using a SQL Server alert. We'll use a severity 17 alert. Severity 17 is simple to raise, and it describes a missing or insufficient resource when executing a command:
USE msdb;
GO

EXEC msdb.dbo.sp_add_alert
    @name = N'Severity 017',
    @message_id = 0,
    @severity = 17,
    @enabled = 1,
    @delay_between_responses = 60,
    @include_event_description_in = 1,
    @job_id = N'00000000-0000-0000-0000-000000000000';
GO
Set the Response for the Severity 17 alert to “Notify Operator”, via email:
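If you prefer T-SQL to the GUI, the same response can be set with sp_add_notification. A sketch, assuming the hypothetical operator created earlier:

```sql
USE msdb;
GO
-- Link the 'Severity 017' alert to the operator
-- @notification_method: 1 = email, 2 = pager, 4 = net send
EXEC dbo.sp_add_notification
    @alert_name = N'Severity 017',
    @operator_name = N'Slack Indexes Maintenance Operator',
    @notification_method = 1;
GO
```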
Run the following T-SQL script, which raises a severity 17 error:
RAISERROR(N'An error occurred Severity 17: insufficient resources!', 17, 1) WITH LOG; -- don't forget to use WITH LOG
GO
Go to your Slack account. If you’ve configured everything correctly, you should see the following:
Did it work? Great! If not, continue reading.
If you don’t see the notification try these steps:
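A good first place to look is the Database Mail log. The following sketch checks recent delivery attempts, assuming Database Mail is the transport as configured above:

```sql
USE msdb;
GO
-- Any SMTP or configuration error ends up in the event log
SELECT TOP (20) log_date, event_type, description
FROM dbo.sysmail_event_log
ORDER BY log_date DESC;

-- Check whether the messages were actually queued and sent
SELECT TOP (20) send_request_date, recipients, subject, sent_status
FROM dbo.sysmail_allitems
ORDER BY send_request_date DESC;
GO
```

A sent_status of 'failed' or 'unsent' points at a mail-side problem; if the message doesn't appear here at all, check the alert and operator configuration instead.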
There are some disadvantages to this kind of integration. For example, you cannot customise the message unless you do it inside a .NET script. The Slack email address is publicly usable, albeit hard to discover, so anyone who learns it can post to your private Slack channel just by emailing that special address. Also, you cannot send a notification to more than one Slack channel, or outside of the Slack world. In fairness, native SQL Server email notifications have the same limits, with distribution-list addresses playing the role of Slack channels.
For our purposes, this is a very low-effort automation with a high return in terms of value. With a couple of clicks, you can set up an email address representing a Slack channel, and, with little more, you can get notifications in a smart and readable layout.
Everything is kept inside the collaboration tool we already use heavily, every day. In the end, this example embeds one of the core DevOps principles (automation) and provides huge cross-role and cross-team value, especially when the channels also include the network and server teams.
I hope that you’ll give this a try.
DevOpsHeroes has been a great event. We didn't expect so many people, and we couldn't have imagined that the feedback would be so positive. Quick facts:
The following radar chart is about the event date, location, quality of the sessions, quality of the speakers, food, hospitality, event design and kits:
As we can see, the overall satisfaction is really high (4/5), and everything about the venue was rated even higher!
Indeed, we also received good feedback on the following questions:
Additionally, 87% of attendees were first-timers. Awesome!
We still have work to do for next year's edition to improve our organisation, but in the end we did very well. The number of feedback forms is close to the total number of attendees, so they can be considered a reliable source of suggestions.
Speaking of the tech sessions, including the speech by Martin Woodward, we received a set of important suggestions. We already know our speakers are skilled and passionate, but how could we understand people's opinions without these surveys?
I've spent the last two nights browsing and reading hundreds of those sheets (almost 500; thanks again). Each of them asks the following questions:
As we can see in the picture below, on the left side the overall satisfaction is very high once again (expectations, interest, value and presentation). On the right side, we notice that our speakers are considered expert, entertaining, and good storytellers:
We're really proud of the second edition of DevOpsHeroes. Engage IT Services and Upgrade did a great job together, and, hopefully, both companies will cooperate in the future to provide new formats and events like this one. A special thanks goes to Martin Woodward (Microsoft), who "crossed the seven seas" to be with us, and also to HPE for sponsoring and supporting the organisation. Great kudos go to Silvio Di Benedetto and Giuliano Latini, who managed the live streaming and the session recordings (coming soon here).
Download sessions, pics and the event stuff here.
See you next year!
It's September, a month whose start marks the beginning of many things, especially in Italy. Kids go back to school, companies invest; and what about projects and events?
You know, I really love doing both. But in this post I'd like to tell you how proud I am of my work over the past four years. Thanks to the help of my employees and co-ops, I organised four SQL Saturdays in Parma (2014/355, 2015/462 and 2016/566), making our small town a big SQL Server city. Who would've thought it? Yes, maybe in my mind there was something that pushed me to try, but it could have been an epic fail. However, I did it, and it was an astonishing run. It started like a quest in a role-playing game, and now we're in front of this great SQL Saturday. Something to be really proud of.
Two years ago, I felt the need to add a new event, one that would support a new way of thinking about the IT roles I was aware of. And that was when DevOps came out: a new buzzword that needed a deep explanation, in terms of both culture and approach. Combining my passion for The Simpsons (especially Homer Simpson) with DevOps, I made up DevOpsHeroes, whose acronym is DOH, Homer's exclamation. And now, for the second year, developers, IT operations people, DBAs and PMOs will share their experience on how to integrate, cooperate and make the IT world more productive. Another event, another thing to be proud of.
After the introduction, let me give you some important details about these events, sorted by upcoming date:
DevOpsHeroes 2017 – Parma (http://www.devops-heroes.net/)
Sql Saturday 2017 – Parma (http://www.sqlsaturday.com/675)
This year I'd like to thank my helpers at Engage IT Services Srl, the great support of Upgrade Srl (Andrea, Federica and Riccardo, who organised the DOH event with us), HPE, and also many communities, like DotDotNet.org and, above all, our great community GetLatestVersion.it. A special thanks goes to Windowserver.it, which will record the sessions, the live streaming and the interviews. As for SQLSaturday, the sponsor page shows how the event is supported. A big thank-you to all of them!
What does the future hold for us in Parma? I think I could try to add another event, maybe on IoT topics, because I feel the world is moving in that direction, and also, hopefully, my newborn Giulio will play with me and some robots 🙂
An alternative could be changing the format, for example to a TED-style event or something similar. Who knows? I'm pretty sure I'll get a lot of help from now on. This year has been a demonstration of it: I started alone, and now we are five. One more thing to be proud of. Wonderful!
For those who have many databases to keep under source control, it can be really useful to speed up the link-to-source-control process. The only way we have now is the GUI provided by Red Gate SQL Source Control. Actually, there's a GitHub project called SOCAutoLinkDatabases by Matthew Flat, a Red Gate engineer, but, unfortunately, it works only with the shared database model (centralised) in TFS. Let's see how to manage the link using a Working Folder (which also works with many SCMs) and the dedicated database model (distributed).
Last Saturday, I presented the Agile@School project at ITSOS Fornovo Taro, the school where I studied for five years, just before starting to work as a junior developer.
ITSOS has been, and still is, at the forefront of computer technology, and now, thanks to the new "technology course", the project becomes really interesting. During the year, the teachers have indeed covered development methodologies, both Agile and Waterfall. This sounds great, especially in Italy.