Two tech events in Parma, the city of food

SQL Saturday Parma, six years in a row. DevOpsHeroes, four. Parma has proven a great destination for technical events. I organized the first SQL Saturday in my birthplace in November 2014. Two years later, as the DevOps culture started to grow and agile practices gained strength, I tried to create a brand-new event: DevOpsHeroes was born in 2016, held a month before the SQL Saturday, again in Parma. Why a second event? The audience has been great (more than 200 attendees), and so has the feedback; people who attend seem comfortable with everything. And thanks to the University of Parma, which hosts both events, they are still growing.

Let’s take a closer look at each event.

SQL Saturday Parma (2014-now)

SQL Saturday, a great format by the Professional Association of SQL Server (PASS), is a well-known event series all around the world; you can find hundreds of editions listed here. In Parma, the event is completely free, with no pre-conference. It takes place on the campus of the University of Parma, in order to make students, as well as the university itself, aware of this kind of event. Unfortunately, students don’t attend as much as professionals do, so I’m working on building a better relationship with them, too. Speaking of the audience, some numbers:

Event                 Attendees   Tracks   Speakers   Feedback
SQL Sat 355 (2014)    128         3        14         4.46/5
SQL Sat 462 (2015)    157         3        18         4.20/5
SQL Sat 566 (2016)    150         3        18         4.48/5
SQL Sat 675 (2017)    210         4        24         4.47/5
SQL Sat 777 (2018)    230         4        24         4.72/5

In 2014 the sessions were driven by, let’s say, classic topics: DBA, development, BI. Starting from 2016 the coverage changed a lot: more data visualization, more automation, more BI in the cloud, and more cloud in general. 2017 was the game-changer for NoSQL sessions (on Microsoft Azure), too. The latest edition of SQL Saturday Parma introduced AI, and this year we are struggling to select the right sessions from about 70 proposals coming from all over the world. The call for papers (available here) closes on September 30, and if you are in the area on November 23, or if you want to come to Italy for a weekend of training on the Microsoft Data Platform with friends, #sqlfamily, and good food, you are welcome!

The event is strongly supported by the Italian #sqlfamily, especially my friends at UGISS. A big thanks goes to them.

DevOpsHeroes (2016-now)

Started as a one-shot event, DevOpsHeroes is now in its fourth consecutive year. Riding the wave of enthusiasm from the SQL Saturdays in Parma, and thanks to my work experience, which meanwhile moved from DBA skills to data DevOps and automation, this event has been a pleasant surprise, even though it doesn’t gather as many people as SQL Saturday does (SQL Saturday also benefits from its own buzz and the PASS support). The event was born to spread the DevOps culture, not just the tools: tools were presented only to highlight the advantages of a culture which must be absorbed before going deeper. Spreading that culture has been, and still is, one of our missions.

The event is typically held one month before SQL Saturday, on the campus of the University of Parma. It draws more than 120 attendees, and this year the organization is expecting more, thanks to the great sponsors supporting this edition.

As you can see, under the hood there are two main “helpers”: Engage IT Services, the company I co-founded, and GetLatestVersion.it, a great Italian online community for DevOps and ALM technologies. The event is totally free and will feature 18 sessions across 3 tracks, covering technologies, methodologies, and use cases (or “experience” sessions). The call for papers is already closed, and we’re finalizing the program for that Saturday, October 26.

Wrapping up

SQL Saturday Parma website: https://www.sqlsaturday.com/895/EventHome.aspx

Registration: https://www.sqlsaturday.com/895/registernow.aspx

DevOpsHeroes website: http://www.devops-heroes.net/

Registration: https://www.eventbrite.it/e/biglietti-devopsheroes-2019-66796826105

DevOps journeys series – Vertica release pipeline with Azure DevOps – Ep. 01 – development (part 1)

Intro

As a consultant, life can be difficult when it comes to managing a platform like Vertica, a columnar RDBMS by Micro Focus. Speaking of DevOps, database management systems like this one are supported neither by built-in tools nor by any third-party suite.

In my experience, which is focused on the Microsoft SQL Server world, tools like the ones made by Redgate or ApexSQL play a crucial role when it comes to DevOps. Unfortunately, this time I couldn’t find any real help. It’s an exciting task, but I have to be careful: it’s like reinventing the wheel for a car (Vertica) that didn’t realize it needs wheels. Strong and reliable wheels.

Scenario

In the scenario I’m working on, the data warehouse is managed by a layer of business logic implemented within this platform. Starting from a SQL Server database populated by the application layer, the data rows pass through SQL Server Integration Services (SSIS) ETL packages, which transform them and load them into the Vertica repository; from there they end up in a Business Objects layer (the presentation layer). Just to get the big picture, see the following diagram:

[Diagram: BI scenario, SQL Server -> SSIS ETL -> Vertica -> Business Objects]

Our mission is to share DevOps knowledge, culture, and tools in order to automate a set of processes which are currently managed manually. Not a simple task, even if the people I’m cooperating with are well trained and technically strong. Additionally, they are enthusiastic and ready to change (and we know how often change is a struggle). Realistically, we will face obstacles only from the technology perspective.

After days of reading documentation and asking questions of our Micro Focus contacts, it looks like Vertica offers no built-in way or tool to automate the schema-comparison phase. Third-party tools aren’t that good either. We’re digging deeper and deeper to find something on the internet, but everything is a “little bit” tricky. To be honest, it seems that no one has tried to do DevOps with Vertica, even though it is a very good platform for speeding up heavy queries on huge amounts of data.

Yes, we downloaded a couple of officially suggested tools whose documentation says they can compare schemas from and to Vertica itself, but none fits our scenario: no command line, no direct integration, no script generation. If you think about automated pipelines, this is a big problem to deal with, since schema comparison should be a foundation of any database DevOps approach. In the end, it looks like we’re the first team trying to invest effort in this. It’s funny, though. Hopefully, someone with Vertica DevOps experience who reads this post can help us!

(After digging more, I’ve found this article that describes almost the same scenario. I hope to reach out to the author.)
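Since nothing off the shelf fits, a first building block could be extracting the live schema as DDL straight from Vertica and diffing it against the scripts kept in version control. Here is a minimal sketch in Python, assuming the vertica-python driver and Vertica’s EXPORT_OBJECTS function; the host, credentials, schema name, and repo path are placeholders, not our real setup:

```python
import difflib
import vertica_python

# Placeholder connection settings for a local/sandbox Vertica instance.
CONN_INFO = {
    "host": "vertica.local",
    "port": 5433,
    "user": "dbadmin",
    "password": "...",
    "database": "dwh",
}

def export_schema_ddl(schema: str) -> str:
    """Ask Vertica itself for the CREATE statements of every object in a schema."""
    with vertica_python.connect(**CONN_INFO) as conn:
        cur = conn.cursor()
        # EXPORT_OBJECTS('', scope, false) returns the DDL as a single string
        # instead of writing it to a file on the initiator node.
        cur.execute(f"SELECT EXPORT_OBJECTS('', '{schema}', false)")  # schema name comes from trusted config
        return cur.fetchone()[0]

def diff_against_repo(schema: str, repo_ddl_file: str) -> str:
    """Unified diff between the scripts in the repo and the live schema."""
    live = export_schema_ddl(schema).splitlines(keepends=True)
    with open(repo_ddl_file) as f:
        repo = f.readlines()
    return "".join(difflib.unified_diff(repo, live,
                                        fromfile=repo_ddl_file,
                                        tofile=f"live:{schema}"))

if __name__ == "__main__":
    # Hypothetical layout: all DDL for the 'dwh' schema kept in one repo file.
    print(diff_against_repo("dwh", "vertica/ddl/dwh.sql") or "No schema drift detected.")
```

It’s not a real schema compare (it won’t generate ALTER scripts for us), but it would at least make drift visible inside a pipeline.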

Prerequisites

As a kick-off, we moved on to the prerequisites, starting from the most important one: build a process that makes the team’s workday easier, instead of wasting time on bad choices. This process should be simple and automated.

Then we discussed the IDE used by developers, evaluating the before-and-after solutions and weighing the pros and cons of each. In the end, we agreed on the following development-side setup:

  • Visual Studio 2017 will be the editor for SQL Server Integration Services solutions. This team, the “BI team” hereafter, works on SSIS and is separate from the application development team, at least for the moment;
  • TFVC (Azure DevOps on-premises) will be the version control system, since the BI team is already using it. We will think about a migration path to Git later, but for now we’d like to avoid any distraction out of scope;
  • every “get latest” from version control must synchronize all the Vertica DDL scripts as well as the SQL Server and SSIS solutions, in order to get a sandbox with the source code and the local database provisioning scripts (most likely we’ll also get a virtual machine with Unix and a Vertica instance);
  • each SSIS project must be deployed to the local SQL Server instance when running solutions in the sandbox;
  • once all the SQL Server Integration Services packages are deployed, the Vertica DDL scripts must be executed in order to create the database from scratch;
  • optionally, mock inserts (also from file) could be added to the database in order to have data to work on (a minimal sketch of these last two steps follows this list).
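As a sketch of the last two steps, assuming the DDL scripts live in a folder with a numeric prefix that fixes their execution order (e.g. 001_schemas.sql, 002_tables.sql) and that mock data sits in a sibling folder, the provisioning could look like this (again with the vertica-python driver and placeholder paths and credentials):

```python
from pathlib import Path
from typing import Optional

import vertica_python

# Same placeholder connection settings as the sandbox instance.
CONN_INFO = {
    "host": "vertica.local",
    "port": 5433,
    "user": "dbadmin",
    "password": "...",
    "database": "dwh",
}

def provision_sandbox(ddl_dir: str, mock_dir: Optional[str] = None) -> None:
    """Create the database from scratch by running every DDL script in order."""
    scripts = sorted(Path(ddl_dir).glob("*.sql"))      # numeric prefix = execution order
    if mock_dir:                                       # optional mock inserts
        scripts += sorted(Path(mock_dir).glob("*.sql"))
    with vertica_python.connect(**CONN_INFO) as conn:
        cur = conn.cursor()
        for script in scripts:
            print(f"Running {script.name} ...")
            # Naive statement split on ';' is enough for our plain DDL files.
            for stmt in script.read_text().split(";"):
                if stmt.strip():
                    cur.execute(stmt)
        conn.commit()

if __name__ == "__main__":
    provision_sandbox("vertica/ddl", mock_dir="vertica/mock")
```

A script like this could run both on a developer workstation after a “get latest” and, later, inside a build agent, which is exactly the kind of reuse we are after.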

Our idea

As you can see, with this pipeline we will get everything we need in our sandbox. Just a note: we can choose how to provision the Vertica instance, in order to offload resources from local workstations and even use another operating system. It’s up to the team.

A big picture of this process can be summarized as follows:

[Diagram: the big picture of the development pipeline]

Conclusions

As we’ve seen, the big picture is ready to be implemented, and the team knows exactly what the goals are. We already know which technologies will be involved and how to bring them all together to build our pipeline.

Unfortunately, we couldn’t find any tool that helps us, so we’re preparing to jot down some lines of code.