DevOps journeys series – Vertica release pipeline with Azure DevOps – Ep. 01 – development (part 2)


In the previous post we described the idea behind the automation we’re trying to implement in a scenario based on the MicroFocus Vertica database.

How it works

This “sandbox” is not a real, isolated development workstation. Let’s split it into two parts: one for the development of everything but Vertica (a local Windows workstation) and the other for a Vertica instance (probably a Unix/Linux VM) shared between developers.

In this shared instance we will create a schema for each developer working on the solution, so that everyone gets their own “environment”.

The source control folder tree (which will be TFVC source control on-premises) will be designed on the desired branch as follows:

    /Project
        /&lt;VerticaDatabase&gt;
            /&lt;Schema&gt;
                /Tables ...
                /Views ...
                Tables.ps1
                Views.ps1

As you can see, under the Project folder there is the Vertica database folder, which contains, schema by schema, all the .sql files with the Tables and Views DDLs (CREATE TABLE and CREATE VIEW). You may also notice the .ps1 files, which contain the list of executions in a certain (business-driven) order.

The file for, let’s say, “Table1” can look like this one:

    CREATE TABLE :SCHEMA.Table1
    (
        RowId int NOT NULL,
        RowStringValue varchar(30) NULL,
        CONSTRAINT PK_&lt;schema&gt;Table1 PRIMARY KEY (RowId)
    );

We’ve added a :SCHEMA parameter, which allows each developer to create their own schema as described before. This is the only way we’ve found to isolate developers in a shared Vertica instance while avoiding an instance per developer, which could be too resource intensive for the available PCs. When running the application locally, before committing any changeset to source control, a simple tool executes the .sql files with the new schema name, in the order given by the .ps1 file.

The Tables.ps1 file can look like the following:


param($hostname, $port, $user, $psw, $schemaName, $scriptsFolder)

# each vsql call runs one DDL file against the developer's schema
$schemaCommand = "vsql -h $hostname -p $port -U $user -w $psw -v SCHEMA=$schemaName -f $(Join-Path $scriptsFolder 'Table1.sql')"
Invoke-Expression -Command $schemaCommand

$schemaCommand = "vsql -h $hostname -p $port -U $user -w $psw -v SCHEMA=$schemaName -f $(Join-Path $scriptsFolder 'Table2.sql')"
Invoke-Expression -Command $schemaCommand

You may notice the term “vsql”, which is the command-line client provided by Vertica for executing queries. Further information here.
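Just to give an idea, running a single DDL file against a developer’s schema boils down to a one-liner like the following (a minimal sketch; host, credentials and file name are the placeholder values used throughout this post):

    # run one DDL file, substituting the :SCHEMA variable referenced inside the script
    vsql -h MyHost.Hos.Ext -p 1234 -U user -w password -v SCHEMA=MYSCHEMA -f .\Table1.sql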

Also, usernames and passwords will be stored in an external config file (or a secured API), like the following one:

{
    "host": "MyHost.Hos.Ext",
    "port": 1234,
    "user": "user",
    "psw": "password",
    "schemaName": "MYSCHEMA"
}

We’ve got the DDLs, the PoSh files for executing them, and the Vertica command line. Good. In a development environment, however, a set of tools should also be prepared to help us keep these artifacts in a single pipeline. This is the reason why we’ve created a “builder” script, like this one:

# resolve folders relative to the builder script (the "Scripts" subfolder name is illustrative)
$currentFolder = Split-Path -Parent $MyInvocation.MyCommand.Path
$scriptsFolder = Join-Path $currentFolder "Scripts"

$config = Get-Content (Join-Path $currentFolder "Build-Config.json") | Out-String | ConvertFrom-Json

$schemaCommand = $(Join-Path $scriptsFolder "Tables.ps1")
$schemaCommand += " -hostname $($config.host)"
$schemaCommand += " -user $($config.user)"
$schemaCommand += " -port $($config.port)"
$schemaCommand += " -psw '$($config.psw)'"
$schemaCommand += " -schemaName $($config.schemaName)"
$schemaCommand += " -scriptsFolder $scriptsFolder"

Invoke-Expression -Command $schemaCommand

This is another layer of management, which allows us to organize every part of the DDLs to be executed against Vertica.

Note: our scripts will destroy and rebuild any given SCHEMA. But this is the way we like it.
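Under the hood, the wipe is just a catalog-level drop and re-create; a minimal sketch of it, reusing the connection values read by the builder above:

    # destroy and re-create the developer schema before replaying the DDLs
    $wipe = "DROP SCHEMA IF EXISTS $($config.schemaName) CASCADE; CREATE SCHEMA $($config.schemaName);"
    vsql -h $config.host -p $config.port -U $config.user -w $config.psw -c $wipe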

Now, let’s see the possible scenarios.

Start from scratch or get started

When someone wants to start from scratch, this is the pipeline to follow:

  1. get the latest version of the branch;
  2. check and change the configuration file (JSON);
  3. execute the create-vertica-database-from-scratch.bat file (it wraps our PowerShell “build” script);
  4. that’s it: we’ve got a new schema in Vertica, empty and ready to be consumed.

If you want to preserve your data, this is not the right path for you; executing the “builder” tool is optional.
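Step 3 is just a double-click convenience: the .bat file does little more than launch the PowerShell “builder”. A sketch of the call it wraps (the script name is illustrative):

    # what create-vertica-database-from-scratch.bat effectively runs
    powershell.exe -ExecutionPolicy Bypass -File .\Build-Database.ps1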

New development

When a developer wants to make and try a changeset:

  1. change the Visual Studio application (SSIS or SSRS here) when needed;
  2. change the Vertica schema (adding tables, columns and so on);
  3. get the .sql file of a new object, or change the .sql file of an object which has been updated;
  4. replace them in the TFVC file structure;
  5. change the .ps1/.txt files if any DDL has been added (or if anything that impacts the execution order occurs);
  6. build the Visual Studio application and try it;
  7. when everything works well, check in.

Now, everyone can get the latest changes in a CI way.

Get delta changes

When a developer is going to get the latest changes, which contain an updated version of the Vertica objects, but wants to preserve their data, things get a little bit more tricky. The person who made the change could share the ALTER script in a collaborative chat tool. This is neither reliable nor comfortable, but without any comparison tool there isn’t a better way to make this happen.

That being said, we’ve implemented our own diff-script generator, based on the analysis of Vertica metadata (the catalog, browsing v_internal objects). So, after our friend gets the latest version, he executes a generate-diff-script.bat tool and lets it run the generated ALTER script. Tricky, but it works like a charm (we may speak about this comparison tool in future posts). Anyway, we’re looking forward to hearing updates from MicroFocus. Hopefully, we’ll get an official diff tool from them soon!
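Just to sketch the idea behind such a generator (this is not our actual tool; the schema names are illustrative, and here we read the public v_catalog views): dump the column metadata of two schemas and compare them line by line; every difference is a candidate ALTER statement.

    # compare the column metadata of two schemas via the Vertica catalog
    function Get-SchemaColumns([string]$schema) {
        $query = "SELECT table_name, column_name, data_type " +
                 "FROM v_catalog.columns WHERE table_schema = '$schema' " +
                 "ORDER BY table_name, ordinal_position;"
        # -t: tuples only, -A: unaligned output, easier to diff line by line
        vsql -h $config.host -p $config.port -U $config.user -w $config.psw -t -A -c $query
    }

    Compare-Object (Get-SchemaColumns 'MYSCHEMA') (Get-SchemaColumns 'REFSCHEMA')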


I’ve just shown the way we’re managing table DDLs and how we’ve created the PowerShell scripts, but the real scenario is more complex. We have to drop the Vertica schema first (DROP … CASCADE), then re-create the new parametrized schema, then tables, then views, and so on. Sometimes we’ve got Vertica schemas which reference each other, so for each of them we have to create the tables first, then the views. The business logic is encoded in how we write the “build” PowerShell script, exactly as it will be in the automated build process.
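A sketch of how that ordering can be kept explicit in the builder (the schema list and paths are illustrative, and the per-script parameters are omitted for brevity):

    # with cross-referencing schemas: create every table first, then every view
    $schemas = @('SCHEMA_A', 'SCHEMA_B')
    foreach ($s in $schemas) { & (Join-Path $scriptsFolder "$s\Tables.ps1") }
    foreach ($s in $schemas) { & (Join-Path $scriptsFolder "$s\Views.ps1") }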

The build process is not always “straight”, either. Some of the business processes need to be managed in a dedicated way, and cross references will occur, but handling them is the job of the “builder”, both the manual one and the automated one. Moving this responsibility to the build process allows us to feel more comfortable about our development solution.

Last, but not least, the manual development build process allows the developer to choose whether to re-create the database or not. Developers should not waste time managing things that a script can do repeatedly and efficiently. This is the reason why we kept the PowerShell somewhat static instead of writing complicated business logic: just add rows of vsql invocations in the right order, and that’s it.

DevOps journeys series – Vertica release pipeline with Azure DevOps – Ep. 01 – development (part 1)


As a consultant, life can be difficult when it comes to managing a platform like Vertica, a columnar RDBMS by MicroFocus. Speaking of DevOps, database management systems like this one are supported well neither by built-in tools nor by any third-party suite.

In my experience, which is focused on the Microsoft SQL Server world, tools like the ones made by Redgate or ApexSQL play a crucial role when it comes to DevOps. Unfortunately, this time I cannot find any real help. I think it’s an exciting task, but I have to be careful. It’s like “reinventing the wheel” for a car (Vertica) that didn’t realize it needs wheels. Strong and reliable wheels.


In the scenario I’m working on, the data warehouse is managed by a layer of business logic implemented within this platform. Starting from a SQL Server database populated by the application layer, the data rows pass through SQL Server Integration Services ETL packages, which transform them into the Vertica repository, and they end up in a Business Objects layer (the presentation layer). Just to get the big picture, see the following diagram:


Our mission is to share DevOps knowledge, culture and tools in order to automate a set of processes which are currently managed manually. Not a simple task, even if the guys I’m cooperating with are well trained and technically strong. Additionally, they are enthusiastic and ready to change (and we know how often change is troublesome). Actually, we will face obstacles only from a technology perspective.

After days of documentation and questions to our MicroFocus contacts, it looks like Vertica gives us no built-in way or tool for automating the schema-comparison phase. Third-party tools aren’t that good, either. We’re digging deeper and deeper trying to find something on the internet, but everything is a “little bit” tricky. To be honest, it seems that no one has tried to do DevOps with Vertica, even though it is a very good platform for speeding up heavy queries on huge amounts of data. Yes, we downloaded a couple of officially suggested tools, whose documentation says they can compare from and to Vertica itself, but none fits our scenario. No command lines, no direct integration, no script generation. If you think about automated pipelines, this is a big problem to deal with; I think this should be the foundation of a DevOps approach. In the end, it looks like we’re the first team that is trying to invest effort in this. It’s funny, though. Hopefully, someone with experience in Vertica DevOps who reads this post can help us!

(After digging more, I’ve found this article that describes almost the same scenario. I hope to reach out to the author.)


As a kick-off, we’ve moved on to the prerequisites, starting from the most important thing: build a process that facilitates the team’s workday, instead of wasting time on bad choices. This process should be simple and automated.

Then we’ve discussed the IDE used by developers, evaluating the differences between the before and after solutions and weighing the pros and cons of each. In the end, we agreed on the following solution on the development side:

  • Visual Studio 2017 will be the editor for SQL Server Integration Services solutions. This team, “BI team” hereafter, is working on SSIS and is separated from the application development team, at least for the moment;
  • TFVC (Azure DevOps on-premises) will be the version control system, since the BI team is already using it. We will think about a migration path to Git later, but for now we’d like to avoid any distraction outside the scope;
  • every “get latest” from source control must synchronize all the DDL scripts for Vertica as well as the SQL Server and SSIS solutions, in order to get a sandbox with the source code and the local database provisioning scripts (most likely we’ll get a virtual machine with Unix and a Vertica instance, too);
  • each SSIS project must be deployed to the local SQL Server instance when running solutions in the sandbox;
  • when all the SQL Server Integration Services packages are deployed, the list of Vertica DDLs must be executed in order to create the database from scratch;
  • optionally, mock inserts (also from file) could be added to the database in order to get data to work on (see the sketch below).
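For that last, optional step, a plain vsql COPY from a local file would fit; a minimal sketch (host, credentials, table and file names are placeholders):

    # load mock rows from a local CSV into the freshly created schema
    vsql -h MyHost.Hos.Ext -p 1234 -U user -w password `
         -c "COPY MYSCHEMA.Table1 FROM LOCAL 'mock/Table1.csv' DELIMITER ',';"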

Our idea

As you can see, with this pipeline we will get everything we need in our sandbox. Just a note: we can choose how to provision the Vertica instance, in order to offload resources from local workstations and also use another OS. It’s up to the team.

A big picture of this process can be summarized as follows:



As we’ve seen, the big picture is ready to be implemented, and the team knows perfectly what the goals are. We already know which technologies will be involved and how to keep them all together in order to build our pipeline.

Unfortunately, we cannot find any tool that helps us, so we’re preparing to jot down some lines of code.

Wrong version of ISDeploymentWizard in SSMS 17.8.1

The problem

Years ago, with the SQL Server 2016 release, Microsoft came up with a separate, brand-new version of SQL Server Management Studio. It was a happy day for the SQL Server community and for database developers.

Shortly afterwards, our company started to migrate every instance from older versions of SQL Server to 2016, using SSMS 17.*. Developers had already jumped into Visual Studio 2017 and everything seemed to work like a charm, until we started deploying Integration Services via the new SSMS, after converting the projects to the 2016 TargetServerVersion (which is NOT the Project Version).

The TargetServerVersion is the SSIS version, also used for deploy operations, while the Project Version setting tells Visual Studio how to open projects, based on the .dtproj XML project definitions.

Some days ago I realized that after updating to the latest build of SSMS (17.8.1), the .ispac deployment was actually executing the latest build of the Integration Services Deployment Wizard (aka ISDeploymentWizard.exe). As a result, every deploy of .ispac files, regardless of whether we double-clicked them or deployed directly from the Integration Services Catalog (SSISDB), failed with one of the scariest error messages ever:

“[…] The Script Task <unique_name> uses version 15.0 script that is not supported in this release of Integration Services […]”

What? Why? I’ve got a workstation with SQL Server 2016 and the related Integration Services 13.0:


The computer I’m speaking of has the same build as mine, and nobody has installed any other Integration Services version anywhere. Andy Leonard explained this behavior in this blog post. Unfortunately, in my scenario, that doesn’t solve the problem.

Let’s try to explain better.


The workstation I’m working on has SQL Server 2016 (build 13.0.5149.0) and Integration Services 13.0 on Windows 10 Pro. I’ve got a simple package with a single Script Task which does literally nothing:

As Andy suggested, I changed the TargetServerVersion to SQL Server 2016, so I get the C# 2015 compiler for scripts:

First execution and deploy

Executing it locally, nothing happens, but it happens in a green way (success):


OK, now we’re going to deploy to the other machine, generating the .ispac file and double-clicking on it. It’s important to make sure that you’re double-clicking the .ispac file. Do not right-click the Integration Services Catalog project folders, because SSMS will execute the latest build of ISDeploymentWizard.exe by design.

deploy from SSMS 17.8.1


When double-clicking, the app selector should use the version related to the TargetServerVersion setting of the .ispac. This works on most of my other computers. But on one of them, here is the screen:

double click on .ispac file

Hey! This is 2017, even when double-clicking the .ispac file. Let’s try to deploy using the new tool. The deploy succeeded.


And now, let’s try to validate the package execution via SSISDB, right-clicking the project itself and selecting “Validate…”. This is the result of validating the package with the empty script:


I’ve tried on six different machines: five ran successfully and ONLY ONE returned the above error message. Still stuck in the middle.

What have we changed?

Just the setup of SSMS (17.8.1), updating the 17.7 one. Once again, the same setup on all six machines. Five out of six worked; this one is trying to kill me.

For some strange reason, something (I assure you all, not someone) changed the .ispac application association in the registry, maybe when double-clicking the file for the first time during a pending reboot (?). We’re still investigating, since we never make any change in production without permissions and processes. That said, it’s weird. And it was so difficult to track down.


Easy to say, now that we’ve figured out the root cause. Not so good, but changing the registry key HKEY_CLASSES_ROOT\.ispac back to the 130 executable (IntegrationServices.ProjectDeploymentFile.130) fixed the unwanted behavior. The key had been set to IntegrationServices.ProjectDeploymentFile.140 right after the update from 17.7 to 17.8.1.
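If you prefer scripting the fix over using regedit, a one-liner like the following should restore the association (a sketch; run it from an elevated PowerShell prompt):

    # point the .ispac extension back to the 2016 (130) deployment wizard
    Set-ItemProperty -Path "Registry::HKEY_CLASSES_ROOT\.ispac" -Name "(default)" `
                     -Value "IntegrationServices.ProjectDeploymentFile.130"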

Instead of changing it via regedit, you can try an “Open with…” with “use default” checked, in order to force the association between the .ispac file and the right ISDeploymentWizard.exe version. But this time, on this machine, it didn’t work. This is the reason why I tried the regedit route.


I have to say a big thanks to Andrea Amantini, one of my peers, who is well known for his ability to find “a needle in a haystack”. Strange things happened here: a combination of Murphy’s law, a sort of “black Friday” and a pending reboot. Hopefully this helps someone, at least.

Agile@School – The beginning

Il club delle 6

The ITI F.Viola/Marchesini in Rovigo has also officially joined the Agile@School experiment/lab project, based on the idea, and with the support, of Alessandro Alpi.


The whole initiative has been included in the official school-work alternation activities for the students of the two fifth-year classes of the Rovigo institute.

The lab’s goal is to introduce the students to Agile concepts and teamwork: they get their hands dirty and use cutting-edge tools from the professional world to build an app with Xamarin.

In the three hours of the lab we touched on several topics, and I have to compliment the students for being engaged and active at the start of a journey that is still to be explored.

  • We introduced the first concepts and terminology of the Agile world and the substantial differences between writing software as a hobby or for study and doing it professionally.
  • We explored the basic features of VSTS (Visual Studio Team Services), creating various types of PBIs and putting them…


Agile@School – sixth lesson

Il club delle 6

The Agile@School meetings at the ITI F.Viola/Marchesini in Rovigo resumed today after the holidays: we’re at the sixth episode out of ten. As promised in previous episodes, we picked the Agile/DevOps principles back up, after they had been parked in the last few meetings in favor of technology deep dives on Xamarin Forms.

Continuous Delivery


Agile@School – Third lesson

Il club delle 6

Here we are at the third lesson out of ten of the Agile@School lab at the IIS Viola/Marchesini in Rovigo.


Stand-up meeting

Once again we practiced the stand-up meeting, in which we briefly recapped the previous episode and roughly introduced the day’s agenda. Participation in the recap of past topics was lukewarm, but it still made us face the fact that, to summarize previous episodes, you have to pick the right level of abstraction: neither too high nor too detailed. Then I introduced the day’s topics and a preview of the next lesson.

MVVM – Commands

We picked up the thread of the previous lesson, introducing the concept of commands in the MVVM world. We took the opportunity to read the MSDN documentation of the ICommand interface together, reviewed the concept of delegates, and studied an implementation of ICommand. Once again the preparation level of the fifth-year students surprised me, because the…


Agile@School – Second lesson

Il club delle 6

This week, too, the Agile@School lab meeting took place at the IIS Viola/Marchesini.

Stand-up meeting

In this second lesson we immediately applied one of the concepts covered in the previous one: feedback as a lever for continuous improvement. We took the opportunity to introduce the stand-up meeting held by many teams. During these few minutes of “informal” standing meeting, we tried to collect feelings and opinions about the first meeting. After an initial moment of silence, a few students shared their views:

  • Some found the meeting more interesting than expected; in particular, they found the concept of a team and its management interesting;
  • Someone else was struck by the power of visualizing work through a kanban board;
  • One of the teachers said the basic principles of DevOps are very interesting.

I also tried to encourage negative feedback, but nobody went out on a limb. For now, three pieces of positive feedback are a good result…


Agile@School 2017 – This is the end

All good things come to an end, right? And this is true for Agile@School 2017, too. However, no worries: although I’m becoming a dad, I’m pretty sure I’ll keep making students more aware of Agile principles and practices.

There is only good news, to be honest. The first is that Michele Ferracin, a friend of mine from the community, is starting to get in touch with schools (we really don’t know how many of them will participate) in order to bring Agile@School to Padova as well. “Bring the project to two cities”: perk reached! The second piece of news is about this year’s students. Everything is set up and running. Everyone is ready. Each team has chosen how to present its results to the exam board:

Software and Tech

The Messinesi team (Amanda and Alex) has prepared a Prezi presentation whose goal is to depict the tools they’ve used. They also explain the technologies and the usability of the collaborative chat they’ve created. Additionally, the board can try the “product”, interacting with laptops and mobile phones.


AI and Bots

The Random team (Thomas and Luca) and the Scrubs team (Enea and Sebastiano) have created a 3-minute pitch video with:

  • an introduction to their chat bots (via Prezi and PowerPoint)
  • a simple explanation of the tech behind them
  • a couple of self-interviews (two per bot) in order to add something fun


The Domotic team (Nicodemo and Mattia) and the Bar Santa team (Simone and Mirko) will show how their projects work, since they’re totally based on hardware: Smart Home vs. hacked remote-controlled car. We’ve already got some spots:

Cognitive Services

The Human Recognizers team (Marco and Francesco) will present a website on which the exam board can play with cognitive services. They demonstrate how a webcam can recognize faces, mood and so on, using a mobile application.

This is the Prezi presentation.

As you can see, they’ve worked very well. The projects are amazing and we’ve awarded everyone the diploma, as usual:


This year we’ve achieved great results and we’re proud of all the work done together. We also asked the students to fill in a survey, and the feedback is as follows:

And now, good luck, guys! See you next year, hopefully!

Stay tuned