Agile@School 2017 – This is the end

All good things come to an end, right? And this is true for Agile@School 2017 too. No worries, though: although I'm about to become a dad, I'm pretty sure I'll keep making students more aware of Agile principles and practices.

There is only good news, to be honest. The first piece of news is that Michele Ferracin, a friend of mine from the getlatestversion.it community, is getting in touch with schools (we really don't know how many of them will participate) in order to bring Agile@School to Padova as well. "Bring the project to two cities": perk achieved! The second piece of news is about this year's students. Everything is set up and running, everyone is ready, and each team has chosen how to present its results to the exam board:

Software and Tech

The Messinesi team (Amanda and Alex) has prepared a Prezi presentation whose goal is to describe the tools they've used. They also explain the technologies behind the collaborative chat they've created and its usability. Additionally, the board can try the "product" by interacting with it on laptops and mobile phones.


AI and Bots

The Random team (Thomas and Luca) and the Scrubs team (Enea and Sebastiano) have created a three-minute pitch video with:

  • an introduction to their chat bots (via Prezi and PowerPoint)
  • a simple explanation of the technology behind them
  • a couple of self-interviews (two per bot) to add something fun

IoT

The Domotic team (Nicodemo and Mattia) and the Bar Santa team (Simone and Mirko) will show how their projects work, since they're entirely hardware-based: Smart Home vs. hacked remote-controlled car. We've already recorded a few teaser clips.

Cognitive Services

The Human Recognizers team (Marco and Francesco) will present a website on which the exam board can play with cognitive services. Using a mobile application, they demonstrate how the webcam can recognize faces, moods, and so on.

This is the Prezi presentation.

As you can see, they've worked very well. The projects are amazing, and we've awarded everyone a diploma, as usual:

[Image: the diploma awards]

This year we've achieved great results, and we're proud of all the work done together. We also asked the students to fill in a survey; their feedback is shown below.

And now, good luck, guys! See you next year, hopefully!

Stay tuned!

Agile@School 2017 – let’s start over

As a recurring project, Agile@School started again in February with a new set of projects and ideas. Gabriele will help me again, but it will be a very difficult task. Last year we followed a Scrum approach, to match the team structure: as you can read here, there was one team with a small bunch of members. Now we're getting "bigger", so we'll have micro-teams of two or three members each. A great chance for Kanban. Let's give it a try.


How will we approach the project at the beginning? By:

  • defining a set of micro-teams, which we call "task forces"
  • designing a Kanban board
  • describing personas
  • discussing some ceremonies we'd like to get rid of
  • discussing some ceremonies we'll keep
  • describing the customer journey and story mapping practices

The task forces

Actually, the term doesn't fit very well; a task force is something that could be considered a "DEFCON 1" team. However, we wanted to give the teams a "strong" label. To be honest, we have little time available, so we could say we're already in a hurry 🙂

The Kanban board

As we said above, we will have several task forces, most likely six. Therefore, the board will use columns (as usual) for status management and rows (aka swimlanes) to separate teams and projects.

[Image: the Kanban board]

The board will be created in Visual Studio Team Services, so that we can also use the source control manager it provides.

Personas

Each team member will fill in a simple card, the Persona card, which is shown in the picture below:

[Image: the Persona card]

As you can see (it's in Italian), the first column is for the persona's details, the second for their interests, and the third for the "role" the member would like to have. I know that the last column is not included in any best practice, but I feel that some students could start thinking about their jobs and their futures. It could be interesting.

The customer journey

During the next meeting, we'll ask the students to show us their customer journeys. Each team will have to describe the journey of a typical user, with the mood for each action the user takes and the value they get from the action itself.

Conclusions

Kanban, task forces, boards, customer journeys, personas, etc. This year is full of new things to learn. The source control manager will change, too: we will use Git on VSTS, so we'll get different projects in the same place in a quicker way.

And now, let’s start over! 🙂

Managing migrations with RedGate SQL Source Control 5

This is a good day for me. I've finally tried the improved migrations feature in the latest version of Redgate SQL Source Control, and I've tested it against all my typical cases so that I can share my findings with you.

What is the migrations feature?

When a change set includes an edit that leads to data loss (or to a schema deployment error due to constraint definitions), we need to create a script that avoids any regression or blocking issue during the deployment phase. This is also true when working as a distributed team in a continuous-integration environment: we need to prevent members of our team from being blocked whenever changes are shared, and to minimize the risk of regressions.

In SQL Source Control, "migrations" means "migration SQL scripts": custom scripts, related to the changed objects, that let us avoid regressions. They are created whenever there is a likelihood of a potential problem or data loss occurring on an object, and they are executed when getting the latest version of our database from our source control system. Most importantly, they are applied when deploying our databases to the test/staging/production environments.

Migrations feature history

A migrations feature has been implemented more than once in Redgate SQL Source Control, unfortunately with some trouble, especially when trying to merge the migration scripts between different branches and to integrate with the most recent source control systems (you can read a brief history of migrations in SQL Source Control in this blog post). However, the latest version of migrations in SQL Source Control v5 is a great implementation, which also supports a generic working folder (no matter which version control system is installed, we just use a simple folder). So, you may hear about the previous versions, Migrations v1 and v2, and you will find some of their capabilities in SQL Compare projects, but the new migrations feature in SQL Source Control v5 resolves everything.

A real scenario

The environment I'm working on is a multi-branch scenario, implemented in Visual Studio Team Services, like the following one:

  • a VSTS local workspace folder mapped to the VSTS team project
  • a local folder for each branch (dev | main | release), mapped to the VSTS branches
  • the database folder within each branch
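
As a rough sketch, assuming illustrative folder names (StoreDb is the sample database used later in this article), the layout looks like this:

VSTSWorkspace\                -- local workspace mapped to the VSTS team project
    dev\
        StoreDb\              -- the database folder under source control
    main\
        StoreDb\
    release\
        StoreDb\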

The purpose of this article is to demonstrate the capabilities of the feature when managing two typical cases. The first is a breaking change: adding a NOT NULL column to a table. The second is a data-only migration script, which can be injected during the deployment phase.

Migration for NOT NULL column addition

Adding a NOT NULL column to a table can lead to a delivery problem. Indeed, in a dev environment, when a developer gets the change and the target table already contains rows, the NOT NULL constraint will be violated. The same can happen in any environment when deploying.

When this kind of change occurred, I used to create a "pre-release" script that added the column as NULLable, populated it with data, and finally added the NOT NULL constraint. Until now, this was managed manually, and it took me some time, typically the day before the deployment (it's usually not just a matter of a single table). I had to create SQL scripts, folders, and naming conventions, and I had to remember to execute them. When we're talking about automation, this is a step backwards, and it opens us up to the risk of human error. Last but not least, it's a more complex solution to set up in the deployment software we are using, Octopus Deploy.
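
For context, here is a minimal sketch of that manual pre-release pattern, based on the StoreDb example used later in this article (the backfill value is an assumption):

USE StoreDb;
GO

-- Step 1: add the column as NULLable, so existing rows don't violate any constraint.
ALTER TABLE Inventory.Items ADD InsertTimestamp datetime NULL;
GO

-- Step 2: backfill existing rows (the value used here is just an example).
UPDATE Inventory.Items
SET InsertTimestamp = GETDATE()
WHERE InsertTimestamp IS NULL;
GO

-- Step 3: enforce the NOT NULL constraint once every row has a value.
ALTER TABLE Inventory.Items ALTER COLUMN InsertTimestamp datetime NOT NULL;
GO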

In the following scenarios we will try to understand what happens when:

  • replacing a schema change with a migration script
  • sharing a change set with other developers
  • merging branches
  • using data-only migration scripts

Replace a schema change with a migration script

When we need to replace the schema change with our own script, a schema-data-schema migration script is the best choice.

Suppose we have two developers, Dev1 and Dev2, each working with a dedicated database model in SQL Source Control on the dev branch. They have the same table in their databases (StoreDb), called Inventory.Items (ItemId int PK, Name varchar(30), CategoryId smallint). Dev1's table is empty, while Dev2's has ten rows. Dev1 executes the following command:

USE StoreDb;
GO

ALTER TABLE Inventory.Items ADD InsertTimestamp datetime NOT NULL;
GO

Since the table is empty for Dev1, the command executes successfully. But Dev1 has ignored the potential data loss/constraint violation problems. Fortunately, SQL Source Control warns him when he tries to commit the change:

[Image: 01 - warn changes]

Dev1 can add a migration script related to the changed object:

[Image: 02 - Migration script]

Pressing "Generate script" for the selected object shows the proposed T-SQL migration script:

[Image: 03 - Script]

As you can see, the proposed script is not really "complete": it's up to Dev1 to add the desired behavior for the new NOT NULL column (the highlighted part is the migration addition). However, the change to be added is simple and quick.
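
For instance, the custom part could be a simple backfill like the following (a sketch: the actual value to use is Dev1's choice, not something the tool generates):

-- Custom logic added inside the generated migration script:
-- backfill the new column before the NOT NULL constraint takes effect.
UPDATE Inventory.Items
SET InsertTimestamp = GETDATE()
WHERE InsertTimestamp IS NULL;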

Dev1 can "Save and close", and the commit tab then looks like the picture below:

[Image: 04 - Save and commit]

The generated migration script replaces the suggested schema change, which allows us to avoid any constraint violation.

Sharing changes with other developers

What is going to happen on the other developers' workstations? When Dev1 checks in the column addition, Dev2 can get the latest version of the database. Keep in mind that Dev2 already has the database with the Inventory.Items table and its ten rows, in a version without Dev1's change.

[Image: 06 - table]

Since they are using a working folder, they need to get the files from source control (VSTS, using Team Explorer, for example) and then apply the changes to their databases. Dev2 sees the following:

[Image: 05 - Get]

Without the migration, he would receive errors (the NOT NULL constraint would be violated); with the migration, he can go ahead. Indeed, the "get latest" works, and the ten rows have been updated with the logic of the migration script.

[Image: 07 - table updated]

Merging branches

What if Dev1 and Dev2 merge the branch they are working on with another line? Suppose Dev2 starts a merge between the dev and main branches. Getting the latest version from the main branch behaves in the same way: the migration script is replicated even when switching repositories. This didn't work in the past, and I can definitely say that it works well now. This is a great point, especially when frequent merges occur.

When should we use data-only migration scripts?

When the change has already been committed and we need to update the data in the changed object, a data-only (blank) migration script is the right choice. The "split column" and "merge columns" refactorings are good examples.

[Image: 08 - Split]

In the split column scenario we have to:

  1. create the new columns, then commit
  2. add a migration script that populates them from the composite column (see the sketch after the merge list below), then commit
  3. drop the composite column, then commit

[Image: 09 - merge]

In the merge columns scenario we have to:

  1. create the new column, then commit
  2. add a migration script that aggregates values from the other columns, then commit
  3. drop the other columns, then commit
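
As promised above, here is a hedged sketch of the step-2 data-only script for the split scenario (table and column names are hypothetical, and a simple "First Last" format is assumed):

-- Data-only migration for a "split column" refactor: populate the new
-- FirstName/LastName columns from the composite FullName column.
-- Assumes every FullName contains exactly one space separator.
UPDATE dbo.Customers
SET FirstName = LEFT(FullName, CHARINDEX(' ', FullName) - 1),
    LastName  = SUBSTRING(FullName, CHARINDEX(' ', FullName) + 1, LEN(FullName));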

In both cases, sharing or deploying delivers the changes respecting the commit order. You can read the migration samples here.

What does SQL Source Control do under the hood?

A deployment script that involves migrations consists of compare blocks and migration blocks:

[Image: 10 - blocks]

Redgate SQL Source Control creates a set of items inside a "Custom Scripts" folder, located within the database folder itself:

  • <datetime> ue auto: configuration and settings for the comparison (including the RedGateDatabaseInfo.xml file)
  • <datetime> uf user: the migration script (a SQL script plus JSON files for the transformations)
  • DeploymentOrder.json: a file defining the order in which migrations are deployed

Additionally, a RedGateLocal.DeploymentMetadata table is added to the database. This table contains the list of migration scripts already executed on the database, and it allows us to avoid any duplication when the scripts are applied. More details here.
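
To inspect which migrations have already run against a given database, a quick check could be the following (a sketch; the table's exact schema may differ between versions):

-- List the migration scripts already executed on this database.
SELECT *
FROM RedGateLocal.DeploymentMetadata;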

What about the deployment phase?

SQL Source Control is not used for delivering changes to the test/staging/production environments. The other tools in the Redgate SQL Toolbelt able to do that are SQL Compare and SQL Data Compare, which compare structures and data, and DLM Automation, which plugs into your release management tool.

Like the get/apply processes, the comparison checks the set of items in the "Custom Scripts" folder and looks up the entries in the RedGateLocal.DeploymentMetadata table, respecting the SQL Source Control commit order. If a migration has already been delivered, it is skipped, in order to avoid any double execution. This means that we will also find the table in our test/staging/production environments.

Conclusions

The migrations feature in SQL Source Control v5 finally comes with a great implementation. The suggested scripts are good, and everything is clear and simple to understand. The user interface changes are welcome too, style included. The greatest point, in my opinion, is the advantage we get from automation: all the manual operations that were necessary before this release suddenly disappear. We can simply point Octopus (or TeamCity, or another deployment tool) at the folder and execute the comparison against our environments, avoiding regressions and data loss.

That’s the way we like it!

Agile@School – episode 3

The Agile@School project is growing, and we're finally reaching the most important part of the iteration.

In this third episode we played the planning poker game around a round table (which, at first sight, actually looks like a poker session 🙂 ), and the students decided to skip the coffee break! You know how difficult it is for them to stay focused.

Continue reading “Agile@School – episode 3”

Agile@School – A new way for managing school projects

Last Saturday I presented the Agile@School project at ITSOS Fornovo Taro, the school where I studied for five years, just before starting to work as a junior developer.

ITSOS has been (and still is) at the forefront of computer technology, and now, thanks to the new "technology course", the project becomes really interesting. During the year, in fact, the teachers have covered development methodologies, both Agile and Waterfall approaches. This sounds great, especially in Italy.

Continue reading “Agile@School – A new way for managing school projects”

A 2015 full of DLM

After SQL Saturday Pordenone, I'll keep speaking about DLM (aka ALM for databases) at the following events:

  • PASS Italian Virtual Chapter, April 14: I'll demonstrate SQL source control usage on a SQL Server database.
  • PASS SQL Saturday Torino, May 23: I've proposed two sessions (source control and unit testing on databases).
  • .NET Campus in Rome, May 30: I'll speak about continuous integration with SQL Server (source control, unit testing, deployment).

There's a lot of work to do, but I'm already thinking about two or three new sessions. I hope to finish them in the last months of the year, and I hope to meet you at one of these events, at least online.
Stay tuned!

Accessing SQL Server data with Simple.Data

Recently, Michael suggested to me a lightweight framework for accessing and manipulating data in SQL Server. I tried it and found some misleading points in the official documentation. The framework I'm speaking of is Simple.Data: a very user-friendly, simple-to-install, quick-to-apply, open and lightweight framework.

After some tests, I've written an article that can be used as a reference for the information missing from the official documentation.

You can find it here (in Italian).

Hope this helps!

Stay Tuned! 🙂

SQL Saturday Parma – English Slide decks available to download

My SQL Saturday Parma slide decks are available for download on SlideShare.

The main topic of those presentations is Database Lifecycle Management (DLM) on SQL Server.

First session:
  • Concepts: ALM/DLM, teamwork, and the differences between code and databases.
  • Tools demonstrated: Visual Studio with SQL Server Data Tools, Red-Gate, and ApexSQL.

Second session: "Unit testing su database" (unit testing on databases):
  • Concepts: unit testing, database testing vs. code testing.
  • Tools demonstrated: Red-Gate SQL Test, Visual Studio unit test projects, and the TSQLUnit framework.

Additionally, I've created a set of sample scripts with T-SQL and TSQLUnit on MSDN Code Samples.
Stay Tuned! 🙂