Agile@School – Year Five, Ep. 2

Here we are at the second lesson: we're finally getting serious!

Last time, we had asked the students to try entering user stories on their project boards. Well, the boards turned out to be desolate wastelands… Perhaps the cause lies in the very concept of a user story, which can be somewhat complex and not easy to "digest". Or should we blame a school trip to Spain that got in the way?

No problem: it's understandable that user stories are not immediately graspable, especially for these students, who are asked to be customers, POs, and developers all at the same time. Above all, a portal with so many features is not easy to navigate, so, armed with patience, Pier-Paolo took the concepts from the top and went deeper into them with concrete examples.

This time, effectively reversing the modus operandi of the previous lesson, he tried to give much more weight to practice, and so, together with the teachers, he guided the students through the data entry on the boards.

At the beginning of the course, the groups had already compiled a list of the activities needed to develop their software: some saved it in a nicely formatted Excel sheet shared in the cloud (especially the groups with girls, always precise), others in a text file, copied for each student and updated via email (this, by the way, will be a good starting point for one of the upcoming lessons). The students were nevertheless quick to learn and, after a first approach, they even started to grow fond of it. We could see their rising curiosity about how items are managed and how they should be filled in. Their proactive side is growing, and there's nothing better to see in a classroom.

To wrap up with a bit of relaxation, Pier-Paolo then taught poker!

Not the real one, of course, even though one of the projects is a virtual casino, but planning poker. And it was precisely the group developing the casino (a coincidence?) that immediately found a website to play it online and collaboratively, which was promptly adopted by the other groups too. Earlier we saw the students' proactivity; now we have even reached team growth. Things move fast when the value of what we do becomes clear!

The lesson went by rather quickly; after all, that's what often happens when you're having fun. We close here with an "easy-peasy" assignment for next time: entering the tasks into the PBIs. Who knows whether they will be good enough to jot down the estimates as well? Given their growth, we are optimistic 😊

Stay tuned: next time it's Git's turn!

Display Reporting Services usage statistics with Grafana

Introduction

In this post, we will describe an efficient way to show usage statistics for reports hosted on SQL Server Reporting Services. Most of the queries below have already been covered in an article published by Steve Stedman. Even though they are really useful, that article shows their results only through SQL Server Management Studio.

The problem

One of the problems that often occurs in our organization, as well as at some of our customers, is getting immediate feedback about report usage statistics. Usually, requests for new reports get out of control, and some reports are executed only "that one time" and never again. In the worst-case scenario, many of them aren't executed at all, and some may even end up overlapping or duplicated.

Therefore, it is important to know the usage statistics, user by user and report by report, to make readers aware of them and let them interpret the values of the same query through multiple views and graphical layouts. While this is not possible with a tabular format (unless you export the values to an external tool such as Excel), it becomes simple with a dashboard.

Our solution: Grafana

We considered two factors, simplicity and efficiency, in order to build this at-a-glance dashboard. Grafana gives us both, as well as being very powerful and immediate. Even though this is not its formal definition, we can say that "it is a portal for creating dashboards using connectors, which support the most famous tools that return data". We can find these connectors in its marketplace. For instance, tools such as PRTG and Prometheus (monitoring) and New Relic (APM) are supported, as well as SQL and NoSQL data sources:

Obviously, we can find SQL Server among them. We can also contribute by creating new connectors, as well as by modifying Grafana itself, since it is a completely open-source project. Examples of possible graphical representations are shown below:

Creating a dashboard is really simple. Just add each panel with a button.

Then, write the query and modify settings to get the desired type of representation.
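As a reference, here is a minimal sketch of the kind of query a panel could run. It reads from the ExecutionLog3 view of the ReportServer catalog database (database and view names may differ depending on your SSRS version and instance configuration):

USE ReportServer;
GO

-- Executions per report and user over the last 30 days
SELECT
    ItemPath,                           -- full path of the report
    UserName,                           -- user who ran it
    COUNT(*) AS Executions,             -- how many times it was run
    MAX(TimeStart) AS LastExecution     -- most recent run
FROM dbo.ExecutionLog3
WHERE TimeStart >= DATEADD(DAY, -30, GETDATE())
GROUP BY ItemPath, UserName
ORDER BY Executions DESC;

Pointing a Grafana panel at a query like this is enough to get a "top reports" bar chart or a per-user table.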

As mentioned before, there are many connectors. Once one is selected, you can configure it with its parameters:

If you would like to install and configure Grafana, you can read the official documentation, which also includes a short guide illustrating how to take your first steps.

That’s it!

Conclusions

With half a day of work (including the setup of the server), we solved one of our customers' most important problems, stemming from the lack of awareness of reports deployed to production environments. We did it with very little effort, and the result, as you can see, is pleasant and effective. Everything is now ready to be published every time we update the dashboards, also through delivery software (Octopus Deploy, Jenkins, or Azure DevOps), so all of this falls into the Second and Third Ways of DevOps (according to The Phoenix Project): immediate feedback and continuous improvement.

Stay Tuned!

Reducing the gap between operations and development using Azure DevOps

Intro

As we all know, DevOps is culture. In a company adopting its practices and principles, everyone should be "on the same side". Continuous improvement cannot be part of our work in IT unless we reduce the gap between development and operations. People like me, who worked in the 90s, know that dev and ops were always isolated in silos, and this was "the only" way everyone followed, as a suggestion taken from the market leaders. Ticketing systems and extreme bureaucracy acted as a man-in-the-middle between those silos, transforming each organization into two endpoints unaware of each other's people.

Speaking from a DevOps perspective, in such circumstances a developer couldn't know anything about how to deploy, or where and how an environment is configured. On the other hand, an operations person couldn't tell from a package what the application was made for. Due to this gap we often see performance issues, under-provisioned hardware, and delayed releases. Last but not least, what about the lack of commitment and accountability of the employees working on the solution? In short, reducing such a gap is possible with a combination of DevOps culture and the right tools. We will see hereafter how my organization tries to do so using Azure DevOps.

Scenario

Let's get started with our team (DevTeam hereafter), which works with agile methodologies and is composed of ten developers and a PO. A quick note: we are using a process decision framework called Disciplined Agile (https://www.disciplinedagileconsortium.org/). Then, we have three operations professionals (OpsTeam hereafter). Build and deploy pipelines already exist: builds are hosted by Azure DevOps and deploys are managed by Octopus Deploy. Getting there has been a difficult mission.

Everything related to infrastructure, in terms of servers, operating systems, virtual hosts, DNS, firewalls, security and so on, is the responsibility of our OpsTeam. As a result, they carry out many tasks to manage the environments. Here comes the problem: the DevTeam used to create tasks in a dedicated backlog, but the OpsTeam didn't. It's not just a matter of end-of-pipeline tasks, but also of their day-to-day work.

Our solution

Modify the tool, adapting its shape to fit this real-world scenario. A piece of cake, when you're DevOps "inside". How did we change Azure DevOps? Let's describe what we did in three parts:

Team on Azure DevOps

Creating a team in Azure DevOps is really a piece of cake (as of the latest releases). Just navigate to the options and select Teams:

We can add as many teams as we want by clicking on New team:

We can set the team's administrators, the permission set, and an area under which every work item will be created. This area will be one of our best friends, especially when we write queries to gather and analyze the team's data. Additionally, members of other teams can create items under this area in order to make the OpsTeam aware of them.
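As a sketch, this is the kind of WIQL query that the area path makes straightforward (the project and area names here are hypothetical):

SELECT [System.Id], [System.Title], [System.State]
FROM WorkItems
WHERE [System.TeamProject] = @project
  AND [System.AreaPath] UNDER 'OurProject\OpsTeam'
ORDER BY [System.ChangedDate] DESC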

Team’s backlog

Now let’s navigate to Backlogs:

Good! The new backlog has been created. Opening it, we will see the team's drop-down as well as the one for iterations. Great features!

Once created, we will see the teams’ drop-down:

Work items

Now, let's create a new work item type. We'll call it Ops item. Navigate to the Process customization page:

Before adding the new work item type, we must ensure that the process is a custom process; otherwise, all edits will be blocked, as shown in the following picture:

We've already created a SimplifiedScrum process. Let's add our item now:

Now we are going to modify the fields of the new type. Each team should be able to understand all the item's properties. We will leave the item as-is in this example. Then, we have to map the type to the backlog item list, since only the default work item types are shown initially. To do this, navigate to the Process customization page, Backlog levels tab:

As we can see, we can also set the default item for our requirements backlog. Moreover, every Sprint backlog, based on iterations, will enable us to add the new Ops item:

Wrapping up

So, we've got a backlog for the IT operations team's members, and we've related it to their Azure DevOps team. Additionally, we've got a dedicated work item type (not mandatory, but really useful for querying or for dashboards) that is the target of IT operations' work, plus a dedicated area path. We can create many relationships between items of both our backlogs. Here is an example of how an activity can be managed and organized (extension: Work Item Visualization):

As you can see, the Ops items are Successors of the "development" product backlog items. Indeed, the Ops items will typically be finished after the PBIs. Think of creating a DNS record or a network path to let the app work in production.
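As a sketch, a WIQL link query can list these dependencies; the link type below is the built-in Successor direction, while the work item type names match this example:

SELECT [System.Id], [System.Title]
FROM WorkItemLinks
WHERE ([Source].[System.WorkItemType] = 'Product Backlog Item')
  AND ([System.Links.LinkType] = 'System.LinkTypes.Dependency-Forward')
  AND ([Target].[System.WorkItemType] = 'Ops item')
MODE (MustContain)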

Conclusions

With this solution, we're decoupling the backlogs. Moreover, we're isolating the management while maintaining the relationships between work items that live on different boards. At the same time, we're building a strong synergy between Development and Operations. Then, in a couple of clicks, we can switch teams and backlogs using Azure DevOps Boards. We can track changes in both departments, also for audit requirements. In my opinion, this improves enterprise awareness and facilitates the continuous improvement of all the teams and of every member's skills.

Microsoft Localization Community: why translations matter

I'm writing this post in Italian because doing it in English might create confusion… joking aside, I believe Italian is one of the most beautiful languages on the whole planet, not because it's the one I was born into and studied, but because it is vast, varied, melodic, and much more.

It's no coincidence that one of my favorite (and most frequent) activities is supporting the localization of the software products that each of us, in the Information Technology world, uses every day. Although this post is about the Microsoft community, I'm active pretty much everywhere: from other vendors' software, to the internal tools we develop at our company, all the way to mobile apps, which often butcher Italian like nothing else. But I'm talking about the Microsoft Localization Community because every day, before starting my work and shortly before ending it, I use the dedicated application to improve the translations of what I use daily. That mostly means Visual Studio Code and Azure Data Studio, but I don't neglect SQL on Linux and other entries related to developer tools either.

Following the link above, you'll find a wiki showing all the ways you can contribute. Although the people working on these activities are not few, it wouldn't hurt to have more contributors, which would also mean more quality. On Visual Studio Code alone we have all these volunteers:

Each of us gets a mention on the VSCode release notes page. One more incentive to lend a hand!

But that's not all: there are other dashboards that let you see the overall translation activity around a piece of software. Below are the ones for Developer Tools, Azure Data Studio, SQL on Linux, and VSCode again:

This amounts to no small effort from the community. A lot of work, to obtain tools and software that are ever more understandable and usable, even in our mother tongue, which has never had much luck in the IT world. I recommend taking a look at these contents, starting right from the Microsoft Localization Community GitHub repository.

We look forward to seeing you there!

SQL Server Latest Updates (Nov. Dec. 2018)

Directly from the SQL Server Release Services blog, here are the latest updates for SQL Server 2016 SP1 and SP2, 2017 RTM, and 2014 SP2 and SP3:

Cumulative Update #1 for SQL Server 2014 SP3

Cumulative Update #15 for SQL Server 2014 SP2

Cumulative Update #12 for SQL Server 2016 SP1

Cumulative Update #4 for SQL Server 2016 SP2

Cumulative Update #13 for SQL Server 2017 RTM

and

Public Preview for SSRS 2017+ Management Pack with Power BI Reporting Server Support

…Stay Tuned, Merry Christmas and a Happy New Year! 🙂

DevOpsHeroes 2018 – Another brick in the wall

Event details

DevOpsHeroes was a great event, once again! We didn't expect so many people, and we couldn't have imagined that the feedback would be so good. Quick facts:

  • Subscriptions: 244 (150 last year)
  • Attendees: 122 (94 last year)
  • Drop rate: 50% (38% last year)

Attendees' satisfaction

The following radar chart is about the event date, location, quality of the sessions, quality of the speakers, food, hospitality, event design, and kits:

As we can see, the overall satisfaction is really high (4/5)! The blue line refers to the audience that had already attended our event (40%), while the orange one represents the first-timers (60%).

Indeed, we got good feedback also for the following questions:

  • will you attend again?
    • Sure, 45%
    • Most likely, 33%
    • Likely, 22%
  • will you suggest the event to other people?
    • Sure, 76%
    • Most likely, 21%
    • Likely, 3%

Conclusions

We're really proud of the third edition of DevOpsHeroes. Engage IT Services and Xebir did a great job together and, hopefully, both companies will cooperate again in the future to provide new formats and events like this one. Special thanks go to Scott Ambler, and also to the Italian Agile Movement for sponsoring and supporting the organisation.

Download sessions here.

Last but not least, thanks to GetLatestVersion.it and also to WindowServer.it, which helped us get DevOpsHeroes into the best shape possible.

See you next year!

Posting SQL Server notifications to Slack

Introduction

Automation, proactive monitoring, repeatability, reducing wasted time and technical debt: these are things you should know about when trying to do some DevOps.

Why automation? Because you can reduce technical debt and the number of failures that can happen with manual interaction. You can create environments using a provisioning procedure without falling into common pitfalls like security misconfigurations, wrong settings, and botched monitoring.

Talking about SQL Server, immediate and proactive notifications represent a great step forward toward automation.

We automate whenever we want to stop doing a bunch of recurring or tedious steps manually. At the same time, we are also improving the overall quality and we are reducing the amount of things that can (and will) go wrong.

We are also optimising how we use our time, because we can ignore what the automation is doing for us and focus on what really needs our attention.

Finally, in this modern, notification-based world, emails generate too much white noise to deal with. In this article, we will learn how to integrate SQL Server task notifications with one of the most widely used collaboration tools: Slack.

Keep in mind that this is not the only way to get this done. This guide should help you better understand why we're doing this (and eventually why DevOps), rather than strictly how to do it, even though we'll walk through a real working example.

Minimal requirements

You need to set up an account on slack.com (on a paid plan) and a SQL Server instance; I recommend the free Developer edition here.

Note: don't use SQL Server Express edition. That edition supports neither SQL Server Agent nor Database Mail, both of which we'll need hereafter. Also, about Slack: you must create a paid account, because the integration described below will not work with a free profile.

In order to send emails, we will use an SMTP server. It can be a private, on-premises solution (Microsoft Exchange, Postfix, or any other) or a cloud delivery service like SendGrid, Sendinblue, Mailjet, or Office 365.

The scenario

In a team like mine, which uses chat as the daily communication driver, centralizing every business and technical message can be a great step forward for the members of the team in terms of awareness and knowledge sharing. Business roles can use the tool as well, so we can chat with each other, switching topics between technical and functional discussions. It's just a matter of how Slack (in our case) is configured with channels and naming conventions. A good setup helps us better organize our meetings, small talks, and any other implementation-related topic. This is a cool topic to talk about, but a little bit out of the scope of this guide. We will focus on notification bots instead.

SQL Server can send emails with its built-in features out of the box, but we'd like to centralize every notification inside Slack, gaining the following advantages:

  • Instant notification
  • Tailored focus (a custom sound instead of the same popup for all incoming emails)
  • Opt-out
  • Quickly involve people who are not following the channel via a mention
  • Relay the problem description within the chat
  • Take actions as soon as the notification is received

The proposed solution

Now, how can we send notifications from SQL Server in an easier way than using custom code or a Slack incoming webhook? Is there an integration or a Slack app? Yes. And guess what? I think you'll like it, because you don't need to write a single line of code, and you don't need to choose between CLR, PowerShell, or any other language. Ironically enough, the integration is called "Email".

Slack

The purpose of this article is not to describe Slack as a collaboration tool; further details are provided here. As we said before, the following samples work only if you have a Slack account.

The Slack Email integration

This is the app to work with: Email. Its configuration is based on a four-step wizard:

  • Select the channel (or create a new one).
  • When added, set the name and a short description of the new contact (bot) in Slack.
  • Change the avatar (it's important to recognize the bot at a glance).
  • After saving, copy the email address the app created for you.

A word about the "Hide this address" checkbox: it is useful if you want to hide the address from the other members of your workspace. If you check that box, you will be the only user able to read it.

Types of SQL Server notifications and setup

As DBAs, we manage the following types of notifications on a daily basis:

  • SQL Server built-in and custom Alerts
  • Job execution status
  • Integration Services custom emails (within the packages)
  • External monitoring tools (which monitor SQL Instances)

With the exception of SSIS custom emails and external monitoring tools, everything is managed by Database Mail. This is a lightweight layer that allows us to send emails directly from a SQL Server instance by connecting to an SMTP server.

To set up Database Mail, you can follow this guide from the Microsoft documentation.
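For reference, a minimal sketch of that configuration in msdb looks like the following (account, profile, and SMTP names are placeholders to adapt to your environment):

USE msdb;
GO

-- Create a mail account pointing to your SMTP server
EXEC dbo.sysmail_add_account_sp
    @account_name = N'SlackNotifications',
    @email_address = N'sql-alerts@yourdomain.com',
    @mailserver_name = N'smtp.yourdomain.com';

-- Create a profile and bind the account to it
EXEC dbo.sysmail_add_profile_sp
    @profile_name = N'SlackNotifications';

EXEC dbo.sysmail_add_profileaccount_sp
    @profile_name = N'SlackNotifications',
    @account_name = N'SlackNotifications',
    @sequence_number = 1;
GO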

Once this is up and running, you can manage the notifications using SQL Server Operators. An operator is an alias managed by the SQL Server Agent which you can use to send emails and other types of messages, like pagers and Net Send.

Creating an operator is simple: just invoke the following system stored procedure:

USE msdb; 
GO 

-- Create the operator; its email address will be the Slack channel's app address
EXEC dbo.sp_add_operator 
    @name = N'<name here>',
    @enabled = 1,
    @email_address = N'<email here>';
GO

If you're wondering which email address to use, the answer is easy: fill the @email_address parameter with the address returned by the Email app integration for the channel you will send to (j8e4b5t2p4y8g4o2@elysteam.slack.com in the example above). But what about the @name parameter? In my opinion, the best name is one that helps us understand where the message will be sent. Suppose we'd like to notify something about some index maintenance jobs: we could call the operator Slack Indexes Maintenance, Slack Indexes Maintenance Operator, and so on. With such names, we immediately know that we are sending to Slack and that the topic is related to index maintenance.

Thus, you’ll get the following snippet:

USE msdb; 
GO 

EXEC dbo.sp_add_operator 
    @name = N'Slack Indexes Maintenance Operator',
    @enabled = 1,
    @email_address = N'j8e4b5t2p4y8g4o2@elysteam.slack.com';
GO


Slack channels naming considerations

I'd like to share my thoughts about channel naming conventions. The principles to follow when naming channels are:

  • Readability (clear for everyone)
  • Awareness (know what)
  • Style and Rules (know how)
  • Repeatability (keep using it from now on)

That being said, if the channel name describes a single action (like index maintenance in the above example), the operator sending notifications to it should be unique. The reason is simple: we know that Slack Indexes Maintenance Operator sends messages to #sql-idx-maint-alerts (readability), and everyone knows that this is a one-to-one communication between a SQL Server operator and Slack (awareness). Everyone knows that the "sql" channel prefix indicates a SQL Server-related notification and the "alerts" suffix indicates an issue to pay attention to (style and rules). At the same time, everyone knows how to set up another pipeline of messages the same way in the future (repeatability).

On the other hand, using a general-purpose channel, like #sql-maint-alerts, keeps us ready for future changes. Suppose that index maintenance will not be the only operation we execute on our servers (and typically it isn't). Would it make sense to create a new operator, called for example Database Concurrency Check Operator, which sends to a single-purpose channel? Clearly not.

In the end, a general-purpose channel gives us the opportunity to host more than one topic. Still, all the notifications sent to that channel should be, let's say, of the same category, to avoid too much generalization.

Both solutions (one channel for many operators, or one-to-one) work equally well; it's just a matter of how you design your Slack channels. I suggest avoiding the "one channel to rule them all" pattern, because you'll end up with thousands of mixed notifications and no clear idea behind them. After all, a noisy channel with messy content will soon stop being read and will eventually be dropped.

Binding alerts

Alerts are triggers that tell an operator that something went wrong. This article by Brent Ozar offers a good list of alerts that deserve attention, with descriptions based on severity. The binding is straightforward: all you need to do is link the operator to the alert, as shown below.
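The binding can also be scripted; a minimal sketch, assuming the alert and the operator used in this article already exist:

USE msdb;
GO

EXEC dbo.sp_add_notification
    @alert_name = N'Severity 017',
    @operator_name = N'Slack Indexes Maintenance Operator',
    @notification_method = 1;  -- 1 = email
GO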


When one of those events occurs, the operator is alerted and sends the message using its configured method (in our scenario, an email). If the operator uses the Slack Email app address, the email reaches the Email app, and the integration redirects it to Slack.

Binding job execution statuses

Let's see how we can use the notification mechanism to monitor SQL Server Agent jobs. Each job lets you configure what to do in case of failure, success, or completion of its execution. The binding is similar to the alerts' one:
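Again, this can be scripted with a single call; a minimal sketch (the job name here is hypothetical):

USE msdb;
GO

EXEC dbo.sp_update_job
    @job_name = N'IndexOptimize - USER_DATABASES',     -- hypothetical job name
    @notify_level_email = 2,                           -- 2 = notify on failure
    @notify_email_operator_name = N'Slack Indexes Maintenance Operator';
GO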


Once the result is collected, based on the configuration you've set up, the job will send an email to the app.

Binding custom Integration Services emails

In order to send an email from a SQL Server Integration Services package (aka .dtsx), you need to configure the SMTP server within the package itself. This is a little out of scope, because it's not really a SQL Server notification, but you can leverage the power of SSIS to prepare a rich, HTML-formatted message; the result is nice to read and informative, like in these examples:

Cool stuff, isn't it? It's simply a .NET script task in SSIS using the System.Net namespace. Although the SSIS package is executed within a SQL Server Agent job, the default notification message that SQL Server generates is not easy to read. The message you always get is:

JOB RUN:<name> was run on <date/time> DURATION: x hours, y minutes, z seconds. STATUS: Failed. MESSAGES: The job failed. The Job was invoked by Schedule xyz (<name>). The last step to run was step xyz (<name>)

Decorating the package with a more detailed email will improve the readability and the accuracy of our notifications.

Setting up an external monitor for notifications to Slack

SQL Server is often (hopefully) monitored with specific counters. We use the PRTG monitoring tool to measure them, and when a baseline changes and a threshold is hit, we send notifications to Slack. How? Again, by sending to the Email app integration, specifying the right channel to send to, and getting this:


The above report has been truncated. In its complete version, you'll find the full details of the measures, like the names of the servers, the sensor links, the grid with all the results, and everything else you can see inside a PRTG admin portal.

Test

Let's see a more complete example, using a SQL Server alert. We'll use the severity 17 alert: severity 17 is simple to raise, and it describes a missing or insufficient resource while executing a command:

USE msdb; 
GO 

EXEC msdb.dbo.sp_add_alert @name=N'Severity 017',
    @message_id=0,
    @severity=17,
    @enabled=1,
    @delay_between_responses=60,
    @include_event_description_in=1,
    @job_id=N'00000000-0000-0000-0000-000000000000';
GO

Set the Response for the Severity 17 alert to “Notify Operator”, via email:


Run the following T-SQL script, which raises a severity 17 error:

RAISERROR(N'An error occurred Severity 17: insufficient resources!', 17, 1) 
WITH LOG; -- don't forget to use WITH LOG
GO

Go to your Slack account. If you’ve configured everything correctly, you should see the following:


Did it work? Great! If not, continue reading.


Troubleshooting

If you don't see the notification, try these steps:

  1. Be sure that your Slack account is confirmed (its email too)
  2. Once the Slack account is confirmed, check if the channel still exists (CTRL+K -> name of the channel)
  3. Click on "Customize Slack" in the drop-down menu of your Slack client/webpage, then click on Customize App to check whether the Email integration is active:


  4. Verify the Database Mail configuration (try to send a test email)
  5. Verify the operator configuration (is it enabled?)
  6. Verify the alert configuration (did you bind the response with an email to the operator? Is it enabled?)
  7. Verify the SQL Server Agent email profile configuration (is it enabled? Is it the right one?)


Conclusions

There are some disadvantages to this kind of integration. For example, you cannot customize the message unless you do it inside a .NET script. The Slack email address is publicly available, albeit hard to discover, so anyone could send messages to your private Slack channel by emailing that special address. Also, you cannot send the notification to more than one Slack channel, or outside of the Slack world. In fairness, native SQL Server email notifications show the same limits, where the email addresses of distribution lists are the analogue of Slack channels.

For our purposes, this is a very low-effort automation with a high return in terms of value. With a couple of clicks, you can set up an email address representing a Slack channel and, with little more, you can get notifications in a smart and comprehensible layout.

Everything is kept inside the collaboration tool we already use massively, every day. In the end, this example embodies one of the core DevOps principles (automation) and provides huge cross-role and cross-team value, especially when the channels also include the network and server teams.

I hope that you’ll give this a try.

Agile@School – Year Three – Project Presentations and Conclusions

This year too, Agile@School has come to an end. The journey was, as always, full of new experiences, surprises, and satisfactions; a way to observe how classes of students can apply what they've been taught, delivering projects that are done, finished, and working.

That is precisely the central theme of this post: the project presentations. We won't dwell on the individual teams, because each of them, with its own strengths and weaknesses, was able to show us the best it could create and to present it to the whole class.

Project presentations

Each team was asked to present its work as a *pitch*, imagining they were facing their investors (Gabriele, the teachers, and yours truly), to be convinced of both its technical and commercial value. In 20 minutes, everyone had to show the product's features in terms of hardware and software, plus a possible sales plan to back it all up. Thanks to this last point, even the teams that weren't technically outstanding could "keep up" by focusing more on the sales and design side.

To reinforce what was just said, the code wasn't even taken into consideration during the presentation. Gabriele, however, reviewed it in person, face to face with the students, suggesting possible improvements, if only to point out weaknesses in terms of security, optimization, and code quality.

For each team, we decided to carry out the evaluation with a questionnaire.


Assessments, not grades. A focus on attitudes, not on skills. Everything optimistic (or constructively critical) that can help the teachers with the final evaluation.

We also collected some excerpts of what we saw during the day.


Conclusions at the end of the journey

It must be said that the more editions of Agile@School we run, the more evident the importance of focusing on team management becomes. I still remember the first year, when everything was at an embryonic stage, a time when we went as far as helping the students with the actual code, forgetting why this project was born: to give the students the tools and attitudes to tackle any project, personal or team-based, whether it comes to life in a work context or in their private sphere (time management).

Doing a retrospective on this, always in pure agile style, makes us proud of the steps taken so far and of how many more we could still take, refining techniques and organization in order to leave the students with a mark as inspiring as possible for their future.

That's all for this year, thanks for reading this far.

And stay tuned! 🙂