Top 3 Methods to Reset SQL Sa Password Without Any Trouble

“Hello all. I am writing this post because I am in big trouble. I recently joined an organization and unfortunately, I forgot the password of SQL database. Not a great situation for a new employee, so can you help me out? Can anyone tell me how do I reset SQL sa password? I will be really grateful if you could suggest any easy solution.”

Are you also facing a similar problem to the one mentioned in the query above? Do you also want to know how to reset the SQL SA password? Then this is the right blog for you. Read on.

SQL Server database administrators often find themselves in an awkward position when they forget or lose the database password. This problem can happen to anyone at any given time. If you find yourself in the same situation, do not worry. This post will elaborately discuss various methods that can be implemented to fix this problem.

3 Quick Methods to Reset SQL sa Password

If you have lost your SQL database password, do not jump to the decision of reinstalling SQL Server. Be patient and read the solutions stated in this section. Here we will learn three different methods to reset the SA password for a SQL database.

Method 1: Use Management Studio to Reset SQL SA Password

If you have lost the SA password, you can easily reset it using SQL Server Management Studio, as long as you can still connect with Windows Authentication. After that, you will regain access to the SA account.

  • Log in to SQL Server using Windows Authentication.
  • In Object Explorer, expand the Security folder, then expand the Logins folder and right-click the SA account. From the options, click Properties.
  • When the Properties window opens, enter a new password and confirm it. Click OK to set this as your new SA password.

Method 2: Use SQL Script to Reset SQL SA Password

To reset the SQL database password, you can also run a short SQL script that assigns a new password to the SA login.

  • First of all, you have to launch SQL Server Management Studio.
  • Open a new query in it.
  • Enter the script mentioned below for execution:
    USE [master]
    GO
    ALTER LOGIN [sa] WITH DEFAULT_DATABASE=[master]
    GO
    ALTER LOGIN [sa] WITH PASSWORD=N'NewPassword' MUST_CHANGE
    GO

Note: Here, NewPassword is the password you want to use for the SA account in place of the lost one. The MUST_CHANGE option forces a password change at the next login and requires CHECK_EXPIRATION to be ON for the login; drop it if you do not enforce password expiration.
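
If the reset seems to succeed but you still cannot connect as SA, the login may simply be disabled (a common default on instances installed for Windows Authentication only). Here is a quick, optional check you can run in the same query window — a small sketch, not part of the original method:

-- 1 = disabled, 0 = enabled
SELECT name, is_disabled FROM sys.server_principals WHERE name = N'sa';
GO
-- Re-enable the login if needed
ALTER LOGIN [sa] ENABLE;
GO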

Method 3: Use SQL Password Recovery Tool

If you find the above-mentioned methods complex, or if you are not willing to perform those steps, we have a better option for you: the SysTools SQL SA Password Reset Tool. This affordable yet effective tool will help you set a new password for a SQL database whose password you have lost.

It is really simple to get back access to your SA account using this utility. Launch the tool and add the MDF file to it. All of its users will be displayed on the screen, and you will see that the password value of the SA account is shown as Unknown. Select SA and click Reset Password. Add a new password and confirm it. Now you are all set to access the SA account using this password.

Attention: We strongly recommend taking a backup of the MDF file before proceeding with this method.

Conclusion

Forgetting the SQL database SA account password can cause immense trouble for admins, and many people run into this problem in real life. That is why we see so many people asking the same question in forums: how do I reset the SQL database password? For them, we have described three simple solutions. If the manual methods do not work or seem too lengthy, users can go for the tool mentioned here, which is one of the most popular solutions of its kind.

Agile@School – Year Five, ep. 3

In the second lesson we decided to start tackling the integration between Git and Azure DevOps, in particular the automatic linking between commits/pushes and Tasks. This time, however, we ran into quite a few difficulties. Until now the students had been working on GitHub, so we thought it appropriate to import their repositories into our Azure DevOps instance, using the built-in import feature.

Understandably, the students had not been given sufficient permissions to perform the import on their own, so some time was lost having them enter their credentials one at a time. Once this first obstacle was "easily" overcome, the repositories were imported successfully and, after a quick review of the basic concepts of source control and of how Git works, Pier-Paolo guided the students through the cloning operation using SmartGit, a tool they were already used to.


We started with the basic Git commands (checkout, commit, push/pull), working directly on the code the students had already written, on a shared screen, in order to show them the potential of the tool. Unfortunately, screen sharing was the only viable option: here too, permissions were missing. We need to prepare the environments a bit better next time!

Another thing worth noting is that, during the lesson, we had the chance to touch, albeit superficially, on software development methodology. For example, we discussed naming conventions, modularity and code cleanliness, readability, and so on, all topics that are usually not covered in traditional programming courses.

Given all the issues introduced by the attempt to migrate the projects to Azure DevOps, it became clear that the students will benefit more from continuing to use their repositories on GitHub. Pier-Paolo therefore plans to leverage the "GitHub Connections" feature next time. This Azure DevOps feature allows you to link a GitHub repository to an internal project while still taking advantage of the mechanism that associates commits with Tasks, as described here.

Kanban board shows GitHub links

This time the lesson was perhaps more important for us than for the students, and in our own retrospective we learned an important lesson:

Using tools that are already in place (we will never get tired of repeating it) is fundamental in order not to add obstacles to an already difficult path of cultural migration. Small things like these show how awareness of the environment you work in, and of its related tools, brings everyone closer to the success of the project itself. And this holds true both in companies and in schools.

Next time it will be the turn of branching and of more "advanced" Git commands, such as pull requests, and the level of difficulty will increase considerably... Let's see how they handle it!

Agile@School – Year Five, ep. 2

Here we are at the second lesson: we're finally getting serious!

Last time we had asked the students to try to enter the user stories on their projects' boards. Well, the boards turned out to be desolate wastelands... Perhaps the cause lies in the concept of user story itself, which can be somewhat complex and not easy to "digest". Or maybe the reason is the school trip to Spain that happened in between?

No problem: it is understandable that user stories are not immediately obvious, especially for these students, who are asked to be customers, PO and developers at the same time. Moreover, a portal with so many features is not easy to navigate, so, armed with patience, Pier-Paolo went through the concepts again from scratch and expanded on them with concrete examples.

This time, effectively reversing the modus operandi of the previous lesson, he tried to give much more weight to practice, so, together with the teachers, he followed the students during the data entry on the boards.

At the beginning of the course the groups had already compiled a list of activities to carry out in order to develop their software: some saved it in a nicely formatted Excel sheet shared in the cloud (especially the groups with girls, always precise), others in a text file, copied for each student and updated via email (this, by the way, will be a good starting point for one of the next lessons). The students were nevertheless quick to learn and, after a first approach, they even began to get passionate about it. We could see their growing curiosity about how items are managed and how they should be filled in. Their proactive side is growing, and there is nothing better to see in a classroom.

To wrap up and relax a little, Pier-Paolo then taught them poker!

Not the real one, of course, even though one of the projects is a virtual Casino, but planning poker. And here, precisely the group developing the Casino (a coincidence?) immediately found a website to play it online collaboratively, which was promptly adopted by the others as well. First we saw the students' proactivity, now we have even reached team growth. Things move fast when the value of some of our actions is understood!

The lesson went by rather quickly; after all, that often happens when you're having fun. We closed with an "easy-peasy" assignment for next time: adding the tasks to the PBIs. Who knows whether they will be good enough to jot down the estimates too? Given their growth, we are optimistic 😊

Stay tuned, next time it's Git's turn!

Agile@School – Year Five (2020)

For 2020 we have been involved once again to carry on the Agile@School project, run in collaboration with the I.I.S.S. Gadda school in Fornovo. This continuity over the years is certainly a good sign, because it means that in past years the project has borne fruit and that the hands-on experience gained by the students has been useful.

But, for those who have started following us only recently, let us first describe what the Agile@School project is about.
The project was born in 2016 with the aim of introducing students to the so-called agile way of working, which consists in adopting a whole set of behaviours and disciplines oriented towards collaboration, team working, and the sharing and reworking of information, practices that are becoming more and more widespread in the field of software development (but are easily applicable to other professions as well). Our impression is that educational environments still give too much weight to teaching pure programming (regardless of the chosen language) at the expense of everything that surrounds it, while knowing how to program and nothing else is less and less sufficient to prepare students for entering the world of work.

The 2020 edition

This year we have some news: first of all, the teacher! Apart from the first lesson, which was almost entirely a "get to know each other" introduction (both on the students' side and on ours) and at which two of us were present, the next lessons will be held mainly by Pier-Paolo Mammi, one of our new colleagues, with whom we will follow the project directly "in the field".
Another novelty: while in past editions the students were guided from the start through team formation and project selection, in the current edition the teams are already formed and each of them has already been developing the software for its assigned project (we will see the projects below). We will therefore have the chance to focus more on the organizational side of the work "at full speed" and to show them the practice of agile development, as well as the use of the supporting tools that we ourselves use in our daily work.

The plan of attack

The main points we will insist on are three:
– Team working: collaboration and sharing tools, boards;
– Agile project management: ceremonies, use of Azure DevOps and sprint management;
– Source control: use of Git and integration with Azure DevOps.

We will certainly give some more theoretical explanations, but the idea is to have the student teams apply the methods directly as much as possible, because it is through practice (and perseverance) that concepts stick best. We will also try to "raise the stakes" by sometimes using more business-oriented terminology during the lessons: while this may not be much appreciated by the students, it will make them better prepared for a professional environment.

The impact

The students were initially a bit shy, as you would expect when they meet someone new, but little by little they loosened up. We then asked the teams to briefly present their projects, describing their functionality and digging in particular into the specific features they plan to implement, who the software is aimed at, how they develop, and how they exchange code and documentation. This gave us an idea of how they have set up their work, for example whether they had already planned for shared documentation or a versioning system. Let's just say we will have to work hard here!

The projects

The class has been divided into four teams, each made up of 4-5 students, one of whom was "unanimously elected" as Leader during the presentations. A well-deserved thank you goes to professor Christian Memoli, in charge of the class, who constantly takes care to teach the students the concepts of agile development, beyond pure and simple programming!

These are the projects chosen by the students, for which a good code base is already in place:

Casino Project
The project consists in building a series of typical casino games (Black Jack, Roulette, etc.).

ClasseViva Project
The students will build an alternative, improved version of the current classroom attendance management system, integrating it with notifications and messaging between teachers and parents.

CUP Project
The students want to build a piece of software for the complete online management of medical appointment bookings.

TEP Project
The software will manage the booking, issuing and administration of tickets for the TEP transport lines by citizens, including an administrative section for employees.

Coming up next

The first introductory lesson on the agile world ended with homework assigned to the teams. After setting up on Azure DevOps an environment dedicated to the project, including Git repositories for the four projects, we asked the students to upload their source code to their own repository and to start entering the Product Backlog Items, filled in with user stories and activities.
This is where we will pick up in the next lesson, expanding on the concepts we only touched on today: the use of Azure DevOps for task management and of Git as a source code versioning system.

Stay Tuned.

Display Reporting Services usage statistics with Grafana

Introduction

In this post, we will describe an efficient way of showing the usage statistics of the reports hosted on our SQL Server Reporting Services instance. Most of the queries we rely on were covered in an article published by Steve Stedman. Even though they are really useful, that article shows their results only through SQL Server Management Studio.

The problem

One of the problems that often occurs in our organization, as well as at some of our customers, is getting immediate feedback about the usage statistics of reports. Usually, requests for new reports grow out of control, and some reports are executed only "that one time" and never again. In the worst-case scenario, many of them are not executed at all, and some even end up overlapping or duplicated.

Therefore, it is important to know the usage statistics, user by user and report by report, to make the reader aware of them and to let them interpret the values of the same query in multiple ways and graphical layouts. While this is hardly possible with a tabular format (unless you export the values to an external tool such as Excel), it becomes simple with a dashboard.

Our solution: Grafana

We considered two factors, simplicity and efficiency, in order to build this at-a-glance dashboard. Grafana gives us both, as well as being very powerful and immediate. Even though this is not its formal definition, we can say that "it is a portal to create dashboards using connectors, which support the most famous tools that return data". We can find them in its marketplace. For instance, tools such as PRTG and Prometheus (monitoring) and NewRelic (APM) are supported, and so are SQL and NoSQL data sources:

Obviously, we can find SQL Server among them. We can also contribute to creating new connectors, as well as to Grafana itself, since it is a completely open-source project. Examples of possible graphical representations are listed below:

Creating a dashboard is really simple. Just add each panel with a button.

Then, write the query and modify settings to get the desired type of representation.
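
As an example, a panel listing the most-executed reports over the last 30 days can be fed by a query against the ExecutionLog3 view of the Report Server catalog database. The sketch below assumes the default database name (ReportServer) and a Grafana SQL Server data source pointing to it:

-- Top 10 most-executed reports over the last 30 days
SELECT TOP (10)
    ItemPath,
    COUNT(*) AS Executions,
    COUNT(DISTINCT UserName) AS DistinctUsers,
    MAX(TimeStart) AS LastExecution
FROM dbo.ExecutionLog3
WHERE TimeStart >= DATEADD(DAY, -30, GETDATE())
GROUP BY ItemPath
ORDER BY Executions DESC;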

As mentioned before, there are many connectors. Once one is selected, you can configure it with its parameters:

If you would like to install and configure Grafana, you can read the official documentation, which also includes a short guide on taking your first steps.

That’s it!

Conclusions

With half a day of work (including the setup of the server), we have solved one of the most important problems of our customers, caused by the lack of awareness of the reports deployed to production environments. We did it with very little effort and the result, as you can see, is pleasant and effective. Everything is now ready to be published every time we update the dashboards, also through a delivery tool (Octopus Deploy, Jenkins or Azure DevOps), so all of this falls into the second and third ways of DevOps (according to The Phoenix Project): immediate feedback and continuous improvement.

Stay Tuned!

Reducing the gap between operations and development using Azure DevOps

Intro

As we all know, DevOps is culture. In a company that is going to adopt its practices and principles, everyone should be "on the same side". Continuous improvement cannot be part of our work in IT if we don't reduce the gap between development and operations. People like me, who worked in the 90s, know that dev and ops were always isolated in silos, and this was "the only" way everyone followed, as a suggestion taken from the market leaders. Ticketing systems and extreme bureaucracy acted as a man-in-the-middle between those silos, turning each organization into two endpoints unaware of each other's people.

Speaking from a DevOps perspective, in such circumstances a developer couldn't know anything about how to deploy, or where and how an environment is configured. On the other hand, an operations person couldn't get anything from a package in terms of what the application was made for. Because of this gap we often see performance issues, underestimated hardware sizing and delayed releases. Last but not least, what about the lack of commitment and accountability of the employees working on the solution? In short, reducing such a gap is possible using a combination of DevOps culture and the right tools. We will see hereafter how my organization tries to do so using Azure DevOps.

Scenario

Let's get started with our team (DevTeam hereafter), which works with agile methodologies and is composed of ten developers and a PO. A quick note: we are using a process decision framework called Disciplined Agile (https://www.disciplinedagileconsortium.org/). Then, we have three operations professionals (OpsTeam hereafter). Build and deploy pipelines already exist: builds are hosted by Azure DevOps and deploys are managed by Octopus Deploy. Getting this far has been a difficult mission.

Everything related to infrastructure, in terms of servers, operating systems, virtual hosts, DNS, firewalls, security and so on, is the responsibility of our OpsTeam. As a result, they carry out many tasks to manage the environments. Here comes the problem: the DevTeam used to create tasks in a dedicated backlog, but the OpsTeam didn't. It's not just a matter of end-of-pipeline tasks, but also of the tasks for their day-to-day work.

Our solution

Modify the tool, adapting its shape so that it fits the real scenario. A piece of cake, when you're DevOps "inside". How did we change Azure DevOps? Let's describe what we did in three parts:

Team on Azure DevOps

Creating a team in Azure DevOps is really a piece of cake (as of the latest releases). Just navigate to the options and select Teams:

We can add many teams by clicking on New team:

We can set the team's administrators, its permission set and an area under which every work item will be created. This area will be one of our best friends, especially when we build our queries for gathering and analyzing the team's data. Additionally, members of the other teams can create items under this area in order to make the OpsTeam aware of them.

Team’s backlog

Now let’s navigate to Backlogs:

Good! The new backlog has been created. Opening it, we will see the team's drop-down as well as the one for iterations. Great features!

Once created, we will see the teams’ drop-down:

Work items

Now, let’s create a new work item type. We call it Ops item. Navigate to Process customization page:

Before adding the new work item, we must ensure that the process is already a custom process, otherwise, all the edits will be blocked as shown in the following picture:

We've already created a SimplifiedScrum process; let's add our item now:

Now we are going to modify the fields of the new type. Each team should be able to understand all the item's properties. We will leave the item as-is in this example. Then, we have to map the new type to the backlog item list, since only the default work item types are shown initially. To do this, navigate to the Process customization page, Backlog levels tab:

As we can see, we can also set the default item for our requirements backlog. Moreover, every Sprint backlog, based on iterations, will enable us to add the new Ops item:

Wrapping up

So, we've got a backlog for the IT Operations team's members, and we've related it to their Azure DevOps team. Additionally, we've got a dedicated work item type (not mandatory, but really useful for querying or for dashboards) that is the target of IT Operations' work, and a dedicated Area Path. We can create relationships between items in both backlogs. Here is an example of how an activity can be managed and organized (extension: Work Item Visualize):

As you can see, the Ops items are successors of the "development" Product Backlog Items. Indeed, the Ops items will typically be finished after the PBIs. Think about creating a DNS entry or a network path to let the application work in production.

Conclusions

With this solution, we're decoupling the backlogs. Moreover, we're isolating their management while maintaining the relationships between work items that live on different boards. At the same time, we're creating a strong synergy between Development and Operations. Then, in a couple of clicks, we can switch teams and backlogs using Azure DevOps Boards. We can track changes in both departments, also for audit requirements. In my opinion, this helps enterprise awareness and facilitates the continuous improvement of all the teams and of each member's skills.

DevOps journeys series – Vertica release pipeline with Azure DevOps – Ep. 01 – development (part 2)

Intro

In the previous post we’ve described the idea behind the automation we’re trying to implement on a scenario based on MicroFocus Vertica database.

How it works

This "sandbox" is not a real isolated development workstation. Let's separate it into two parts: the first one for development on everything but Vertica (a local Windows workstation), and the other one for a Vertica instance (probably a Unix/Linux VM) shared between developers.

In this shared instance we will create a schema for each developer working on the solution, so that everyone gets their own "environment".

The source control folder tree (which will be on-premises TFVC source control) will be designed, on the desired branch, as follows:

/Project
    /Instance
        /Process1
            /_Master
                schema.ps1 
                tables.ps1
                views.ps1
            /Tables
                Table1.sql
                Table2.sql
            /Views
                View1.sql
                View2.sql
            Schema.sql
        /Process2
            /Tables ...
            /Views ...

As you can see, under the Project folder there is the Vertica database folder, which contains, schema by schema, all the .sql files for the Tables and Views DDLs (CREATE TABLE and CREATE VIEW). You can also notice the .ps1 files, which contain the list of executions in a certain (business-driven) order.

The file for, let's say, "Table1" can look like this one:

CREATE TABLE :SCHEMA.Table1
(
    RowId int NOT NULL,
    RowStringValue varchar(30) NULL,
    CONSTRAINT PK_<schema>Table1 PRIMARY KEY (RowId)
);

We've added a :SCHEMA parameter, which allows each developer to create their own schema as described before. This is the only way we've found to isolate developers on a shared Vertica instance while avoiding an instance per developer, which could be too resource-intensive for the available PCs. When running the application locally, before committing any changeset to source control, a simple tool executes the .sql files with the new schema name and in the sort order given by the .ps1 file.

The Tables.ps1 file can be as the following:

param(
[parameter(Mandatory=$true)]$hostname,
[parameter(Mandatory=$true)]$port,
[parameter(Mandatory=$true)]$user,
[parameter(Mandatory=$true)]$psw,
[parameter(Mandatory=$true)]$schemaName,  
[parameter(Mandatory=$true)]$scriptsFolder
)

$schemaCommand = "vsql -h $hostname -p $port -U $user -w $psw -v SCHEMA=$schemaName -f $(Join-Path $scriptsFolder "Table1.sql")"
Invoke-Expression -Command $schemaCommand

$schemaCommand = "vsql -h $hostname -p $port -U $user -w $psw -v SCHEMA=$schemaName -f $(Join-Path $scriptsFolder "Table2.sql")"
Invoke-Expression -Command $schemaCommand

You may notice the term “vsql”, which is the command line provided by Vertica for executing queries. Further information here.

Also, usernames and passwords will be stored in an external config file (or a secured API), like the following one:

{
    "host": "MyHost.Hos.Ext",
    "port": 1234,
    "user": "user",
    "psw": "password",
    "schemaName": "MYSCHEMA"
}

We've got the DDLs, the PowerShell files for executing them and the Vertica command line. Good; in a development environment, however, a set of tools should also be prepared to help us keep these artifacts in a single pipeline. This is the reason why we've created a "builder" script, like this one:

$config = Get-Content (Join-Path $currentFolder "Build-Config.json") | Out-String | ConvertFrom-Json

$schemaCommand = $(Join-Path $scriptsFolder "Tables.ps1") 
$schemaCommand += " -hostname $($config.host)" 
$schemaCommand += " -user $($config.user)"
$schemaCommand += " -port $($config.port)"
$schemaCommand += " -psw '$($config.psw)'"
$schemaCommand += " -schemaName $($config.schemaName)"
$schemaCommand += " -scriptsFolder $scriptsFolder"

Invoke-Expression -command $schemaCommand

This is another layer of management, which allows us to organize every part of the DDLs to be executed against Vertica.

Note: Our scripts will destroy and rebuild any given SCHEMA, but this is the way we like it.

Now, let’s see the possible scenarios.

Start from scratch or get started

When someone wants to start from scratch, this is the pipeline to follow:

  1. get the latest version of the branch;
  2. check and change the configuration file (JSON);
  3. execute the create-vertica-database-from-scratch.bat file (it contains our PowerShell "build" script);
  4. that's it, we've got a new schema in Vertica, empty and ready to be consumed.

If you want to preserve your data, this is not the right path for you. Executing the “builder” tool is optional.

New development

When a developer wants to make and try a changeset:

  1. change the Visual Studio application (SSIS or SSRS here) when needed;
  2. change the Vertica schema (adding tables, columns and so on);
  3. get the .sql file of a new object, or change the .sql file of an object which has been updated;
  4. replace them in the TFVC file structure;
  5. change the .ps1/.txt files if any DDL has been added (or if something that impacts the execution order occurs);
  6. build the Visual Studio application and try it;
  7. when everything works well, check in.

Now, everyone can get the latest changes in a CI way.

Get delta changes

When a developer is going to get the latest changes that contain an updated version of the Vertica objects, but wants to preserve their data, things get a little bit more tricky. The person who made the change could share the ALTER script in a collaborative chat tool. This is neither reliable nor comfortable, but without a comparison tool there is no better way to make it happen.

That being said, we've implemented our own diff-script generator, based on the analysis of Vertica metadata (the catalog, browsing the v_internal objects). So, after our friend gets the latest version, he executes a generate-diff-script.bat tool and lets it run the generated ALTER script. Tricky, but it works like a charm (we may talk about this comparison tool in future posts). Anyway, we're looking forward to hearing updates from MicroFocus. Hopefully, we'll get an official diff tool from them soon!
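
Just to give an idea of the approach, a column-level comparison between two schemas can be sketched with a catalog query like the one below. This is only an illustrative sketch based on the public v_catalog.columns view (our real generator works on lower-level v_internal objects and emits the ALTER statements); the schema names are placeholders:

-- Columns present in the developer schema but missing from the shared one
SELECT d.table_name, d.column_name, d.data_type
FROM v_catalog.columns d
LEFT JOIN v_catalog.columns s
       ON  s.table_schema = 'SHAREDSCHEMA'
       AND s.table_name   = d.table_name
       AND s.column_name  = d.column_name
WHERE d.table_schema = 'MYSCHEMA'
  AND s.column_name IS NULL
ORDER BY d.table_name, d.column_name;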

Conclusions

I've just shown the way we're managing table DDLs and how we've created the PowerShell scripts, but the real scenario is more complex. We have to drop the Vertica schema first (a CASCADE drop), then re-create the new parametrized schema, then the tables, then the views and so on. Sometimes we've got Vertica schemas that reference each other, so for each of them we have to create the tables first, then the views. The business logic is driven by how we write the "build" PowerShell script, and the automated build process will follow the same logic.
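
The drop-and-recreate step at the top of each build can be as small as the following, run through vsql with the same :SCHEMA variable used by the table scripts (a sketch, assuming the schema can be safely destroyed, as noted above):

-- Destroy the developer schema and everything it contains, then start clean
DROP SCHEMA IF EXISTS :SCHEMA CASCADE;
CREATE SCHEMA :SCHEMA;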

Also, the build process is not always "straightforward". Some of the business processes need to be managed in a dedicated way. Cross-references will occur, but handling them is the job of the "builder", both the manual one and the automated one. Moving this responsibility to the build process allows us to feel more comfortable about our development solution.

Last, but not least, the manual development build process allows the developer to choose whether to re-create the database or not. Developers should not waste time managing things that a script can do repeatedly and efficiently. This is the reason why we kept the PowerShell somewhat static instead of writing complicated business logic: we just add lines of vsql invocations in the right order, and that's it.

Microsoft Localization Community, why translations are important

I originally wrote this post in Italian, because doing it in English could have created confusion... joking aside, I believe Italian is one of the most beautiful languages on the whole planet, and not because it is the one I was born with and studied, but because it is vast, varied, melodic and much more.

It is no coincidence that one of my favourite (and most frequent) activities is supporting the localization of the software products that each of us, in the Information Technology world, uses every day. Although this post is about the Microsoft community, I am active pretty much everywhere, from other vendors' software, to the internal software we develop in our company, all the way to mobile apps, which often butcher the Italian language like nothing else. But I am talking about the Microsoft Localization Community because every day, before starting my working day and shortly before ending it, I use the dedicated application to improve the translations of what I use daily. Mostly that means Visual Studio Code and Azure Data Studio, but I do not neglect SQL on Linux and other items related to developer tools either.

Following the link above, you will find a wiki that shows all the ways you can contribute. Although quite a few people already work on these activities, more contributions would not hurt, and would consequently bring more quality. On Visual Studio Code alone we have all these volunteers:

Each of us is mentioned on the VSCode release notes page. So, that is one more incentive to lend a hand!

But that's not all: there are other dashboards that let you see the whole translation activity around a given piece of software. Below are the ones for Developer Tools, Azure Data Studio, SQL on Linux and, again, VSCode:

This represents no small effort by the community. A lot of work to get tools and software that are more and more understandable and easy to use, even in our mother tongue, which has never been very successful in the IT world. I recommend having a look at this content, starting right from the GitHub repository of the Microsoft Localization Community.

We look forward to seeing you there!

Two tech events in Parma, the city of food

SQL Saturday Parma, six years in a row. DevOpsHeroes, four. Parma has proved a great place to reach, also for technical events. I started organizing the first SQL Saturday in my birthplace in November 2014. Two years later I tried to create a brand-new event, when the DevOps culture started to grow and agile became strong: DevOpsHeroes was born in 2016, a month before the SQL Saturday event, again in Parma. Why a second event? The audience has been great (more than 200 attendees), and so has the feedback. People who come here look comfortable with everything. Then, thanks to the University, which has been the chosen location, both events are still growing.

Let's dig deeper into the events.

SQL Saturday Parma (2014-now)

SQL Saturday, a great format by the Professional Association of SQL Server (PASS), is a well-known event all around the world; you can find hundreds of them here. In Parma, the event is completely free, with no pre-conference. It is located on the Campus of the University of Parma, in order to make the students, as well as the school itself, aware of this kind of event. Unfortunately, it doesn't attract them as much as it attracts professionals, so I'm working on building a better relationship with them, too. About the audience, some numbers:

Event                Attendees  Tracks  Speakers  Feedback
SQL Sat 355 (2014)   128        3       14        4.46/5
SQL Sat 462 (2015)   157        3       18        4.20/5
SQL Sat 566 (2016)   150        3       18        4.48/5
SQL Sat 675 (2017)   210        4       24        4.47/5
SQL Sat 777 (2018)   230        4       24        4.72/5

In 2014 the sessions were driven by, let's say, classic topics: DBA, development, BI. Starting from 2016 the coverage changed a lot: more data visualization, more automation, more BI in the cloud, and more cloud itself. 2017 was also the game-changer for NoSQL sessions (on Microsoft Azure). The latest edition of SQL Saturday Parma introduced AI, and this year we are struggling to select the right sessions from a bunch of 70 proposals (from all over the world). The call for papers (available here) closes on September 30, and if you are in the area on November 23, or if you want to come to Italy for a weekend of training on the Microsoft Data Platform with friends, #sqlfamily and good food, you are welcome!

The event is strongly supported by the Italian #sqlfamily, especially my friends at UGISS. A big thanks goes to them.

DevOpsHeroes (2016-now)

Started as a one-shot event, DevOpsHeroes is now in its fourth year in a row. Riding the wave of enthusiasm from the SQL Saturdays in Parma, and thanks to my work experience, which meanwhile moved from DBA skills to data DevOps and automation, this event has been a pleasant surprise, even if it doesn't gather as many people as SQL Saturday does (SQL Saturday also has its own buzz and the PASS support). The event was born to spread the DevOps culture, not just the tools. Tools were described there only to bring out the advantages of the culture, which must be "soaked up" before going deeper. So, the event was born for the culture, and this has been (and still is) one of our missions.

The event is typically held one month before SQL Saturday, on the Campus of the University of Parma. It gets more than 120 attendees, and this year the organization is expecting more, thanks to the great sponsors supporting this edition.

As you can see, under the hood there are two main "helpers": Engage IT Services, the company of which I'm a co-founder, and GetLatestVersion.it, a great Italian online community for DevOps and ALM technologies. The event is totally free, and it will offer 18 sessions across 3 tracks. The topics will cover technologies, methodologies and use cases (experience sessions). The call for papers is already closed and we're finalizing the program for that Saturday, October 26.

Wrapping up

SQL Saturday Parma website: https://www.sqlsaturday.com/895/EventHome.aspx

Registration: https://www.sqlsaturday.com/895/registernow.aspx

DevOpsHeroes website: http://www.devops-heroes.net/

Registration: https://www.eventbrite.it/e/biglietti-devopsheroes-2019-66796826105

Recover Deleted Data From SQL Server Table by Transaction Logs

The task of creating tables and storing data in records looks quite easy to SQL Server users. But if data is deleted by mistake, or because of some hardware or software issue, the situation becomes complex. Recovery of deleted data is not child's play. So, considering this issue, we have come up with this write-up, which will walk you through various methods to answer the question: how to recover deleted data from a SQL Server table by transaction logs? Let's begin with a detailed discussion.

Techniques to Rely On For Recovering The Deleted Data From Server:

1. Manual method: using LSNs (Log Sequence Numbers); it works only if the time of deletion is known to the user.
2. Automated solution: a simple yet secure and reliable way to recover deleted data from the server by using SysTools SQL MDF Database Recovery.

Know-How to Recover Deleted Data From SQL Server Table by Transaction Logs

Deleted Records' Recovery Using SQL Server LSN: in SQL Server transaction logs, the LSN (Log Sequence Number) is a unique identifier assigned to each log record. We can restore the deleted rows of SQL tables if we know the time when the records were deleted.

Before starting the recovery process, the user has to ensure that the database was in the Full or Bulk-Logged recovery model when the data was actually deleted. This is the prerequisite for the successful recovery of the deleted records.

The steps described below recover the deleted data from SQL Server 2016, 2014, 2012, 2008 and 2005.

Step 1: Run the following query to know the total number of records in the table from which the records were deleted:

Select * From Table_Name

Step 2: Next, take a transaction log backup of the database using the query below:

USE NameOfTheDatabase
GO
BACKUP LOG NameOfTheDatabase
TO DISK = N'D:\NameOfTheDatabase\RDDTrLog.trn'
WITH NOFORMAT, NOINIT,
NAME = N'NameOfTheDatabase-Transaction Log Backup',
SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO

Step 3: Information about the deleted records has to be collected from the transaction log for data recovery. This query retrieves the Transaction ID of the deleted records:

USE NameOfTheDatabase
GO
SELECT [Current LSN], [Transaction ID], Operation, Context, AllocUnitName
FROM fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_DELETE_ROWS'

Step 4: Execute the query given below to find out exactly when the records were deleted:

USE NameOfTheDatabase
GO
SELECT [Current LSN], Operation, [Transaction ID], [Begin Time], [Transaction Name], [Transaction SID]
FROM fn_dblog(NULL, NULL)
WHERE [Transaction ID] = '000:000001f3'
AND [Operation] = 'LOP_BEGIN_XACT'

From the output of this query you can also read the LSN of the delete transaction ([Current LSN] column), which is needed for the restore in the next step.
Step 5: Run the restore process to bring the deleted data back, restoring the full backup to a copy database with NORECOVERY:

USE NameOfTheDatabase
GO
RESTORE DATABASE NameOfTheDatabase_COPY FROM
DISK = 'D:\NameOfTheDatabase\RDDFull.bak'
WITH
MOVE 'NameOfTheDatabase' TO 'D:\RecoverDB\NameOfTheDatabase.mdf',
MOVE 'NameOfTheDatabase_log' TO 'D:\RecoverDB\NameOfTheDatabase_log.ldf',
REPLACE, NORECOVERY;
GO
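
The full-backup restore above leaves the copy database in the RESTORING state; the step the procedure leaves implicit is restoring the log backup up to a point just before the delete, using the LSN found in Step 4. A hedged sketch of that step is shown below (note that STOPBEFOREMARK expects the LSN in decimal form, while fn_dblog shows it in hexadecimal, so it has to be converted first):

USE master
GO
-- Restore the log backup taken in Step 2, stopping right before the delete transaction
RESTORE LOG NameOfTheDatabase_COPY
FROM DISK = N'D:\NameOfTheDatabase\RDDTrLog.trn'
WITH STOPBEFOREMARK = 'lsn:<decimal LSN of the LOP_BEGIN_XACT record>',
RECOVERY;
GO
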
Step 6: Now it is time to verify whether the deleted records have been recovered.

Efficient Way to Recover Deleted Records From SQL Server 2017 / 2016 / 2014

If you fail to recover deleted data from a SQL Server table by transaction logs, then you can take the help of SQL Database Recovery software. This software provides you with the option to recover deleted records from SQL Server tables. Also, by using this software, the user can preview accidentally deleted SQL table records in red color. The user can easily recover database objects such as tables, functions and stored procedures. Moreover, this application is compatible with SQL Server 2017 and below.


Follow The Steps to Recover Deleted Records From SQL Server Table

1. Download and install the software on your machine.
2. Click on the Add File button and add the MDF file to the software.
3. Now choose the Scan option and select the SQL Server version.
4. Check the option to preview deleted SQL database records in red color.
5. Preview the SQL Server database items. The software will show the deleted SQL table records in red color.
6. Click on the Export button to export the SQL database.
7. In database authentication, choose the server name and the authentication mode.
8. Now choose the destination database.
9. Check the database items you want to export.
10. Choose either the "with only schema" or the "with schema and data" option.
11. Mark the option Export deleted records and finally click on the Export button.

Final Words

In this article, we have discussed how to recover deleted data from a SQL Server table by transaction logs. The manual solution is quite lengthy and difficult to perform, and it requires strong technical knowledge. So it is better to take the help of a SQL database recovery tool to recover deleted records easily.