Recently, I needed to transfer all the databases (more than 50 on a single instance of SQL Server) from the dev environment to another SQL Server instance running on different hardware. I wanted to minimize manual labor and get everything done as quickly as possible.

Disclaimer

The scripts were written for one specific situation: a dev environment, all databases use the simple recovery model, and data files and transaction logs live in the same location.

Everything written below applies only to that situation, but adapting the scripts to your own conditions should not take much effort.

The scripts do not use the new STRING_AGG or other niceties, so everything should work starting with SQL Server 2008 (or 2008 R2; I don't remember exactly when backup compression appeared). For older versions you need to remove WITH COMPRESSION from the backup command, but then there may no longer be any time advantage over simply copying the files.

Of course, the fastest way would have been to simply reconnect the disk shelf to the new server, but that was not an option for us. Detach - copy - attach was considered too, but it did not fit either: the channel was quite narrow, and transferring the databases in uncompressed form would have taken quite a long time.

As a result, we decided to take compressed backups directly to the new server and restore them there. The hardware in both the old and the new location is decent, the backups compress well, and the time gain is decent too.

So the “script generator” was written:
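The generator itself is not reproduced in this text, but the idea can be sketched roughly like this. This is a minimal sketch, not the original script: the backup share and the target data folder are placeholder paths, system databases are skipped, and the physical file names on the new server are simply reused from the old ones.

-- Sketch of a "script generator": for every user database, build a BACKUP command,
-- an ALTER DATABASE ... SET OFFLINE command and a RESTORE command with MOVE for every file.
DECLARE @BackupShare nvarchar(260) = N'\\newserver\backups\',   -- UNC share visible to both servers (placeholder)
        @NewDataPath nvarchar(260) = N'D:\SQLData\';            -- data/log folder on the new server (placeholder)

SELECT  d.name AS DatabaseName,
        N'BACKUP DATABASE ' + QUOTENAME(d.name)
          + N' TO DISK = N''' + @BackupShare + d.name + N'.bak'''
          + N' WITH COMPRESSION, STATS = 10;'                     AS BackupCommand,
        N'ALTER DATABASE ' + QUOTENAME(d.name)
          + N' SET OFFLINE WITH ROLLBACK IMMEDIATE;'              AS OfflineCommand,
        N'RESTORE DATABASE ' + QUOTENAME(d.name)
          + N' FROM DISK = N''' + @BackupShare + d.name + N'.bak'''
          + N' WITH '
          + STUFF((SELECT N', MOVE N''' + mf.name + N''' TO N'''
                          + @NewDataPath
                          -- keep the original file name (the part after the last backslash)
                          + RIGHT(mf.physical_name, CHARINDEX(N'\', REVERSE(mf.physical_name)) - 1)
                          + N''''
                   FROM sys.master_files mf
                   WHERE mf.database_id = d.database_id
                   FOR XML PATH(''), TYPE).value('.', 'nvarchar(max)'), 1, 2, N'')
          + N', STATS = 10;'                                      AS RestoreCommand
FROM sys.databases d
WHERE d.database_id > 4;   -- user databases only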

The output is a set of ready-made commands: to back up each database to the right place, to take the databases offline so that users can no longer work with them on the old server, and to restore the resulting backups on the new server (with all data files and transaction logs automatically moved to the specified location).

The problem is that either someone has to sit and run all the scripts in turn (backup - offline - restore), or someone has to first start all the backups, then take all the databases offline, then restore everything: fewer actions, but you still have to sit there and keep track.

I wanted to automate all these operations. On the one hand, everything is simple: the commands are already there, just wrap them in a cursor and execute them. And, in principle, that is exactly what I did: I added the new server as a linked server on the old one and started it. On the local server a command is executed with EXECUTE (@sql_text); on the linked server, with EXECUTE (@sql_text) AT [linkedServerName].
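For reference, the two forms look like this ([NewServer] and the paths are placeholders; EXECUTE ... AT requires the linked server to have the RPC Out option enabled):

DECLARE @sql_text nvarchar(max);

-- Runs on the local (old) server:
SET @sql_text = N'BACKUP DATABASE [SomeDb] TO DISK = N''\\newserver\backups\SomeDb.bak'' WITH COMPRESSION;';
EXECUTE (@sql_text);

-- Runs on the linked server:
SET @sql_text = N'RESTORE DATABASE [SomeDb] FROM DISK = N''\\newserver\backups\SomeDb.bak'';';
EXECUTE (@sql_text) AT [NewServer];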

Thus, the operations were performed sequentially: back up locally, take the local database offline, restore on the linked server. Everything worked, hooray, but it seemed to me that the process could be sped up a little if the backups and restores ran independently of each other.

So the cursor was split into two parts: on the old server, the cursor backs up each database and takes it offline, after which the second server has to learn that a new task has appeared and restore that database. To implement this, I used a record written into a table on the linked server and an endless loop (I was too lazy to come up with a stopping criterion) that checks whether there are new records and tries to restore whatever it finds.

Solution

On the old server, a global temporary table ##CommandList is created and populated; all the commands are collected in it, and the status of the backups can be tracked there as well. The table is global so that at any moment you can see from another session what is currently going on.
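The exact table definition is not shown here; a possible shape that matches what the cursor below needs might look like this (the column names and the Status values are my assumption):

IF OBJECT_ID(N'tempdb..##CommandList') IS NOT NULL
    DROP TABLE ##CommandList;

-- One row per database: the three generated commands plus status tracking.
CREATE TABLE ##CommandList
(
    DatabaseName    sysname        NOT NULL,
    BackupCommand   nvarchar(max)  NOT NULL,
    OfflineCommand  nvarchar(max)  NOT NULL,
    RestoreCommand  nvarchar(max)  NOT NULL,
    Status          nvarchar(100)  NOT NULL DEFAULT N'Not processed',
    ErrorMessage    nvarchar(max)  NULL
);

-- The table is then populated with the output of the generator query sketched above.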

So all the commands for backing up / restoring the necessary databases are gathered there.

On the new server, a Maintenance database was created with a CommandList table in it, which will hold the information needed to restore the databases:
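A minimal sketch of what that might look like (the Processed and ErrorMessage columns are my assumption, matching how the loop is described below):

CREATE DATABASE Maintenance;
GO
USE Maintenance;
GO
-- The old server inserts a row here after each backup; the restore loop reads it.
CREATE TABLE dbo.CommandList
(
    DatabaseName    sysname        NOT NULL,
    RestoreCommand  nvarchar(max)  NOT NULL,
    Processed       bit            NOT NULL DEFAULT 0,  -- 1 = attempted (success or error)
    ErrorMessage    nvarchar(max)  NULL
);
GO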

A linked server pointing at the new SQL Server instance was configured on the old server. I wrote the scripts in this post at home and did not bother setting up a second instance; I used a single instance and connected it to itself as a linked server. That is why the paths here are the same and the UNC path is local.

Now we can declare a cursor that backs up each database, takes it offline and writes the restore command to the table on the linked server:
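The cursor from the post is not reproduced here; a simplified sketch of the same flow, using the table shapes assumed above and [NewServer] as the linked server name, could look like this:

DECLARE @DatabaseName   sysname,
        @BackupCommand  nvarchar(max),
        @OfflineCommand nvarchar(max),
        @RestoreCommand nvarchar(max),
        @msg            nvarchar(2047);

DECLARE cmd CURSOR LOCAL FAST_FORWARD FOR
    SELECT DatabaseName, BackupCommand, OfflineCommand, RestoreCommand
    FROM ##CommandList
    WHERE Status = N'Not processed';

OPEN cmd;
FETCH NEXT FROM cmd INTO @DatabaseName, @BackupCommand, @OfflineCommand, @RestoreCommand;

WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        SET @msg = N'Backing up ' + @DatabaseName + N'...';
        RAISERROR(@msg, 0, 1) WITH NOWAIT;      -- progress on the Messages tab

        EXECUTE (@BackupCommand);

        SET @msg = N'Taking ' + @DatabaseName + N' offline...';
        RAISERROR(@msg, 0, 1) WITH NOWAIT;

        EXECUTE (@OfflineCommand);

        -- Hand the restore over to the new server: its loop will pick this up.
        INSERT INTO [NewServer].Maintenance.dbo.CommandList (DatabaseName, RestoreCommand)
        VALUES (@DatabaseName, @RestoreCommand);

        UPDATE ##CommandList SET Status = N'Sent for restore'
        WHERE DatabaseName = @DatabaseName;
    END TRY
    BEGIN CATCH
        -- Record the error and move on to the next database.
        UPDATE ##CommandList
        SET Status = N'Error', ErrorMessage = ERROR_MESSAGE()
        WHERE DatabaseName = @DatabaseName;
    END CATCH;

    FETCH NEXT FROM cmd INTO @DatabaseName, @BackupCommand, @OfflineCommand, @RestoreCommand;
END;

CLOSE cmd;
DEALLOCATE cmd;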

Each action is “logged” to the Messages tab in SSMS, so you can watch what is currently going on. If you use WITH LOG in RAISERROR, you could, in principle, put all of this into a job and read the error log afterwards.
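For example (a minimal illustration; note that WITH LOG requires ALTER TRACE permission or sysadmin membership):

-- Shows up on the Messages tab immediately, without waiting for the output buffer to fill:
RAISERROR(N'Restoring MyDb...', 0, 1) WITH NOWAIT;

-- Additionally writes the message to the SQL Server error log (and the Windows Application log),
-- which is handy when the script runs inside an Agent job:
RAISERROR(N'Restoring MyDb...', 10, 1) WITH LOG, NOWAIT;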

While it is running, you can query ##CommandList from another session and see what is happening and how.

On the new server, an endless loop was running in parallel:
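A sketch of such a loop, under the same assumed CommandList shape:

USE Maintenance;

DECLARE @DatabaseName sysname,
        @sql_text     nvarchar(max);

WHILE 1 = 1
BEGIN
    SET @DatabaseName = NULL;

    -- Take one record that has not been attempted yet.
    SELECT TOP (1) @DatabaseName = DatabaseName,
                   @sql_text     = RestoreCommand
    FROM dbo.CommandList
    WHERE Processed = 0;

    IF @DatabaseName IS NULL
    BEGIN
        WAITFOR DELAY '00:00:30';    -- nothing to do yet, check again in 30 seconds
        CONTINUE;
    END;

    BEGIN TRY
        RAISERROR(N'Restoring %s...', 0, 1, @DatabaseName) WITH NOWAIT;
        EXEC (@sql_text);

        UPDATE dbo.CommandList SET Processed = 1
        WHERE DatabaseName = @DatabaseName;
    END TRY
    BEGIN CATCH
        -- Didn't work? Remember the error and never come back to this row.
        UPDATE dbo.CommandList
        SET Processed = 1, ErrorMessage = ERROR_MESSAGE()
        WHERE DatabaseName = @DatabaseName;
    END CATCH;
END;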

All it does is look into the CommandList table: if there is at least one unprocessed record, it takes the database name and the restore command and tries to execute it with EXEC (@sql_text);. If there are no entries, it waits 30 seconds and tries again.

Both the cursor and the loop process each record only once. Didn't work? Write the error message to the table and never come back to that record.

About the stopping condition: I really was lazy. While writing this up, I came up with at least three solutions. One option: add “Ready for restore / Not ready for restore / Finished” flags, fill in the full list of databases and commands right away when populating ##CommandList on the old server, and update the flag inside the cursor. Then stop when there are no records left waiting to be restored, since we know the full amount of work up front.
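A rough sketch of that option (purely illustrative; it replaces the Processed bit assumed earlier with a Status column):

-- The old server fills dbo.CommandList with ALL databases up front (Status = N'Not ready')
-- and flips each row to N'Ready' once its backup is done.
-- The loop exits once nothing is left that still needs to be restored.
WHILE EXISTS (SELECT 1 FROM dbo.CommandList
              WHERE Status IN (N'Not ready', N'Ready'))
BEGIN
    -- ...same body as above: pick a row with Status = N'Ready',
    -- restore it and set Status = N'Finished' (or N'Error')...
    WAITFOR DELAY '00:00:30';
END;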

Conclusions

Actually, there are no conclusions. I just thought it might be useful or interesting for someone to see how metadata can be used to build and execute dynamic SQL. The scripts in this post are not suitable for production as they are, but with a little adaptation they could be used, for example, for mass setup of log shipping / database mirroring / availability groups.

When backing up to a share, the account under which SQL Server runs must have permission to write to it.

Two things are not covered in this post: creating the linked server (it can be set up intuitively in the GUI with the mouse in a couple of minutes) and transferring logins to the new server. Anyone who has been through a user migration knows that simply re-creating SQL logins does not help much, because each login has a SID that the database users are mapped to. Scripts for generating SQL logins with their current passwords and correct SIDs are available on MSDN.