Anders On SQL
Thursday, April 18, 2019
I have not updated this blog in the last year, due to being busy at work and extremely busy in my spare time (and not JUST killing zombies!). Here is a list of past and upcoming speaking engagements.
4/12/2019 - Codestock, Knoxville, TN - "Why I use Cursors in SQL"
4/17/2019 - Virginia Beach User Group - "Why I use Cursors in SQL"
4/27/2019 - SQL Saturday Raleigh - "Why I use Cursors in SQL" and "I/O Tuning 101"
6/1/2019 - SQL Saturday Dallas -
6/8/2019 - SQL Saturday Virginia Beach -
Sunday, March 4, 2018
SQL Saturday presentations 2018
So far I am scheduled (or have applied) to speak at the following SQL Saturdays:
Richmond, 3/24 - "Why I use Cursors in SQL" - Confirmed
Raleigh, 4/14 - "I/O Tuning 101" - Confirmed
Helsinki, Finland, 5/12 - "I/O Tuning 101" - Confirmed
Denver, 9/15/2018 - "I/O Tuning 101" - Confirmed
I have not applied to anything else so far this year.
Year in review.....In March? Sure, why not. (or "late night ramblings")
While I have never been a super active blogger, I have not posted much in the last year or so. There are a couple of good reasons, but mostly, frankly, I have not felt like it.
2017 had some major ups and downs. I changed jobs in early 2017, something I realized was a major mistake after 2 weeks. The amount of overtime was vastly undersold during the interview. Even after I turned in my notice after 3 weeks, they still wanted me to stay on for my 2-week notice period even though I was a contract-to-hire employee. Guess that is a good thing!!! At least I helped the team out by getting some issues in their SentryOne install taken care of and well on the way to full coverage. It belongs in the story that in the time since I left, 2 more DBAs appear to have left the company.
So where did I go? Back to my old company, but in a different role. Instead of being in charge of one division's SQL Servers (all 2 production servers), I took over operational responsibilities for what was supposed to be about 20 production servers. First day back on the job, before I was even done with the paperwork, my new manager texted me to hang around my desk until she got there.....
Boom! That morning they had terminated the DBA at a different division. Here Anders, have a few more servers. Servers I had no idea about, which were doing weird things like automatically scraping emails for CSV files to load into SQL. Yikes!
2 weeks go by: oh Anders, you're now in charge of these servers in Canada as well.
Then layoffs happen. Including the person that had been acting as DBA on one of our larger teams, and who happened to be the manager for the team. Hmm. Guess I shall manage the team? Yes? No? No one told me, but no new manager had been assigned. Oh well, time to step up. Lead the team. Take charge of 7 developers who are kind of lost due to the "leadership" of the previous manager. Lots of manual processes. Performance problems everywhere. No one has time to take care of problems because of the manual processes. I asked the team to give me, within 24 hours, an analysis of the manual processes, including how many hours per week they spend on them. Over the course of a few weeks we had quite a bit of it automated. I had put proper maintenance in place on the servers. Phew.
Another team has their client threatening to pull the plug due to performance issues. Install a SentryOne evaluation. My boss's boss and I pore over the results. In less than 24 hours (some of it that afternoon) we had taken care of some of the worst performance issues. I had been asking for SentryOne for us for years; after management saw what we did using it, I got funding for it in no time.
During all this, I am also teaching people straight out of college for the company: 5 weeks of SQL training to get them good enough to start on teams. I did this for the entire time I was with the company, and some of them have been really good. One took over my position on one team when I left to go work on another team. Some, not so good. It sucks to have to cut people, but sometimes you just have to.
In the middle of this I spoke at a few SQL Saturdays, Cleveland and Richmond. Had a great time at both. Cleveland was my first "away" event.
At the time of separation from the company in late June, I was responsible for about 80 servers. Most of them humming along just fine, maintenance all standardized. A few emergency recoveries due to a bug in the firmware of a switch. Yay! At least I got to test my backup and restore plan. I think the longest downtime I had on a crash (with no standby server) was about 45 minutes, mostly because I spent 30 minutes writing the script to restore the files. The crash, of course, happened about 2 hours before weekly full backups ;)
Within 3 weeks I had two job offers out of about 5 interviews (1 was declined by mutual consent as I did not do well, 1 where I let them know I was not interested after the interview). It all came down to two companies. After the in-person interview with ChannelAdvisor I called my wife: "I want to work there," something I don't think I have ever said after a job interview before. Due to the time constraint of having to answer an offer the evening of the interview, I was a bit nervous about the timing. The recruiter at ChannelAdvisor went to work as soon as the people that interviewed me were done. In record time, while I was driving from Raleigh back home to Charlotte, she had me an offer (Katie, you rock!). Phew! Dodged a bullet not having to accept the other job (although I would have gotten to work with one of my favorite co-DBAs of all time if I had taken it).
I accepted the position late on a Friday night. 10 days until my first day! This started a hectic chain of events. First up, find somewhere for me to live while we got the house sold. By pure happenstance, I went with my wife to an SCA event, and one of the people there was from Raleigh. He happened to know someone that had a room to rent, less than 5 miles from my new office! Yay.
3 months of fixing up the house. Getting a loan for the major pieces that needed to be done. A 2-week trip to Norway to speak at SQL Saturday Oslo, a highly recommended event! Speaking at SQL Saturday Charlotte. Buying a house, selling a house. Coordinating all the things! The week after Thanksgiving we did a simultaneous closing on the sale and the purchase. Moving from a 2900 square foot house to an 1800 square foot town home has proved challenging. All credit goes to my wife for keeping things moving. And for being willing to uproot from Charlotte.
Usually for Christmas one or both of us goes to be with family on either side. In 2017, we decided to stay home. Worked on the house, relaxed, made some decisions about the future.
An eventful year for sure. Loving my new job, full of challenges, something I was really looking for.
Here's to a good year for you, and a quieter year for me!
Sunday, July 2, 2017
Archiving Data
Many times as a DBA I have taken over a system that has been gathering data for years. The worst I took over had about 400 million records in the main table, with an average of 24 records in the related secondary table per primary record, and data going back 12+ years. In 2017 this might not be that bad, but back in 2002....
Seeing as the data had to be retrievable for any date, I could not simply delete the very old data. These tables also had constant inserts and updates against them, so keeping the tables available became important, i.e. the time the tables were locked had to be acceptable, with time for waiting transactions to finish.
The solution I came up with does this in variable-size batches. With modern versions of SQL Server there are other ways to do this, but the good thing about this method is that it works regardless of version and edition of SQL Server. Azure SQL DB would need some modification to make it archive to a separate database.
The following sample illustrates archiving data from two tables, a typical Sales header and detail pair, where data consistency must be maintained.
CREATE OR ALTER PROCEDURE prc_ArchiveSalesData (@StartDate date, @EndDate date, @BatchSize INT)
AS
BEGIN
SET NOCOUNT ON
SET XACT_ABORT ON
DECLARE @rowcount INT
-- set end date to be one day after the date sent in to account for time data.
-- Better performance than converting the date in the where clause
SET @EndDate = DATEADD(DAY,1,@EndDate)
-- check for existing temp tables , drop and create
IF OBJECT_ID('tempdb..#AllItems') IS NOT NULL
DROP TABLE #AllItems
CREATE TABLE #AllItems (
SalesID INT
)
CREATE INDEX #idxAllItems ON #AllItems(SalesID) --lots of joins to happen on this
IF OBJECT_ID('tempdb..#CurrentBatch') IS NOT NULL
DROP TABLE #CurrentBatch
CREATE TABLE #CurrentBatch (
SalesID INT
)
CREATE INDEX #idx_Current ON #CurrentBatch(SalesID)
-- gather all items for the given date range
INSERT INTO #AllItems
SELECT S.SalesID FROM Sale S WHERE S.SaleDate >= @StartDate AND S.SaleDate < @EndDate
-- loop outside transaction, that way if there is a failure or a need to cancel, the successful batches stay committed
INSERT INTO #CurrentBatch
SELECT TOP (@BatchSize) SalesID FROM #AllItems ORDER BY SalesID ASC -- technically no need to do this order by, but makes debugging easier
SELECT @rowcount = @@ROWCOUNT
WHILE @rowcount > 0
BEGIN
BEGIN TRANSACTION
-- copy to archive tables
INSERT INTO SalesArchive.dbo.Sale (SalesID, CustomerID, SalesPersonID, TotalSales, SaleDate)
SELECT S.* -- this would make archive fail if we change the table, which we would want
FROM Sale S
INNER JOIN #CurrentBatch CB
ON S.SalesID = CB.SalesID
INSERT INTO SalesArchive.dbo.SalesDetail ( SalesDetailID, SalesID, ItemID, Quantity)
SELECT SD.*
FROM SalesDetail SD
INNER JOIN #CurrentBatch CB
ON SD.SalesID = CB.SalesID
-- delete from production tables
DELETE FROM SalesDetail
FROM SalesDetail SD
INNER JOIN #CurrentBatch CB
ON SD.SalesID = CB.SalesID
DELETE FROM Sale
FROM Sale S
INNER JOIN #CurrentBatch CB
ON S.SalesID = CB.SalesID
-- delete the current batch from all items
DELETE FROM #AllItems
FROM #AllItems AI
INNER JOIN #CurrentBatch CB
ON AI.SalesID = CB.SalesID
-- delete from current batch completed
DELETE FROM #CurrentBatch
COMMIT TRANSACTION -- committing transaction here makes it possible to stop the archiving and not lose everything
WAITFOR delay '00:00:10' -- 10 second pause before continuing, adjust as needed
INSERT INTO #CurrentBatch
SELECT TOP (@BatchSize) SalesID FROM #AllItems ORDER BY SalesID ASC -- technically no need to do this order by, but makes debugging easier
SELECT @rowcount = @@ROWCOUNT
END -- rowcount loop
END -- proc
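As a quick illustration of how the procedure might be called; the dates and batch size below are made up and should be tuned to your own table sizes and locking tolerance.
-- Hypothetical example: archive everything sold in 2005, 5000 sales at a time.
-- The 10 second pause between batches is hardcoded in the procedure above.
EXEC prc_ArchiveSalesData
    @StartDate = '2005-01-01',
    @EndDate   = '2005-12-31',
    @BatchSize = 5000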
Hope you find this useful if you ever find a need for archiving large amounts of data.
Thursday, March 2, 2017
SentryOne cleanup, PerformanceAnalysisDataTableAndIndexCounter table
SentryOne is a great tool for monitoring servers, but with a lot of "chatty" servers, the amount of data collected can become unwieldy. With about 65 servers monitored, the delete functionality in SentryOne could not keep up with the deletes in the PerformanceAnalysisDataTableAndIndexCounter table (hereafter just referred to as "the table"). At the point I got brought in to clean it up, there were close to a billion records in this table, and we were about to add another 110+ servers to be monitored. The table itself was about 320 GB, the cleanup function was not working, and the manual method provided by SentryOne support could not handle the deletes without causing a lot of blocking and timeout issues.
The delete function provided had multiple manual steps, and the where clause was not well aligned with the indexes on the table. To be able to handle a cleanup of this size without causing undue blocking and timeouts in the collection of data, I decided to find a better way.
Looking at the code provided by SentryOne, and digging a bit in the database itself, I found how I could get the deletes into smaller, adjustable batches.
The first problem was to find the boundaries for the delete, and this step is the single most time consuming step in the process:
SELECT @EndID = MAX(id) FROM table WITH (nolock) WHERE TIMESTAMP <= @EndTimeStamp -- this step can take time
Since data in this table is just written to and never updated, using nolock here is ok.
After this it becomes a simple loop, deleting batches until you reach the iteration goal. The delete is done with a simple TOP statement in a CTE.
After I was done catching up to the 15-day retention we wanted (the limit is hardcoded into the procedure to avoid accidentally deleting more than we wanted, and can be changed if your retention is different), the procedure was set up to simply be called once a day. We decided to set the retention in SentryOne to be 1 day longer than the procedure's, so that if anything ever changed, the SentryOne delete routine would kick in if this one fails.
Hopefully this is useful for you if you run into this problem with SentryOne. The same setup can be used for other tables as well, but PLEASE check with their support before you start deleting data, since some tables are detail "collector" tables and some are summaries that you need for your timelines. The same format was used to delete from a couple of other tables, including the one that collects deadlock information.
The code in its entirety is on GitHub, or below. I would highly recommend getting it from GitHub if you are going to use it, instead of from here.
The parameters need to be adjusted for your use and your server's performance; the defaults are just a safe, small run of deletes.
@BatchSize was increased as the amount of data in the table shrank, and depending on how busy the system was. The trick is to get each batch big enough that you finish the cleanup in a reasonable time, but not so large that it causes timeouts. There is only one way to figure that out: run the procedure and keep a close eye on your system while it is running.
@MaxIterations is simply how many times to loop through running deletes.
USE SQLSentry
GO
-- use SentryOne -- depends on whether it is a new install or an upgrade
CREATE PROCEDURE dbo.BatchDeletePerformanceAnalysisDataTableAndIndexCounterData
@EndDateTime DATETIME,
@MaxIterations INT = 1,
@BatchSize INT = 1000
AS
/******************
Anders Pedersen
2017-01-18
Made to batch delete data out of the PerformanceAnalysisDataTableAndIndexCounter table.
Delete statement provided by SentryOne could not handle a lot of data without
a lot of manual intervention (manual batches).
This uses the column with the primary key on it for deletes instead of the timestamp column,
this substantially speeds up deletes.
*/
BEGIN
SET NOCOUNT ON
--set DEADLOCK_PRIORITY low -- if you see a lot of other processes being victim, set this
--(unless it is more important to get it cleaned up than to lose a few records)
-- get first time stamp
DECLARE @BeginTimeStamp INT, @EndTimeStamp INT
DECLARE @BeginID bigint, @EndID bigint
-- sanity check, do not let it delete to a date less than 15 days back. Making it 16 to cover the entire day. Close enough
IF DATEDIFF(DAY, @EndDateTime, GETDATE()) <= 16 OR @EndDateTime IS NULL
SET @EndDateTime = DATEADD(DAY, -16, GETDATE())
SELECT @EndTimeStamp = [dbo].[fnConvertDateTimeToTimestamp] (@EndDateTime)
-- get ID for that newest one to delete
SELECT @EndID = MAX(id) FROM PerformanceAnalysisDataTableAndIndexCounter WITH (nolock) WHERE TIMESTAMP <= @EndTimeStamp -- this step can take time
SELECT @BeginID = MIN(id) FROM PerformanceAnalysisDataTableAndIndexCounter WITH (nolock) WHERE ID <= @EndID
DECLARE @cnt bigint = 0
DECLARE @X INT = 1
WHILE @x <= @MaxIterations AND @BeginID < @EndID
BEGIN
IF @x <> 1 -- skip wait on first iteration. Saves time if you are only running small batches
WAITFOR DELAY '00:00:03'; -- 3 second delay to let transactions through. This may need to be adjusted
WITH CTE AS
(
SELECT TOP (@BatchSize) * FROM PerformanceAnalysisDataTableAndIndexCounter WHERE ID BETWEEN @BeginID AND @EndID ORDER BY ID ASC
)
DELETE FROM CTE
SELECT @cnt += @@ROWCOUNT
SET @x +=1
END
SELECT @cnt AS RecordsDeleted
END
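For illustration, a catch-up run might look something like this; the 20-day cutoff, iteration count and batch size are just example values, not recommendations.
-- Hypothetical catch-up run: attempt up to 200 batches of 50000 rows,
-- deleting data older than 20 days (the procedure itself enforces a 16 day floor)
DECLARE @Cutoff DATETIME = DATEADD(DAY, -20, GETDATE())
EXEC dbo.BatchDeletePerformanceAnalysisDataTableAndIndexCounterData
    @EndDateTime   = @Cutoff,
    @MaxIterations = 200,
    @BatchSize     = 50000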
Monday, November 28, 2016
Backup Restore Chain re-visited, again.
(This is an update to Backup Restore Chain revisited, which I posted in February 2016.)
In the morning hours our disk system lost the log files for all my user databases due to a firmware bug.... The connection to the disks could not be re-established, and new drives were set up. With all the databases being in Full Recovery mode, and having JUST completed transaction log backups a minute prior to the loss of the disks, it was established that we could restore what we had with minimal loss.
However, there were many databases affected, and each would have about 45 files to restore. I quickly pulled out the query from the previous article, and the numbers from it matched what I had found just taking a quick look at the backup tables. My previous script would just give me a list of files, in the correct order to be restored. For a total of 450 files, this seemed like too much to manually copy and paste without messing up somewhere. With my boss, we decided it was better to spend my time rewriting the query to also generate the restore statements.
All comments from the previous blog posts on this subject still apply. The script assumes a couple of things, mainly that restores are happening to the same location the files were backed up from, i.e. no WITH MOVE commands are included. This would be fairly easy to add if you need it, and I might go ahead and make another version later that does this, as time permits.
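For reference, if you did need to relocate files, the generated statement for the full backup would need MOVE clauses added, along these lines (the logical file names and target paths below are made up for illustration):
restore database MyUserDB from DISK = '\\BackupShare\MyUserDB\MyUserDB_Full.bak'
    with MOVE 'MyUserDB_Data' TO 'D:\Data\MyUserDB.mdf',
         MOVE 'MyUserDB_Log' TO 'L:\Logs\MyUserDB_log.ldf',
         NORECOVERY, REPLACE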
The output from the procedure when called with a 2 for the output parameter looks like this:
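Roughly like this, that is; the database name and file paths here are invented, and the real output uses the paths from your msdb backup history:
restore database MyUserDB from DISK = '\\BackupShare\MyUserDB\MyUserDB_Full.bak' with NORECOVERY , REPLACE
restore log MyUserDB from DISK = '\\BackupShare\MyUserDB\MyUserDB_Log_1.trn' with NORECOVERY
restore log MyUserDB from DISK = '\\BackupShare\MyUserDB\MyUserDB_Log_2.trn' with NORECOVERY
RESTORE DATABASE MyUserDB WITH RECOVERY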
The code is below, and can also be found in my GitHub.
set quoted_identifier off;
go
use master
go
create procedure sp_RestoreChainByDB2
@DBName sysname,
@Output int = 1, -- default to previous versions output
@MaxFullBackupDate date = NULL
as
begin
/****************
Created by @SQLSoldier
http://www.sqlsoldier.com/wp/sqlserver/day3of31daysofdisasterrecoverydeterminingfilestorestoredatabase
2015-09-24
Modified by Anders Pedersen
Twitter: @arrowdrive
Blog: Blog.AndersOnSQL.com
Made to work with SQL 2008R2.
2016-02-26
Modified by Anders Pedersen
Fix issue where First_LSN and Last_LSN values are the same on a record for the transaction log backups.
These transaction log backups can be ignored in the restore chain.
Also made into a procedure, with the database name for a required parameter, and max date for the full backup to start with as a parameter
2016-11-28
Modified by Anders Pedersen
Added support for scripting out restore statements.
Added parameter @output for what the script should print out: 1 = Restore chain only, 2 = Restore Script only, 3 = both
Note that the restore scripts assume the files are going back to the original location.
This part could easily be modified by adding parameters for data and log file location
Or even made table driven if a lot of files going to different locations.
This was made for my own need to quickly create a restore chain to recover a server where the databases existed, but had to be over written.
*****************/
Declare @DBBackupLSN numeric(25, 0)
Declare @Baks Table (
BakID int identity(1, 1) not null primary key,
backup_set_id int not null,
media_set_id int not null,
first_family_number tinyint not null,
last_family_number tinyint not null,
first_lsn numeric(25, 0) null,
last_lsn numeric(25, 0) null,
database_backup_lsn numeric(25, 0) null,
backup_finish_date datetime null,
type char(1) null,
family_sequence_number tinyint not null,
physical_device_name nvarchar(260) not null,
device_type tinyint null,
checkpoint_LSN numeric (25,0) -- Anders 2015-09-24
)
Set NoCount On;
-- Get the most recent full backup with all backup files
Insert Into @Baks (backup_set_id,
media_set_id,
first_family_number,
last_family_number,
first_lsn,
last_lsn,
database_backup_lsn,
backup_finish_date,
type,
family_sequence_number,
physical_device_name,
device_type
,checkpoint_LSN) -- Anders
Select Top(1) With Ties B.backup_set_id,
B.media_set_id,
B.first_family_number,
B.last_family_number,
B.first_lsn,
B.last_lsn,
B.database_backup_lsn,
B.backup_finish_date,
B.type,
BF.family_sequence_number,
BF.physical_device_name,
BF.device_type
, B.checkpoint_lsn -- Anders 2015-09-24
From msdb.dbo.backupset As B
Inner Join msdb.dbo.backupmediafamily As BF
On BF.media_set_id = B.media_set_id
And BF.family_sequence_number Between B.first_family_number And B.last_family_number
Where B.database_name = @DBName
And B.is_copy_only = 0
And B.type = 'D'
And BF.physical_device_name Not In ('Nul', 'Nul:')
and convert(DATE,B.backup_start_date) <=(CASE
when @MaxFullBackupDate is not null then @MaxFullBackupDate
else convert(DATE,B.backup_start_date) end) -- Anders 2016-02-26
Order By backup_finish_date desc, backup_set_id;
-- Get the lsn that the differential backups, if any, will be based on
Select @DBBackupLSN = checkpoint_LSN -- Anders 2015-09-24
From @Baks;
-- Get the most recent differential backup based on that full backup
Insert Into @Baks (backup_set_id,
media_set_id,
first_family_number,
last_family_number,
first_lsn,
last_lsn,
database_backup_lsn,
backup_finish_date,
type,
family_sequence_number,
physical_device_name,
device_type)
Select Top(1) With Ties B.backup_set_id,
B.media_set_id,
B.first_family_number,
B.last_family_number,
B.first_lsn,
B.last_lsn,
B.database_backup_lsn,
B.backup_finish_date,
B.type,
BF.family_sequence_number,
BF.physical_device_name,
BF.device_type
From msdb.dbo.backupset As B
Inner Join msdb.dbo.backupmediafamily As BF
On BF.media_set_id = B.media_set_id
And BF.family_sequence_number Between B.first_family_number And B.last_family_number
Where B.database_name = @DBName
And B.is_copy_only = 0
And B.type = 'I'
And BF.physical_device_name Not In ('Nul', 'Nul:')
And B.database_backup_lsn = @DBBackupLSN
Order By backup_finish_date Desc, backup_set_id;
--select * from @Baks
-- Get the last LSN included in the differential backup,
-- if one was found, or of the full backup
Select Top 1 @DBBackupLSN = last_lsn
From @Baks
Where type In ('D', 'I')
Order By BakID Desc;
-- Get first log backup, if any, for restore, where
-- last_lsn of previous backup is >= first_lsn of the
-- log backup and <= the last_lsn of the log backup
Insert Into @Baks (backup_set_id,
media_set_id,
first_family_number,
last_family_number,
first_lsn,
last_lsn,
database_backup_lsn,
backup_finish_date,
type,
family_sequence_number,
physical_device_name,
device_type)
Select Top(1) With Ties B.backup_set_id,
B.media_set_id,
B.first_family_number,
B.last_family_number,
B.first_lsn,
B.last_lsn,
B.database_backup_lsn,
B.backup_finish_date,
B.type,
BF.family_sequence_number,
BF.physical_device_name,
BF.device_type
From msdb.dbo.backupset B
Inner Join msdb.dbo.backupmediafamily As BF
On BF.media_set_id = B.media_set_id
And BF.family_sequence_number Between B.first_family_number And B.last_family_number
Where B.database_name = @DBName
And B.is_copy_only = 0
And B.type = 'L'
And BF.physical_device_name Not In ('Nul', 'Nul:')
And @DBBackupLSN Between B.first_lsn And B.last_lsn
Order By backup_finish_date, backup_set_id;
-- Get last_lsn of the first log backup that will be restored
Set @DBBackupLSN = Null;
Select @DBBackupLSN = Max(last_lsn)
From @Baks
Where type = 'L';
--select * from @Baks;
-- Recursively get all log backups, in order, to be restored
-- first_lsn of the log backup = last_lsn of the previous log backup
With Logs
As (Select B.backup_set_id,
B.media_set_id,
B.first_family_number,
B.last_family_number,
B.first_lsn,
B.last_lsn,
B.database_backup_lsn,
B.backup_finish_date,
B.type,
BF.family_sequence_number,
BF.physical_device_name,
BF.device_type,
1 As LogLevel
From msdb.dbo.backupset B
Inner Join msdb.dbo.backupmediafamily As BF
On BF.media_set_id = B.media_set_id
And BF.family_sequence_number Between B.first_family_number And B.last_family_number
Where B.database_name = @DBName
And B.is_copy_only = 0
And B.type = 'L'
And BF.physical_device_name Not In ('Nul', 'Nul:')
And B.first_lsn = @DBBackupLSN
and B.First_lsn <> B.Last_Lsn -- Anders 2016-02-26
Union All
Select B.backup_set_id,
B.media_set_id,
B.first_family_number,
B.last_family_number,
B.first_lsn,
B.last_lsn,
B.database_backup_lsn,
B.backup_finish_date,
B.type,
BF.family_sequence_number,
BF.physical_device_name,
BF.device_type,
L.LogLevel + 1
From msdb.dbo.backupset B
Inner Join msdb.dbo.backupmediafamily As BF
On BF.media_set_id = B.media_set_id
And BF.family_sequence_number Between B.first_family_number And B.last_family_number
Inner Join Logs L On L.database_backup_lsn = B.database_backup_lsn
Where B.database_name = @DBName
And B.is_copy_only = 0
And B.type = 'L'
And BF.physical_device_name Not In ('Nul', 'Nul:')
And B.first_lsn = L.last_lsn
and B.First_lsn <> B.Last_Lsn) -- Anders 2016-02-26
Insert Into @Baks (backup_set_id,
media_set_id,
first_family_number,
last_family_number,
first_lsn,
last_lsn,
database_backup_lsn,
backup_finish_date,
type,
family_sequence_number,
physical_device_name,
device_type)
Select backup_set_id,
media_set_id,
first_family_number,
last_family_number,
first_lsn,
last_lsn,
database_backup_lsn,
backup_finish_date,
type,
family_sequence_number,
physical_device_name,
device_type
From Logs
Option(MaxRecursion 0);
-- Select out just the columns needed to script restore
IF OBJECT_ID('tempdb..#Baks') IS NOT NULL
DROP TABLE #Baks
Select RestoreOrder = Row_Number() Over(Partition By family_sequence_number Order By BakID),
RestoreType = Case When type In ('D', 'I') Then 'Database'
When type = 'L' Then 'Log'
End,
DeviceType = Case When device_type in (2, 102) Then 'Disk'
When device_type in (5, 105) Then 'Tape'
End,
PhysicalFileName = replace (physical_device_name, '\\agdata.int\backups\ebusiness\eBiz\MONSANTO\MONSANTO', '\\BU\AndersTest\AndersTest')
into #Baks -- to stop from having to read it twice, this could be done neater
From @Baks
Order By BakID;
Set NoCount on;
set quoted_identifier off;
if @Output in (1,3)
select * from #Baks
if @Output in (2,3)
begin
/****
Anders 2016-11-28
This could be done by going through a temp table to create one output, but that would put a column name on the result
Done this way, the script can simply be copied and pasted to a new query window to be executed
*****/
select "restore database " + @DBName +" from DISK = '" + PhysicalFileName + "' with NORECOVERY" + case when RIGHT(PhysicalFileName, 3) = 'bak' then " , REPLACE" else "" end
from #Baks
where RestoreType = 'Database'
order by RestoreOrder
select "restore log " + @DBName +" from DISK = '" + PhysicalFileName + "' with NORECOVERY"
from #Baks
where RestoreType = 'Log'
order by RestoreOrder
-- recover the database
select "RESTORE DATABASE " +@DBName + " WITH RECOVERY"
end
Set NoCount Off;
END -- proc
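A hypothetical call, with a made-up database name and date, scripting out just the restore statements (@Output = 2):
-- Hypothetical example: generate restore statements for one database,
-- starting from the most recent full backup taken on or before the given date
EXEC master.dbo.sp_RestoreChainByDB2
    @DBName            = 'MyUserDB',
    @Output            = 2,
    @MaxFullBackupDate = '2016-11-27'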
Tuesday, October 11, 2016
T-SQL Tuesday #83. We're still dealing with the same problems.
This month's T-SQL Tuesday is hosted by Andy Mallon at T-SQL Tuesday #83, and the subject is "We're still dealing with the same problems."
In the 20+ years I have been working with MS SQL Server, some issues continue to pop up: misconfigured NIC settings, badly configured servers in general, bad disk configurations, horrible use of tempdb, table variables, SQL Variant... just to name a few.
However, in most of the jobs I have had, I come in as the DBA for a system already in use, sometimes for many years. And many times without there ever having been a person in charge of the database(s). I end up dealing with pre-existing issues in the architecture of the databases.
The one thing I see over and over is a complete lack of foresight in design. Or, as I like to call it, "Spec Fixation": things are developed exactly to specification, without any thought to how the system will be used, or what could be needed in the future.
Two examples are in order, I believe. One worked out; the other, well, is still a work in progress.
Example 1:
While working to maintain listener information, such as age groups, for a large nationwide radio company, the developer I was helping was building it with the 3 age-group buckets hard coded. I rejected the design and coached him on how to make the buckets table driven. He kept insisting that was not what the design called for, to which I replied that I really did not care what the spec said as long as we met the minimum. Since I refused to move anything to production that was not table driven, he finally accepted my solution and implemented it. On the day of the deployment, everything worked the way the business wanted it. However, by 4 PM the business manager came to him and basically said "uhm, you know, we really need more fidelity in this data, can you make it 5 buckets instead of 3?" With 15 minutes of work we had it running with 5 buckets. If we had gone with his original design, the change would have meant adding 2 columns to the table, changes to the front end, etc. Win!
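To illustrate the difference, here is a minimal sketch of what a table-driven bucket design can look like; the table and column names are made up for this post and are not the actual implementation:
-- Hypothetical table-driven age buckets: adding or changing a bucket is an INSERT/UPDATE, not a schema change
CREATE TABLE dbo.AgeBucket (
    AgeBucketID INT IDENTITY(1,1) PRIMARY KEY,
    BucketName  VARCHAR(50) NOT NULL,
    MinAge      INT NOT NULL,
    MaxAge      INT NOT NULL
)
INSERT INTO dbo.AgeBucket (BucketName, MinAge, MaxAge)
VALUES ('18-24', 18, 24), ('25-44', 25, 44), ('45+', 45, 199)
-- Listeners fall into a bucket through a join instead of hardcoded CASE logic
SELECT AB.BucketName, COUNT(*) AS ListenerCount
FROM dbo.Listener L
INNER JOIN dbo.AgeBucket AB
    ON L.Age BETWEEN AB.MinAge AND AB.MaxAge
GROUP BY AB.BucketName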
Example 2:
ETL. The spec said only Address Line 1 needed to be loaded, so the developers only brought that line in (plus name, city, etc.). Fast forward 8 years, and I get a request on my desk: "Please add Address Line 2 to the import, and all tables. Oh, and we need historical data for previously loaded files. And for all address types."
Groan.
No normalization in this database (which is just one of about 40 databases with SIMILAR structure, but not identical).
2 weeks of research, and here is what I found that needed to be changed:
60 tables
10 views
260+ procedures that reference the above tables and views.
1 DTS(!!!!!!!!!!!!!!!) package
1 SSIS package.
And that was just to get data INTO SQL. I also needed to rewrite the process that got the data back out to the client after we did our job with it. Oh, and there was no way to get the historical data loaded.
This could ALL have been prevented if the original developers had just loaded all the address information available in the file from day 1 (all the lines have been in the file since the very first test file we got, which I managed to dig up). Instead it was a major undertaking.
So, please, take a moment to think about how the system will be used, and how it could evolve. It can save major headaches in the future.