Shrink Oversized Data Files in Microsoft SQL Server

We’ve all been there: a giant data file is sitting on a disk drive somewhere and you regret letting it get this big in the first place. If the disk is almost out of space, you may have found some objects you can drop (or some rows you could delete), but is that enough? Probably not. So how do you get out of this sticky situation? SATS’ DBAs can tell you how.

Avoid!
Love yourself and your database a little, so avoid hitting those situations in the first place. Here are a few tips:

Transaction Logs
Transaction logs are often the culprit of space issues, so make sure you back up transaction logs frequently (since a log backup clears them) and instruct your developers to COMMIT (or CHECKPOINT) frequently in their code. If possible, consider running the database in the SIMPLE recovery model by executing the SQL statement below:

ALTER DATABASE [MyDatabase] SET RECOVERY SIMPLE;

The SIMPLE recovery model does not allow point-in-time recovery, but it is sometimes the only way around log bloat. The transaction log can also be truncated and shrunk relatively quickly, if necessary, using the following (note that WITH TRUNCATE_ONLY was removed in SQL Server 2008; on 2008 and later, switch to the SIMPLE recovery model instead before shrinking the log):

BACKUP LOG [MyDatabase] WITH TRUNCATE_ONLY
GO
DBCC SHRINKFILE (MyDatabase_Log,1)

By doing this, you will invalidate your log backup chain, so it is recommended that you take a FULL BACKUP immediately afterwards.
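To restart the chain, take a full backup and then resume routine log backups. A minimal sketch, assuming the database is back in the FULL recovery model (the backup paths are placeholders, adjust them to your environment):

BACKUP DATABASE [MyDatabase] TO DISK = N'X:\Backups\MyDatabase_full.bak' WITH INIT;
GO
BACKUP LOG [MyDatabase] TO DISK = N'X:\Backups\MyDatabase_log.trn';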

Auto-growth
Additionally, you can disable auto-growth, switch to statically sized data files, and install some sort of monitoring (like Cerberus) that will notify you when space is running low. It is even recommended to create several filegroups and grow them manually yourself.
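For example, auto-growth can be switched off per file by setting FILEGROWTH to 0; a sketch, using the data file name from the examples further down:

ALTER DATABASE [MyDatabase]
MODIFY FILE (NAME = MyDatabase_Data, FILEGROWTH = 0);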

Also make sure your monitoring alerts you while there are still actionable options to free up space, i.e. at 90% full as opposed to 100%.
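If you prefer to check yourself rather than rely on a tool, a query along these lines reports how full each file of the current database is (file sizes are stored in 8KB pages, hence the division by 128 to get MB):

SELECT name,
       size / 128 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb,
       CAST(FILEPROPERTY(name, 'SpaceUsed') AS float) * 100 / size AS pct_used
FROM sys.database_files;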

Say no to auto-shrink
Many (lazy) DBAs create jobs to shrink a database whenever there is some free space to reclaim. We have even seen a case where a database was set to auto-grow and auto-shrink simultaneously, burdening all resources. In general, shrinking is not good for your database: it hogs a lot of resources and creates fragmentation. You are better off disabling it like so:

ALTER DATABASE [MyDatabase] SET AUTO_SHRINK OFF;
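You can quickly verify that no database on the instance still has auto-shrink enabled:

SELECT name FROM sys.databases WHERE is_auto_shrink_on = 1;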

Don’t put all your eggs in one basket
Make sure to have a few partitions available and, most importantly, keep the data files separate from the log files and the operating system. There are a number of programs that will let you move files between drives with minimal downtime, even after installation.

Or you can do it manually with Transact-SQL commands, as sketched below.
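A rough sketch of the manual route (the target path E:\SQLLogs\ is a placeholder): point the file’s metadata at the new location, take the database offline, copy the physical file at the OS level, and bring the database back online.

ALTER DATABASE [MyDatabase]
MODIFY FILE (NAME = MyDatabase_Log, FILENAME = N'E:\SQLLogs\MyDatabase_Log.ldf');
GO
ALTER DATABASE [MyDatabase] SET OFFLINE;
GO
-- move the .ldf file to E:\SQLLogs\ at the OS level, then:
ALTER DATABASE [MyDatabase] SET ONLINE;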

Shrinking
Shrinking is the act of taking pages from the end of the data file and moving them as close to the beginning of the file as possible, fixing up all links to them along the way. If it sounds too good to be true…it is: the pages are moved with no regard to their logical order, so this only works cleanly when no clustered or non-clustered indexes are involved. When index pages, those small pieces of SQL data that make our lives better (and our queries faster), are tampered with like this, the result is usually heavy fragmentation. In fact, every type of shrinking other than TRUNCATEONLY is of limited use.
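To see what a shrink has done to your indexes, you can check fragmentation with sys.dm_db_index_physical_stats; a minimal sketch for the current database (the 30% threshold is just a common rule of thumb):

SELECT i.name AS index_name,
       ps.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON ps.object_id = i.object_id AND ps.index_id = i.index_id
WHERE ps.avg_fragmentation_in_percent > 30;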

Even if you do take the time to rebuild all the indexes afterwards, a problem occurs when you try to recover a large amount of space. For example, let’s say we’re trying to recover 50GB. SQL Server will attempt to move 50GB worth of pages and only then try to truncate the end of the file. This could take days, which is useless when you’re running out of space and need to clear some up quickly.

Here’s what you can do
Let’s say our file MyDatabase_Data is around 300GB right now and we need to release about 50GB. Assuming that you tried truncating the file by running the following

USE MyDatabase;
GO
DBCC SHRINKFILE (MyDatabase_Data, TRUNCATEONLY);

and it brought no results, you can resort to the following statement:

DBCC SHRINKFILE (MyDatabase_Data, 250000);

This will run for a very long time, during which you will not regain any space on the drive. Instead, use the following T-SQL script, which shrinks the file in smaller leaps: the total runtime is similar, but each leap releases its space as soon as it completes, so you can use the freed space (and see progress) much sooner:

declare @from int
declare @leap int
declare @to int
declare @datafile varchar(128)
declare @cmd varchar(512)

/* settings */
set @from = 300000                /* Current size in MB */
set @to = 250000                  /* Goal size in MB */
set @datafile = 'MyDatabase_Data' /* Data file name */
set @leap = 1000                  /* Size of leaps in MB */

print '--- SATS SHRINK SCRIPT START ---'
while ((@from - @leap) > @to)
begin
    set @from = @from - @leap
    set @cmd = 'DBCC SHRINKFILE (' + @datafile + ', ' + cast(@from as varchar(20)) + ')'
    print @cmd
    exec(@cmd)
    print '==> SATS SHRINK SCRIPT - ' + cast((@from - @to) as varchar(20)) + 'MB LEFT'
end
set @cmd = 'DBCC SHRINKFILE (' + @datafile + ', ' + cast(@to as varchar(20)) + ')'
print @cmd
exec(@cmd)
print '--- SATS SHRINK SCRIPT COMPLETE ---'
GO

Edit the settings section to fit your needs and execute the script against the database where the data file is located. It will still take about the same total time, but at least you will be able to see the progress of the operation and use the freed space more quickly. After you have shrunk the data, we recommend rebuilding the database’s fragmented indexes and taking the steps needed to avoid this ever happening again. Do not make the leaps too small: keep the total number of steps under 100.
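For the rebuild step, a sketch (dbo.MyLargeTable is a hypothetical table; rebuild whichever indexes the fragmentation check above flags). Keep in mind that rebuilding large indexes needs free space inside the file and can grow it again, so leave some headroom:

ALTER INDEX ALL ON dbo.MyLargeTable REBUILD;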

Got any questions or comments? Feel free to contact SATS directly, or visit us on our site, sats.net.