Being unable to shrink the transaction log file in SQL Server can lead to bloated database files, wasted storage, and potential performance degradation. This guide provides detailed instructions for troubleshooting and resolving the issue, along with best practices to avoid similar problems in the future.

However, shrinking a transaction log file is not recommended as a regular or frequent maintenance task, as it can have some drawbacks.

So, first, let’s try to talk you out of it…

First, shrinking a log file can cause physical fragmentation of the log file, as the free space is released from the end of the file, and the remaining space may not be contiguous. Physical fragmentation can affect the performance of sequential I/O operations on the log file, such as backups and restores.

Second, shrinking a log file can increase the number of VLFs in the log file, as each shrink or growth operation creates a certain number of VLFs based on the size of the operation. Too many VLFs can cause performance problems for operations that involve scanning or traversing the log file, such as log clearing, replication, crash recovery, rollbacks, and even regular DML operations.

Third, shrinking a log file can reduce the performance of the database, as the log file may need to grow again to accommodate the expected workload. Log file growth is costly, as it requires time and resources to allocate and initialize the new space. Log file growth can also cause contention and blocking on the log file as multiple transactions compete for the same VLF.

What Is a VLF?

VLF stands for virtual log file, which is a logical unit of the transaction log file. Each transaction log file is divided into a number of VLFs, which are sequentially numbered and marked as active or inactive. Active VLFs contain log records that have not been backed up or truncated, while inactive VLFs contain log records that are no longer needed and can be reused. The number and size of VLFs depend on the size and growth of the transaction log file.
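
To see how many VLFs a log file currently has, you can query the log metadata. The sketch below uses `sys.dm_db_log_info` (available in SQL Server 2016 SP2 and later); on older versions, `DBCC LOGINFO` returns similar per-VLF information. The database name `MyDatabase` is a placeholder.

```sql
-- Count the VLFs in a database's transaction log
-- (sys.dm_db_log_info is available in SQL Server 2016 SP2+).
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID('MyDatabase'));

-- On older versions, DBCC LOGINFO returns one row per VLF;
-- Status = 2 means the VLF is active, 0 means it can be reused.
DBCC LOGINFO('MyDatabase');
```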

The performance of SQL Server can be affected by the VLF and auto-grow values in the following ways:

  • Having too many VLFs can cause performance problems for operations that involve scanning or traversing the transaction log, such as backups, restores, log clearing, replication, crash recovery, rollbacks, and even regular DML operations. Too many VLFs can also increase the recovery time of the database and delay its availability after a restart or a failover. The optimal number of VLFs depends on the transaction log size, but generally speaking, having thousands of VLFs is considered too many.
  • Having too few VLFs can also cause performance problems for operations that involve writing to the transaction log, such as inserts, updates, and deletes. Too few VLFs can cause contention and blocking on the log file, as multiple transactions compete for the same VLF. The optimal number of VLFs depends on the workload and concurrency of the database, but generally speaking, having fewer than 50 VLFs is considered too few.
  • Frequent auto-grow events can cause performance problems for the database, as each requires some time and resources to allocate and initialize the new space. Frequent auto-grow events can also cause physical fragmentation of the log file, as the new space may not be contiguous with the existing space. Physical fragmentation can degrade the performance of sequential I/O operations on the log file, such as backups and restores. Frequent auto-grow events can also create more VLFs than necessary, as each auto-grow event adds a certain number of VLFs based on the growth size.

To avoid these performance problems, it is recommended to follow these best practices:

  • Set the initial size and the auto-grow value of the transaction log file to a reasonable and appropriate value based on the expected workload and growth rate of the database. Avoid using default or minimal values that can cause frequent auto-grow events. Also, avoid using very large values that can cause excessive VLFs or waste disk space.
  • Monitor the transaction log file size and the number of VLFs regularly and adjust them as needed. Use the DBCC LOGINFO command to check the number and status of the VLFs in the log file. Use the DBCC SHRINKFILE command to shrink the log file and reduce the number of VLFs if necessary. However, avoid shrinking the log file routinely, as repeated shrink-and-grow cycles cause fragmentation and performance degradation. Only shrink the log file after a one-time operation that caused unusual growth, such as a bulk load or an index rebuild.
  • Pre-grow the transaction log file to the desired size and create a balanced number of VLFs.

    You can use the following formula to estimate a reasonable number of VLFs for a given log file size: VLFs = LOG(size / 4, 2) * 2 + 2, where size is the log file size in GB and LOG(x, 2) is the base-2 logarithm.

    For example, if the log file size is 8 GB, the target number of VLFs is LOG(8 / 4, 2) * 2 + 2 = 1 * 2 + 2 = 4.

    To pre-grow the log file, create the optimal number of VLFs, and prevent future auto-grow events from creating too many or too few VLFs, you can use the following steps:
  • Set the auto-grow value of the log file to a fixed amount of 4 GB or 8 GB, depending on the log file size. This will create 8 VLFs per auto-grow event, each with a size of 512 MB or 1 GB, respectively.
  • Grow the log file manually in 4 GB or 8 GB chunks, respectively (i.e., grow to 8 GB, then extend it to 16 GB, then to 24 GB, and so forth), so that your VLFs have a reasonable, uniform size (512 MB in this case) and the total VLF count stays close to optimal.
  • Reset the auto-grow value of the log file to the original value or a more appropriate value (for example, 128 MB) based on the workload and growth rate of the database.
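
The pre-grow procedure above can be sketched in T-SQL. The database name (`MyDatabase`), logical log file name (`MyDatabase_log`), and the 16 GB target size are placeholders; adjust them to your environment.

```sql
-- Step 1: set auto-grow to a fixed 4 GB increment
-- (each growth event then creates 8 VLFs of 512 MB).
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, FILEGROWTH = 4GB);

-- Step 2: grow the log manually in 4 GB chunks up to the target size.
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, SIZE = 8GB);

ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, SIZE = 12GB);

ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, SIZE = 16GB);

-- Step 3: reset auto-grow to a routine value, e.g. 128 MB.
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, FILEGROWTH = 128MB);
```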

The best practice is to avoid shrinking the transaction log file unless it is necessary to correct the effects of incorrect auto-growth settings or to reclaim disk space after a one-time operation that caused unusual log file growth. In that case, shrink the log file once, then pre-grow it to the desired size using the method described above, sized only to accommodate the normal workload and growth rate of the database. You do not need to shrink your transaction logs continually.

Reasons for the “Unable to Shrink Log File” Issue

  • Recovery model: The database’s recovery model determines when and how the log records are truncated and backed up. The simple recovery model truncates the log records automatically after a checkpoint operation, which occurs frequently and does not depend on the user. The full and bulk-logged recovery models truncate the log records only after a log backup operation, which relies on the user and the backup schedule. The simple recovery model requires less log space and maintenance but does not support point-in-time recovery or high-availability features. The full and bulk-logged recovery models require more log space and maintenance, but they support point-in-time recovery and high availability features.
  • Log backup frequency: In the full and bulk-logged recovery models, the log backup frequency determines how often the log records are backed up and truncated. It should be based on the database’s recovery point objective (RPO) and log file size. More frequent log backups require less log space and allow faster log truncation and reuse, but they consume more disk space and network bandwidth and may affect database performance. Less frequent log backups have the opposite trade-off: they require more log space and slow log truncation and reuse, but consume fewer resources and have less impact on performance. The optimal log backup frequency balances log space against backup overhead, in line with the database’s RPO and recovery time objective (RTO).
  • Long-running transactions: If a transaction is active for a long time, it can prevent the log records from being truncated until it is committed or rolled back. This can cause the log file to grow, consume more disk space, and delay log reuse and backup. You can use the DBCC OPENTRAN command to identify the database’s oldest active transaction and start time. You can also use the [log_reuse_wait_desc] column in the [sys].[databases] catalog view to check if the log is waiting for a transaction to end.
  • Replication: If the database is involved in replication, the log records required for replication cannot be truncated until they are distributed to the subscribers. This can cause the log file to grow, consume more disk space, and delay log reuse and backup. You can use the [log_reuse_wait_desc] column in the [sys].[databases] catalog view to check if the log is waiting for replication. You can also use the Replication Monitor or the [sp_replcounters] stored procedure to check the status and performance of the replication agents and the replication latency.
  • Database mirroring: If the database is involved in database mirroring, the log records required for mirroring cannot be truncated until they are sent to the mirror server and hardened on the mirror database. This can cause the log file to grow, consume more disk space, and delay log reuse and backup. You can use the [log_reuse_wait_desc] column in the [sys].[databases] catalog view to check if the log is waiting for mirroring. You can also use the Database Mirroring Monitor or the [sys.database_mirroring] catalog view to check the status and performance of the database mirroring session and the mirroring latency.
  • Always On Availability Groups: If the database is involved in Always On Availability Groups, the log records that are required for availability groups cannot be truncated until they are sent to the secondary replicas and hardened on the secondary databases. This can cause the log file to grow, consume more disk space, and delay log reuse and backup. You can use the [log_reuse_wait_desc] column in the [sys].[databases] catalog view to check if the log is waiting for availability groups. You can also use the Always On Dashboard or the sys.dm_hadr_database_replica_states dynamic management view to check the status and performance of the availability groups and the availability group latency. You should also consider the impact of the availability mode and the backup preference settings on the log truncation and reuse.
  • VLF count: If the log file has many VLFs, it can prevent it from shrinking, even after log truncation. This is because the log file can only be shrunk from the end, and if the last VLF is active, the log file cannot be reduced. You can use the DBCC LOGINFO command to check the number and status of the VLFs in the log file. If the log file has too many VLFs, you can use the DBCC SHRINKFILE command to shrink the log file and reduce the number of VLFs. However, you should also consider the impact of the VLF and auto-grow values on the performance and growth of the log file, as explained in the above sections.
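
To find out which of the factors above is currently blocking log truncation, you can check the [log_reuse_wait_desc] column and look for old open transactions. The database name below is a placeholder.

```sql
-- Why can't the log be truncated right now?
-- Typical values: NOTHING, LOG_BACKUP, ACTIVE_TRANSACTION,
-- REPLICATION, DATABASE_MIRRORING, AVAILABILITY_REPLICA.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyDatabase';

-- Identify the oldest active transaction in the database, if any.
DBCC OPENTRAN('MyDatabase');
```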

Methods to Shrink Log File

1. Take a Backup of the Transaction Log

  • Take a log backup of the database.
  • Once the backup completes, attempt to shrink the log file.
  • If that fails, go to method 2.
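
A minimal sketch of this method in T-SQL; the database name, logical log file name, backup path, and the 1 GB target size are all placeholders.

```sql
-- Step 1: back up the transaction log so inactive VLFs can be reused.
BACKUP LOG MyDatabase
TO DISK = N'D:\Backups\MyDatabase_log.trn';

-- Step 2: shrink the log file to a 1 GB target (size is in MB).
USE MyDatabase;
DBCC SHRINKFILE (MyDatabase_log, 1024);
```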

2. Switch to Simple Recovery Model

  • Change the database recovery model to simple.
  • Shrink the log file; in the simple recovery model, this does not require log backups.
  • Change the database recovery model back to full.
  • Take a full backup of the database to restart the log backup chain.
  • Take a transaction log backup of the database.
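
These steps can be sketched as follows. The database name, logical file name, backup paths, and target size are placeholders; note that switching to simple breaks the log backup chain, which is why the full backup afterwards is required.

```sql
-- Step 1: switch to the simple recovery model (breaks the log chain).
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;

-- Step 2: shrink the log file to a 1 GB target (size is in MB).
USE MyDatabase;
DBCC SHRINKFILE (MyDatabase_log, 1024);

-- Step 3: switch back to the full recovery model.
ALTER DATABASE MyDatabase SET RECOVERY FULL;

-- Step 4: take a full backup to restart the log backup chain.
BACKUP DATABASE MyDatabase
TO DISK = N'D:\Backups\MyDatabase_full.bak';

-- Step 5: resume transaction log backups.
BACKUP LOG MyDatabase
TO DISK = N'D:\Backups\MyDatabase_log.trn';
```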

Conclusion

In this blog post, we have learned how to shrink transaction log files in SQL Server and what factors can affect log truncation, reuse, and growth. Shrinking transaction log files can be useful for reclaiming disk space and improving performance. However, it should be done with caution and planning, as it can also cause fragmentation, increased VLF count, and reduced performance. The best practice is to monitor and manage the transaction log file size by choosing the appropriate recovery model, performing regular log backups, avoiding long-running transactions, and configuring the auto-growth settings. We have also learned how to optimize the VLF and auto-grow values to prevent performance problems and ensure optimal log file growth and reuse.

I hope you enjoyed reading this blog post and learned something new. If you have any questions or feedback, please feel free to comment below or contact us at XTIVIA.
