Devart Blog

SQL Server

How to automatically generate SQL Server documentation

Posted on November 2nd, 2016

In this blog post, we will learn how to quickly create comprehensive documentation for a SQL Server database using dbForge Documenter for SQL Server, a database documentation generator from Devart.

Generating database documentation automatically with dbForge Documenter for SQL Server has many benefits. You don’t have to spend hours retrieving and writing technical information from scratch: Documenter builds accurate, error-free documentation in just a few minutes. Besides, the documentation tool from Devart provides a rich set of features for customizing the documentation output to meet your specific requirements.

Documenter presents documentation in an easy-to-view format, so you can share it with your boss, clients, other developers, DBAs, testers, project managers, business executives, and other stakeholders.

The following example demonstrates how to generate documentation for the AdventureWorks2012 sample database.


SQL Server 2016 Stretch Database

Posted on July 12th, 2016

On June 1, SQL Server 2016 was released. It brought a large number of new features, including the long-announced Stretch Database technology, which allows migrating data from SQL Server to Azure dynamically.

From a marketing point of view, Stretch Database has been promoted well. The reasonable argument was that as historical data grows, the complexity and cost of maintaining it grow as well. Automatic migration of aging archival data to the cloud was introduced as a rational solution to the problem. Frankly speaking, I liked the idea.

Starting with SQL Server 2016 RC0, I began testing the Stretch Database technology on two projects I’m helping to develop. The first is TMetric, a free time tracker with an OLTP load; the second is an internal project with a DW load.
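The excerpt doesn’t show the setup itself; as a rough sketch (the table name dbo.SalesHistory is illustrative, and the Azure-side configuration with ALTER DATABASE … SET REMOTE_DATA_ARCHIVE is assumed to be done already), stretching a table looks like:

```sql
-- Allow the instance to use remote data archiving
EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;

-- Start migrating eligible rows of an (illustrative) archive table to Azure
ALTER TABLE dbo.SalesHistory
    SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));
```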

SQL Server 2016 RC0

Posted on March 15th, 2016

The news about Microsoft’s intention to bring SQL Server to Linux keeps floating around, but I haven’t heard a single word about SQL Server 2016 Release Candidate, which recently became available for download.

The release of SQL Server 2016 RTM is set for next month. So, in this overview, I will cover some of the new features available in the new version of SQL Server: installation differences, default trace flags, new functionality, and a killer feature for execution plan analysis.

Is enabling AUTO_CLOSE a bad idea?

Posted on February 12th, 2016

From a personal perspective, running a production database with the AUTO_CLOSE option enabled is not a best practice. Let me explain why you should not enable AUTO_CLOSE and what the consequences of using this option are.
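A quick way to see which databases run with this option, and to turn it off (the database name below is illustrative), is:

```sql
-- List databases with AUTO_CLOSE enabled
SELECT name
FROM sys.databases
WHERE is_auto_close_on = 1;

-- Disable it for a specific database (name is illustrative)
ALTER DATABASE TestDb SET AUTO_CLOSE OFF;
```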

The other day, I had to look at the Error Log on a test server. After a two-minute wait, I saw a huge number of messages stored in the log, so I decided to check the log sizes using xp_enumerrorlogs:

CREATE TABLE #ErrorLogs (log_id INT, last_log DATETIME, size BIGINT)

INSERT INTO #ErrorLogs
EXEC sys.xp_enumerrorlogs

SELECT log_id, last_log, size_mb = size / 1048576.
FROM #ErrorLogs

log_id   last_log              size_mb
-------- --------------------- ---------------
0        2016-01-05 08:46:00   567.05288505
1        2015-12-31 12:53:00   1370.39249420
2        2015-12-18 11:32:00   768.46394729
3        2015-12-02 13:54:00   220.20050621
4        2015-12-02 13:16:00   24.04152870
5        2015-11-16 13:37:00   80.07946205
6        2015-10-22 12:13:00   109.33527946
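When the current log has grown out of hand, one standard remedy (not shown in the excerpt above) is to cycle the error log, which closes the current file and starts a new one:

```sql
-- Close the current error log and start a new one;
-- older logs are rotated (ERRORLOG.1, ERRORLOG.2, ...)
EXEC sys.sp_cycle_errorlog;
```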


What is faster inside SUM & AVG: 0 or NULL?

Posted on January 18th, 2016

There are three aggregate functions that are most often used in practice: COUNT, SUM and AVG. The first one has already been discussed previously, while the other two have interesting performance nuances. But first, some theory…

When aggregate functions appear in an execution plan, one of two operators can be used, depending on the input stream: Stream Aggregate or Hash Match.

Stream Aggregate may require a pre-sorted set of input values, but it does not block the execution of subsequent operators.

In turn, Hash Match is a blocking operator (with rare exceptions) that does not require a sorted input stream. Hash Match uses a hash table created in memory, and if the expected number of rows is estimated incorrectly, the operator can spill results to tempdb.
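As a rough illustration of the two operators (AdventureWorks tables used as an example; the actual plan always depends on available indexes and cardinality estimates):

```sql
-- Grouping on a column with a supporting index lets the optimizer
-- feed already-ordered rows into a Stream Aggregate
SELECT ProductID, SUM(OrderQty)
FROM Sales.SalesOrderDetail
GROUP BY ProductID;

-- Grouping on an unindexed column typically produces
-- a Hash Match (Aggregate) instead
SELECT CarrierTrackingNumber, SUM(OrderQty)
FROM Sales.SalesOrderDetail
GROUP BY CarrierTrackingNumber;
```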

Find invalid objects in your databases

Posted on January 4th, 2016

A DBA has a number of duties, primarily aimed at maintaining database performance and data consistency. The administrator can easily verify data consistency with the DBCC CHECKDB command; however, finding invalid objects in a database schema can be more difficult.

Oracle, for instance, allows you to get a list of invalid objects:

SELECT owner, object_type, object_name
FROM all_objects
WHERE status = 'INVALID'

SQL Server doesn’t let you do that directly. In most cases, you need to execute a script to see that an object is invalid, which is very inconvenient.
So let’s create a script that searches for invalid objects:
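A minimal sketch of the idea (not the article’s full script; 'dbo.MyView' is an illustrative module name): sys.dm_sql_referenced_entities reports referenced_id as NULL for references it cannot resolve, which is one way to spot a broken dependency.

```sql
-- Check one module for references to objects that no longer exist.
-- referenced_id is NULL when the reference cannot be resolved.
SELECT re.referenced_schema_name, re.referenced_entity_name
FROM sys.dm_sql_referenced_entities('dbo.MyView', 'OBJECT') re
WHERE re.referenced_id IS NULL
  AND re.is_caller_dependent = 0;
```

Iterating this over all modules in sys.sql_modules needs error handling, since the DMF can raise errors for modules it cannot parse.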

How to reduce MSDB size from 42Gb to 200Mb

Posted on December 22nd, 2015

Recently, I got a spare minute to see why an old test server was running so slowly. The server wasn’t my responsibility, but I was very curious to find out what was wrong with it.

First, I opened Resource Monitor and looked at the overall load. The sqlservr.exe process took up 100% of CPU and generated a large disk queue exceeding 300, whereas any value greater than 1 is already considered problematic.

When analyzing disk activity, I observed continuous IO operations in msdb:


I looked at the size of msdb:

SELECT name, size = size * 8. / 1024, space_used = FILEPROPERTY(name, 'SpaceUsed') * 8. / 1024
FROM sys.database_files

and switched to “facepalm” mode:

name         size           space_used
------------ -------------- ---------------
MSDBData     42626.000000   42410.374395
MSDBLog      459.125000     6.859375

The data file takes up 42 GB… After a short break, I began to investigate the reason for such an “unhealthy” msdb size, and how to resolve the server performance problems.
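One frequent cause of an oversized msdb (an assumption here, since the excerpt ends before the diagnosis) is accumulated backup and restore history, which can be purged with a documented system procedure:

```sql
-- Delete backup/restore history older than the given date from msdb
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = '2015-01-01';
```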

XML, XQuery & Performance Issues

Posted on December 21st, 2015

In this article, we will discuss some pitfalls related to XML and XQuery that may lead to performance issues.

The following code will generate a test XML file:
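The code itself is truncated in this excerpt; as a minimal illustrative stand-in (not the article’s script), T-SQL can produce test XML from a system view with FOR XML:

```sql
-- Build a small XML document from a system catalog view
SELECT TOP (5) o.name, o.object_id
FROM sys.objects o
FOR XML PATH('obj'), ROOT('objects');
```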

What is the fastest way to calculate the record COUNT?

Posted on December 15th, 2015

I have always liked simple questions with lots of pitfalls. Here is one: how do you count the total number of records in a table? At first sight, it’s a snap, but if you dig a little deeper, you discover quite a few peculiar nuances.

So, let’s start with a simple thing. Do the following queries differ in terms of the end result?

SELECT COUNT(*) FROM Sales.SalesOrderDetail
SELECT COUNT_BIG(*) FROM Sales.SalesOrderDetail


IT NonStop 2015 – Kharkiv (Ukraine)

Posted on December 4th, 2015

The international IT conference IT NONSTOP was held in Kharkiv (Ukraine) on November 28, 2015 and gathered a lot of enthusiastic participants.

Sergey Syrovatchenko (SQL Server DBA / DB Consultant at Devart), one of the conference speakers, talked to the participants about “Mortal sins in SQL Server development”. Here is a brief summary of his talk:

“When people start dealing with something new, they go through a thorny path of trial and error. In my talk, I tried to list the ‘traps’ I fell into many times when I had just started working with SQL Server. Typical errors related to filtering values, implicit type conversion, and the proper use of indexes were briefly discussed…”

Awarding a dbForge Studio license during the event was a pleasant bonus.