Articles

Database Engine Tuning Advisor (DTA)

In DBA, SQL Server on April 12, 2008 by bharaniszone


For a complex database with dozens or hundreds of tables, it isn’t so easy to identify the best indexes to create. The Database Engine Tuning Advisor (DTA) can be used to provide a little more science to this monumental task.

The DTA replaces the Index Tuning wizard from SQL Server 2000. It can be used to:

  • Recommend indexes (clustered and nonclustered) to create
  • Recommend indexes to drop
  • Recommend indexed views
  • Analyze the effects of the proposed changes
  • Provide reports summarizing the changes and the effects of the changes

Setup and Configuration

You can launch DTA from SSMS by selecting it from the Tools drop-down menu. Once you launch it, you will be prompted to identify the instance you want to connect with. You will then see a display similar to the figure below.

[Figure: Database Engine Tuning Advisor]

If you’ve run DTA before, the previous results will appear in the left pane. At any time, you can review the recommendations from previous runs. In the right pane, DTA starts in the General properties page. The session name defaults to the username and the date and time the session was created.

Workload

The workload is what we are measuring. It is a set of T-SQL statements that is executed against the database that we are tuning. Workloads can be derived from trace files or trace tables, a T-SQL script file, or XML files.

In many environments where a production server exists that can’t be taken offline, a test bed is created. Ideally, this server will have similar hardware and up-to-date databases (created from recent backups).

Run SQL Profiler on a test server and capture all the activity on an instance of a production server. Each statement executed against the production server can be captured and then saved in either a trace file or trace table. If you do this during live activity, you can create a file duplicating actual activity on your production server. This is a realistic workload file.

One of the drawbacks of files captured with SQL Profiler is that the files can be quite large. This is one of the reasons we’d save it on a test bed server. Executing DTA on the production server with this large trace file can load it down. Instead we’d execute DTA on the test server.
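If you have captured activity to a trace file, you can also load it into a table with T-SQL before handing it to DTA; a trace table is easier to filter down to just the statements you care about. A minimal sketch, where the file path and table name are illustrative:

```sql
-- Load a Profiler trace file into a table so it can be filtered
-- and then used as a DTA workload (trace-table option).
-- C:\ProductionActivity.trc and dbo.WorkloadTrace are example names.
SELECT *
INTO   dbo.WorkloadTrace
FROM   fn_trace_gettable('C:\ProductionActivity.trc', DEFAULT) ;
```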

Tuning Options

This page is divided into three sections.

  • Physical Design Structures (PDS) to use in database. This allows you to choose which indexes and/or indexed views to recommend.
  Tip  The first time I came across the phrase “Physical Design Structures” (PDS), I spent a good deal of time trying to define it. To save you some searching, Physical Design Structures are simply indexes and indexed views.
  • Partitioning strategy to employ. Here you can allow the DTA to recommend partitioning. Partitioning strategies can be recommended based on two different goals: performance or maintenance:
    • Full partitioning will recommend partitions for the best performance of the workload.
    • Aligned partitioning will recommend partitions with the added goal of making the partitions easy to maintain.
  • Physical Design Structures to keep in database. You can specify whether you want existing indexes or indexed views to be kept.

Recommendations

Once DTA is run, it provides recommendations. You can immediately apply the recommendations, save the recommendations as a T-SQL script file, or evaluate the recommendations.

In the following exercise, we’ll have DTA analyze a query to determine if the database could benefit from an added index when running it. We’ll allow it to recommend partitions based on best performance. Since we’re running only a single query, we’ll keep existing Physical Design Structures.

Running DTA

  1. Launch SSMS.
  2. From the Tools drop-down menu, select Database Engine Tuning Advisor. Click Connect.
  3. Create a workload file that will be used by DTA:
     1. Using Windows Explorer, browse to the root of C:\ on your hard drive.
     2. With C:\ selected, right-click in the right-hand pane, point to New, and select Text Document.
     3. Rename the text file Workload.sql. In the Rename warning dialog box, click Yes.
     Caution  If file extensions are not showing in Windows Explorer, the file will actually be saved as Workload.sql.txt. This is not what you want. To enable file extensions to be shown within Windows Explorer, select Tools | Folder Options | View tab, and deselect the check box "Hide extensions for known file types."
     4. Right-click the Workload.sql file and click Edit. If the Edit option is not available, select Open With and choose Notepad. Enter the following script into the file:

        USE AdventureWorks;
        GO
        SELECT * FROM Sales.Customer;

     5. Save and close the file.
  4. In DTA, on the General page, accept the default session name. In the Workload section, ensure File is selected and type c:\Workload.sql to use the file you just created as the workload.
  5. In "Select databases and tables to tune," click the link next to AdventureWorks labeled "Click to select individual tables." Select the Customer table in the Sales schema.
  6. For "Database for workload analysis," select AdventureWorks. Your display should look similar to the figure below.

[Figure: The General options page in DTA]

  7. Click the Tuning Options tab. Our goal is to identify only missing nonclustered indexes, keep existing indexes, and allow recommendations for partitions based on best performance:
     1. In "Physical Design Structures (PDS) to use in database," select Nonclustered Indexes.
     2. In "Partitioning strategy to employ," select Full Partitioning. This will allow DTA to recommend partitions based on best performance.
     3. In "Physical Design Structures (PDS) to keep in database," select Keep All Existing PDS. Your display should look similar to the figure below.

[Figure: DTA Tuning Options]

  8. Click the Advanced Options button. Notice that you can have DTA make online or offline recommendations. Accept the default of "All recommendations are offline" and click OK.
  9. Click the Start Analysis button to begin the analysis. After a moment, DTA will complete and provide recommendations.
  10. On the Recommendations tab, you'll see that an index is recommended.
  11. Click the Actions drop-down menu and select Save Recommendations. In the Save As dialog box, browse to c:\ and name the file DTARecommendations.sql.
  12. Using Windows Explorer, browse to the root of c:\ and open the DTARecommendations.sql file. You'll see that it includes a script to implement the recommendations.


Transaction log full solution

In DBA, SQL Server on March 9, 2008 by bharaniszone

If a database's transaction log runs out of space, which is indicated in the SQL Server ERRORLOG files, use the following process:

1. Free up (deallocate) the space used by the LOG portion of the database with the following command:

USE <database>
GO
BACKUP LOG <database> WITH TRUNCATE_ONLY
GO

Notes:

1. After you truncate a database LOG file, the SQL Server documentation recommends that you back up your database. In case of a physical failure (for example, a power loss or hard disk error), SQL Server cannot recover from the transaction log, as it was just truncated.

2. After running this command, the LDF file has been reorganized to have a lot of unallocated space, but the database must be shrunk to release that space to the file system. (It still looks like a large file if you view it from a command prompt directory listing). See next example for how to shrink the database.
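The full backup recommended in the first note might look like the following. This is a sketch, where the disk path and backup file name are illustrative:

```sql
-- Take a full database backup immediately after truncating the log,
-- since the truncated log can no longer be used for recovery.
BACKUP DATABASE <database>
TO DISK = 'C:\Backups\database_full.bak' ;
GO
```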

Shrinking a Database

You can shrink a database to release the unallocated or unused space (or both) to the file system with the following command:

USE <database>
GO
DBCC SHRINKDATABASE (<database>)
GO

You can also use SQL Enterprise Manager to shrink a database: right-click the database, point to All Tasks, and select Shrink Database (you can also shrink just the log file from there), but this is not recommended for many reasons.
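If you only need to shrink the log file rather than the whole database, DBCC SHRINKFILE is the more targeted option. A sketch, assuming the logical log file name follows the common <database>_log convention and a 100 MB target size:

```sql
USE <database>
GO
-- Look up the logical file names for this database first
SELECT name, size FROM sysfiles ;
GO
-- Shrink just the log file to a target of roughly 100 MB
DBCC SHRINKFILE (<database>_log, 100)
GO
```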


Database Programming Techniques

In DBA, SQL Server on January 12, 2008 by bharaniszone

The goal of defensive database programming is to produce resilient database code; in other words, code that does not contain bugs and is not susceptible to being broken by unexpected use cases, small modifications to the underlying database schema, changes in SQL Server settings, and so on.

If you fail to program defensively, then code that runs as expected on a given standalone server, with a specific configuration, may run very differently in a different environment, under different SQL Server settings, against different data, or under conditions of concurrent access. When this happens, you will be susceptible to erratic behavior in your applications, performance problems, data integrity issues and unhappy users.

The process of reducing the number of vulnerabilities in your code, and so increasing its resilience, is one of constantly questioning the assumptions on which your implementation depends, ensuring they are always enforced if they are valid, and removing them if not. It is a process of constantly testing your code, breaking it, and then refining it based on what you have learned.

The best way to get a feel for this process, and for how to expose vulnerabilities in your code and fix them using defensive programming techniques, is to take a look at a few common areas where I see that code is routinely broken by unintended use cases or erroneous assumptions:

Unreliable search patterns
Reliance on specific SQL Server environment settings
Mistakes and ambiguity during data modifications
In each case, we’ll identify the assumptions that lead to code vulnerability, and show how to fix them. All the examples in this article are as simple as possible in that there is no concurrency and the underlying database schema is fixed.

My forthcoming book on this subject introduces many of the additional dangers that can arise when exposing the code to changes in the database schema, running it under high concurrency, and so on.

Reducing Code Vulnerability
There are four key elements to defensive database programming that, when applied, will allow you to eliminate bugs and make your code less vulnerable to being subsequently broken by cases of unintended use:

Define and understand your assumptions
Test as many use cases as possible
Lay out your code in short, fully testable, and fully tested modules
Reuse your code whenever feasible, so that the code to solve a given problem is implemented in one place only
While I will occasionally make brief mention of the sort of checks and tests that ought to be included in your unit tests (steps 2 and 3), this article is focused on defensive programming, and so on the rigorous application of the first two principles.

Define your Assumptions
One of the most damaging mistakes made during the development of SQL code, and any other code, is a failure to explicitly define the assumptions that have been made regarding how the code should operate, and how it should respond to various inputs. Specifically, we must:

Explicitly list the assumptions that have been made
Ensure that these assumptions always hold
Systematically remove assumptions that are not essential, or are incorrect
When identifying these assumptions, there can be one of three possible outcomes. Firstly, if an assumption is deemed essential, it must be documented, and then tested rigorously to ensure it always holds; I prefer to use unit tests to document such assumptions. Failure to do so will mean that when the code makes it into production it will inevitably be broken as a result of usage that conflicts with the assumption.

Secondly, if the assumption is deemed non-essential, it should, if possible, be removed. Finally, in the worst case, the code may contain assumptions that are simply wrong, and can threaten the integrity of any data that the code modifies. Such assumptions must be eliminated from the code.

Rigorous Testing
As we develop code, we must use all our imagination to come up with cases of unintended use, trying to break our modules. We should incorporate these cases into our testing suites.

As we test, we will find out how different changes affect code execution and learn how to develop code that does not break when "something", for example a language setting or the value of ROWCOUNT, changes.

Having identified a setting that breaks one of our code modules, we should fix it and then identify and fix all other similar problems in our code. We should not stop at that. The defensive programmer must investigate all other database settings that may affect the way the code runs and then review and amend the code again and again, fixing potential problems before they occur. This process usually takes a lot of iterations, but every time we end up with better, more robust code and we will save a lot of potential wasted time in troubleshooting problems, as well as expensive retesting and redeployment, when the code is deployed to production.

Throughout the rest of this article, we’ll discuss how this basic defensive coding philosophy is applied in practice, by way of some simple practical examples.

Defending Against Cases of Unintended Use
All-too-often, we consider our code to be finished as soon as it passes a few simple tests. We do not take enough time to identify and test all possible, reasonable use cases for our code. When the inevitable happens, and our code is used in a way we failed to consider, it does not work as expected.

To demonstrate these points, we’ll consider an example that shows how (and how not) to use string patterns in searching. We’ll analyze a seemingly working stored procedure that searches a Messages table, construct cases of unintended use, and identify an implicit assumption on which the implementation of this procedure relies. We will then need to decide whether to eliminate the assumption or to guarantee that it always holds. Either way, we will end up with a more robust procedure.

Listing 1 contains the code needed to create a sample Messages table, which holds the subject and body of various text messages, and load it with two sample messages. It then creates the stored procedure, SelectMessagesBySubjectBeginning, which will search the messages using a search pattern based on the LIKE keyword. The stored procedure takes one parameter, SubjectBeginning, and is supposed to return every message whose subject starts with the specified text.

CREATE TABLE dbo.Messages
(
  MessageID INT IDENTITY(1,1) NOT NULL
    PRIMARY KEY,
  Subject VARCHAR(30) NOT NULL,
  Body VARCHAR(100) NOT NULL
) ;
GO
INSERT INTO dbo.Messages
  ( Subject, Body )
SELECT 'Next release delayed',
       'Still fixing bugs'
UNION ALL
SELECT 'New printer arrived',
       'By the kitchen area' ;
GO
CREATE PROCEDURE dbo.SelectMessagesBySubjectBeginning
  @SubjectBeginning VARCHAR(30)
AS
SET NOCOUNT ON ;
SELECT Subject, Body
FROM dbo.Messages
WHERE Subject LIKE @SubjectBeginning + '%' ;

Listing 1: Creating and populating the Messages table along with the stored procedure to search the messages

Some preliminary testing against this small set of test data, as shown in Listing 2, does not reveal any problems.

-- must return one row
EXEC dbo.SelectMessagesBySubjectBeginning
    @SubjectBeginning = 'Next' ;

Subject                        Body
------------------------------ -------------------
Next release delayed           Still fixing bugs

-- must return one row
EXEC dbo.SelectMessagesBySubjectBeginning
    @SubjectBeginning = 'New' ;

Subject                        Body
------------------------------ -------------------
New printer arrived            By the kitchen area

-- must return two rows
EXEC dbo.SelectMessagesBySubjectBeginning
    @SubjectBeginning = 'Ne' ;

Subject                        Body
------------------------------ -------------------
Next release delayed           Still fixing bugs
New printer arrived            By the kitchen area

-- must return nothing
EXEC dbo.SelectMessagesBySubjectBeginning
    @SubjectBeginning = 'No Such Subject' ;

Subject                        Body
------------------------------ -------------------

Listing 2: A few simple tests against the provided test data demonstrate that results match expectations

Handling Special Characters in Searching
In defensive database programming, it is essential to construct cases of unintended use with which to break our code. The test data in Listing 1 and the stored procedure calls in Listing 2 demonstrate the cases of intended use, and clearly the procedure works, when it is used as intended.

However, have we considered all the possible cases? Will the procedure continue to work as expected in cases of unintended use? Can we find any hidden bugs in this procedure? In fact, it is embarrassingly easy to break this stored procedure, simply by adding a few “off topic” messages to our table, as shown in Listing 3.

INSERT INTO dbo.Messages
  ( Subject, Body )
SELECT '[OT] Great vacation in Norway!',
       'Pictures already uploaded'
UNION ALL
SELECT '[OT] Great new camera',
       'Used it on my vacation' ;
GO

-- must return two rows
EXEC dbo.SelectMessagesBySubjectBeginning
    @SubjectBeginning = '[OT]' ;

Subject                        Body
------------------------------ -------------------

Listing 3: Our procedure fails to return “off topic” messages

Our procedure fails to return the expected messages. In fact, by loading one more message, as shown in Listing 4, we can demonstrate that this procedure can also return incorrect data.

INSERT INTO dbo.Messages
  ( Subject, Body )
SELECT 'Ordered new water cooler',
       'Ordered new water cooler' ;

EXEC dbo.SelectMessagesBySubjectBeginning
    @SubjectBeginning = '[OT]' ;

Subject                        Body
------------------------------ ------------------------
Ordered new water cooler       Ordered new water cooler

Listing 4: Our procedure returns the wrong messages when the search pattern contains [OT]

When using the LIKE keyword, square brackets (“[” and “]”), are treated as wildcard characters, denoting a single character within a given range or set. As a result, while the search was intended to be one for off-topic posts, it in fact searched for “any messages whose subject starts with O or T”. Therefore Listing 3 returns no rows, since no such messages existed at that point, whereas Listing 4 “unexpectedly” returns the message starting with “O”, rather than the off-topic messages.
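This wildcard behavior is easy to verify on its own, without the Messages table at all:

```sql
-- '[OT]' in a LIKE pattern matches a single character that is
-- either O or T, so 'Ordered...' matches and '[OT] ...' does not.
SELECT CASE WHEN 'Ordered new water cooler' LIKE '[OT]%'
            THEN 'matches' ELSE 'no match' END AS OrderedRow,
       CASE WHEN '[OT] Great new camera' LIKE '[OT]%'
            THEN 'matches' ELSE 'no match' END AS OffTopicRow ;
-- OrderedRow: matches    OffTopicRow: no match
```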

In a similar vein, we can also prove that the procedure fails for messages with the % sign in subject lines, as shown in Listing 5.

INSERT INTO dbo.Messages
  ( Subject, Body )
SELECT '50% bugs fixed for V2',
       'Congrats to the developers!'
UNION ALL
SELECT '500 new customers in Q1',
       'Congrats to all sales!' ;
GO

EXEC dbo.SelectMessagesBySubjectBeginning
    @SubjectBeginning = '50%' ;

Subject                        Body
------------------------------ ---------------------------
50% bugs fixed for V2          Congrats to the developers!
500 new customers in Q1        Congrats to all sales!

Listing 5: Our stored procedure returns the wrong messages, along with the correct ones, if the pattern contains %

The problem is basically the same: the % sign is a wildcard character denoting “any string of zero or more characters”. Therefore, the search returns the “500 new customers…” row in addition to the desired “50% bugs fixed…” row.

Our testing has revealed an implicit assumption that underpins the implementation of the SelectMessagesBySubjectBeginning stored procedure: the author of this stored procedure did not anticipate or expect that message subject lines could contain special characters, such as square brackets and percent signs. As a result, the search only works if the specified SubjectBeginning does not contain special characters.

Having identified this assumption, we have a choice: we can either change our stored procedure so that it does not rely on this assumption, or we can enforce it.

Enforcing or Eliminating the Special Characters Assumption
Our first option is to fix our data by enforcing the assumption that messages will not contain special characters in their subject line. We can delete all the rows with special characters in their subject line, and then add a CHECK constraint that forbids their future use, as shown in Listing 6. The patterns used in the DELETE command and in the CHECK constraint are advanced and need some explanation. The first pattern, %[[]%, means the following:

Both percent signs denote “any string of zero or more characters”
[[] in this case denotes “opening square bracket, [”
The whole pattern means “any string of zero or more characters, followed by an opening square bracket, followed by another string of zero or more characters”, which is equivalent to “any string containing at least one opening square bracket”
Similarly, the second pattern, %[%]%, means “any string containing at least one percent sign”.

BEGIN TRAN ;
DELETE FROM dbo.Messages
WHERE Subject LIKE '%[[]%'
   OR Subject LIKE '%[%]%' ;

ALTER TABLE dbo.Messages
  ADD CONSTRAINT Messages_NoSpecialsInSubject
    CHECK ( Subject NOT LIKE '%[[]%'
        AND Subject NOT LIKE '%[%]%' ) ;
ROLLBACK TRAN ;

Listing 6: Enforcing the “no special characters” assumption
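A quick way to convince yourself that these two patterns behave as described is to test them against literal strings:

```sql
-- %[[]% matches any string containing '[';
-- %[%]% matches any string containing '%'.
SELECT CASE WHEN '[OT] Great new camera' LIKE '%[[]%'
            THEN 'yes' ELSE 'no' END AS ContainsBracket,
       CASE WHEN '50% bugs fixed for V2' LIKE '%[%]%'
            THEN 'yes' ELSE 'no' END AS ContainsPercent ;
-- Both columns return 'yes'
```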

Although enforcing the assumption is easy, does it make practical sense? It depends. I would say that under most circumstances special characters in subject lines should be allowed, so let’s consider a second, better option: eliminating the assumption. Note that Listing 6 rolls back the transaction, so that our changes are not persisted in the database.

Listing 7 shows how to alter the stored procedure so that it can handle special characters. To better demonstrate how the procedure escapes special characters, I included some debugging output. Always remember to remove such debugging code before handing over the code for QA and deployment!

ALTER PROCEDURE dbo.SelectMessagesBySubjectBeginning
  @SubjectBeginning VARCHAR(50)
AS
SET NOCOUNT ON ;
DECLARE @ModifiedSubjectBeginning VARCHAR(150) ;
SET @ModifiedSubjectBeginning =
  REPLACE(REPLACE(@SubjectBeginning,
                  '[', '[[]'),
          '%', '[%]') ;
-- debugging output; remove before handing over for QA
SELECT @SubjectBeginning AS [@SubjectBeginning],
       @ModifiedSubjectBeginning AS [@ModifiedSubjectBeginning] ;
SELECT Subject, Body
FROM dbo.Messages
WHERE Subject LIKE @ModifiedSubjectBeginning + '%' ;
GO

Listing 7: Eliminating the “no special characters” assumption

Listing 8 demonstrates that our stored procedure now correctly handles special characters. Of course, in a real world situation, all previous test cases have to be rerun to check that we didn’t break them in the process of fixing the bug.

-- must return two rows
EXEC dbo.SelectMessagesBySubjectBeginning
    @SubjectBeginning = '[OT]' ;

@SubjectBeginning @ModifiedSubjectBeginning
----------------- -------------------------
[OT]              [[]OT]

Subject                        Body
------------------------------ ----------------------------
[OT] Great vacation in Norway! Pictures already uploaded
[OT] Great new camera          Used it on my vacation

-- must return one row
EXEC dbo.SelectMessagesBySubjectBeginning
    @SubjectBeginning = '50%' ;

@SubjectBeginning @ModifiedSubjectBeginning
----------------- -------------------------
50%               50[%]

Subject                        Body
------------------------------ ---------------------------
50% bugs fixed for V2          Congrats to the developers!

Listing 8: Our search now correctly handles [ ] and %
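As an aside, the same effect can be achieved with the ESCAPE clause of LIKE instead of wrapping wildcards in brackets. This is a sketch of that alternative, not the approach used in Listing 7; note that the escape character itself must be escaped first, and the underscore wildcard could be handled the same way:

```sql
-- Alternative: escape the wildcard characters with a chosen
-- escape character ('\' here) and declare it via ESCAPE.
DECLARE @SubjectBeginning VARCHAR(50) ;
SET @SubjectBeginning = '[OT]' ;
SELECT Subject, Body
FROM dbo.Messages
WHERE Subject LIKE
      REPLACE(REPLACE(REPLACE(@SubjectBeginning,
              '\', '\\'),   -- escape the escape character first
              '[', '\['),
              '%', '\%') + '%'
      ESCAPE '\' ;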

Whether we ultimately decide to enforce or eliminate the assumption, we have created a more robust search procedure as a result.

Defending Against Changes in SQL Server Settings
A common mistake made by developers is to develop SQL code on a given SQL Server, with a defined set of properties and settings, and then fail to consider how their code will respond when executed on instances with different settings, or when users change settings at the session level.

Let’s examine a few simple cases of how hidden assumptions with regard to server settings can result in vulnerable code.

How SET ROWCOUNT can break a Trigger
Traditionally, developers have relied on the SET ROWCOUNT command to limit the number of rows returned to a client for a given query, or to limit the number of rows on which a data modification statement (UPDATE, DELETE, MERGE or INSERT) acts. In either case, SET ROWCOUNT works by instructing SQL Server to stop processing after a specified number of rows.
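The effect of the setting is easy to see with a trivial query against a catalog view:

```sql
-- Limit subsequent statements to two rows, then reset.
SET ROWCOUNT 2 ;
SELECT name FROM sys.objects ;  -- returns at most two rows
SET ROWCOUNT 0 ;                -- 0 restores the default (no limit)
```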

However, use of SET ROWCOUNT can have some unexpected consequences for the unwary developer. Consider a very simple table, Objects, which stores basic size and weight information about objects, as shown in Listing 9.

CREATE TABLE dbo.Objects
(
  ObjectID INT NOT NULL PRIMARY KEY,
  SizeInInches FLOAT NOT NULL,
  WeightInPounds FLOAT NOT NULL
) ;
GO
INSERT INTO dbo.Objects
  ( ObjectID, SizeInInches, WeightInPounds )
SELECT 1, 10, 10
UNION ALL
SELECT 2, 12, 12
UNION ALL
SELECT 3, 20, 22 ;
GO

Listing 9: Creating and populating the Objects table

We are required to start logging all updates of existing rows in this table, so we create a second table, ObjectsChangeLog, in which to record the changes made, and a trigger that will fire whenever data in the Objects table is updated, record details of the changes made, and insert them into ObjectsChangeLog.

CREATE TABLE dbo.ObjectsChangeLog
(
  ObjectsChangeLogID INT NOT NULL
    IDENTITY,
  ObjectID INT NOT NULL,
  ChangedColumnName VARCHAR(20) NOT NULL,
  ChangedAt DATETIME NOT NULL,
  OldValue FLOAT NOT NULL,
  CONSTRAINT PK_ObjectsChangeLog PRIMARY KEY
    ( ObjectsChangeLogID )
) ;
GO
CREATE TRIGGER Objects_UpdTrigger ON dbo.Objects
FOR UPDATE
AS
BEGIN ;
  INSERT INTO dbo.ObjectsChangeLog
    ( ObjectID, ChangedColumnName, ChangedAt, OldValue )
  SELECT i.ObjectID,
         'SizeInInches',
         CURRENT_TIMESTAMP,
         d.SizeInInches
  FROM inserted AS i
       INNER JOIN deleted AS d
         ON i.ObjectID = d.ObjectID
  WHERE i.SizeInInches <> d.SizeInInches
  UNION ALL
  SELECT i.ObjectID,
         'WeightInPounds',
         CURRENT_TIMESTAMP,
         d.WeightInPounds
  FROM inserted AS i
       INNER JOIN deleted AS d
         ON i.ObjectID = d.ObjectID
  WHERE i.WeightInPounds <> d.WeightInPounds ;
END ;

Listing 10: Logging updates to the Objects table

Please note that my approach to all examples in this book is to keep them as simple as they can be while still providing a realistic demonstration of the point, which here is the effect of SET ROWCOUNT. So, in this case, I have omitted:

A “real” key on the ObjectsChangeLog table, enforced by a UNIQUE constraint (ObjectID, ChangedColumnName, ChangedAt), in addition to the surrogate key on ObjectsChangeLogID
The equivalent insert and delete triggers to log INSERT and DELETE modifications, as well as UPDATEs
Likewise, there are several ways of logging changes and the one I chose here may not be the best approach; again my goal was to keep the example focused and simple. Listing 11 shows the code that tests how our trigger logs changes against the Objects table.

BEGIN TRAN ;
-- TRUNCATE TABLE can also be used here
DELETE FROM dbo.ObjectsChangeLog ;

UPDATE dbo.Objects
SET SizeInInches = 12,
    WeightInPounds = 14
WHERE ObjectID = 1 ;

-- we are selecting just enough columns
-- to demonstrate that the trigger works
SELECT ObjectID, ChangedColumnName, OldValue
FROM dbo.ObjectsChangeLog ;

-- we do not want to change the data,
-- only to demonstrate how the trigger works
ROLLBACK ;
-- the data has not been modified by this script

ObjectID    ChangedColumnName    OldValue
----------- -------------------- --------
1           SizeInInches         10
1           WeightInPounds       10

Listing 11: Testing the trigger

Apparently, our trigger works as expected! However, with a little further testing, we can prove that the trigger will sometimes fail to log UPDATEs made to the Objects table, due to an underlying assumption in the trigger code, of which the developer may not even have been aware!

The ROWCOUNT Assumption
Let’s consider what might happen if, within a given session, a user changed the default value for ROWCOUNT and then updated the Objects table, without resetting ROWCOUNT, as shown in Listing 12.

DELETE FROM dbo.ObjectsChangeLog ;

SET ROWCOUNT 1 ;
-- do some other operation(s)
-- for which we needed to set rowcount to 1
-- do not restore ROWCOUNT setting
-- to its default value

BEGIN TRAN ;
UPDATE dbo.Objects
SET SizeInInches = 12,
    WeightInPounds = 14
WHERE ObjectID = 1 ;

-- make sure to restore ROWCOUNT setting
-- to its default value so that it does not affect the
-- following SELECT
SET ROWCOUNT 0 ;

SELECT ObjectID, ChangedColumnName, OldValue
FROM dbo.ObjectsChangeLog ;

ROLLBACK ;

ObjectID    ChangedColumnName    OldValue
----------- -------------------- --------
1           SizeInInches         10

Listing 12: Breaking the trigger by changing the value of ROWCOUNT

As a result of the change to the ROWCOUNT value, our trigger processes the query that logs changes to the SizeInInches column, returns one row, and then ceases processing. This means that it fails to log the change to WeightInPounds column. Of course, there is no guarantee that the trigger will log the change to the SizeInInches column. On your server, the trigger may log only the change of WeightInPounds but fail to log the change in SizeInInches. Which column will be logged depends on the execution plan chosen by the optimizer, and we cannot assume that the optimizer will always choose one and the same plan for a query.

Although the developer of the trigger may not have realized it, the implied assumption regarding its implementation is that ROWCOUNT is set to its default value. Listing 12 proves that when this assumption is not true, the trigger will not work as expected.

Enforcing and Eliminating the ROWCOUNT Assumption
Once we understand the problem, we can fix the trigger very easily, by resetting ROWCOUNT to its default value at the very beginning of the body of the trigger, as shown in Listing 13.

ALTER TRIGGER dbo.Objects_UpdTrigger ON dbo.Objects
FOR UPDATE
AS
BEGIN ;
  -- the scope of this setting is the body of the trigger
  SET ROWCOUNT 0 ;
  INSERT INTO dbo.ObjectsChangeLog
    ( ObjectID, ChangedColumnName, ChangedAt, OldValue )
  SELECT i.ObjectID,
         'SizeInInches',
         CURRENT_TIMESTAMP,
         d.SizeInInches
  FROM inserted AS i
       INNER JOIN deleted AS d
         ON i.ObjectID = d.ObjectID
  WHERE i.SizeInInches <> d.SizeInInches
  UNION ALL
  SELECT i.ObjectID,
         'WeightInPounds',
         CURRENT_TIMESTAMP,
         d.WeightInPounds
  FROM inserted AS i
       INNER JOIN deleted AS d
         ON i.ObjectID = d.ObjectID
  WHERE i.WeightInPounds <> d.WeightInPounds ;
END ;
-- after the body of the trigger completes,
-- the original value of ROWCOUNT is restored
-- by the database engine

Listing 13: Resetting ROWCOUNT at the start of the trigger

We can rerun the test from Listing 12 and this time the trigger will work as required, logging both changes. Note that the scope of our SET ROWCOUNT is the trigger, so our change will not affect the setting valid at the time when the trigger was fired.

SET ROWCOUNT is deprecated in SQL Server 2008…

…and eventually, in some future version, will have no effect on INSERT, UPDATE or DELETE statements. Microsoft advises rewriting any such statements that rely on ROWCOUNT to use TOP instead. As such, this example may be somewhat less relevant for future versions of SQL Server; the trigger might be less vulnerable to being broken, although still not immune. However, at the time of writing, this example is very relevant.
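The rewrite Microsoft advises is straightforward: a DML statement that relied on ROWCOUNT can limit itself with TOP instead, so no session setting needs restoring. A sketch against the Objects table from Listing 9:

```sql
-- Old style (deprecated for DML):
--   SET ROWCOUNT 1 ;
--   UPDATE dbo.Objects SET SizeInInches = 12 ;
--   SET ROWCOUNT 0 ;

-- New style: TOP applies only to this one statement.
UPDATE TOP ( 1 ) dbo.Objects
SET SizeInInches = 12 ;
```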

In this case, one simple step both enforces the underlying assumption, by ensuring that it is always valid, and eliminates it, by ensuring that the code continues to work in cases where ROWCOUNT is not at its default value.

Proactively Fixing SET ROWCOUNT Vulnerabilities
We have fixed the ROWCOUNT vulnerability in our trigger, but our job is not done. What about other modules in our system? Might they not have the same vulnerability?

Having learned of the potential side effects of SET ROWCOUNT, we can now analyze all the other modules in our system, determine if they have the same problem, and fix them if they do. For example, our stored procedure SelectMessagesBySubjectBeginning (Listing 1) has the same vulnerability, as demonstrated by the test in Listing 14.

SET ROWCOUNT 1 ;

-- must return two rows
EXEC dbo.SelectMessagesBySubjectBeginning
    @SubjectBeginning = 'Ne' ;

...(Snip)...

Subject                        Body
------------------------------ -------------------
Next release delayed           Still fixing bugs

Listing 14: SET ROWCOUNT can break a stored procedure just as easily as it can break a trigger

We can apply the same fix, adding SET ROWCOUNT 0; to the very beginning of this stored procedure. Similarly, we should apply this fix to all other modules that need it.
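As a sketch of that fix, the repaired procedure might look as follows. Note that Listing 1 is not reproduced in this excerpt, so the parameter name is taken from Listing 14, and the table and column names (dbo.Messages, Subject, Body) are assumptions inferred from the sample output:

```sql
-- Sketch only: body reconstructed from context, not the original Listing 1
ALTER PROCEDURE dbo.SelectMessagesBySubjectBeginning
    @SubjectBeginning VARCHAR(50)
AS
    -- restore the default before doing any work, so that a caller's
    -- SET ROWCOUNT setting cannot silently truncate our result set
    SET ROWCOUNT 0 ;
    SELECT  Subject ,
            Body
    FROM    dbo.Messages
    WHERE   Subject LIKE @SubjectBeginning + '%' ;
```

As with the trigger, the SET ROWCOUNT change is scoped to the procedure, so the caller's setting is restored when the procedure returns.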

If your code is supposed to exist for a considerable time, then it makes perfect sense to fix problems proactively. It is usually faster and easier to do so than to wait until the problem occurs, spend considerable time troubleshooting, and then eventually implement the same fix.

How SET LANGUAGE can break a Query
Just as the value of ROWCOUNT can be changed at the session level, so can other settings, such as the default language. Many developers test their code only under the default language setting of their server, and do not test how their code will respond if executed on a server with a different language setting, or how it will respond to a change in the setting at the session level.

This practice is perfectly correct as long as our code always runs under the same settings as those under which we develop and test it. However, if or when the code runs under different settings, this practice will often result in code that is vulnerable to errors, especially when dealing with dates.

Consider the case of a stored procedure that is supposed to retrieve from our ObjectsChangeLog table (Listing 10) a listing of all changes made to the Objects table over a given date range. According to the requirements, only the beginning of the range is required; the end of the range is an optional parameter. If an upper bound for the date range is not provided, we are required to use a date far in the future, December 31st, 2099, as the end of our range.

CREATE PROCEDURE dbo.SelectObjectsChangeLogForDateRange
    @DateFrom DATETIME ,
    @DateTo DATETIME = NULL
AS
    SET ROWCOUNT 0 ;
    SELECT  ObjectID ,
            ChangedColumnName ,
            ChangedAt ,
            OldValue
    FROM    dbo.ObjectsChangeLog
    WHERE   ChangedAt BETWEEN @DateFrom
                      AND COALESCE(@DateTo, '12/31/2099') ;
GO

Listing 15: Creating the SelectObjectsChangeLogForDateRange stored procedure

Note that this stored procedure uses a string literal, 12/31/2099, to denote December 31st, 2099. Although 12/31/2099 does represent December 31st, 2099 in many languages, such as US English, in many other cultures, such as Norwegian, this string does not represent a valid date. This means that the author of this stored procedure has made an implicit assumption: the code will always run under language settings where 12/31/2099 represents December 31st, 2099.

When we convert string literals to DATETIME values, we do not have to make assumptions about language settings. Instead, we can explicitly specify the DATETIME format from which we are converting.
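For example, CONVERT with an explicit style number, or the unseparated ISO format, both pin down the interpretation regardless of session language (a minimal sketch):

```sql
-- style 101 explicitly means mm/dd/yyyy, whatever the session language
SELECT CONVERT(DATETIME, '12/31/2099', 101) ;

-- the unseparated format yyyymmdd is always interpreted the same way
-- for DATETIME values, even without a style number
SELECT CAST('20991231' AS DATETIME) ;
```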

The following scripts demonstrate both the safe way to convert character strings to DATETIME values, and the vulnerability of our stored procedure to changes in language settings. The script shown in Listing 16 populates the ObjectsChangeLog table and calls the SelectObjectsChangeLogForDateRange stored procedure under two different language settings, US English and Norwegian.

-- we can populate this table via our trigger, but
-- I used INSERTs, to keep the example simple
INSERT INTO dbo.ObjectsChangeLog
        ( ObjectID ,
          ChangedColumnName ,
          ChangedAt ,
          OldValue
        )
        SELECT  1 ,
                'SizeInInches' ,
                -- the safe way to provide July 7th, 2009
                '20090707',
                12.34 ;
GO

SET LANGUAGE 'us_english' ;
-- this conversion always works in the same way,
-- regardless of the language settings,
-- because the format is explicitly specified
EXEC dbo.SelectObjectsChangeLogForDateRange
    @DateFrom = '20090101';

SET LANGUAGE 'Norsk' ;
EXEC dbo.SelectObjectsChangeLogForDateRange
    @DateFrom = '20090101';

-- your actual error message may be different from mine,
-- depending on the version of SQL Server
Changed language setting to us_english.

(successful output skipped)

Changed language setting to Norsk.

ObjectID    ChangedColumnName    ChangedAt               OldValue
----------- -------------------- ----------------------- --------------
Msg 242, Level 16, State 3, Procedure SelectObjectsChangeLogForDateRange, Line 6
The conversion of a char data type to a datetime data type resulted in an out-of-range datetime value.

Listing 16: Our stored procedure breaks under Norwegian language settings

Under the Norwegian language settings, we receive an error at the point where the stored procedure attempts to convert 12/31/2099 into a DATETIME value.

Note that we are, in fact, quite fortunate to receive an error message right away. Should we, in some other script or procedure, convert '10/12/2008' to DATETIME, SQL Server would silently convert this constant to a wrong value and we'd get incorrect results. Listing 17 shows how our stored procedure can return unexpected results without raising errors; such silent bugs may be very difficult to troubleshoot.
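Before looking at the stored procedure, a minimal sketch of the silent misinterpretation itself, using SET DATEFORMAT directly (one of the settings that SET LANGUAGE changes behind the scenes):

```sql
SET DATEFORMAT mdy ;
-- the literal is interpreted as October 12th, 2008
SELECT CAST('10/12/2008' AS DATETIME) ;

SET DATEFORMAT dmy ;
-- the very same literal is now interpreted as December 10th, 2008,
-- with no error or warning of any kind
SELECT CAST('10/12/2008' AS DATETIME) ;
```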

INSERT INTO dbo.ObjectsChangeLog
        ( ObjectID ,
          ChangedColumnName ,
          ChangedAt ,
          OldValue
        )
        SELECT  1 ,
                'SizeInInches' ,
                -- this means June 15th, 2009
                '20090615',
                12.3
        UNION ALL
        SELECT  1 ,
                'SizeInInches' ,
                -- this means September 15th, 2009
                '20090915',
                12.5 ;

SET LANGUAGE 'us_english' ;
-- this call returns rows from Jul 6th to Sep 10th, 2009
-- one log entry meets the criteria
EXEC SelectObjectsChangeLogForDateRange
    @DateFrom = '07/06/2009',
    @DateTo = '09/10/2009' ;

SET LANGUAGE 'Norsk' ;
-- this call returns rows from Jun 7th to Oct 9th, 2009
-- three log entries meet the criteria
EXEC SelectObjectsChangeLogForDateRange
    @DateFrom = '07/06/2009',
    @DateTo = '09/10/2009' ;

Changed language setting to us_english.

ObjectID    ChangedColumnName    ChangedAt               OldValue
----------- -------------------- ----------------------- --------
1           SizeInInches         2009-07-07              12.34

-- because the stored procedure does not have an ORDER BY
-- clause, your results may show up in a different order

Changed language setting to Norsk.

ObjectID    ChangedColumnName    ChangedAt               OldValue
----------- -------------------- ----------------------- --------
1           SizeInInches         2009-07-07              12.34
1           SizeInInches         2009-06-15              12.3
1           SizeInInches         2009-09-15              12.5

Listing 17: Our stored procedure call returns different results, depending on language settings

To fix the stored procedure, as shown in Listing 18, we need to explicitly specify the format from which we convert the VARCHAR values provided when the stored procedure is executed.

ALTER PROCEDURE dbo.SelectObjectsChangeLogForDateRange
    @DateFrom DATETIME ,
    @DateTo DATETIME = NULL
AS
    SET ROWCOUNT 0 ;
    SELECT  ObjectID ,
            ChangedColumnName ,
            ChangedAt ,
            OldValue
    FROM    dbo.ObjectsChangeLog
    WHERE   ChangedAt BETWEEN @DateFrom
                      AND COALESCE(@DateTo, '20991231') ;

Listing 18: Fixing the stored procedure

The stored procedure will now run correctly, regardless of the language settings. In this case, we chose to fix the problem by eliminating the assumption. Alternatively, in some cases, we might choose to enforce it by setting the language at the beginning of the stored procedure, just as we did with the ROWCOUNT setting.
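The enforcement route might look like the following sketch. Like SET ROWCOUNT, a SET LANGUAGE issued inside a stored procedure reverts when the procedure returns, so the caller's setting is unaffected:

```sql
-- Sketch of the enforcement alternative: pin the language, keep the
-- original ambiguous literal
ALTER PROCEDURE dbo.SelectObjectsChangeLogForDateRange
    @DateFrom DATETIME ,
    @DateTo DATETIME = NULL
AS
    -- enforce the assumption: within this procedure,
    -- '12/31/2099' is always read as mm/dd/yyyy
    SET LANGUAGE 'us_english' ;
    SET ROWCOUNT 0 ;
    SELECT  ObjectID ,
            ChangedColumnName ,
            ChangedAt ,
            OldValue
    FROM    dbo.ObjectsChangeLog
    WHERE   ChangedAt BETWEEN @DateFrom
                      AND COALESCE(@DateTo, '12/31/2099') ;
```

Eliminating the assumption, as in Listing 18, is usually the cleaner choice, since it needs no extra SET statement and cannot interfere with anything else the language setting controls.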

Of course, there are situations when our code will always run under the same settings, in which case there is no need to do anything. For example, if a module implements business rules specific to the state of Minnesota, it is reasonable to assume that it will always run under the same language settings.

Defensive Data Modification
Data modification is, in general, an area in which I see developers getting into trouble time and again. We’ll start with a case that demonstrates how data can be erroneously updated as a result of a false assumption in the stored procedure that modifies it. It is a simple example, but the underlying problem is a very common one: using search criteria that affect more rows than intended.

We’ll then discuss a second, somewhat more complex case, where an UPDATE can go wrong because it fails to unambiguously identify the row(s) to be modified, perhaps falsely assuming that the underlying data structures will ensure that no such ambiguity exists.

Updating more rows than intended
Listing 19 creates a simple Employee table, and a SetEmployeeManager stored procedure that assigns a manager to a given employee.

CREATE TABLE dbo.Employee
    (
      EmployeeID INT NOT NULL ,
      ManagerID INT NULL ,
      FirstName VARCHAR(50) NULL ,
      LastName VARCHAR(50) NULL ,
      CONSTRAINT PK_Employee_EmployeeID
        PRIMARY KEY CLUSTERED ( EmployeeID ASC ) ,
      CONSTRAINT FK_Employee_EmployeeID_ManagerID
        FOREIGN KEY ( ManagerID )
        REFERENCES dbo.Employee ( EmployeeID )
    ) ;
GO

CREATE PROCEDURE dbo.SetEmployeeManager
    @FirstName VARCHAR(50) ,
    @LastName VARCHAR(50) ,
    @ManagerID INT
AS
    SET NOCOUNT ON ;
    UPDATE  dbo.Employee
    SET     ManagerID = @ManagerID
    WHERE   FirstName = @FirstName
            AND LastName = @LastName ;

Listing 19: The Employee table and SetEmployeeManager stored procedure

Clearly, the person who developed the stored procedure assumed that, at most, one employee may have the provided first and last name. If there happen to be two people in the organization with the same name, then this stored procedure will assign them both to the same manager.

Again, having uncovered the assumption, we need to decide whether to enforce it or eliminate it. We could enforce it simply by placing a UNIQUE constraint on the FirstName and LastName columns. However, in this case, it seems much more reasonable to assume that there may well be more than one employee with the same first and last name, and that these namesake employees may report to different managers. Therefore, we need to eliminate the incorrect assumption. There are many ways to do this, the simplest being to ensure that the parameter supplied to the stored procedure, and used in the search criteria, identifies a unique row, as shown in Listing 20.

ALTER PROCEDURE dbo.SetEmployeeManager
    @EmployeeID INT ,
    @ManagerID INT
AS
    SET NOCOUNT ON ;
    UPDATE  dbo.Employee
    SET     ManagerID = @ManagerID
    WHERE   EmployeeID = @EmployeeID ;

Listing 20: Using unambiguous search criteria

As long as EmployeeID is the primary key on the dbo.Employee table, this procedure will work correctly.
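For completeness, had the business rules instead told us that names are unique, the enforcement route would be a one-line constraint; a sketch (the constraint name here is my own invention):

```sql
-- enforce the one-employee-per-name assumption, so that the original
-- name-based UPDATE can never match more than one row
ALTER TABLE dbo.Employee
    ADD CONSTRAINT UQ_Employee_FirstName_LastName
        UNIQUE ( FirstName , LastName ) ;
```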

The Problem of Ambiguous Updates
The results of data modifications may be unpredictable in the hands of the careless programmer. Let’s consider a very common requirement: populating a permanent table from a staging table. First of all, let’s create our permanent table, Codes, and a staging table, CodesStaging, as shown in Listing 21. Note that CodesStaging does not have a primary key. This is very common for staging tables, because data is often loaded into such tables before detecting duplicates and other data integrity violations.

CREATE TABLE dbo.Codes
    (
      Code VARCHAR(5) NOT NULL ,
      Description VARCHAR(40) NOT NULL ,
      CONSTRAINT PK_Codes PRIMARY KEY ( Code )
    ) ;
GO

CREATE TABLE dbo.CodesStaging
    (
      Code VARCHAR(10) NOT NULL ,
      Description VARCHAR(40) NOT NULL
    ) ;
GO

Listing 21: Creating the Codes and CodesStaging tables

Now, let’s populate each table with some sample data, as shown in Listing 22.

DELETE  FROM dbo.Codes ;
INSERT  INTO dbo.Codes
        ( Code ,
          Description
        )
        SELECT  'AR' ,
                'Old description for Arkansas'
        UNION ALL
        SELECT  'IN' ,
                'Old description for Indiana' ;

DELETE  FROM dbo.CodesStaging ;
INSERT  INTO dbo.CodesStaging
        ( Code ,
          Description
        )
        SELECT  'AR' ,
                'description for Argentina'
        UNION ALL
        SELECT  'AR' ,
                'new description for Arkansas'
        UNION ALL
        SELECT  'IN' ,
                'new description for Indiana' ;

Listing 22: Populating the Codes and CodesStaging tables

Now, we’ll examine two different ways of updating data in the permanent table, based on data in the staging table, both of which are subject to ambiguities if care is not taken:

  • Using UPDATE…FROM
  • Updating an inline view
We’ll then discuss strategies for avoiding such ambiguities.

Using UPDATE…FROM
Notice in Listing 22 that the incoming data in our staging table has a duplicate: the code AR occurs twice, with different descriptions. Suppose that we have not detected or resolved this duplicate, and that we are updating our Codes table from the staging table.

UPDATE  dbo.Codes
SET     Description = s.Description
FROM    dbo.Codes AS c INNER JOIN dbo.CodesStaging AS s
            ON c.Code = s.Code ;

SELECT  Code ,
        Description
FROM    dbo.Codes ;

Code       Description
---------- ----------------------------------------
AR         description for Argentina
IN         new description for Indiana

(2 row(s) affected)

Listing 23: An ambiguous UPDATE…FROM, when loading data from a staging table (CodesStaging) into a target table (Codes)

Although two descriptions were provided for the AR code, the UPDATE…FROM command did not raise an error; it just silently updated the corresponding row in the Codes table with one of the two provided values. In this case, the 'Old description for Arkansas' has been overwritten with the 'description for Argentina'.

Updating Inline Views
When we update inline views, we may encounter exactly the same problem. First, repopulate each of the tables with the original data, using the code from Listing 22. Next, create an inline view, and then use it to implement exactly the same functionality as the previous UPDATE…FROM commands, as shown in Listing 24.

WITH    c AS ( SELECT   c.Code ,
                        c.Description ,
                        s.Description AS NewDescription
               FROM     dbo.Codes AS c
                        INNER JOIN dbo.CodesStaging AS s
                            ON c.Code = s.Code
             )
    UPDATE  c
    SET     Description = NewDescription ;

SELECT  Code ,
        Description
FROM    dbo.Codes ;

Code       Description
---------- ----------------------------------------
AR         description for Argentina
IN         new description for Indiana

Listing 24: An ambiguous update of an inline view

Note that neither in this example nor the previous UPDATE…FROM example, can we predict which of these two values will end up in the target table – that, as usual, depends on the execution plan and as such is completely unpredictable. It is by pure chance that, in my examples, Argentina was chosen over Arkansas in both cases. I was able to get different results, with the description of Arkansas rather than Argentina inserted into Codes, just by changing the order in which the rows are inserted into CodesStaging. However, again, there is no guarantee that you will get the same results on your box. Also, bear in mind that if we ever did add an index to the staging table, this would almost certainly affect the result as well.

How to Avoid Ambiguous Updates
In both previous examples, the developer has written the UPDATE command apparently under the assumption that there can be no duplicate data in the CodesStaging – which cannot be guaranteed in the absence of a UNIQUE or PRIMARY KEY constraint on the Code column – or that any duplicate data should have been removed before updating the permanent table.

Generally, performing this sort of ambiguous update is unacceptable. In some cases, we might want to refine the query to make sure it never yields ambiguous results. However, typically we want either to raise an error when an ambiguity is detected, or to update only what is unambiguous.

In SQL Server 2008, we can circumvent such problems with UPDATE…FROM or CTE-based updates by using the MERGE command. However, prior to SQL Server 2008, we have to detect these ambiguities ourselves.
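One simple way to detect such ambiguities up front, on any version, is to look for codes that occur more than once in the staging table; a sketch:

```sql
-- any rows returned here mean that an UPDATE of Codes from
-- CodesStaging, matching on Code, would be ambiguous
SELECT  Code ,
        COUNT(*) AS NumVersions
FROM    dbo.CodesStaging
GROUP BY Code
HAVING  COUNT(*) > 1 ;
```

Against the sample data from Listing 22, this query flags the duplicated AR code. Such a check can run as a validation step before the update, raising an error or diverting the duplicates for manual review.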

Using MERGE to Detect Ambiguity (SQL Server 2008 only)
If you are working with SQL Server 2008, then easily the best option is to use the MERGE command. In Listing 25, we use the MERGE command to update our primary table from our staging table and immediately encounter the expected error.

MERGE INTO dbo.Codes AS c
    USING dbo.CodesStaging AS s
    ON c.Code = s.Code
    WHEN MATCHED
        THEN UPDATE
          SET c.Description = s.Description ;

Msg 8672, Level 16, State 1, Line 1
The MERGE statement attempted to UPDATE or DELETE the same row more than once. This happens when a target row matches more than one source row. A MERGE statement cannot UPDATE/DELETE the same row of the target table multiple times. Refine the ON clause to ensure a target row matches at most one source row, or use the GROUP BY clause to group the source rows.

Listing 25: MERGE detects an ambiguity in incoming data

An ANSI-standard Method
Pre-SQL Server 2008, we are forced to seek alternative ways to raise an error whenever there is an ambiguity. The code in Listing 26 is ANSI-standard SQL and accomplishes that goal.

-- rerun the code from Listing 22
-- before executing this code
UPDATE  dbo.Codes
SET     Description =
          ( SELECT  Description
            FROM    dbo.CodesStaging
            WHERE   Codes.Code = CodesStaging.Code
          )
WHERE   EXISTS ( SELECT *
                 FROM   dbo.CodesStaging AS s
                 WHERE  Codes.Code = s.Code
               ) ;

Msg 512, Level 16, State 1, Line 3
Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.

The statement has been terminated.

Listing 26: An ANSI Standard UPDATE command, which raises an error when there is an ambiguity

Note that, in order to update just one column, we had to use two almost identical subqueries in this command. This is definitely not a good practice. Should we need to update ten columns, we would have to repeat almost the same code eleven times! If, at some later time, we need to modify the subquery, we will have to make the same change in eleven places, which is very error prone.

Defensive Inline View Updates
Fortunately, there are several ways to improve the robustness of inline view updates, as well as of UPDATE…FROM updates (covered in the next section), that also work in SQL Server 2005.

In the previous two examples, an error was raised when ambiguity was detected. This is usually preferable but, if your business rules allow you to ignore ambiguities, and only update that which is unambiguous, then the solution shown in Listing 27 will work.

-- rerun the code from Listing 22
-- before executing this code
BEGIN TRAN ;

WITH    c AS ( SELECT   c.Code ,
                        c.Description ,
                        s.Description AS NewDescription
               FROM     dbo.Codes AS c
                        INNER JOIN dbo.CodesStaging AS s
                            ON c.Code = s.Code
                               AND ( SELECT COUNT(*)
                                     FROM   dbo.CodesStaging AS s1
                                     WHERE  c.Code = s1.Code
                                   ) = 1
             )
    UPDATE  c
    SET     Description = NewDescription ;

ROLLBACK ;

Listing 27: Using a subquery to ignore ambiguities when updating an inline view

This time, only the description of Indiana is updated. In a similar fashion, we could filter out (i.e. ignore) ambiguities with the help of an analytical function, as shown in Listing 28.

-- rerun the code from Listing 22
-- before executing this code
BEGIN TRAN ;

WITH    c AS ( SELECT   c.Code ,
                        c.Description ,
                        s.Description AS NewDescription ,
                        COUNT(*) OVER ( PARTITION BY s.Code )
                            AS NumVersions
               FROM     dbo.Codes AS c
                        INNER JOIN dbo.CodesStaging AS s
                            ON c.Code = s.Code
             )
    UPDATE  c
    SET     Description = NewDescription
    WHERE   NumVersions = 1 ;

ROLLBACK ;

Listing 28: Using PARTITION BY to ignore ambiguities when updating an inline view

In some cases, the approach of only performing unambiguous updates, and silently ignoring ambiguous ones, is unacceptable. In the absence of built-in methods, we can use tricky workarounds to reuse the code as much as possible and still raise an error if there is an ambiguity. Consider the example shown in Listing 29, in which a divide-by-zero error occurs if there is an ambiguity.

-- rerun the code from Listing 22
-- before executing this code
DECLARE @ambiguityDetector INT ;

WITH    c AS ( SELECT   c.Code ,
                        c.Description ,
                        s.Description AS NewDescription ,
                        COUNT(*) OVER ( PARTITION BY s.Code )
                            AS NumVersions
               FROM     dbo.Codes AS c
                        INNER JOIN dbo.CodesStaging AS s
                            ON c.Code = s.Code
             )
    UPDATE  c
    SET     Description = NewDescription ,
            @ambiguityDetector = CASE WHEN NumVersions = 1
                                      THEN 1
                                      -- if we have ambiguities, the following
                                      -- branch executes and raises this error:
                                      -- Divide by zero error encountered.
                                      ELSE 1 / 0
                                 END ;

Msg 8134, Level 16, State 1, Line 4
Divide by zero error encountered.

The statement has been terminated.

Listing 29: An UPDATE command using an inline view and raising a divide by zero error when there is an ambiguity

Of course, the error message raised by this code (divide by zero) is misleading, so we should only use this approach when none of the previous alternatives is viable.

Defensive UPDATE…FROM
Some of the approaches just outlined for improving the robustness of inline view updates apply equally well to the UPDATE…FROM command.

For example, we can use a sub-query to ignore ambiguities, as shown in Listing 30.

-- rerun the code from Listing 22
-- before executing this code
BEGIN TRAN ;

UPDATE  dbo.Codes
SET     Description = 'Old Description' ;

UPDATE  dbo.Codes
SET     Description = s.Description
FROM    dbo.Codes AS c
        INNER JOIN dbo.CodesStaging AS s
            ON c.Code = s.Code
               AND ( SELECT COUNT(*)
                     FROM   dbo.CodesStaging AS s1
                     WHERE  s.Code = s1.Code
                   ) = 1 ;

SELECT  Code ,
        Description
FROM    dbo.Codes ;

ROLLBACK ;

Listing 30: Using a subquery to ignore ambiguities when using UPDATE…FROM

Likewise, we can use an analytical function for detecting and ignoring ambiguities, as shown in Listing 31.

-- rerun the code from Listing 22
-- before executing this code
BEGIN TRAN ;

UPDATE  dbo.Codes
SET     Description = 'Old Description' ;

UPDATE  dbo.Codes
SET     Description = s.Description
FROM    dbo.Codes AS c
        INNER JOIN ( SELECT Code ,
                            Description ,
                            COUNT(*) OVER ( PARTITION BY Code )
                                AS NumValues
                     FROM   dbo.CodesStaging
                   ) AS s
            ON c.Code = s.Code
               AND NumValues = 1 ;

SELECT  Code ,
        Description
FROM    dbo.Codes ;

ROLLBACK ;

Listing 31: Using an analytical function to detect and ignore ambiguities when using UPDATE…FROM

Summary
The goal of this article was to introduce, by way of some simple examples, some of the basic ideas that underpin defensive database programming. It is vital that you understand and document the assumptions that underpin your implementation, test them to ensure their validity, and eliminate them if they are not. It is also vital that you consider as many use cases as possible for your code, and ensure it behaves consistently in each case. Where inconsistencies or incorrect behavior are found, the defensive programmer will not only fix the offending module, but also test all other modules that might suffer from a similar problem and proactively safeguard against it.

Along the way, I hope you’ve learned the following specific lessons in defensive programming:

  • How to use complex patterns to improve the robustness of LIKE searches
  • How to avoid potential difficulties with SET ROWCOUNT
  • The importance of safe date formats and of explicitly specifying the required format when converting dates
  • How to avoid dangerous ambiguity when performing updates by, for example:
      • Using MERGE, in SQL Server 2008
      • Using subqueries, pre-SQL Server 2008
  • How to use subqueries or the COUNT(*) OVER analytic function to improve the robustness of modifications when using UPDATE…FROM, or updating inline views, so that ambiguous updates are ignored.

Articles

SQL Server Backup status script in 2000 2005 2008

In DBA, SQL Server on January 12, 2008 by bharaniszone Tagged:

DECLARE @DBNAME VARCHAR(100)
SET @DBNAME = NULL  -- Default NULL (all databases)
SELECT 'BackUp Name' = BS.name,
       'User Name' = BS.user_name,
       'Start Date' = BS.backup_start_date,
       'Finish Date' = BS.backup_finish_date,
       'Backup Type' = CASE WHEN BS.type = 'D' THEN 'FULL Backup'
                            WHEN BS.type = 'L' THEN 'Transaction Log Backup'
                            WHEN BS.type = 'I' THEN 'Differential Backup' END,
       'BackupSizeMB' = FLOOR(((BS.backup_size / 1024) / 1024)),
       'DbName' = BS.database_name,
       'Server Name' = BS.server_name,
       MF.physical_device_name,
       'IS Ever Restored' = CASE WHEN BS.backup_set_id IN
                                   (SELECT backup_set_id FROM msdb.dbo.restorehistory)
                                 THEN 'Yes' ELSE 'No' END,
       'Destination Db'
         = ISNULL(RH.destination_database_name, 'Yet Not Restored From This BackUpSet'),
       'Restore Path'
         = ISNULL(MIN(RF.destination_phys_name), 'Yet Not Restored From This BackUpSet'),
       'Restore Type' = ISNULL(CASE WHEN RH.restore_type = 'D' THEN 'Database'
                                    WHEN RH.restore_type = 'F' THEN 'File'
                                    WHEN RH.restore_type = 'G' THEN 'Filegroup'
                                    WHEN RH.restore_type = 'I' THEN 'Differential'
                                    WHEN RH.restore_type = 'L' THEN 'Log'
                                    WHEN RH.restore_type = 'V' THEN 'Verifyonly'
                                    WHEN RH.restore_type = 'R' THEN 'Revert'
                                    ELSE RH.restore_type
                               END, 'Yet Not'),
       RH.restore_date,
       'Restore By' = ISNULL(RH.user_name, 'No One'),
       'Time Taken'
         = CAST(DATEDIFF(ss, BS.backup_start_date, BS.backup_finish_date) / 3600 AS VARCHAR(10))
           + ' Hours, '
           + CAST(DATEDIFF(ss, BS.backup_start_date, BS.backup_finish_date) / 60 % 60 AS VARCHAR(10))
           + ' Minutes, '
           + CAST(DATEDIFF(ss, BS.backup_start_date, BS.backup_finish_date) % 60 AS VARCHAR(10))
           + ' Seconds'
FROM   msdb..backupset BS
       JOIN msdb..backupmediafamily MF
           ON BS.media_set_id = MF.media_set_id
       LEFT OUTER JOIN msdb..restorehistory RH
           ON BS.backup_set_id = RH.backup_set_id
       LEFT OUTER JOIN msdb..restorefile RF
           ON RF.restore_history_id = RH.restore_history_id
WHERE  BS.database_name = ISNULL(@DBNAME, BS.database_name)
GROUP BY BS.name, BS.user_name, BS.backup_start_date, BS.backup_finish_date,
       BS.type, BS.backup_size, BS.database_name, BS.server_name,
       MF.physical_device_name, BS.backup_set_id, RH.destination_database_name,
       RH.restore_type, RH.restore_date, RH.user_name
 
 

================================================================================

SELECT A.NAME,
       B.TOTAL_ELAPSED_TIME / 60000 AS [Running Time],
       B.ESTIMATED_COMPLETION_TIME / 60000 AS [Remaining],
       B.PERCENT_COMPLETE AS [%],
       (SELECT TEXT
        FROM   sys.dm_exec_sql_text(B.SQL_HANDLE)) AS COMMAND
FROM   MASTER..SYSDATABASES A,
       sys.dm_exec_requests B
WHERE  A.DBID = B.DATABASE_ID
       AND B.COMMAND LIKE '%BACKUP%'
ORDER BY percent_complete DESC,
       B.TOTAL_ELAPSED_TIME / 60000 DESC

Articles

Oracle Locked & Blocked Objects in Schema

In DBA, Oracle on February 4, 2006 by bharaniszone

Please test these queries before you execute them on your production servers:

SELECT DECODE(request, 0, 'Holder: ', 'Waiter: ') || sid sess,
       id1, id2, lmode, request, type
FROM   V$LOCK
WHERE  (id1, id2, type) IN (SELECT id1, id2, type
                            FROM   V$LOCK
                            WHERE  request > 0)
ORDER BY id1, request ;

——————————————————————————————–

SELECT a.object_name,
       SUBSTR(a.owner, 1, 15) object_owner,
       SUBSTR(DECODE(b.locked_mode, 0, 'None', 1, 'Null (NULL)',
                     2, 'Row-S (SS)', 3, 'Row-X (SX)', 4, 'Share',
                     5, 'S/Row-X (SSX)', 6, 'Exclusive',
                     b.locked_mode), 1, 15) locked_mode,
       b.session_id sid,
       SUBSTR(b.oracle_username, 1, 15) oracle_username,
       b.os_user_name
FROM   all_objects a,
       v$locked_object b
WHERE  a.object_id = b.object_id
       AND a.object_name LIKE 'XXTC%'
ORDER BY 1;

 ——————————————————————————————

SELECT sess.sid, sess.serial#, p.spid, sess.last_call_et, sess.logon_time,
       sess.osuser, sess.machine, sess.process, p.username, sess.terminal, sess.module
FROM   v$session sess,
       v$process p
WHERE  sess.paddr = p.addr
       AND sess.sid IN (SELECT sid FROM v$lock WHERE block > 0);

 

Please provide feedback; I will try to post new queries that will be helpful in case of an emergency. 🙂