Tuesday, 6 December 2011

[SQLSTATE HY000] (Error 1204)


Recently I moved a database used by the vCenter application to a new instance, which had 2 GB of RAM. As part of the move, the associated jobs had to be moved as well.
Three jobs in particular started failing immediately thereafter, and the message logged in the job history was as below:

“The instance of the SQL Server Database Engine cannot obtain a LOCK resource at this time. Rerun your statement when there are fewer active users. Ask the database administrator to check the lock and memory configuration for this instance, or to check for long-running transactions. [SQLSTATE HY000] (Error 1204).  The step failed.”

I had not observed this on the earlier server, and that instance had up to 6 GB of memory. So I increased the RAM on the new instance to 3 GB, and after a restart the problem seemed to be resolved. But the next day the issue was back.
I have read that SQL Server will use at most around 60% of available memory for locks. So I checked the amount of memory used by the lock manager, using the PerfMon counter Lock Memory (KB): it read 1.7 GB, which was nearly 57% of the 3 GB available to the instance. But before increasing the RAM to 6 GB, as on the old server, I wanted to check the DML fired by the jobs.
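The same counter can also be read from T-SQL; a minimal sketch, assuming SQL Server 2005 or later and VIEW SERVER STATE permission:

SELECT object_name, counter_name, cntr_value AS lock_memory_kb
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Lock Memory (KB)'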
I reviewed each of the jobs: they called stored procedures, all similar, used to purge historical data (three jobs covering four separate tables, all structurally the same).

The section of code in the procedure is below:

BEGIN TRANSACTION
    -- Delete one batch of orphaned history rows: rows whose TIME_ID no longer
    -- exists in VPX_SAMPLE_TIME2, limited to @batchsize rows per run.
    DELETE FROM VPX_HIST_STAT2 WITH (ROWLOCK)
    WHERE TIME_ID IN (SELECT TOP (@batchsize) TIME_ID
                      FROM VPX_HIST_STAT2
                      WHERE NOT EXISTS (SELECT 1
                                        FROM VPX_SAMPLE_TIME2 VTD
                                        WHERE VTD.TIME_ID = VPX_HIST_STAT2.TIME_ID))
    -- Record how many rows the batch removed
    SET @deletedHistStatsCount = @@ROWCOUNT
COMMIT TRANSACTION

I checked the total number of records and found that table VPX_HIST_STAT1 had over 10 million rows; the other jobs deleted rows from the similarly structured tables VPX_HIST_STAT2, VPX_HIST_STAT3 and VPX_HIST_STAT4, each with almost as many rows.
I noticed that none of these tables had a usable index that the DELETE's WHERE clause (on the column TIME_ID) could make use of.
TIME_ID was part of the composite clustered primary key on the table, but it was the second key column, and was therefore of no use to this query, since an index can only be sought on its leading column(s).
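A minimal repro of that principle, with hypothetical table and column names (B plays the role TIME_ID plays in the real table):

CREATE TABLE dbo.Demo
(
    A INT NOT NULL,
    B INT NOT NULL,
    CONSTRAINT PK_Demo PRIMARY KEY CLUSTERED (A, B)
)
-- This predicate cannot seek PK_Demo, because B is not the leading key column;
-- the execution plan shows a clustered index scan instead of a seek:
SELECT A, B FROM dbo.Demo WHERE B = 42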
I decided to add a nonclustered index on the TIME_ID column, and analyzed the query with the Database Engine Tuning Advisor to confirm. As expected, it suggested the same index.

The section of code is below:

CREATE NONCLUSTERED INDEX [NIDX_TIME_ID] ON [dbo].[VPX_HIST_STAT2]
(
    [TIME_ID] ASC
)
INCLUDE ([COUNTER_ID], [STAT_VAL])
WITH (SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF)
ON [PRIMARY]
  
I created the required index and re-ran the jobs: all of them succeeded, without the memory increase I had initially planned.
Another point worth mentioning is that each of the DELETE statements included a ROWLOCK query hint. With the DELETE affecting a table of nearly 10 million rows, the ROWLOCK hint made SQL Server request that many row locks, causing a very high lock count and hence the growth in lock memory.

Under normal circumstances, with no ROWLOCK hint, SQL Server requests row locks by default, but when it determines that a large part of the table is being affected, it requests higher-level locks such as page or table locks instead (known as lock escalation). Lock escalation can itself be an issue in certain other circumstances, but in this case, with a delete affecting so many rows, I felt the ROWLOCK hint was not required; letting SQL Server decide the lock type was the better choice.
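For illustration, here is how the purge DELETE would look with the hint removed, leaving lock granularity (and any escalation) up to SQL Server; this is a sketch based on the procedure code above, not the vendor's shipped version:

BEGIN TRANSACTION
    -- Same batch delete, but with no ROWLOCK hint: SQL Server may start with
    -- row locks and escalate to a table lock once the lock count grows large.
    DELETE FROM VPX_HIST_STAT2
    WHERE TIME_ID IN (SELECT TOP (@batchsize) TIME_ID
                      FROM VPX_HIST_STAT2
                      WHERE NOT EXISTS (SELECT 1
                                        FROM VPX_SAMPLE_TIME2 VTD
                                        WHERE VTD.TIME_ID = VPX_HIST_STAT2.TIME_ID))
    SET @deletedHistStatsCount = @@ROWCOUNT
COMMIT TRANSACTION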
In fact, I want to test this scenario on the old server: reduce the memory from 6 GB to 2 GB, skip creating the new index, and instead just eliminate the ROWLOCK query hints. Hopefully I can do this sometime soon and validate my understanding and the resolution further.

This issue is also covered in a Microsoft Knowledge Base article at http://support.microsoft.com/kb/323630

Saturday, 15 October 2011

Error 5184



Server: Msg 5184, Level 16, State 2, Line 1. Cannot use file '%.*ls' for clustered server. Only formatted files on which the cluster resource of the server has a dependency can be used.


Last night we had a planned database file movement from a slower disk to a new, faster disk. This database is on a two-node cluster, and I was informed that all actions pertaining to the cluster had already been taken care of.
All I had on my plate was to detach the database, move the relevant files over to the new disk, and attach it back. I detached the database and moved the files; so far so good.
Then I tried to attach the files, pointing to the new locations. This is when I received error 5184. The message was self-explanatory, and I figured the new disk had not been added to the dependencies list of the SQL Server service in the cluster group.
To fix this, I took the SQL Server service offline in Cluster Administrator and added the new disk to the dependency list on the properties page of the SQL Server resource. I then brought SQL Server back online and tried attaching the database again. This time it succeeded.
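For reference, a minimal sketch of the detach/move/attach sequence; the database name and file paths here are hypothetical, and on SQL Server 2005 and later CREATE DATABASE ... FOR ATTACH is the preferred form:

USE master
-- Detach the database (name is hypothetical)
EXEC sp_detach_db @dbname = N'MyClusteredDB'
-- ...move the data and log files to the new clustered disk here...
-- Re-attach from the new locations (paths are hypothetical; the new disk
-- must already be a dependency of the SQL Server cluster resource)
CREATE DATABASE MyClusteredDB
ON (FILENAME = N'X:\Data\MyClusteredDB.mdf'),
   (FILENAME = N'X:\Logs\MyClusteredDB_log.ldf')
FOR ATTACH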


This is also covered in this article:
http://support.microsoft.com/kb/295732


Tuesday, 26 April 2011

What's better, temporary table or table variable?


For some time now, I believed table variables were a better choice than temp tables, because I had read that table variables were in-memory structures. I recently read something otherwise, so I thought I would write this down so it stays “in-memory”.


A useful DMV here is sys.dm_db_session_space_usage; the columns of interest to us are session_id, database_id and user_objects_alloc_page_count, which shows the pages allocated to user objects in tempdb for a session.

Let us create a temporary table as below, then query the sys.dm_db_session_space_usage DMV to see the number of pages allocated for this object and the database in which they are allocated.

create table #temptable (COL1 INT)

select session_id, db_name(database_id) AS DBName, user_objects_alloc_page_count
from master.sys.dm_db_session_space_usage
where session_id = @@SPID

session_id   DBName   user_objects_alloc_page_count
----------   ------   -----------------------------
55           tempdb   0

When CREATE TABLE is run, SQL Server does not allocate pages to the table upfront; pages are allocated only when the first data is inserted. So let us go ahead and insert a single row, which is all we need to see pages being allocated.

insert into #temptable (col1) values (1)

Now let us run the DMV query again; we get the results below:

select session_id, db_name(database_id) AS DBName, user_objects_alloc_page_count
from master.sys.dm_db_session_space_usage
where session_id = @@SPID

session_id   DBName   user_objects_alloc_page_count
----------   ------   -----------------------------
55           tempdb   1
We see that 1 page is now allocated to the table #temptable in tempdb.

Let us see if this is the case with table variables as well. Create a table variable and insert a row in a single batch (since the scope of a table variable is limited to the batch in which it is declared, unlike a temporary table, whose scope is the entire session that created it), as shown below:

DECLARE @TABLEVAR TABLE (COL1 INT)
INSERT INTO @TABLEVAR (COL1) VALUES (1)
GO

Let us query the DMV again:

select session_id, db_name(database_id) AS DBName, user_objects_alloc_page_count
from master.sys.dm_db_session_space_usage
where session_id = @@SPID

session_id   DBName   user_objects_alloc_page_count
----------   ------   -----------------------------
55           tempdb   2

We see that a new page was allocated: even for a table variable, the page was allocated in tempdb, not in memory.

So neither temporary tables nor table variables are in-memory structures. With that said, let us look at the specific advantages temp tables provide over table variables.


Temporary tables allow constraints, indexes, statistics, and schema changes on them, just like any permanent table would. Also, "select into" and "insert into ... exec" are allowed with a temp table. 

A table variable, on the contrary, allows only constraints, and hence only the indexes associated with primary key and unique constraints. We cannot create other indexes or statistics, nor can the schema be changed once the table variable is declared. And although the indexes associated with primary key or unique constraints do exist, they are of limited use, because a table variable has no statistics and the optimizer compiles the plan assuming it holds very few rows. No statistics and no useful indexes mean the query plan can be bad. A quick sketch of this difference follows.
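(This reflects the versions current at the time of writing; SQL Server 2014 later added inline index syntax for table variables.)

-- Temp table: extra indexes and SELECT INTO are allowed
CREATE TABLE #t (col1 INT)
CREATE INDEX IX_t_col1 ON #t (col1)        -- fine
SELECT col1 INTO #t2 FROM #t               -- fine

-- Table variable: only constraint-backed indexes are possible
DECLARE @tv TABLE (col1 INT PRIMARY KEY)   -- the PK's index is allowed
-- CREATE INDEX IX_tv ON @tv (col1)        -- not allowed: will not even parse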

So, with all this information at hand, how do we decide which one to use in our T-SQL code, and when?

Although temp tables do allow constraints, indexes and statistics, we know that on small tables going through an index buys little over simply scanning the table. So for smaller tables we don't really need indexes or statistics, which makes a table variable the better choice. Also, if the table is small enough, and assuming there is enough memory, the entire table may well sit in memory; I suspect that is where the myth about table variables being "in-memory structures" originated. Or perhaps it is because table variables are declared as variables, and variables have traditionally been in-memory objects in programming languages.


And, needless to say, if you foresee a larger number of rows, a temporary table is the better choice.

Tuesday, 19 April 2011

My encounters with DBCC CHECKDB - 1


After recovering from a disk crash on one of our production SQL Server 2000 servers, I ran DBCC CHECKDB on every database to ensure there were no errors. It turned out otherwise: two consistency errors were reported.
The part of the DBCC CHECKDB output containing the errors is copied below:
Server: Msg 8928, Level 16, State 1, Line 1
Object ID 1993058136, index ID 2: Page (6:23238457) could not be processed. See other errors for details.
Server: Msg 8944, Level 16, State 1, Line 1
Table error: Object ID 1993058136, index ID 2, page (6:23238457), row 10. Test (ColumnOffsets <= (nextRec - pRec)) failed. Values are 28416 and 37.

And the last part of the CHECKDB output is summarized below:
CHECKDB found 0 allocation errors and 2 consistency errors in database 'MIRA_OTHERS_CONTENT_DB'.
repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (MIRA_OTHERS_CONTENT_DB ).
DBCC execution completed. If DBCC printed error messages, contact your system administrator.

To start with, here is some information about the SQL Server involved, from SELECT @@VERSION:
Microsoft SQL Server  2000 - 8.00.2055 (Intel X86)   Dec 16 2008 19:46:53   Copyright (c) 1988-2003 Microsoft Corporation  Workgroup Edition on Windows NT 5.2 (Build 3790: Service Pack 2)

The database involved is called MIRA_OTHERS_CONTENT_DB.
Using the object ID value from the output above, I could find the table on which the errors were reported:
SELECT OBJECT_NAME (1993058136) as TableName
TableName: Docs
Going back to the output above, the error was clearly on an index: index ID 2. Using this information together with the object ID, one can easily identify the name of the affected index:
USE MIRA_OTHERS_CONTENT_DB;
SELECT NAME AS IndexName from SYSINDEXES WHERE ID = 1993058136 and INDID = 2;
This returned IndexName: Docs_IdUnique.
Next, I ran sp_helpindex Docs for a better understanding of the index definitions:
Index Name      Index Description                                     Index Keys
-------------   ---------------------------------------------------   -------------------------
Docs_IdUnique   nonclustered, unique, unique key located on PRIMARY   Id
Docs_PK         clustered, unique, primary key located on PRIMARY     SiteId, DirName, LeafName

Once the index name and the table name are known, all one has to do is re-create the nonclustered index. We could use either DBCC DBREINDEX or CREATE INDEX ... WITH DROP_EXISTING to rebuild the nonclustered index and eliminate the corruption errors, as below:
Option 1:
DBCC DBREINDEX ('Docs', 'Docs_IdUnique')
Option 2:
CREATE UNIQUE INDEX [Docs_IdUnique] ON [dbo].[Docs] ([Id])
WITH FILLFACTOR = 90, DROP_EXISTING
ON [PRIMARY]
Initially, I thought I could manually drop and re-create the index, as shown below:
DROP INDEX Docs.Docs_IdUnique
But this statement failed, since the unique index is bound to a unique constraint. So I then tried to drop the constraint:
ALTER TABLE Docs DROP CONSTRAINT Docs_IdUnique
And to my surprise, I found that this index enforces the full-text key for the Docs table:
Cannot drop index 'Docs_IdUnique' because it enforces the full-text key for table 'Docs'.
At that point, I thought it over and figured I could use one of the two options above. If they did not work, I would drop and re-create the clustered index, which in SQL Server 2000 forces the nonclustered indexes to be rebuilt as well.
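A sketch of that fallback in SQL Server 2000 syntax: passing an empty string as the index name makes DBCC DBREINDEX rebuild every index on the table, clustered and nonclustered alike:

-- Rebuild all indexes on Docs in one statement (SQL Server 2000)
DBCC DBREINDEX ('Docs', '')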
Note: index rebuild behavior is different in SQL Server 2005 and 2008, where rebuilding the clustered index does not automatically rebuild the nonclustered indexes.