How to Enable Instant File Initialization – SQL Server

Data and log files are first initialized by filling the files with zeros when you perform one of the following operations:
1) Create a database.
2) Add files, log or data, to an existing database.
3) Increase the size of an existing file (including autogrow operations).
4) Restore a database or filegroup.

Instant file initialization claims the disk space without filling it with zeros, so data files can be initialized almost instantaneously.

Steps to enable instant file initialization:


Go to Administrative Tools -> Local Security Policy -> Local Policies -> User Rights Assignment -> select "Perform volume maintenance tasks" -> Properties -> add your SQL Server service account. Restart the SQL Server service for the change to take effect.

This makes database creation, file growth, and restore operations faster. Note that instant file initialization applies only to data files; transaction log files are always zero-initialized.
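On recent builds (SQL Server 2012 SP4+ / 2016 SP1+), you can verify whether instant file initialization is in effect without touching the OS, assuming the instant_file_initialization_enabled column is available in your build:

```sql
-- Check whether the service account holds the volume maintenance privilege
-- (column available in SQL Server 2012 SP4+ / 2016 SP1+)
SELECT servicename,
       service_account,
       instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server (%';
```

A value of Y means data file creation and growth will skip zeroing.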

Create an Extended Events session to capture deadlocks – SQL Server 2012

SQL Server detects deadlocks automatically and resolves them without manual intervention by rolling back one of the sessions (the deadlock victim). This article will help you configure an Extended Events session through the GUI to capture deadlock data.

The code below will help you reproduce a deadlock:

-- Create table 1 in the database
CREATE TABLE [dbo].[syn_st](
    [na] [varchar](10) NULL,
    [nom] [int] NULL
) ON [PRIMARY]

-- Create table 2 in the database
CREATE TABLE [dbo].[syn_st1](
    [na] [varchar](10) NULL,
    [nom] [int] NULL
) ON [PRIMARY]

-- Populate a record into both tables
INSERT INTO dbo.syn_st (na, nom) VALUES ('a', 1225)
INSERT INTO dbo.syn_st1 (na, nom) VALUES ('b', 1225)

-- Step 1 in session 1
BEGIN TRAN
UPDATE syn_st SET nom = 1224 WHERE nom = 1225

-- Step 2 in session 2
BEGIN TRAN
UPDATE syn_st1 SET nom = 1224 WHERE nom = 1225

-- Step 3 in session 1 (blocks, waiting on session 2's lock)
UPDATE syn_st1 SET nom = 1224 WHERE nom = 1225

-- Step 4 in session 2 (deadlock: one session is chosen as the victim)
UPDATE syn_st SET nom = 1224 WHERE nom = 1225

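The GUI configuration can also be scripted. A minimal sketch of a session that captures the deadlock graph (the session name and target file name are illustrative):

```sql
-- Create an Extended Events session that captures deadlock graphs
CREATE EVENT SESSION [capture_deadlocks] ON SERVER
ADD EVENT sqlserver.xml_deadlock_report
ADD TARGET package0.event_file (SET filename = N'capture_deadlocks.xel');
GO
ALTER EVENT SESSION [capture_deadlocks] ON SERVER STATE = START;
```

Once started, reproduce the deadlock above and inspect the .xel file (or Watch Live Data) for the xml_deadlock_report event.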

Creating an Extended Events session to capture long-running queries – SQL Server 2012

In this article, we’ll create an Extended Events session that collects event information about T-SQL statements that run for more than 10 seconds.

  1. In Object Explorer, expand Extended Events, right-click the Sessions folder, and then click New Session Wizard. Type a name for your session in the Session name text box.
  2. Select "Do not use a template".
  3. Select the events to capture (sql_statement_completed, sp_statement_completed).
  4. Capture the global fields sql_text and tsql_stack.
  5. Specify a file name to store the captured data, then click Next and Finish.
  6. To filter statements by duration, right-click the new session -> Properties -> Events -> click the Configure button at the top right of the page.
  7. Select the event and specify the filter (duration > 10000000, i.e. 10 seconds expressed in microseconds), then click OK.
  8. Right-click the new session, select Start Session, then select Watch Live Data to watch the captured events.
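The wizard steps above can also be expressed in T-SQL. A sketch of the equivalent session definition (session name and target file name are illustrative):

```sql
-- Capture statements running longer than 10 seconds (duration is in microseconds)
CREATE EVENT SESSION [long_running_queries] ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.sql_text, sqlserver.tsql_stack)
    WHERE duration > 10000000),
ADD EVENT sqlserver.sp_statement_completed (
    ACTION (sqlserver.sql_text, sqlserver.tsql_stack)
    WHERE duration > 10000000)
ADD TARGET package0.event_file (SET filename = N'long_running_queries.xel');
GO
ALTER EVENT SESSION [long_running_queries] ON SERVER STATE = START;
```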


Understanding the cntr_type values from sys.dm_os_performance_counters – SQL Server 2012

Before using cntr_value from sys.dm_os_performance_counters, we need to know what each cntr_type value indicates.

  1. 65792: The counter shows the last observed value directly (a point-in-time value).
  2. 272696576: The counter represents a per-second rate. The cntr_value is cumulative; take two samples and divide the difference by the elapsed seconds to get the rate.
  3. 1073874176: The counter represents an average. The cntr_value is cumulative; to calculate the actual value, you must divide by the corresponding base counter.

Ex: Average Latch Wait Time (ms) for the interval = (A2 - A1) / (B2 - B1)
= (14272 - 14257) / (360 - 359)
= 15.00 ms

where A (A1, A2) is Average Latch Wait Time (ms) and B (B1, B2) is Average Latch Wait Time Base.

  4. 537003264: The counter represents a fractional value, expressed as a ratio to its corresponding base counter. It is not a final value; you must divide by the base counter (cntr_type 1073939712) to calculate the actual value.

Ex: Hit Ratio % = 100 * MSSQL$SQLSVR:Buffer Manager\Buffer cache hit ratio / MSSQL$SQLSVR:Buffer Manager\Buffer cache hit ratio base
= 100 * 2911 / 3170
= 91.83%

  5. 1073939712: The counter is raw data used as the denominator of a counter that presents an instantaneous arithmetic fraction.

For example, Buffer cache hit ratio base is the base for the MSSQL$SQLSVR:Buffer Manager\Buffer cache hit ratio calculation.
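The ratio calculation above can be run directly against the DMV. A sketch that joins the fraction counter to its base:

```sql
-- Buffer cache hit ratio % = 100 * value / base
SELECT 100.0 * a.cntr_value / b.cntr_value AS [Buffer cache hit ratio %]
FROM sys.dm_os_performance_counters a
JOIN sys.dm_os_performance_counters b
  ON a.[object_name] = b.[object_name]
WHERE a.counter_name = 'Buffer cache hit ratio'
  AND b.counter_name = 'Buffer cache hit ratio base';
```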

Query to find blocking sessions on a database – SQL Server

select
l.resource_type,
l.request_mode,
l.request_status,
l.request_session_id,
r.command,
r.status,
r.blocking_session_id,
r.wait_type,
r.wait_time,
r.wait_resource,
request_sql_text = st.text,
s.program_name,
most_recent_sql_text = stc.text
from sys.dm_tran_locks l
left join sys.dm_exec_requests r
on l.request_session_id = r.session_id
left join sys.dm_exec_sessions s
on l.request_session_id = s.session_id
left join sys.dm_exec_connections c
on s.session_id = c.session_id
outer apply sys.dm_exec_sql_text(r.sql_handle) st
outer apply sys.dm_exec_sql_text(c.most_recent_sql_handle) stc
where l.resource_database_id = db_id('DB_MAINT')
order by request_session_id;

SQL Server Perfmon Counters

Listed below are a few important Perfmon counters and threshold values for CPU, memory, and disk troubleshooting.

Metric | Object | Counter | Threshold value
CPU    | Process (sqlservr) | % Processor Time | < 80%
CPU    | System | Processor Queue Length | < 4 per CPU
Memory | Memory | Available MBytes | > 100 MB
Memory | SQL Server: Buffer Manager | Lazy Writes/sec | < 20
Memory | SQL Server: Buffer Manager | Page Life Expectancy (PLE) | > 300
Memory | SQL Server: Buffer Manager | Page Lookups/sec | < 100
Memory | SQL Server: Buffer Manager | Page Reads/sec | < 90
Memory | SQL Server: Buffer Manager | Readahead pages/sec | < 20% of Page Reads/sec
Memory | SQL Server: Buffer Manager | Buffer Cache Hit Ratio | > 99%
Disk   | PhysicalDisk | Avg. Disk sec/Read | < 8 ms
Disk   | PhysicalDisk | Avg. Disk sec/Write | < 8 ms
Index  | SQLServer:Access Methods | Full Scans/sec | < 1 per 1000 Index Searches/sec
SQL    | SQLServer:SQL Statistics | SQL Compilations/sec | < 10% of Batch Requests/sec
SQL    | SQLServer:SQL Statistics | SQL Re-Compilations/sec | < 10% of SQL Compilations/sec
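A few of the Buffer Manager values above can also be sampled from sys.dm_os_performance_counters instead of Perfmon; a minimal sketch (counter names as they appear in the DMV):

```sql
-- Sample Page life expectancy and Lazy writes/sec from the DMV
-- Note: Lazy writes/sec is a cumulative rate counter; sample it twice
-- and divide the difference by the elapsed seconds for a true per-second rate.
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Buffer Manager%'
  AND counter_name IN ('Page life expectancy', 'Lazy writes/sec');
```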

How to capture Database files I/O latencies – SQL Server

SELECT
-- virtual file latency
[ReadLatency] =
CASE WHEN [num_of_reads] = 0
THEN 0 ELSE ([io_stall_read_ms] / [num_of_reads]) END,
[WriteLatency] =
CASE WHEN [num_of_writes] = 0
THEN 0 ELSE ([io_stall_write_ms] / [num_of_writes]) END,
[Latency] =
CASE WHEN ([num_of_reads] = 0 AND [num_of_writes] = 0)
THEN 0 ELSE ([io_stall] / ([num_of_reads] + [num_of_writes])) END,
-- avg bytes per IOP
[AvgBPerRead] =
CASE WHEN [num_of_reads] = 0
THEN 0 ELSE ([num_of_bytes_read] / [num_of_reads]) END,
[AvgBPerWrite] =
CASE WHEN [num_of_writes] = 0
THEN 0 ELSE ([num_of_bytes_written] / [num_of_writes]) END,
[AvgBPerTransfer] =
CASE WHEN ([num_of_reads] = 0 AND [num_of_writes] = 0)
THEN 0 ELSE
(([num_of_bytes_read] + [num_of_bytes_written]) /
([num_of_reads] + [num_of_writes])) END,
LEFT ([mf].[physical_name], 2) AS [Drive],
DB_NAME ([vfs].[database_id]) AS [DB],
-- [vfs].*,
[mf].[physical_name]
FROM
sys.dm_io_virtual_file_stats (NULL, NULL) AS [vfs]
JOIN sys.master_files AS [mf]
ON [vfs].[database_id] = [mf].[database_id]
AND [vfs].[file_id] = [mf].[file_id]
ORDER BY [Latency] DESC

How to find Top CPU intensive queries on a Database – SQL Server

SELECT TOP 20
SUBSTRING(qt.TEXT, (qs.statement_start_offset/2)+1,
((CASE qs.statement_end_offset
WHEN -1 THEN DATALENGTH(qt.TEXT)
ELSE qs.statement_end_offset
END - qs.statement_start_offset)/2)+1) AS query_text,
qs.execution_count,
qs.total_logical_reads, qs.last_logical_reads,
qs.total_logical_writes, qs.last_logical_writes,
qs.total_worker_time,
qs.last_worker_time,
qs.total_elapsed_time/1000000 total_elapsed_time_in_S,
qs.last_elapsed_time/1000000 last_elapsed_time_in_S,
qs.last_execution_time,
qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
ORDER BY qs.total_worker_time DESC — CPU time

Find IO statistics of a Database – SQL server

USE [Database_Name]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

-- Use CREATE PROCEDURE on first deployment, ALTER thereafter
ALTER PROCEDURE [dbo].[IO_stats]
as
;with IOS as (SELECT TOP 100
[Average IO] = (total_logical_reads + total_logical_writes) / qs.execution_count,
[Total IO] = (total_logical_reads + total_logical_writes),
[Execution count] = qs.execution_count,
qs.last_execution_time as [Time],
qs.total_rows as [Total_rows], qs.total_physical_reads as [Total_physical_reads], qs.last_physical_reads as [Last_physical_reads],
[Individual Query] = SUBSTRING (qt.text, qs.statement_start_offset/2,
(CASE WHEN qs.statement_end_offset = -1
THEN LEN (CONVERT (NVARCHAR (MAX), qt.text)) * 2
ELSE qs.statement_end_offset END - qs.statement_start_offset)/2),
[Parent Query] = qt.text,
qt.objectid as 'Object_ID',
DatabaseName = DB_NAME(qt.dbid)
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) as qt
where qt.dbid = DB_ID('Database_Name')
ORDER BY [Average IO] DESC)
ORDER BY [Average IO] DESC)

select P.Name as [Procedure Name],IOS.[Average IO],IOS.[Total IO],IOS.[Execution count],IOS.[Time],IOS.[Total_physical_reads],IOS.[Total_rows],IOS.[Individual Query],IOS.DatabaseName
from IOS left join sys.procedures P on IOS.Object_ID=P.object_id

Go
Exec IO_stats

Threshold values for Batch Requests, Compilations, and Re-Compilations

Object: – SQLServer:SQL Statistics
Counter: – Batch Requests/Sec
Preferred Value: – > 300
Description: – The higher this number, the better; it generally means your SQL Server can scale when needed. If you see peaks over 2000, you are on your way to an optimized box. I have worked with servers that never got over 150; then, after changing a critical process (for example, removing a scalar UDF from a computed column, or rewriting a looping process as set-based code), saw this number climb to between 500 and 2000.

Object: – SQLServer:SQL Statistics
Counter: – SQL Compilations/sec
Preferred Value: – < 10% of the number of Batch Requests / sec
Description: – Not a major indicator on its own. I would not worry too much about it, but be aware of it.

Object: – SQLServer:SQL Statistics
Counter: – SQL Re-Compilations/sec
Preferred Value: – < 10% of the number of SQL Compilations/sec
Description: – You don’t want recompilations, and the higher this number, the more heavily you are probably using temp tables. There is not much you can do about this except change the code.

These three SQL Statistics counters provide information about how frequently SQL Server is compiling or recompiling an execution plan, in relation to the number of batches being executed against the server. The higher the number of SQL Compilations/sec in relation to Batch Requests/sec, the more likely the SQL Server is experiencing an ad hoc workload that is not making optimal use of plan caching. The higher the number of SQL Re-Compilations/sec in relation to Batch Requests/sec, the more likely it is that an inefficiency in the code design is forcing a recompile of the code being executed. In either case, investigation of the plan cache, as detailed in the next section, should identify why the server has to consistently compile execution plans for the workload.
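Since these counters are cumulative rate counters (cntr_type 272696576, as described earlier), a quick sketch that approximates the compile-to-batch ratio from the DMV since the instance started:

```sql
-- Approximate compilation ratios since instance start
SELECT
    MAX(CASE WHEN counter_name = 'Batch Requests/sec' THEN cntr_value END) AS batch_requests,
    MAX(CASE WHEN counter_name = 'SQL Compilations/sec' THEN cntr_value END) AS compilations,
    MAX(CASE WHEN counter_name = 'SQL Re-Compilations/sec' THEN cntr_value END) AS recompilations,
    100.0 * MAX(CASE WHEN counter_name = 'SQL Compilations/sec' THEN cntr_value END)
          / NULLIF(MAX(CASE WHEN counter_name = 'Batch Requests/sec' THEN cntr_value END), 0)
      AS compile_pct_of_batches
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%SQL Statistics%'
  AND counter_name IN ('Batch Requests/sec', 'SQL Compilations/sec', 'SQL Re-Compilations/sec');
```

For an interval-based ratio, capture the three values twice and compute the same ratio over the deltas.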

Recompilation:
Once a plan is cached, there are situations where the optimizer detects that something has changed and has to compile the query plan again; this is called recompilation. Recompilation is generally caused by schema changes, statistics updates, or executing statements with recompile options.

There are various ways to capture recompiles. The easiest is to capture a Profiler trace: there are two events available, “SP:Recompile” and “SQL:StmtRecompile”.

Below is a query that lists the possible recompile reasons stored in the trace-related catalog views. Note that both events share exactly the same set of reasons:

SELECT sv.subclass_value, sv.subclass_name
FROM sys.trace_events AS e
JOIN sys.trace_subclass_values AS sv
  ON e.trace_event_id = sv.trace_event_id
WHERE e.name = 'SP:Recompile'
  AND sv.subclass_value < 1000
ORDER BY sv.subclass_value;

Here is the output from this query:

subclass_value | subclass_name
1  | Schema changed
2  | Statistics changed
3  | Deferred compile
4  | Set option change
5  | Temp table changed
6  | Remote rowset changed
7  | For browse permissions changed
8  | Query notification environment changed
9  | PartitionView changed
10 | Cursor options changed
11 | Option (recompile) requested
12 | Parameterized plan flushed
13 | Test plan linearization
14 | Plan affecting database version changed
These can be seen in the Profiler, under the “Event Subclass” column, whenever there is a recompile event.