Query performance

Often, the first step to improve your SQL Server query performance is to add indexes to tables.

This is usually true for bulk inserts as well. A relevant question I discussed with Kevin in Shop Talk: "Given a new stored procedure that inserts batches of rows into a table, I am seeing worse database performance. What can I do about this? The table is a heap. Will a unique clustered index help?"

Tibor Karaszi's SQL Server blog discusses this question: http://sqlblog.karaszi.com/are-inserts-quicker-to-heap-or-clustered-tables

Note: An index improves query performance only if it is up-to-date and selected by SQL Server to resolve queries. Therefore, you should periodically review and optimize indexes as part of your regular routine.

SQL Server uses statistics to determine a column's cardinality, which is the number of rows containing a specific value in a column. For example, when a value is unique in a column, such as a primary key, the cardinality is 1. When a column is highly unique like this, it is commonly characterized as having high selectivity or high cardinality. Conversely, when the number of unique values in a column is small or zero, the column has low selectivity and low cardinality.

This information influences the query optimizer’s selection of an appropriate index as well as an efficient query plan. If the wrong index or query plan is selected, or if an index is missing, a query might not execute as optimally as possible. Therefore, the accuracy of the statistics is critical to query performance.
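Because stale statistics can mislead the query optimizer, it is worth knowing how to inspect and refresh them. The following is a minimal sketch using the documented `DBCC SHOW_STATISTICS`, `UPDATE STATISTICS`, and `STATS_DATE` commands; the table and index names (`dbo.Orders`, `IX_Orders_CustomerID`) are illustrative assumptions, not objects from this text.

```sql
-- Inspect the histogram and density for a hypothetical index
-- IX_Orders_CustomerID on a table dbo.Orders (names illustrative).
DBCC SHOW_STATISTICS ('dbo.Orders', IX_Orders_CustomerID);

-- Refresh the statistics with a full scan if they appear stale.
UPDATE STATISTICS dbo.Orders IX_Orders_CustomerID WITH FULLSCAN;

-- Check when statistics were last updated for every statistic on the table.
SELECT s.name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.Orders');
```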

If indexes are consuming resources without helping query performance, you should consider dropping them if further investigation reveals that there is no need to continue maintaining them.

When similar indexes exist, query performance itself may be fine, but SQL Server has an additional index to manage and requires more disk space for the database. This situation can occur when you create new indexes based on the DMVs for missing indexes or follow the recommendations from the Database Engine Tuning Advisor, because these sources do not always account for existing indexes. Therefore, when using the DMVs or the Database Engine Tuning Advisor to identify missing indexes, consider simply adding a new column to an existing index, either as another key or as an included column. When you do encounter overlapping indexes, drop one of them so that database maintenance tasks run faster and less storage is required.
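As a sketch of this workflow, the missing-index DMVs named above can be queried directly before creating anything new, and an existing index can be widened with `DROP_EXISTING` rather than duplicated. The DMV and catalog names are documented SQL Server objects; the table, index, and column names are illustrative assumptions.

```sql
-- Review SQL Server's missing-index suggestions and compare each one
-- against existing indexes before acting on it.
SELECT d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       gs.user_seeks,
       gs.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
    ON d.index_handle = g.index_handle
JOIN sys.dm_db_missing_index_group_stats AS gs
    ON g.index_group_handle = gs.group_handle
ORDER BY gs.avg_user_impact DESC;

-- If an existing index already covers most of a suggestion, extend it
-- in place instead of adding a near-duplicate (names illustrative).
CREATE INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID)
    INCLUDE (OrderDate)
    WITH (DROP_EXISTING = ON);
```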

 

Beginning with SQL Server 2016, you can use Query Store to review SQL Server's selection of query plans for a query over time. For Azure SQL Database implementations, Query Performance Insight provides visibility into query performance in the cloud.

(Although you use the Azure portal to view the information available from Query Performance Insight, you must use SQL Server Management Studio to first enable Query Store on a SQL Database.)
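Enabling Query Store is a one-statement operation. This sketch uses the documented `ALTER DATABASE ... SET QUERY_STORE` syntax and the `sys.database_query_store_options` catalog view; run it in the context of the target database.

```sql
-- Enable Query Store on the current database
-- (SQL Server 2016 or Azure SQL Database).
ALTER DATABASE CURRENT
SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);

-- Verify the resulting configuration.
SELECT actual_state_desc, readonly_reason
FROM sys.database_query_store_options;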

Query Performance Insight is an Azure SQL Database feature that graphically shows you which queries are consuming the most CPU, memory, IO, and DTU resources over time.

 

Not only can a query plan tell you the specific steps performed by SQL Server during query execution, it can also help you discover which step in the sequence is performing poorly. Each step in the query plan is a separate operation performed by the database engine and is represented as an icon known as an operator. As you analyze a graphical query plan, you should check for the following conditions that can affect query performance.

If this table were not memory-optimized and you were investigating poor query performance, the presence of the Table Scan operator should lead you to consider adding a clustered index to the table or looking for ways to filter the query to return fewer rows.

Another potential problem for query performance is the use of the Clustered Index Scan operator. It is similar in concept to the Table Scan operator in that SQL Server must scan all the data. This might occur when the query must return so many rows that using the index to find specific rows is no longer advantageous, index selectivity is too low, or statistics are obsolete.

An Index Scan operator can indicate the source of poor query performance and merits further investigation to determine whether you can modify the query by adding a WHERE clause to return only the needed rows, as long as a proper index exists for the column used in the filter. Importantly, the WHERE clause must have a predicate that includes a column on one side of the operator and an expression on the other side, with both the column and the expression having the same data type. If you use the LIKE operator in the WHERE clause, you cannot use a wildcard as the first character in the search string.
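A few contrasting predicates make these rules concrete. This is a sketch; the tables, columns, and indexes (`dbo.Orders`, `dbo.Customers`, and so on) are illustrative assumptions, with an index presumed to exist on each filtered column.

```sql
-- Sargable: the column stands alone on one side of the predicate,
-- so an index on OrderDate can support a seek.
SELECT OrderID FROM dbo.Orders
WHERE OrderDate >= '2016-01-01';

-- Not sargable: wrapping the column in a function prevents an index seek.
SELECT OrderID FROM dbo.Orders
WHERE YEAR(OrderDate) = 2016;

-- Sargable LIKE: a trailing wildcard still allows an index seek.
SELECT CustomerID FROM dbo.Customers
WHERE LastName LIKE 'Kar%';

-- Not sargable: a leading wildcard forces a scan.
SELECT CustomerID FROM dbo.Customers
WHERE LastName LIKE '%aszi';
```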

Query performance suffers when a sort must use tempdb instead of memory. Use the tooltip for the SELECT operator to check the Memory Grant property, which shows how much memory SQL Server is allocating to the query. In SQL Server 2016, you can add a query hint to request a minimum memory grant size as a percentage of the default limit, overriding the minimum memory per query property that is set on the server.
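The hint in question is the documented `MIN_GRANT_PERCENT` query hint; the query itself and its table are illustrative assumptions.

```sql
-- Request a minimum memory grant of 10 percent of the configured limit
-- so the sort is less likely to spill to tempdb (table name illustrative).
SELECT OrderID, OrderDate
FROM dbo.Orders
ORDER BY OrderDate
OPTION (MIN_GRANT_PERCENT = 10);
```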

Taken together, your options for improving the query performance based on this query plan include adding or revising indexes, filtering the data by using a WHERE clause, or fixing a WHERE clause that prevents the query optimizer from using an existing index.

 

Extended Events is a robust, yet lightweight tracing infrastructure that you can use to monitor and analyze SQL Server activity by collecting as much or as little information as you need. At minimum, you can use Extended Events for any diagnostic task that you can perform by using SQL Trace, but it can do so much more. In particular, Extended Events offers greater flexibility because you can filter events with more granularity.

One of the important functions that Extended Events supports is query performance diagnostics. It helps you find historical deadlocks or queries that did not complete, troubleshoot waits for a particular session or query, capture queries that match a specific pattern, and get query plans, to name a few. You can also count the number of occurrences of a specific event to determine whether a problem recurs frequently.
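As a sketch of the kind of granular filtering described above, the following session captures completed statements slower than one second. The `CREATE EVENT SESSION` syntax, the `sqlserver.sql_statement_completed` event, and the `package0.event_file` target are documented Extended Events objects; the session name, threshold, and file name are illustrative choices.

```sql
-- Capture completed statements that run longer than one second.
CREATE EVENT SESSION LongRunningQueries ON SERVER
ADD EVENT sqlserver.sql_statement_completed (
    ACTION (sqlserver.sql_text, sqlserver.session_id)
    WHERE (duration > 1000000))     -- duration is in microseconds
ADD TARGET package0.event_file (SET filename = N'LongRunningQueries.xel');

-- Start collecting.
ALTER EVENT SESSION LongRunningQueries ON SERVER STATE = START;
```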

Besides query performance diagnostics, you should also be familiar with the following best-practice use cases for Extended Events: system health, resource utilization monitoring and troubleshooting, and security audits.

 

One aspect of query performance is the efficiency of your storage and IO subsystems, which you can optimize by managing file placement for system, data, and log files. Use filegroups as separate containers that can be spread across separate disks, and optionally use secondary data files and partitioning to improve data access.
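A minimal sketch of that file placement: add a filegroup, put a secondary data file for it on a separate disk, and create a table on it. The `ALTER DATABASE` and `CREATE TABLE ... ON` syntax is standard; the database name, path, and sizes are illustrative assumptions.

```sql
-- Create a filegroup backed by a secondary data file on another disk
-- (database name, path, and sizes are illustrative).
ALTER DATABASE Sales ADD FILEGROUP FG_Archive;

ALTER DATABASE Sales
ADD FILE (
    NAME = N'Sales_Archive1',
    FILENAME = N'E:\SQLData\Sales_Archive1.ndf',
    SIZE = 512MB,
    FILEGROWTH = 128MB)
TO FILEGROUP FG_Archive;

-- New tables can then be placed on that filegroup.
CREATE TABLE dbo.OrderArchive (OrderID int NOT NULL) ON FG_Archive;
```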

To further optimize query performance, you can implement natively compiled stored procedures as long as the stored procedure accesses memory-optimized tables only. A natively compiled stored procedure is a stored procedure compiled into machine language for faster execution, lower latency, and lower CPU utilization.
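A natively compiled procedure must be schema-bound and wrap its body in an atomic block, as in this sketch. The `NATIVE_COMPILATION`, `SCHEMABINDING`, and `BEGIN ATOMIC` options are the documented requirements; the procedure, table, and columns (`dbo.SensorReadings` is assumed to be an existing memory-optimized table) are illustrative.

```sql
-- A natively compiled procedure inserting into an assumed
-- memory-optimized table dbo.SensorReadings (names illustrative).
CREATE PROCEDURE dbo.usp_InsertReading
    @SensorID int,
    @ReadingValue float
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH (
    TRANSACTION ISOLATION LEVEL = SNAPSHOT,
    LANGUAGE = N'us_english')

    INSERT INTO dbo.SensorReadings (SensorID, ReadingValue, ReadingTime)
    VALUES (@SensorID, @ReadingValue, SYSDATETIME());
END;
```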

In this scenario, the database engine must process a high number of inserts, either as a steady stream or in bursts. Bottlenecks from locking and latching are a common problem here. Furthermore, last-page contention can occur when many threads attempt to access the same page in a standard B-tree, and indexes intended to improve query performance add overhead to insert operations. Performance is often measured in terms of throughput rate, or the number of rows loaded per second.

A common scenario for this workload is the Internet of Things, in which multiple sensors send data to SQL Server. Other examples of applications producing data at a high rate include financial trading, manufacturing, telemetry, and security monitoring.

Whereas disk-based tables can have difficulty managing the rate of inserts, memory-optimized tables can eliminate resource contention and reduce logging. In some cases, requirements permitting, you can further reduce transaction execution time by implementing delayed durability, which we describe in greater detail in the next section.
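A table suited to this workload can be sketched as follows, using the documented `MEMORY_OPTIMIZED` and `DURABILITY` options. It assumes the database already has a `MEMORY_OPTIMIZED_DATA` filegroup; the table and column names are illustrative.

```sql
-- A durable memory-optimized table for a high-rate insert workload
-- (assumes a MEMORY_OPTIMIZED_DATA filegroup exists; names illustrative).
CREATE TABLE dbo.SensorReadings (
    ReadingID     bigint IDENTITY PRIMARY KEY NONCLUSTERED,
    SensorID      int NOT NULL,
    ReadingValue  float NOT NULL,
    ReadingTime   datetime2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```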

 

