Have you ever built an application that worked flawlessly during testing, only to slow down after deployment as the data volume in the database grew? Within a couple of months, the application performed so slowly that the developers had to start optimizing the database and, consequently, upgrading the application. When a database-driven application runs slowly, there is a probability as high as 90% that the data access routines of that application are not optimized. You should therefore review and optimize your data manipulation routines to improve overall application performance.
Microsoft SQL Server is an enterprise relational database management system with which a user can efficiently create and manage databases. It targets enterprise environments that require scalability, performance, stability, and security, and it follows a simple syntax for retrieving data. There are many aspects to the design and many parameters to consider when building a well-performing SQL database, because not every database feature performs with the same effectiveness in every scenario. Maintaining database performance can be an incredibly difficult task, especially when working with large-scale data, where even a minor change can dramatically affect performance.
The SQL optimizer relies on the indexes defined for a specific table. Indexes are a double-edged sword: having no index will reduce the performance of your SELECT statements, while having too many indexes will slow down your DML (INSERT, UPDATE, and DELETE) queries. It is therefore vital to strike the right balance of indexes on your tables. Beyond the number of indexes, the fields they include and the order of those fields are also important.
Indexing should be the first step in the performance optimization process for a database, but indexes must be applied deliberately, with the actual query workload in mind.
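As a minimal sketch of applying an index to match a workload, assuming a hypothetical `dbo.Orders` table (the table and column names are illustrative, not from the article):

```sql
-- Hypothetical table queried frequently by CustomerID, sorted by OrderDate.
-- Column order matters: the equality-filter column comes first.
-- INCLUDE adds TotalAmount to the leaf level so the common query
-- can be answered from the index alone (a "covering" index).
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_OrderDate
    ON dbo.Orders (CustomerID, OrderDate)
    INCLUDE (TotalAmount);
```

A query such as `SELECT OrderDate, TotalAmount FROM dbo.Orders WHERE CustomerID = 42 ORDER BY OrderDate` could then be satisfied entirely from this index, without touching the base table.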
A correlated subquery is one that uses values from the outer (parent) query. This kind of query is evaluated row by row: the subquery runs once for every row returned by the outer query, which can make it expensive on large tables. In such queries, table aliases must be used to make clear which table each column reference belongs to. By contrast, a non-correlated subquery can run quickly, because the database does not have to evaluate it multiple times.
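A short sketch of a correlated subquery, again assuming the hypothetical `dbo.Orders` table; note how the aliases `o` and `o2` make the correlation explicit:

```sql
-- Correlated subquery: the inner query references o.CustomerID from the
-- outer query, so it is logically re-evaluated for each outer row.
-- Finds each customer's largest order.
SELECT o.OrderID, o.CustomerID, o.TotalAmount
FROM dbo.Orders AS o
WHERE o.TotalAmount = (SELECT MAX(o2.TotalAmount)
                       FROM dbo.Orders AS o2
                       WHERE o2.CustomerID = o.CustomerID);
```

On a large table, the same result can often be obtained more cheaply with a windowed query or a join against a pre-aggregated derived table.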
Temporary tables usually add to a query's complexity. If your logic can be written as a single, simple query, then avoid temp tables. But if you have a stored procedure with data manipulation that can't be handled in a single query, you can use temp tables as intermediaries to help build the final result.
When you have to join large tables and there are conditions on those tables, you can improve database performance by moving the relevant data into a temp table and then joining on that. The temp table will have fewer rows and columns than the original table, so the join will complete faster!
Finally, when you're done with your temp table, delete it to free tempdb resources.
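The pattern above can be sketched as follows, with hypothetical `dbo.Orders` and `dbo.Customers` tables:

```sql
-- Stage only the rows and columns needed for the join into a temp table.
SELECT CustomerID, OrderDate, TotalAmount
INTO #RecentOrders
FROM dbo.Orders
WHERE OrderDate >= DATEADD(MONTH, -1, GETDATE());

-- Join against the much smaller staged set.
SELECT c.CustomerName, r.OrderDate, r.TotalAmount
FROM dbo.Customers AS c
JOIN #RecentOrders AS r
    ON r.CustomerID = c.CustomerID;

-- Free tempdb resources as soon as the temp table is no longer needed.
DROP TABLE #RecentOrders;
```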
The benefit of a JOIN is that the engine can evaluate the tables in a different order from the one specified in the query. The benefit of a subquery is that, most of the time, it is not necessary to scan all rows of the subquery to evaluate the subquery expression.
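The "not all rows need to be scanned" point is easiest to see with `EXISTS`, which can stop as soon as one matching row is found. A sketch, using the same hypothetical tables:

```sql
-- Semi-join: for each customer, the subquery can short-circuit
-- after finding the first matching order.
SELECT c.CustomerID, c.CustomerName
FROM dbo.Customers AS c
WHERE EXISTS (SELECT 1
              FROM dbo.Orders AS o
              WHERE o.CustomerID = c.CustomerID);
```

An equivalent `JOIN ... DISTINCT` version would have to consider every matching order row before de-duplicating, which the optimizer may or may not be able to avoid.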
The buffer cache provides memory for cached pages and reduces the need to perform physical I/O. SQL Server reads pages from the buffer cache, not directly from disk; if a page is not in the cache, I/O is queued to retrieve the data from disk. Using the buffer cache effectively helps optimize physical I/O, since multiple pages can be read and written in one operation while the next operation proceeds.
Cache thrashing is frequently the result of large tables or indexes being scanned. To quickly determine which tables are taking up the most space in the buffer pool, examine the buffer descriptors or use the index DMVs.
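One way to inspect the buffer descriptors is via the `sys.dm_os_buffer_descriptors` DMV. A rough sketch (it counts in-row and row-overflow pages in the current database; LOB allocation units are joined differently and are ignored here):

```sql
-- Approximate buffer-pool usage per object in the current database.
-- Each buffered page is 8 KB.
SELECT OBJECT_NAME(p.object_id)    AS table_name,
       COUNT(*) * 8 / 1024        AS buffer_mb
FROM sys.dm_os_buffer_descriptors AS b
JOIN sys.allocation_units AS a
    ON a.allocation_unit_id = b.allocation_unit_id
JOIN sys.partitions AS p
    ON p.hobt_id = a.container_id
WHERE b.database_id = DB_ID()
GROUP BY p.object_id
ORDER BY buffer_mb DESC;
```

Tables or indexes dominating this list are the first candidates for the indexing and partitioning techniques discussed elsewhere in this article.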
To improve the performance of SQL queries, developers use query optimization techniques. Query optimization involves knowledge of techniques such as cost-based and heuristic optimizers; in addition, the SQL platform provides tools that can display a query's execution plan. The best approach to performance is to write your queries in different ways and compare their execution plans.
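A common way to compare two phrasings of the same query in SQL Server is to turn on per-statement I/O and timing statistics (the queries below reuse the hypothetical tables from earlier examples):

```sql
-- Report logical reads and elapsed time for each statement that follows.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Two logically equivalent phrasings; compare their reads and plans.
SELECT c.CustomerName
FROM dbo.Customers AS c
WHERE c.CustomerID IN (SELECT o.CustomerID FROM dbo.Orders AS o);

SELECT DISTINCT c.CustomerName
FROM dbo.Customers AS c
JOIN dbo.Orders AS o
    ON o.CustomerID = c.CustomerID;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```

The "Messages" tab in SQL Server Management Studio then shows logical reads per table for each version, and the graphical "Actual Execution Plan" option shows how the optimizer handled each one.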
You usually have to run large numbers of read operations against the database. If, after applying all the optimization techniques so far, some of your data retrieval operations are still not performing efficiently, you should consider applying some form of de-normalization. De-normalizing means making a trade-off between data redundancy and SELECT performance, so apply it (when required) only to the key tables involved in expensive data access routines.
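As a sketch of the trade-off, assuming the hypothetical tables from earlier: copying `CustomerName` onto `dbo.Orders` lets a hot reporting query skip the join, at the cost of redundant data that must be kept in sync.

```sql
-- Denormalization: store the customer's name redundantly on each order.
ALTER TABLE dbo.Orders ADD CustomerName NVARCHAR(200) NULL;

-- Backfill the redundant column from the normalized source.
UPDATE o
SET o.CustomerName = c.CustomerName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c
    ON c.CustomerID = o.CustomerID;
```

The redundant column then has to be maintained (via a trigger or application logic) whenever a customer's name changes; that maintenance cost is exactly the trade-off being made.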
Using a computed column to do the entire calculation in the back-end can be costly if the table contains a large number of rows. You can use various indexing techniques to improve performance on computed columns.
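A minimal sketch of an indexed computed column, on the same hypothetical `dbo.Orders` table (the tax rate is illustrative):

```sql
-- PERSISTED stores the computed value on disk instead of recalculating
-- it on every read, and allows it to be indexed.
ALTER TABLE dbo.Orders
    ADD TotalWithTax AS (TotalAmount * 1.08) PERSISTED;

-- Queries filtering or sorting on TotalWithTax can now seek the index.
CREATE NONCLUSTERED INDEX IX_Orders_TotalWithTax
    ON dbo.Orders (TotalWithTax);
```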
Create “Indexed Views”: Creating indexes on views gives you a performance boost, but when data in the base table changes, the database engine must update the index as well. So consider creating indexed views when the view has to process a large number of rows with aggregate functions, and when the data in the base table does not change frequently.
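A sketch of an indexed view over the hypothetical `dbo.Orders` table; SQL Server requires `WITH SCHEMABINDING` and a `COUNT_BIG(*)` column in an aggregated view, and the first index on a view must be unique and clustered:

```sql
-- Materialized per-customer totals.
CREATE VIEW dbo.vOrderTotals
WITH SCHEMABINDING
AS
SELECT CustomerID,
       SUM(TotalAmount) AS TotalSpent,
       COUNT_BIG(*)     AS OrderCount
FROM dbo.Orders
GROUP BY CustomerID;
GO

-- This unique clustered index is what actually materializes the view.
CREATE UNIQUE CLUSTERED INDEX IX_vOrderTotals
    ON dbo.vOrderTotals (CustomerID);
```

After this, aggregate queries over orders by customer can read the small materialized result instead of re-scanning the base table, at the cost of extra work on every INSERT/UPDATE/DELETE against `dbo.Orders`.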
Create indexes on User Defined Functions (UDFs): This gives a tremendous performance benefit when you incorporate the UDF in a query, particularly if you use UDFs in the join conditions between different tables or views.
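In SQL Server, a UDF result is typically indexed by wrapping it in a persisted computed column; the function must be deterministic and schema-bound. A sketch (the function and category logic are hypothetical):

```sql
-- Deterministic, schema-bound scalar UDF.
CREATE FUNCTION dbo.fnOrderCategory (@amount MONEY)
RETURNS VARCHAR(10)
WITH SCHEMABINDING
AS
BEGIN
    RETURN CASE WHEN @amount >= 1000 THEN 'large' ELSE 'small' END;
END;
GO

-- Persist the UDF result as a computed column, then index it.
ALTER TABLE dbo.Orders
    ADD OrderCategory AS dbo.fnOrderCategory(TotalAmount) PERSISTED;

CREATE NONCLUSTERED INDEX IX_Orders_OrderCategory
    ON dbo.Orders (OrderCategory);
```

Queries filtering on `dbo.fnOrderCategory(TotalAmount)` (or on `OrderCategory` directly) can then use the index instead of invoking the function per row.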
When a SQL Server database is created, the database server internally creates several files in the file system. Every database object that is created later is actually stored inside these files.
It has the following three kinds of files: a primary data file (.mdf), optional secondary data files (.ndf), and transaction log files (.ldf).
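The file layout can be made explicit at creation time. A sketch, with hypothetical drive paths and sizes:

```sql
-- Primary data file (.mdf), a secondary data file (.ndf) on another
-- drive, and the transaction log (.ldf) on a third.
CREATE DATABASE SalesDb
ON PRIMARY
    (NAME = SalesDb_data,  FILENAME = 'D:\Data\SalesDb.mdf',  SIZE = 512MB),
    (NAME = SalesDb_data2, FILENAME = 'E:\Data\SalesDb2.ndf', SIZE = 512MB)
LOG ON
    (NAME = SalesDb_log,   FILENAME = 'F:\Log\SalesDb.ldf',   SIZE = 256MB);
```

Spreading data and log files across separate physical drives is a common way to reduce I/O contention.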
Table partitioning means splitting a large table into several smaller tables so that queries have to examine less data when retrieving rows. When you have a large table in your database and queries against it are running slowly, you should consider partitioning the table to improve performance by dividing the data into smaller, more manageable sections.
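In SQL Server this is done with a partition function and a partition scheme. A sketch that partitions the hypothetical orders table by year (boundary dates and filegroup placement are illustrative):

```sql
-- Boundary values split rows into ranges: before 2023, 2023, and 2024+.
CREATE PARTITION FUNCTION pfOrderYear (DATE)
    AS RANGE RIGHT FOR VALUES ('2023-01-01', '2024-01-01');

-- Map every partition to a filegroup (all to PRIMARY here for simplicity).
CREATE PARTITION SCHEME psOrderYear
    AS PARTITION pfOrderYear ALL TO ([PRIMARY]);

-- Create the table on the partition scheme, keyed by OrderDate.
CREATE TABLE dbo.OrdersPartitioned
(
    OrderID     INT IDENTITY NOT NULL,
    OrderDate   DATE         NOT NULL,
    TotalAmount MONEY        NULL
)
ON psOrderYear (OrderDate);
```

A query filtered on `OrderDate` can then be satisfied by scanning only the partitions whose range matches the predicate (partition elimination), instead of the whole table.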
It is very important to have skilled .NET developers design an efficient, effective database structure and optimize it from day one, so that you don't run into scalability issues in the long run.
WRITTEN BY: Parth Patel
Parth Patel is a Microsoft Certified Solution Associate (MCSA) and DotNet Team Lead at CMARIX, a leading Asp.net mvc development company. With 10+ years of…