Thursday, January 31, 2013

Transaction Log in Sybase ASE


A database in Sybase ASE has three segments by default - the system segment (for system tables), the default segment (to store the actual data pages of user tables), and the log segment (to store log pages). ASE uses a write-ahead log, i.e. before any transaction is applied to the database, it is written to a table called syslogs on the log segment, to ensure recovery in the event of a failure.
At a high level, the transaction log has two portions - an active portion and an inactive portion. The inactive portion contains only committed and rolled-back transactions, and it is the only portion that can be truncated.
The active portion contains open transactions in addition to committed and rolled-back ones.
There is also a marker in the transaction log called the truncation point, which marks the end of the inactive portion and the beginning of the active portion. (This marker sits just 1-2 pages behind the end of the inactive portion.) Refer to the figure below for further clarification -

To understand logging in the transaction log further, let's assume that the following command is issued against an ASE database -
begin tran
INSERT INTO TABS VALUES ('1','ARDENTDBA','INDIA')
commit
The following high-level logical process happens in the background -
  •     The begin tran is logged in the syslogs table of the database
  •     The relevant extent of table TABS is brought into cache by ASE
  •     The insert statement is logged in the syslogs table
  •     The identified page of table TABS in cache is written with the new row's data
  •     The commit statement is logged in the syslogs table
  •     The page of TABS that received the new row is marked as a dirty page
  •     The dirty page is written to disk at the next checkpoint.
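Because every step above adds records to syslogs, it is worth knowing how to check how much of the log segment they consume. A minimal sketch using standard ASE system procedures (the database name mydb is a placeholder):

```sql
-- Check transaction log usage in a hypothetical database 'mydb'
use mydb
go
sp_spaceused syslogs        -- pages used by the transaction log
go
sp_helpsegment logsegment   -- size and free space of the log segment
go
```

sp_spaceused on syslogs reports only the log table, which is why it is the usual quick check for a filling log.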

Because the transaction log stores all the transactions run against a database, entries accumulate over time, which in turn starts filling the log segment of the database. As a DBA, it is your responsibility to manage this table so that the log segment doesn't fill up and bring your database to a standstill. Depending on the criticality of the database, you can choose to archive without truncating, archive and truncate, or simply truncate the inactive portion of the transaction log. For mission-critical databases, where we need to ensure full recovery of data, we archive and truncate the transaction log. For dev/test environments where the data is not very critical, we simply truncate it. Archive without truncating is used in the special case of recovering from a database device failure.
The dump transaction command in Sybase ASE is used to manage the transaction log of a database. For example,
  • To back up the transaction log to a file, issue the following command
    • dump transaction <databasename> to <filename>
  • To truncate the transaction log without backing it up, issue the following command
    • dump transaction <databasename> with truncate_only
  • To archive the transaction log without truncating it (for example, after a data device failure), use the command below
    • dump transaction <databasename> with no_truncate
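A worked example of the first form, plus the option used for non-critical databases (the database name salesdb and the dump path are placeholders):

```sql
-- Back up the transaction log of a hypothetical database 'salesdb'
dump transaction salesdb to '/backups/salesdb_tran.dmp'
go

-- For dev/test databases, ASE can instead truncate the log
-- automatically at every checkpoint (run from master):
sp_dboption salesdb, 'trunc log on chkpt', true
go
```

Note that once 'trunc log on chkpt' is enabled, dumping the log to a file is no longer permitted - the option trades away point-in-time recovery for hands-off log management, so it belongs only on non-critical databases.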

Ardent-DBA

SQL Server - Quick Query Optimization Tips based on Execution Plan


1.    If the query engine selects a merge join, both inputs must be sorted on the join key; a hash join, by contrast, does not require sorted input. If a clustered index is built on the join key, the retrieved data is already sorted, so the plan avoids an explicit sort step.
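A sketch of the idea, using hypothetical Orders/Customers tables joined on CustomerID:

```sql
-- With a clustered index on the join key, a merge join can read
-- both inputs already in CustomerID order, with no Sort operator.
CREATE CLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID);

SELECT c.CustomerID, o.OrderDate
FROM dbo.Customers AS c
JOIN dbo.Orders AS o
    ON o.CustomerID = c.CustomerID;
```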

2.    Always look at the query plan first. It will show you the optimal current execution plan from the query engine's point of view. Find the most expensive part of the execution plan and start optimizing from there. However, even before that, make sure that the statistics on all tables in your query are up to date, by running the update statistics <TableName> command on all tables in your query.

3.    If you see clustered index scan, find out whether it can be replaced with index seek. For that, find what conditions are applied to this table. Find out the most selective condition (that is, the condition that would produce the smallest number of records if applied alone), and see whether an index on this field exists. Any index that lists this field first will qualify. If there is no such index, create it and see whether the query engine picks it up.
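For instance, suppose the plan shows a Clustered Index Scan on a hypothetical dbo.Orders table and the most selective condition is on OrderDate. An index that lists that column first lets the engine seek instead of scan:

```sql
-- Index keyed on the most selective predicate column
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
    ON dbo.Orders (OrderDate);

SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE OrderDate = '2013-01-31';   -- now an Index Seek, not a scan
```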

4.    If you see bookmark lookup, it means that your index is not covering. Try to make it covering if it makes sense (see the preceding guidelines).       
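On SQL Server 2005 and later, the INCLUDE clause is the usual way to make an index covering without widening the key (table and column names here are hypothetical):

```sql
-- Key column drives the seek; INCLUDE carries the remaining columns
-- from the select list, so no bookmark/key lookup is needed.
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate_Covering
    ON dbo.Orders (OrderDate)
    INCLUDE (CustomerID, TotalDue);
```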

5.    Avoid any operations on the fields, where possible. Some operations will prevent the use of the index on this field even if it exists—for example, the infamous ltrim(rtrim(FieldName)); other operations will degrade the performance. For example, instead of using the condition cast(DateField as varchar(20)) = @dateString, try to convert @dateString to an expression of datetime type first, and then compare it to DateField.
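A before/after sketch of that date example (column and table names are hypothetical):

```sql
-- Non-sargable: the cast on DateField blocks any index on it
-- WHERE cast(DateField as varchar(20)) = @dateString

-- Sargable rewrite: convert the parameter once, compare natively
DECLARE @dateString varchar(20) = '2013-01-31';
DECLARE @dateValue  datetime    = CONVERT(datetime, @dateString);

SELECT OrderID
FROM dbo.Orders
WHERE DateField = @dateValue;   -- an index on DateField is usable
```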

6.    When an operation on the field cannot be avoided, build an index on the expression itself. This can be done in two ways:
  • Create a computed column based on the expression, and index it.
  • Create a view, and build an index on it.
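The first option, sketched against a hypothetical dbo.Customers table (the expression must be deterministic to be indexable):

```sql
-- Computed column over the troublesome expression, then an index on it
ALTER TABLE dbo.Customers
    ADD NameTrimmed AS LTRIM(RTRIM(Name));

CREATE INDEX IX_Customers_NameTrimmed
    ON dbo.Customers (NameTrimmed);
```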


7.    Indexed views are a good way to further speed up a query if you are not satisfied with the results. An indexed view is a view materialized by a unique clustered index built over its select list. You can also define additional indexes on an indexed view, just as you can for any regular table. Indexed views take disk space and involve some maintenance overhead (every time the underlying tables change, the indexed view also has to change), but they usually provide a good boost in performance, even after all other optimization techniques are exhausted.
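A minimal indexed-view sketch over the hypothetical dbo.Orders table. SCHEMABINDING is required, the first index must be a unique clustered index, and an aggregate view must carry COUNT_BIG(*):

```sql
CREATE VIEW dbo.vOrderTotals
WITH SCHEMABINDING
AS
SELECT CustomerID,
       COUNT_BIG(*)  AS OrderCount,   -- required for indexed aggregate views
       SUM(TotalDue) AS TotalSpent    -- assumes TotalDue is NOT NULL
FROM dbo.Orders
GROUP BY CustomerID;
GO

-- Materializes the view: the first index must be unique and clustered
CREATE UNIQUE CLUSTERED INDEX IX_vOrderTotals
    ON dbo.vOrderTotals (CustomerID);
```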
8.    Check for deadlocks: open a new query editor window and run the commands below to report all deadlocks as they happen. Trace flag 1204 returns the resources and types of locks participating in a deadlock, along with the current command affected.
dbcc traceon(3604)
go
dbcc traceon(1204)
go

A better option on SQL Server 2005 and later is

DBCC TRACEON (1222, 3605, -1)

1222 – detailed deadlock report
3605 – output to the errorlog
-1 – enables the flags at the global level rather than for the current session only

9.    In an execution plan, if there is a large difference between the estimated and actual row counts, the statistics are likely stale or missing.
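When estimates and actuals diverge, refreshing the statistics on the affected table is the first fix to try (dbo.Orders is a placeholder name):

```sql
-- FULLSCAN reads every row rather than a sample, giving the
-- optimizer the most accurate distribution statistics.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
```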


Ardent-DBA

Monday, January 28, 2013

SQL Server High Level Architecture


SQL Server architecture can be broadly split into the following four layers, and every query submitted to SQL Server passes through each of them.

External Protocols - The client sends a query to SQL Server, and this component converts it into a form the relational engine understands. After the relational engine handles the query, the result set/error/warning/exception is sent back to the client through this component.

Relational Engine - It parses, compiles, optimizes, and executes the query it gets from the protocol layer. While executing the query, it interacts with the storage engine to read/write data.

Storage Engine - The storage engine manages data access, both through transactions and through bulk operations such as BCP, backups, restores, and DBCC commands.

SQL OS - This layer of SQL Server interacts with the OS and handles operating-system-like tasks: thread management, memory management, deadlock detection, buffer pool management, etc.



Ardent-DBA