Wednesday, May 29, 2013

SYBASE ASE - IN MEMORY DATABASE


An In-Memory Database (IMDB) is a memory-resident database introduced in Sybase ASE 15.5. It provides much better performance by eliminating physical I/O, trading away durability in return. The database is created on memory-resident devices, which in turn are created on a named cache. A named cache created for an IMDB can be used by only that one IMDB; it cannot be shared with any other IMDB or used as a regular cache, as it is structurally different from a normal cache.

Steps to Create IMDB

1. Create a named cache for IMDB using sp_cacheconfig with "inmemory_storage" parameter as shown below
                sp_cacheconfig imdb_dba_cache, '50M', inmemory_storage
                Memory allocated for the imdb_dba_cache cachelet 1: 51200 Kb
                The change is completed. The option is dynamic and ASE need not be rebooted for
                the change to take effect.
                (return status = 0)

Note that the above command will fail if the available free memory is less than 2 GB.

2. Ensure that this is reflected in the configuration file:
                cat $SYBASE/$SYBASE_ASE/ASE1.cfg
                [Named Cache:imdb_dba_cache]
                cache size = 50M
                cache status = in-memory storage cache
                cache replacement policy = none
                local cache partition number = DEFAULT

3. Create a device on this cache using disk init. Here, physname should be the name of the cache created above.
                1> disk init name = 'imdb_dba_data_dev',physname = 'imdb_dba_cache',size = '10M', type = "inmemory"
                2> go
                00:00:00000:00012:2013/05/29 13:10:23.27 server  Initializing the Virtual Cache Device 'imdb_dba_data_dev' from Cache 'imdb_dba_cache'
                00:00:00000:00012:2013/05/29 13:10:23.27 kernel  Virtual device 10 started using cache device i/o.
                Virtual cache device 'imdb_dba_data_dev' of size = 10 MB has been created on
                cache 'imdb_dba_cache'.

4. Create an inmemory database on the devices created above using create inmemory database command -

                1> create inmemory database dba_imdb1
                2> on imdb_dba_data_dev = '6M'
                3> log on imdb_dba_data_dev = '2M'
                4> with override, durability = no_recovery
                5> go
                00:00:00000:00012:2013/05/29 13:21:57.94 server  Timestamp for database 'dba_imdb1' is (0x0000, 0x0000162d).
                CREATE DATABASE: allocating 3072 logical pages (6.0 megabytes) on disk
                'imdb_dba_data_dev' (3072 logical pages requested).
                CREATE DATABASE: allocating 1024 logical pages (2.0 megabytes) on disk
                'imdb_dba_data_dev' (1024 logical pages requested).
                Warning: Database 'dba_imdb1' has a durability level of NO_RECOVERY; changes to
                it will be lost when you restart Adaptive Server.
                Database 'dba_imdb1' is now online.

5. Your in-memory database is now created. Run sp_helpdb dba_imdb1 and compare the output with that of a disk-resident database:
1> sp_helpdb dba_imdb1
2> go
 name       db_size  owner  dbid  created       durability   status
 ---------  -------  -----  ----  ------------  -----------  ---------
 dba_imdb1  8.0 MB   sa     8     May 29, 2013  no_recovery  in-memory

(1 row affected)
 name       attribute_class  attribute      int_value  char_value      comments
 ---------  ---------------  -------------  ---------  --------------  --------
 dba_imdb1  buffer manager   cache binding  1          imdb_dba_cache  NULL

 device_fragments   size    usage      created              free kbytes
 -----------------  ------  ---------  -------------------  --------------
 imdb_dba_data_dev  6.0 MB  data only  May 29 2013  1:21PM  4416
 imdb_dba_data_dev  2.0 MB  log only   May 29 2013  1:21PM  not applicable

 log only free kbytes = 2026
(return status = 0)
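Once online, the database can be used like any other; just remember that with durability = no_recovery all of its contents are lost on restart. A quick smoke test (the table name tabs and its columns are made up for illustration):

                1> use dba_imdb1
                2> go
                1> create table tabs (id int, name varchar(30))
                2> go
                1> insert into tabs values (1, 'ARDENTDBA')
                2> go
                1> select * from tabs
                2> go

All of this runs entirely in the named cache, with no physical I/O involved.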



Tuesday, February 5, 2013

Archive Databases in Sybase ASE

Archive Databases in Sybase ASE (15.0.2 +) - Quick Overview

Access data directly from a backup file, without restoring it!

An Archive Database is a type of database in ASE that is built primarily on database backup files and partly on server devices. A system table, sysaltusages, maps these backup files to the archive database; it is stored in a user-defined database that must be designated as the "scratch database".
Server devices (like those used by normal databases, initialized with disk init) are used by archive databases to store pages that result from the recovery operations performed while loading the archive database with a database dump. When an archive database is loaded, a logical mapping is created between the pages in the dump file and the archive database. As a result, the archive database is much smaller than the source database.
Figure below shows a high level view of Archive Database.

[Figure: high-level view of an Archive Database]

Possible applications of Archive Databases -
    Before loading a dump file into a database, an archive database can first be created from the dump and consistency checks run against it, to ensure there is no corruption. Creating an archive database is very fast because only logical mappings have to be created.

    Archive Databases can be used for Object level Recovery
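For example, object-level recovery can be as simple as copying a single table out of the loaded archive database back into the live database. A hedged sketch (database and table names are hypothetical, and the 'select into' option must be enabled on the target database):

select * into prod_db..customers_restored
from archive_db..customers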


Limitations
Archive databases are read-only (although dbcc with fix options can be applied to them)
Only database dumps on files/disks can be used; tapes are not supported
Compressed dumps must be created with "with compression", not the older "compress::" syntax

Implementation Steps -
    1. Either create a new database or designate an existing database as the scratch database.
--create DB
create database scratchdb on data_dev = 100 log on log_dev = 50
--mark it as scratch db
sp_dboption scratchdb, 'scratch database', true
2. Create an archive database
create archive database archive_db on data_dev1 = 20 with scratch_database = scratchdb
3. If you are going to load a compressed dump, set a compression memory pool first using below command -
sp_configure 'compression memory size' , 2560
4. Load archive database with dump of desired database -
load database archive_db from '/dev/ddumpd/prod_db_01'
stripe on from '/dev/ddumpd/prod_db_02'

Bring it online:
online database archive_db
5. Alternatively, use load database with the norecovery option. This gives you immediate access to the dump files, because recovery is skipped; you don't need to execute online database either.
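Step 5 might look like this (same hypothetical dump files as in step 4); because recovery is skipped, the database is accessible immediately and online database is not needed:

load database archive_db from '/dev/ddumpd/prod_db_01'
stripe on from '/dev/ddumpd/prod_db_02'
with norecovery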




Ardent DBA

Monday, February 4, 2013

Sybase ASE Upgrade from 12.5.x to 15.x - Dump and Load



  • Install an ASE 15 instance on the same box, in a different partition/directory, configured exactly the same as the ASE 12.5.x instance (this includes devices, databases, logins, login roles, etc.). (NOTE: system database sizes will differ.)
  • Install a Backup Server for ASE 15.0.
  • The ASE 15.0 RUN_server/Backup Server files and the ASE 12.5.x RUN_server/Backup Server files should have the same shared memory, trace flags, and other Backup Server parameters such as -P, -N, -m, etc. Please confirm.
  • Copy the 12.5.x .cfg file to the ASE 15.0 .cfg file.
  • Recycle the ASE 15 server.
  • Run sp_countmetadata "number of open partitions" and set whatever value it suggests with sp_configure.
  • BCP out master..syslogins and master..sysloginroles from ASE 12.5.x and BCP them into the ASE 15.0 master.
  • Block all users on the current production server.
  • On the 12.5 server, for "default data cache", set a large 16k pool with APF=90%, and also grow the 2k pool to around 150% of its current size with APF=50% for running dbcc checkstorage. (This can be done with sp_poolconfig.)
  • BCP out all important master system tables (sysdevices, sysdatabases, syslogins, sysloginroles, sysusages, sysconfigures).
  • Run dbcc checkstorage on all user databases on the 12.5.x server.
  • Unpartition all partitioned tables (if any).
  • Put each database in "dbo use only" mode (sp_dboption DBNAME, "dbo use only", true).
  • Run sp_flushstats (statistics are flushed from cache to disk).
  • Wait 10 to 30 seconds, depending on database size and activity.
  • Run checkpoint against the database to flush updated pages.
  • Dump all user databases on 12.5.x.
  • Load each user database (on the target, i.e. the ASE 15 server).
  • Online the database.
  • Run sp_checkreswords (in each user database, without any parameters). If there are any reserved words, fix them.
  • Run sp_checksource (in each user database, without any parameters) to verify that all source text for compiled objects is present. Then run dbcc upgrade_object to find potential problems and upgrade objects manually.
  • Re-partition all the tables that were unpartitioned on ASE 12.5.x.
  • Increase the default data cache threefold and the procedure cache four- to six-fold.
  • Increase the default value of “number of lava Operators”.
  • Increase the default value of “stack size”.
  • Increase “number of histogram steps” and “histogram tuning factor”.
  • Run update statistics on the new databases.
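The final dump/load/check portion of the runbook above might be scripted roughly as follows (database and file names are placeholders; adapt to your environment):

-- on ASE 12.5.x: quiesce, flush and dump
use master
go
sp_dboption proddb, 'dbo use only', true
go
use proddb
go
sp_flushstats
go
checkpoint
go
dump database proddb to '/dumps/proddb.dmp'
go

-- on ASE 15: load, online and post-checks
load database proddb from '/dumps/proddb.dmp'
go
online database proddb
go
use proddb
go
sp_checkreswords
go
sp_checksource
go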


Sybase ASE Installation From Resource File


  1. Go to the Software Product Download Center and download the appropriate product.
  2. Generate a license (number of engines, host ID, served (requires a license server) or unserved, etc.).
  3. Talk to the SA and ensure that all OS patches, network drivers, SAN drivers, etc. are up to date.
  4. Create a sybase account and give it permissions on the $SYBASE directory as well as all raw devices and filesystem devices.
  5. Specify the license type.
  6. Have the SA adjust the OS shared memory parameters for Sybase (UNIX: /etc/projects; Linux: sysctl kernel.shmmax).
  7. Work with the SA to have raw devices created for all databases except tempdb; for tempdb, have a filesystem device created. Ensure that sybase is set as the owner of these devices. It's quite handy to create links to the raw devices (ln -s /dev/hda4 /var/sybase/ASE1/master.dat).
  8. Ask the SA to install the Sybase ASE software.
  9. Copy the resource file from $SYBASE/$SYBASE_ASE/init and run srvbuildres -r <resourcefilename>. The following are the important parameters:
sybinit.release_directory: USE_DEFAULT
sybinit.product: sqlsrv
sqlsrv.server_name: ASE1
sqlsrv.network_hostname_list: <your hostname>
sqlsrv.network_port_list: 4000
sqlsrv.server_page_size: USE_DEFAULT
sqlsrv.master_device_physical_name: /var/sybase/ASE1/master.dev
sqlsrv.master_device_size: 100
sqlsrv.master_database_size: 60
sqlsrv.errorlog: /opt/sybase/admin/ASE1/errorlog
sqlsrv.sybsystemprocs_device_physical_name: /var/sybase/ASE1/sybsystemprocs.dev
sqlsrv.sybsystemprocs_device_size: 200
sqlsrv.sybsystemprocs_database_size: 200
sqlsrv.sybsystemdb_device_physical_name: /var/sybase/ASE1/sybsystemdb.dev
sqlsrv.sybsystemdb_device_size: 20
sqlsrv.sybsystemdb_database_size: 20
sqlsrv.tempdb_device_physical_name: /var/sybase/ASE1/tempdb.dev
sqlsrv.tempdb_device_size: 100
sqlsrv.tempdb_database_size: 100
sqlsrv.default_backup_server: SYB_BACKUP

10. The instmsgs.ebf file contains the latest updates for Sybase error messages. Install it with: isql -i $SYBASE/$SYBASE_ASE/scripts/instmsgs.ebf
11. Create the Backup Server: srvbuildres -r <backup server resource file>
12. Modify RUN_SYB_BACKUP as follows:
          Change the location of the log file with the -e flag
13. Go ahead and create user databases
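Steps 9-13 boil down to a handful of commands (the server name ASE1 and the resource-file paths are illustrative):

# build the ASE server from the edited resource file
$SYBASE/$SYBASE_ASE/bin/srvbuildres -r /tmp/ASE1.rs
# install the latest error messages
isql -Usa -P -SASE1 -i $SYBASE/$SYBASE_ASE/scripts/instmsgs.ebf
# build the Backup Server from its own resource file
$SYBASE/$SYBASE_ASE/bin/srvbuildres -r /tmp/ASE1_backup.rs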


Thursday, January 31, 2013

Transaction Log in Sybase ASE


Transaction Log in ASE


A database in Sybase ASE has 3 segments by default - system (for system objects), default (to store the actual data pages of tables), and the log segment (to store log pages). ASE uses a write-ahead log: before any transaction is applied to the database, it is written to a table called syslogs on the log segment, to ensure recovery in the event of failure.
At a high level, the transaction log has two portions - the active portion and the inactive portion. The inactive portion contains only committed and rolled-back transactions, and it is the only portion that can be truncated.
The active portion contains open transactions in addition to committed and rolled-back ones.
There is also a marker in the transaction log, called the truncation point, which marks the end of the inactive portion and the beginning of the active portion. (This marker sits just 1-2 pages behind the end of the inactive portion.) Refer to the figure below for further clarification -

To understand logging in the transaction log further, let's assume that the following command is issued against an ASE database -
begin tran
INSERT INTO TABS VALUES ('1','ARDENTDBA','INDIA')
commit
The following logical, high-level process happens in the background -
  •     begin tran is logged in the syslogs table of the database
  •     The identified extent of table TABS is brought into cache by ASE
  •     The insert statement is logged in the syslogs table
  •     The identified page of the TABS table in cache is written with the new row's data
  •     The commit statement is logged in the syslogs table
  •     The page of the TABS table written with the new row is marked as a dirty page
  •     This dirty page is written to disk during a checkpoint.

Since the transaction log stores all the transactions running against a database, transactions accumulate here over time, which in turn starts filling the log segment of the database. As a DBA, it is your responsibility to manage this table so that the log segment doesn't fill up and bring your database to a standstill. You can choose to archive-and-truncate, archive-without-truncating, or simply truncate the inactive portion of the transaction log, depending on the criticality of the database. For mission-critical databases, where we need to ensure full recovery of data, we archive and then truncate the transaction log. For dev/test environments, where data is not very critical, we just truncate the transaction log. Archive-without-truncate is used in the special case of recovery from a database device failure.
Dump Transaction command in Sybase ASE is used to manage transaction log of database. For example,
  • To back up the transaction log to a file, issue the following command
    • dump transaction <databasename> to <filename>
  • To truncate the transaction log without archiving it, issue the following command
    • dump transaction <databasename> with truncate_only
  • To archive the transaction log without truncating it, use the command below
    • dump transaction <databasename> with no_truncate
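For example, a scheduled log backup for a production database, and a plain truncation for a dev box, might look like this (database and file names are placeholders):

-- production: archive the log, then truncate the inactive portion
dump transaction proddb to '/dumps/proddb_tlog.trn'
go
-- dev/test: discard the inactive portion without archiving it
dump transaction devdb with truncate_only
go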


SQL Server - Quick Query Optimization Tips based on Execution Plan


SQL Server Quick Query Optimization Tips based on Execution Plan



1.    If hash or merge algorithms are selected by the query engine for the join, data retrieved from the table should be sorted by the join key. If a clustered index is built on the join key, retrieved data is already sorted.

2.    Always look at the query plan first. It will show you the optimal current execution plan from the query engine's point of view. Find the most expensive part of the execution plan and start optimizing from there. However, even before that, make sure that the statistics on all tables in your query are up to date, by running the update statistics <TableName> command on all tables in your query.

3.    If you see a clustered index scan, find out whether it can be replaced with an index seek. To do that, find which conditions are applied to this table. Identify the most selective condition (that is, the condition that would produce the smallest number of records if applied alone), and see whether an index on this field exists. Any index that lists this field first will qualify. If there is no such index, create it and see whether the query engine picks it up.
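As a hypothetical illustration of this tip: if the most selective condition is on OrderDate, an index listing that column first lets the engine replace the clustered index scan with an index seek (table and column names are made up):

CREATE INDEX IX_Orders_OrderDate ON dbo.Orders (OrderDate);

SELECT OrderID, CustomerID
FROM dbo.Orders
WHERE OrderDate = '20130529';  -- should now use an index seek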

4.    If you see bookmark lookup, it means that your index is not covering. Try to make it covering if it makes sense (see the preceding guidelines).       

5.    Avoid any operations on the fields, where possible. Some operations will prevent the use of the index on this field even if it exists—for example, the infamous ltrim(rtrim(FieldName)); other operations will degrade the performance. For example, instead of using the condition cast(DateField as varchar(20)) = @dateString, try to convert @dateString to an expression of datetime type first, and then compare it to DateField.

6.    When it is not possible to avoid operation on the field, use an index built on that expression. This can be done in two ways:
Create a calculated field based on your expression.
Create a view, and build an index on it.
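A sketch of the first option, using hypothetical names: persist the expression as a computed column and index it, so a filter such as LTRIM(RTRIM(Name)) can use an index seek instead of a scan:

ALTER TABLE dbo.Customers
    ADD NameTrimmed AS LTRIM(RTRIM(Name)) PERSISTED;

CREATE INDEX IX_Customers_NameTrimmed ON dbo.Customers (NameTrimmed);

-- the optimizer can now seek on the computed column
SELECT CustomerID FROM dbo.Customers WHERE NameTrimmed = 'ARDENTDBA';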


7.    Indexed views are a good way to further speed up the query if you are not satisfied with the results. Indexed view is a clustered index built over the view's select list. You can also define additional indexes for the indexed view, just as you can for any regular table. Indexed views take disk space and involve some maintenance overhead (every time underlying tables change, the indexed view also has to change), but they usually provide a good boost in performance, even after all other optimization techniques are exhausted.
8.    Check for deadlocks: open a new query editor window and execute the commands below; they will report all deadlocks as they happen. These trace flags return the resources and types of locks participating in a deadlock, as well as the current command affected.
dbcc traceon(3604)
go
dbcc traceon(1204)
go

A better option for checking deadlocks on SQL Server 2005+ is:

DBCC TRACEON (1222, 3605, -1)

1222 - detailed deadlock report
3605 - output to the error log
-1 - enables the flags globally rather than for the current session only

9.    In an execution plan, if there is a large difference between the estimated row counts and the actual row counts, something is likely wrong with the statistics.
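To check whether stale statistics are causing such a mismatch, you can inspect when they were last updated and refresh them (object and index names are placeholders):

-- last update date, row counts and sampling for the index's statistics
DBCC SHOW_STATISTICS ('dbo.Orders', IX_Orders_OrderDate);

-- refresh with a full scan if they look out of date
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;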

