
Saturday, November 3, 2012

[SQL Server] Code to Enable TDE on a Database

-- The master key must be in the master database.
USE master;
GO

-- Create the master key.
CREATE MASTER KEY ENCRYPTION
BY PASSWORD = 'MyStrongPassword';
GO

-- Create a certificate.
CREATE CERTIFICATE MySQLCert
WITH SUBJECT = 'MyDatabase DEK';
GO

-- Use the database to enable TDE.
USE MyDatabase;
GO

-- Associate the certificate to MyDatabase.
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_128
ENCRYPTION BY SERVER CERTIFICATE MySQLCert;
GO

-- Encrypt the database.
ALTER DATABASE MyDatabase
SET ENCRYPTION ON;
GO
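
To verify that encryption has actually finished, you can query the TDE DMV. This is just a quick check, not part of the original script:

-- encryption_state: 2 = encryption in progress, 3 = encrypted
SELECT DB_NAME(database_id) AS database_name,
       encryption_state,
       percent_complete
FROM sys.dm_database_encryption_keys;
GO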

Saturday, August 25, 2012

[T-SQL] Database backup procedure - advanced

I got this script from one of the blogs:

if exists (select * from sysobjects where id = object_id(N'[dbo].[spBackupAllDatabases]') and OBJECTPROPERTY(id, N'IsProcedure') = 1)
drop procedure [dbo].[spBackupAllDatabases]
go

Create Procedure spBackupAllDatabases
@Path varchar(128) ,
@Type varchar(4) -- Full / Log
as
/*
Backup file format
_Full_yyyymmdd_hhmmss.bak
_Log_yyyymmdd_hhmmss.bak

exec spBackupAllDatabases 'c:\SQLBackups\', 'Full'
*/
/*
drop table DatabaseBackup

Create table DatabaseBackup
(
Name varchar(128) primary key nonclustered ,
BackupFlagFull varchar(1) not null check (BackupFlagFull in ('Y','N')) ,
BackupFlagLog varchar(1) not null check (BackupFlagLog in ('Y','N')) ,
RetentionPeriodFull datetime not null ,
RetentionPeriodLog datetime not null
)
*/
set nocount on
declare @sql varchar(1000)

-- Get all database names
create table #DBName
(
ID int identity (1,1) ,
Name varchar(128) not null ,
RetentionPeriod datetime null
)

insert #DBName
(Name)
select name
from master..sysdatabases

-- Include any new databases in the backup
insert DatabaseBackup
(
Name ,
BackupFlagFull ,
BackupFlagLog ,
RetentionPeriodFull ,
RetentionPeriodLog
)
select #DBName.Name ,
'Y' ,
'N' ,
'7 jan 1900' , -- default one week
'1 jan 1900'
from #DBName
left outer join DatabaseBackup
on DatabaseBackup.Name = #DBName.Name
where DatabaseBackup.Name is null
and lower(#DBName.Name) <> 'tempdb'

-- Remove any non-existent databases
delete DatabaseBackup
where not exists
(
select *
from #DBName
where #DBName.Name = DatabaseBackup.Name
)

delete #DBName

create table #ExistingBackups
(
Name varchar(128) ,
ID int identity (1,1)
)

-- loop through databases
declare @Name varchar(128) ,
@RetentionPeriod datetime ,
@LastBackupToKeep varchar(8) ,
@ID int ,
@MaxID int

insert #DBName
(Name, RetentionPeriod)
select Name, case when @Type = 'Full' then RetentionPeriodFull else RetentionPeriodLog end
from DatabaseBackup
where (@Type = 'Full' and BackupFlagFull = 'Y')
or (@Type = 'Log' and BackupFlagLog = 'Y')

select @MaxID = max(ID) ,
@ID = 0
from #DBName

while @ID < @MaxID begin
-- get next database to backup
select @ID = min(ID) from #DBName where ID > @ID

select @Name = Name ,
@RetentionPeriod = RetentionPeriod
from #DBName
where ID = @ID

-- Delete old backups
delete #ExistingBackups
select @sql = 'dir /B ' + @Path
select @sql = @sql + @Name + '_' + @Type + '*.*'

insert #ExistingBackups exec master..xp_cmdshell @sql

if exists (select * from #ExistingBackups where Name like '%File Not Found%')
delete #ExistingBackups

select @LastBackupToKeep = convert(varchar(8),getdate() - @RetentionPeriod,112)
delete #ExistingBackups where Name > @Name + '_' + @Type + '_' + @LastBackupToKeep

declare @eID int ,
@eMaxID int ,
@eName varchar(128)

-- loop round all the out of date backups
select @eID = 0 ,
@eMaxID = coalesce(max(ID), 0)
from #ExistingBackups

while @eID < @eMaxID begin
select @eID = min(ID) from #ExistingBackups where ID > @eID
select @eName = Name from #ExistingBackups where ID = @eID

select @sql = 'del ' + @Path + @eName
exec master..xp_cmdshell @sql
end
delete #ExistingBackups

-- now do the backup
select @sql = @Path + @Name + '_' + @Type + '_'
+ convert(varchar(8),getdate(),112) + '_'
+ replace(convert(varchar(8),getdate(),108),':','') + '.bak'
if @Type = 'Full'
backup database @Name
to disk = @sql
else
backup log @Name
to disk = @sql
end

go
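
A minimal setup for running the procedure might look like the following. The control table definition and the exec calls are taken from the comment blocks inside the procedure itself; note that the procedure also relies on xp_cmdshell being enabled on the server:

-- One-time setup: control table listing which databases to back up
Create table DatabaseBackup
(
Name varchar(128) primary key nonclustered ,
BackupFlagFull varchar(1) not null check (BackupFlagFull in ('Y','N')) ,
BackupFlagLog varchar(1) not null check (BackupFlagLog in ('Y','N')) ,
RetentionPeriodFull datetime not null ,
RetentionPeriodLog datetime not null
)
go

-- Run full backups and log backups as separate calls
exec spBackupAllDatabases 'c:\SQLBackups\', 'Full'
exec spBackupAllDatabases 'c:\SQLBackups\', 'Log'
go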

[Informatica] Tutorial Part 1

Here is the link to Informatica Tutorial Part 1 for beginners:

http://www.youtube.com/watch?v=ufH_n5exxQw

[T-SQL] Shrink Log File


use DatabaseName

BACKUP LOG DatabaseName With TRUNCATE_ONLY
GO
DBCC SHRINKFILE (Database_Log_FileName,300)
GO
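
Note that BACKUP LOG ... WITH TRUNCATE_ONLY was removed in SQL Server 2008, so on newer versions a rough equivalent sketch is to switch the database to SIMPLE recovery, shrink the log file, and switch back (the database and log file names below are placeholders, as above):

USE [master];
GO
ALTER DATABASE DatabaseName SET RECOVERY SIMPLE;
GO
USE DatabaseName;
GO
DBCC SHRINKFILE (Database_Log_FileName, 300);
GO
USE [master];
GO
ALTER DATABASE DatabaseName SET RECOVERY FULL;
GO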

[T-SQL] DELETE, TRUNCATE and RESEED Identity

Create a table in TempDB with an Identity column beginning with value 11. The seed value is 11.
USE [TempDB]
GO
-- Create Table
CREATE TABLE [dbo].[TestTable]
([ID] [int] IDENTITY(11,1) NOT NULL,
[var] [nchar](10) NULL)
ON [PRIMARY]
GO
-- Build sample data
INSERT INTO [TestTable]
VALUES ('val')
GO

When the seed value is 11, the first row inserted gets an identity column value of 11.
-- Select Data
SELECT * FROM [TestTable]
GO


Effect of DELETE statement

-- Delete Data
DELETE FROM [TestTable]
GO

When the DELETE statement is executed without a WHERE clause, it deletes all the rows. However, when a new record is inserted, the identity value increases from 11 to 12. DELETE does not reset the identity; it keeps on increasing.
-- Build sample data
INSERT INTO [TestTable]
VALUES ('val')
GO
-- Select Data
SELECT * FROM [TestTable]

Effect of TRUNCATE statement

-- Truncate table
TRUNCATE TABLE [TestTable]
GO

When the TRUNCATE statement is executed, it removes all the rows. However, when a new record is inserted, the identity value starts again from 11 (the original value). TRUNCATE resets the identity value to the original seed value of the table.
-- Build sample data
INSERT INTO [TestTable]
VALUES ('val')
GO
-- Select Data
SELECT * FROM [TestTable]
GO

Effect of RESEED statement

Notice that I am using a reseed value of 1. The original seed value when I created the table was 11; however, I am reseeding it with the value 1.
-- Reseed
DBCC CHECKIDENT ('TestTable', RESEED, 1)
GO
When we insert one more row and check the value, it generates the new value 2. The logic for this new value is Reseed Value + Increment Value; in this case it is 1 + 1 = 2.
-- Build sample data
INSERT INTO [TestTable]
VALUES ('val')
GO
-- Select Data
SELECT * FROM [TestTable]
GO
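
As a side note, DBCC CHECKIDENT can also be run with NORESEED to simply report the current identity value without changing it:

-- Check the current identity value without reseeding
DBCC CHECKIDENT ('TestTable', NORESEED)
GO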

Here is the clean up act.
-- Clean up
DROP TABLE [TestTable]
GO

Monday, January 23, 2012

[Informatica] How to execute one or more mappings from workflow

There may be scenarios where you need to execute one or more mappings from an entire workflow that contains a number of mappings.

Here is one way of doing that:

dos2unix olapjoblist.properties
cd $INFA_HOME/bin/
while read line
do
   # Start the named task in the workflow and wait for it to finish
   test=$( ./pmcmd starttask -sv $INTEGRATION_SERVICE -d $DOMAIN_NAME -uv INFA_USER -pv INFA_PASS -f $FOLDER_NAME -paramfile $parameterFile -w wf_olapconsolidation -wait $line)
   rc=$?

   if [ "$rc" != 0 ]; then
    # handle error
    echo "Summary: Could not execute task."
   else
    # no error
    echo "Summary: Task executed successfully."
   fi
   echo "Error Code: $rc"
   echo "Description: $test"

done < $currentpath/olapjoblist.properties

So olapjoblist.properties is a file with all the session names in it.

Thursday, January 12, 2012

[Oracle] How to optimize queries - for Beginners

D.1 Optimizing Single-Table Queries

To improve the performance of a query that selects rows of a table based on a specific column value, create an index on that column. For example, the following query performs better if the NAME column of the EMP table has an index.
SELECT * 
FROM EMP 
WHERE NAME = 'Smith'; 
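
A minimal index for this, with the index name chosen here purely for illustration, would be:

CREATE INDEX EMP_NAME_IDX ON EMP (NAME);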

D.2 Optimizing Join Queries

The following can improve the performance of a join query (a query with more than one table reference in the FROM clause).

D.2.1 Create an Index on the Join Column(s) of the Inner Table

In the following example, the inner table of the join query is DEPT and the join column of DEPT is DEPT#. An index on DEPT.DEPT# improves the performance of the query. In this example, since DEPT# is the primary key of DEPT, an index is implicitly created for it. The optimizer detects the presence of the index and decides to use DEPT as the inner table. If there is also an index on the EMP.WORKS_IN column, the optimizer evaluates the cost of both orders of execution: DEPT followed by EMP (where EMP is the inner table) and EMP followed by DEPT (where DEPT is the inner table), and picks the less expensive execution plan.
SELECT e.SS#, e.NAME, d.BUDGET
FROM EMP e, DEPT d 
WHERE e.WORKS_IN = d.DEPT# 
AND e.JOB_TITLE = 'Manager'; 
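
Assuming the schema above, the supporting index on the EMP join column could be created as follows (the index name is illustrative; the DEPT.DEPT# index already exists implicitly through the primary key):

CREATE INDEX EMP_WORKS_IN_IDX ON EMP (WORKS_IN);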

D.2.2 Bypassing the Query Optimizer

Normally the optimizer picks the best execution plan, that is, an optimal order of tables to be joined. If the optimizer is not producing a good execution plan, you can control the order of execution using the HINTS feature of SQL. For more information see the Oracle9i Lite SQL Reference.
For example, if you want to select the name of each department along with the name of its manager, you can write the query in one of two ways. In the first example which follows, the hint /*+ ordered */ says to do the join in the order the tables appear in the FROM clause, without attempting to optimize the join order.
SELECT /*+ ordered */ d.NAME, e.NAME
FROM DEPT d, EMP e
WHERE d.MGR = e.SS#;

or:

SELECT /*+ ordered */ d.NAME, e.NAME
FROM EMP e, DEPT d
WHERE d.MGR = e.SS#;
Suppose that there are 10 departments and 1000 employees, and that the inner table in each query has an index on the join column. In the first query, the first table produces 10 qualifying rows (in this case, the whole table). In the second query, the first table produces 1000 qualifying rows. The first query will access the EMP table 10 times and scan the DEPT table once. The second query will scan the EMP table once but will access the DEPT table 1000 times. Therefore the first query will perform much better. As a rule of thumb, tables should be arranged from smallest effective number of rows to largest effective number of rows. The effective row size of a table in a query is obtained by applying the logical conditions that are resolved entirely on that table.
In another example, consider a query to retrieve the social security numbers and names of employees in a given location, such as New York. According to the sample schema, the query would have three table references in the FROM clause. The three tables could be ordered in six different ways. Although the result is the same regardless of which order you choose, the performance could be quite different.
Suppose the effective row size of the LOCATION table is small, for example select count(*) from LOCATION where LOC_NAME = 'New York' returns a small set. Based on the above rules, the LOCATION table should be the first table in the FROM clause. There should be an index on LOCATION.LOC_NAME. Since LOCATION must be joined with DEPT, DEPT should be the second table and there should be an index on the LOC column of DEPT. Similarly, the third table should be EMP and there should be an index on the WORKS_IN column of EMP. You could write this query as:
SELECT /*+ ordered */ e.SS#, e.NAME 
FROM LOCATION l, DEPT d, EMP e 
WHERE l.LOC_NAME = 'New York' AND 
l.LOC# = d.LOC AND 
d.DEPT# = e.WORKS_IN;
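
The indexes this table order relies on (names chosen here for illustration) would be along these lines:

CREATE INDEX LOC_NAME_IDX ON LOCATION (LOC_NAME);
CREATE INDEX DEPT_LOC_IDX ON DEPT (LOC);
-- same EMP index already suggested in D.2.1 above
CREATE INDEX EMP_WORKS_IN_IDX ON EMP (WORKS_IN);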

D.3 Optimizing with Order By and Group By Clauses

Various performance improvements have been made so that SELECT statements run faster and consume less memory. GROUP BY and ORDER BY clauses attempt to avoid sorting if a suitable index is available.

D.3.1 IN subquery conversion

Converts IN subquery to a join when the select list in the subquery is uniquely indexed.
For example, the following IN subquery statement is converted to its corresponding join statement. This assumes that c1 is the primary key of table t2:
SELECT c2 FROM t1 WHERE 
c2 IN (SELECT c1 FROM t2);

becomes:
SELECT c2 FROM t1, t2 WHERE t1.c2 = t2.c1;

D.3.2 ORDER BY optimization with no GROUP BY

This eliminates the sorting step for an ORDER BY clause in a select statement if ALL of the following conditions are met:
  1. All ORDER BY columns are in ascending order or in descending order.
  2. Only columns appear in the ORDER BY clause. That is, no expressions are used in the ORDER BY clause.
  3. ORDER BY columns are a prefix of some base table index.
  4. The cost of accessing by the index is less than sorting the result set.
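
For instance, with the EMP_NAME_IDX index sketched in D.1 above, a query like the following can return rows in index order and skip the explicit sort step, provided the optimizer finds the index access cheaper than sorting the result set:

SELECT NAME, SS#
FROM EMP
ORDER BY NAME;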

D.3.3 GROUP BY optimization with no ORDER BY

This eliminates the sorting step for the grouping operation if GROUP BY columns are the prefix of some base table index.

D.3.4 ORDER BY optimization with GROUP BY

When ORDER BY columns are the prefix of GROUP BY columns, and all columns are sorted in either ascending or in descending order, the sorting step for the query result is eliminated. If GROUP BY columns are the prefix of a base table index, the sorting step in the grouping operation is also eliminated.

D.3.5 Cache subquery results

If the optimizer determines that the number of rows returned by a subquery is small and the subquery is non-correlated, then the subquery result will be cached in memory for better performance. Currently the number of rows is set at 2000. For example:
select * from t1 where 
t1.c1 = (select sum(salary) 
from t2 where t2.deptno = 100);