Monday, December 12, 2011

[Oracle] While creating a sequence, what do the cache and nocache options mean?

One common question about sequences is:
Question: While creating a sequence, what do the cache and nocache options mean? For example, you could create a sequence with a cache of 20 as follows:
CREATE SEQUENCE supplier_seq
MINVALUE 1
START WITH 1
INCREMENT BY 1
CACHE 20;
Or you could create the same sequence with the nocache option:
CREATE SEQUENCE supplier_seq
MINVALUE 1
START WITH 1
INCREMENT BY 1
NOCACHE;
Answer: With respect to a sequence, the cache option specifies how many sequence values will be stored in memory for faster access.
The downside of creating a sequence with a cache is that if a system failure occurs, all cached sequence values that have not been used will be "lost", resulting in a "gap" in the assigned sequence values. When the system comes back up, Oracle will cache new numbers from where it left off in the sequence, ignoring the so-called "lost" values.
Note: To recover the lost sequence values, you can always execute an ALTER SEQUENCE command to reset the counter to the correct value.
Nocache means that none of the sequence values are stored in memory. This option sacrifices some performance; however, you should not encounter gaps in the assigned sequence values.
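Note (continued): Oracle does not let you set a sequence's current value directly, so the usual way to "reset the counter" is to temporarily change the increment. A minimal sketch, assuming exactly 20 values were lost and the sequence normally increments by 1:

ALTER SEQUENCE supplier_seq INCREMENT BY -20;   -- assumption: 20 values were lost
SELECT supplier_seq.NEXTVAL FROM dual;          -- consumes one value, winding the counter back
ALTER SEQUENCE supplier_seq INCREMENT BY 1;     -- restore the normal increment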

[Oracle] Enable / Disable all triggers in Schema

Disable Triggers
BEGIN
FOR i IN (SELECT trigger_name
FROM user_triggers) LOOP
EXECUTE IMMEDIATE 'ALTER TRIGGER ' || i.trigger_name || ' DISABLE';
END LOOP;
END;
/
Enable Triggers
BEGIN
FOR i IN (SELECT trigger_name
FROM user_triggers) LOOP
EXECUTE IMMEDIATE 'ALTER TRIGGER ' || i.trigger_name || ' ENABLE';
END LOOP;
END;
/
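As a hedged alternative, Oracle also supports disabling all triggers on a table in a single DDL statement, so a loop over user_tables achieves the same schema-wide effect:

BEGIN
FOR t IN (SELECT table_name
FROM user_tables) LOOP
EXECUTE IMMEDIATE 'ALTER TABLE ' || t.table_name || ' DISABLE ALL TRIGGERS';
END LOOP;
END;
/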



Tuesday, November 29, 2011

[Oracle] How to see last sql statements executed

Recently we ran into a scenario where we needed to see which SQL statements were being executed by an activity upload.


So here is an easy way to do this:


select sql_text from v$sql where rownum<=100 and parsing_schema_name='NC_SDA_1_301';


parsing_schema_name should be the name of the schema whose executed SQL statements you want to see.
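One caveat: rownum is applied before any ordering, so the query above returns an arbitrary 100 statements rather than the latest ones. If you want the most recently executed statements, a sketch that orders by v$sql.last_active_time in an inline view first:

select sql_text, last_active_time
from (select sql_text, last_active_time
      from v$sql
      where parsing_schema_name = 'NC_SDA_1_301'
      order by last_active_time desc)
where rownum <= 100;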

Thursday, November 24, 2011

[Oracle] Enable / Disable all constraints in Schema

Disable Constraints:

BEGIN
FOR c IN
(SELECT c.owner, c.table_name, c.constraint_name
FROM user_constraints c, user_tables t
WHERE c.table_name = t.table_name
AND c.status = 'ENABLED'
ORDER BY c.constraint_type DESC)
LOOP
dbms_utility.exec_ddl_statement('alter table ' || c.owner || '.' || c.table_name || ' disable constraint ' || c.constraint_name);
END LOOP;
END;
/

The ORDER BY c.constraint_type DESC ensures that referential (R) constraints are disabled before the primary key (P) constraints they depend on.


Enable Constraints:

BEGIN
FOR c IN
(SELECT c.owner, c.table_name, c.constraint_name
FROM user_constraints c, user_tables t
WHERE c.table_name = t.table_name
AND c.status = 'DISABLED'
ORDER BY c.constraint_type)
LOOP
dbms_utility.exec_ddl_statement('alter table ' || c.owner || '.' || c.table_name || ' enable constraint ' || c.constraint_name);
END LOOP;
END;
/

Here the ascending ORDER BY re-enables primary keys before the referential constraints that reference them.
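To verify the result afterwards, a quick status check on the current schema's constraints:

SELECT constraint_name, table_name, status
FROM user_constraints
ORDER BY status, table_name;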

Tuesday, October 25, 2011

[T-SQL] Change SQL Server instance name

EXEC sp_dropserver 'Old Server Name'
EXEC sp_addserver 'New Server Name','local'
GO

Then restart the SQL service and confirm the new server name with @@SERVERNAME.
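For example, once the service is back up:

SELECT @@SERVERNAME AS ServerName;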

Tuesday, September 20, 2011

[Oracle] Update Data in Table through Joins

create table source_table
(source_id    number primary key
,s_col1       number
,s_col2       number
,s_col3       number
);

create table target_table
(target_id    number primary key
,t_col1       number
,t_col2       number
,t_col3       number
);



insert into source_table(source_id,s_col1,s_col2,s_col3) values (101,111,1.01,-1);
insert into source_table(source_id,s_col1,s_col2,s_col3) values (102,222,2.02,-2);
insert into source_table(source_id,s_col1,s_col2,s_col3) values (103,333,3.03,-3);
insert into source_table(source_id,s_col1,s_col2,s_col3) values (104,444,4.04,-4);
insert into source_table(source_id,s_col1,s_col2,s_col3) values (105,555,5.05,-5);

insert into target_table(target_id,t_col1,t_col2,t_col3) values (101,null,null,null);
insert into target_table(target_id,t_col1,t_col2,t_col3) values (102,null,null,null);
insert into target_table(target_id,t_col1,t_col2,t_col3) values (103,null,null,null);
insert into target_table(target_id,t_col1,t_col2,t_col3) values (104,null,null,null);
insert into target_table(target_id,t_col1,t_col2,t_col3) values (105,null,null,null);

select * from source_table;
select * from target_table;

update
(
   select t_col1,t_col2,t_col3,s_col1,s_col2,s_col3
   from   target_table t
         ,source_table s
   where  t.target_id = s.source_id
)
set t_col1 = s_col1
   ,t_col2 = s_col2
   ,t_col3 = s_col3
;
select * from source_table;
select * from target_table;
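Note that the updatable join view above requires the join key to be key-preserved (source_id must be unique, which the primary key guarantees here). An equivalent way to express the same update, sketched with MERGE:

merge into target_table t
using source_table s
on (t.target_id = s.source_id)
when matched then update
set t.t_col1 = s.s_col1
   ,t.t_col2 = s.s_col2
   ,t.t_col3 = s.s_col3;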

Monday, August 29, 2011

[SQLServer] Why SQL Server is better than any other RDBMS Applications? Part 2

I like SQL Server because of its SIMPLICITY. I learned SQL commands on Oracle (ah! it was so long ago that I used Oracle, and I never touched it again after becoming FRIENDLY with SQL :) ).
I like the following attributes of SQL Server, hence I loved this one… and as I try to dive deeper into the technology I am becoming a passionate lover… :)
1. The first thing I liked is, as said, its SIMPLICITY.
2. Easy availability – one doesn't have to pay huge bucks to make SQL available; the Express edition is available for free.
3. Uncomplicated installation process – it follows the same standard as other MS products like MS-Office, so no one needs special knowledge of the installation process to get ready to use SQL.
4. Can be installed on an average system – unlike a few other RDBMSs, SQL does not have special hardware requirements, so one can benefit without having to upgrade his/her system.
5. User-friendly UI. Again, it abides by the MS standard for all other applications, so anyone feels at home on the first visit.
6. Easy navigation through common shortcuts and the toolbox. I don't have to remember multiple keys to navigate… what works in MS-Office / VS etc. almost always works in SQL as well.
7. BOL – the help documentation is easily available and friendly to browse to get the correct help.
8. A perfect match for organizations of all sizes… from small firms to big enterprises to data stores.
9. A perfect suite of applications – good clubbing and packaging of the Database Engine, Agent Service, Notification Services, Reporting Services, Analysis Services and Integration Services.
10. Support for multiple instances and side-by-side installation of multiple versions.
11. Enhanced security.
12. Enhanced optimizer.
13. Availability of various technologies to support high availability, like Log Shipping, Replication, Clustering, Mirroring etc.
14. Easy to tune: one can easily tune up performance with the help of wizards and additional tools like Profiler, even with little knowledge.
15. Advancement in the technology… like SQL Azure, CLR integration, DAC, IntelliSense (though I didn't like the current IntelliSense, it helps often).
16. Broader community – I like the SQL community; in fact I have sharpened my skills through this community… where you can easily catch up with a SQL MVP or the MS SQL team. I love the dedication you guys have shown in the communities. I believe that if someone is in crisis, this community will never let him go down with SQL.
17. Non-ANSI functions in SQL are similar to other programming languages, so it feels comfortable and you don't have to dig much to find the function that is equivalent to something in .NET.

SOURCE: COPIED FROM A CONTEST - 1 RUNNER UP POST

[SQLServer] Why SQL Server is better than any other RDBMS Applications? Part 1

SQL Server is, like most RDBMS systems, a database engine, but what makes it better than other RDBMS systems (especially SQL 2008 onwards) are the new features and other bells and whistles it comes with.
Here are a few:
1. Easy integration with Microsoft operating systems.
2. Easy integration with the world's most common database: the spreadsheet, Microsoft Excel in particular, and PowerPivot has significantly enhanced its value.
3. User-friendly interface.
4. Easy to create maintenance plans.
5. Integrated security (Windows authentication): this definitely helps streamline server access based on Active Directory policies and groups.
6. Mixed authentication: not only does it give the option to use Windows authentication, it also allows other applications to authenticate based on username and password.
7. Disaster recovery: SQL Server is very easy to restore after a disaster (a proper backup and recovery strategy needs to be implemented) as compared to other RDBMSs. All you need is a file system restore followed by the system state, and then restore the databases from backup. I have been involved in DR for a lot of systems, and doing DR on a SQL Server is much easier than on any other RDBMS system. Other benefits include Failover Clustering and Transactional Replication.
8. Licensing: the licensing structure of SQL Server is much better as compared to other RDBMS systems. Other RDBMS systems have a very complex licensing structure which comes out to be much more costly than SQL Server.
9. SQL Server Management Studio (SSMS): as compared to other RDBMS systems' tools, SSMS is the best tool for a developer or a DBA.
10. SQL Server Business Intelligence: Business Intelligence in SQL Server has come a long way and has evolved so much. It is one of the best, if not the best, in the market at the moment.
Since SQL Server Integration Services, SQL Server Analysis Services and SQL Server Reporting Services come with the SQL Server license, the customer doesn't need to spend extra money on these additional tools; instead that money can be spent on something else, e.g. a SAN upgrade or user training.
11. Visualization.
12. Administering and monitoring: SQL Server 2008 R2 has really scored high with its new administration and monitoring tools. It has made the life of a DBA (even an accidental DBA) a lot easier.
13. Backup compression: even though it is a new feature in SQL Server 2008, it has really helped companies upgrade to the new version and has sped up a lot (if not all) of the backup operations.
14. Data encryption: SQL Server 2008 provides a lot of encryption options to encrypt data and code. Transparent Data Encryption has really safeguarded some of the crucial data movement operations in our company.

SOURCE: COPIED FROM A CONTEST - WINNER POST

Friday, August 26, 2011

[SQLServer] SQL Server Value Calculator

Yes, you have data management options.

Considering a better data management solution? Consider SQL Server 2008 Enterprise to efficiently manage your business-critical applications. And with Oracle raising their prices, you can still get a solution based on SQL Server for around a third of their cost.* Do the math on SQL Server value and you’ll see the efficiencies add up.

Wednesday, August 17, 2011

[T-SQL] Contained Database

A Contained Database is a database which contains all the necessary settings and metadata, making the database easily portable to another server. It does not depend on the server where it is installed for anything, so you can take it and move it to another server without any worries.
The real question is, “What about users who are connecting to this database?” Once the contained database is moved, the users are moved as well, and users who belong to the contained database will have no access outside the contained database.
In summary, “Database is now self-contained. Database which is ’contained’ will not depend on anything on the server where it is installed.”
Let us try out this feature on SQL Server Denali. We will do the following steps:
  1. Enable Contained Database
  2. Create Contained Database
  3. Create User in Contained Database
  4. Try if the user can access outside Contained Database
We can do various tests on this subject; however, in this blog post we will limit our exercise to the above four points.

Enable Contained Database

Run the following code on SQL Server Denali. This code will enable the settings for the contained database.
sp_configure 'show advanced options',1
GO
RECONFIGURE WITH OVERRIDE
GO
sp_configure 'contained database authentication', 1
GO
RECONFIGURE WITH OVERRIDE
GO

Create Contained Database

CREATE DATABASE [ContainedDatabase]
CONTAINMENT = PARTIAL
ON PRIMARY
( NAME = N'ContainedDatabase', FILENAME = N'C:\ContainedDatabase.mdf')
LOG ON
( NAME = N'ContainedDatabase_log', FILENAME = N'C:\ContainedDatabase_log.ldf')
GO

Create User in Contained Database

USE [ContainedDatabase]
GO
CREATE USER ContainedUser
WITH PASSWORD = 'pass@word';
GO

Try if the user can access outside the Contained Database

To test this, we will attempt to log in to the database with the default settings (where the login database is master).
When we attempt this, we will not be able to log in to the server, simply because the user does not exist at the server level.
Now, let us try to log in using the username which was created in the Contained Database.
You will notice that the login succeeds, but when expanded, the user has access to the contained database only, and not to any other database.
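To check which databases on the instance are contained, a quick metadata query (assuming the Denali catalog columns):

SELECT name, containment_desc
FROM sys.databases;
GO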
We will explore this interesting subject further in the future.

Reference: Pinal Dave (http://blog.SQLAuthority.com)

Tuesday, August 2, 2011

[T-SQL] Wildcard

Wildcard Basics Recap

Let's start off with something most of us know already. Most SQL folks understand the usefulness and power of the basic uses of wildcards. Wildcards allow you to do pattern matches on a column. In this case our criteria cannot use the = sign, because we are not looking for an exact match. The operator that allows approximate predicates is LIKE, which lets you do relative searches to filter your result set.
--Find all LastNames that start with the letter A
SELECT *
FROM Employee
WHERE LastName LIKE 'A%'
To find everyone whose last name starts with the letter B, you need “B” to be the first letter. After the letter B you can have any number of characters. Using B% in single quotes after the LIKE operator gets all last names starting with the letter B.
--Find all LastNames that start with the letter B
SELECT *
FROM Employee
WHERE LastName LIKE 'B%'

Wildcard ranges or set specifiers

If you want to find all LastName values starting with the letters A or B you can use two predicates in your WHERE clause. You need to separate them with the OR operator.
--Find all LastNames that start with the letter A or B
SELECT *
FROM Employee
WHERE LastName LIKE 'A%'
OR LastName LIKE 'B%'
Finding names beginning with A or B is easy. How about the registration desk example where we want names ranging from A-K? The OR approach still works, but it gets unwieldy for a range of A-K, as in the example below:
--Find all LastNames ranging from A-K
SELECT *
FROM Employee
WHERE LastName LIKE 'A%'
OR LastName LIKE 'B%'
OR LastName LIKE 'C%'
OR LastName LIKE 'D%'
OR LastName LIKE 'E%'
OR LastName LIKE 'F%'
OR LastName LIKE 'G%'
OR LastName LIKE 'H%'
OR LastName LIKE 'I%'
OR LastName LIKE 'J%'
OR LastName LIKE 'K%'
The previous query does find LastName values starting from A-K. However, if you need a range of letters, the LIKE operator has much better options. We only really care about the first letter of the last name, and there are several first letters that fit what we're looking for. The first letter of the last name can be A, B, C, D, E, F, G, H, I, J or K. Simply list all the choices you want for the first letter inside a set of square brackets.
--LastNames ranging from A to K using a set of 11 letters
SELECT *
FROM Employee
WHERE LastName LIKE '[ABCDEFGHIJK]%'
Square brackets in wildcards enclose ranges or sets for one position. In this case the first position is a set of 11 possible letters. This is not a series of letters but a multiple choice of letters, so it works regardless of the order you put the letters in. The code sample below does the exact same thing.
--LastNames ranging from A to K using a set of 11 letters
SELECT *
FROM Employee
WHERE LastName LIKE '[KBCDEFGHIJA]%'
Again, the set is simply whichever letters you put inside the square brackets. The code below is a logical mistake: you won't get A to K, you just get A or K for the first letter.
--Find all LastNames starting with A or K (Mistake)
SELECT *
FROM Employee
WHERE LastName LIKE '[AK]%'
Since we’re looking for the first letter to be within a range from A to K, we specify that range in square brackets. This is even easier than using a set. The wildcard after the brackets allows any number of characters after the range.
--LastNames ranging from A to K using a range
SELECT *
FROM Employee
WHERE LastName LIKE '[A-K]%'
Note: this range will not work if your LIKE was changed to an equal (=) sign. The following code will not return any records to your result set:
--Bad query (it won't error but returns no records)
SELECT *
FROM Employee
WHERE LastName = '[A-K]%'
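As a quick extension, the square brackets also accept a leading caret for negation, so the complement of the range is just as easy; a small sketch:

--Find all LastNames NOT in the range A to K
SELECT *
FROM Employee
WHERE LastName LIKE '[^A-K]%'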

Monday, July 11, 2011

[Informatica] Tutorial - Working with text files

Here is the link to a video in which I created a mapping and workflow showing how to take data from one CSV, transform it, and create another CSV.

http://www.youtube.com/watch?v=kbwbafnlvoc

informatica_tutorial_textfiles

Monday, June 27, 2011

[SQL Server] 10 reasons why SQL Server 2008 is going to rock


10.  Plug-in model for SSMS.   SSMS 2005 also had a plug-in model, but it was not published, so the few developers that braved that environment were flying blind.  Apparently for 2008, the plug-in model will be published and a thousand add-ins will bloom. 
9.  Inline variable assignment.  I often wondered why, as a language, SQL languishes behind the times.  I mean, it has barely any modern syntactic sugar.  Well, in this version, they are at least scratching the tip of the iceberg. 
Instead of:
DECLARE @myVar int 
SET @myVar = 5

you can do it in one line:
DECLARE @myVar int = 5

Sweet. 
8.  C-like math syntax.  SET @i += 5.  Enough said.  They finally let a C# developer on the SQL team. 
7.  Auditing.  It's a 10 dollar word for storing changes to your data for later review, debugging or in response to regulatory laws.  It's a thankless and a mundane task and no one is ever excited by the prospect of writing triggers to handle it.  SQL Server 2008 introduces automatic auditing, so we can now check one thing off our to do list.
6.  Compression.  You may think that this feature is a waste of time, but it's not what it sounds like.  The release will offer row-level and page-level compression.  The compression mostly takes place on the metadata.  For instance, page compression will store common data for affected rows in a single place. 
The metadata storage for variable length fields is going to be completely crazy: they are pushing things into bits (instead of bytes).  For instance, length of the varchar will be stored in 3 bits. 
Anyway, I don't really care about space savings - storage is cheap.  What I do care about is that the feature promised (key word here "promises") to reduce I/O and RAM utilization, while increasing CPU utilization.  Every single performance problem I ever dealt with had to do with I/O overloading.  Will see how this plays out.  I am skeptical until I see some real world production benchmarks.
5.  Filtered Indexes.  This is another feature that sounds great - will have to see how it plays out.  Anyway, it allows you to create an index while specifying what rows are not to be in the index.  For example, index all rows where Status != null.  Theoretically, it'll get rid of all the dead weight in the index, allowing for faster queries. 
4.  Resource governor.  All I can say is FINALLY.  Sybase has had it since version 12 (that's last millennium, people).  Basically it allows the DBA to specify how many resources (e.g. CPU/RAM) each user is entitled to.  At the very least, it'll prevent people with sparse SQL knowledge from shooting off a query with a Cartesian product and bringing down the box.
Actually Sybase is still ahead of MS on this feature.  Its ASE server allows you to prioritize one user over another - a feature that I found immensely useful.
3.  Plan freezing.  This is a solution to my personal pet peeve. Sometimes SQL Server decides to change its plan on you (in response to data changes, etc...).  If you've achieved your optimal query plan, now you can stick with it.  Yeah, I know, hints are evil, but there are situations when you want to take a hammer to SQL Server - well, this is the chill pill.
2.  Processing of delimited strings.   This is awesome and I could have used this feature...well, always.  Currently, we pass in delimited strings in the following manner:
exec sp_MySproc 'murphy,35;galen,31;samuels,27;colton,42'

Then the stored proc needs to parse the string into a usable form - a mindless task.
In 2008, Microsoft introduced Table-Valued Parameters (TVPs). 
CREATE TYPE PeepsType AS TABLE (Name varchar(20), Age int) 
DECLARE @myPeeps PeepsType 
INSERT @myPeeps SELECT 'murphy', 35 
INSERT @myPeeps SELECT 'galen', 31 
INSERT @myPeeps SELECT 'samuels', 27 
INSERT @myPeeps SELECT 'colton', 42

exec sp_MySproc2 @myPeeps 

And the sproc would look like this:
CREATE PROCEDURE sp_MySproc2(@myPeeps PeepsType READONLY) ...

The advantage here is that you can treat the Table Type as a regular table, use it in joins, etc.  Say goodbye to all those string parsing routines.
1. Intellisense in the SQL Server Management Studio (SSMS).  This was previously possible in SQL Server 2000 and 2005 only with the use of 3rd party add-ins like SQL Prompt ($195).  But those tools are a horrible hack at best (e.g. they hook into the editor window and try to interpret what the application is doing).  
Built-in intellisense is huge - it means new people can easily learn the database schema as they go.

Friday, May 27, 2011

[Personal] Visit to Iran

This month I was on vacation, and during the vacation I visited Iran for ziarat and recreation. Had an excellent time with my whole family (MashAllah).

I visited following cities:
1. Tehran
2. Meshad
3. Qum
4. Miami

This was my 7th visit to Iran and, like always, I enjoyed this trip as well; the speciality of this trip was that this time my parents were also with me.

Iran is a very peaceful and beautiful country. I visited many holy shrines there, some of which include Imam Reza (a.s), Bibi Masooma Qum (s.a), Shah Abol Azim, Hazrat Zaid shaheed and many more.

Wednesday, April 20, 2011

[Informatica] Tutorial Part 5

Here is the last session of informatica tutorial:

http://www.youtube.com/watch?v=8zdPvAkMv90

Thursday, April 14, 2011

[T-SQL] Find current location of data and log file of all database

In general we see that sometimes a very small query can have a really big impact in day-to-day scenarios.

Here is one of the query by which you can see the location of data and log file of all the databases.

SELECT name, physical_name AS current_file_location
FROM sys.master_files
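The same view can also be filtered down to a single database; for example (tempdb here is just an illustration):

SELECT name, physical_name AS current_file_location
FROM sys.master_files
WHERE database_id = DB_ID('tempdb')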

Tuesday, April 12, 2011

[T-SQL] Add new column [Not Null] column in a table already having data - Part 2

Previously I wrote an article on a method for adding a NOT NULL column to a table that already has data.

But that method required a number of scripts to execute. SQL Server actually handles this in a very neat way.

If you execute the script

ALTER TABLE TestTable
ADD ThirdCol INT NOT NULL DEFAULT(0)
GO

Then SQL Server will itself insert the default value 0 for the rows which are already present in the table.
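If you rely on this trick, it is worth naming the default constraint explicitly so it is easy to find and drop later; the same ALTER written with an example name (the constraint name below is just an illustration):

ALTER TABLE TestTable
ADD ThirdCol INT NOT NULL CONSTRAINT DF_TestTable_ThirdCol DEFAULT(0)
GO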

[SQLServer] TempDB Recommendations

I was going through a nice article, so I am sharing part of it regarding the recommendations for TempDB.

Source: http://blogs.msdn.com/b/cindygross/archive/2009/11/20/compilation-of-sql-server-tempdb-io-best-practices.aspx

· Pre-size TempDB appropriately. Leave autogrow on with instant file initialization enabled, but try to configure the database so that it never hits an autogrow event. Make sure the autogrow growth increment is appropriate.

· Follow general IO recommendations for fast IO.

· If your TempDB experiences metadata contention (waitresource = 2:1:1 or 2:1:3), you should split out your data onto multiple files. Generally you will want somewhere between 1/4 and 1 file per physical core. If you don't want to wait to see if any metadata contention occurs you may want to start out with around 1/4 to 1/2 the number of data files as CPUs up to about 8 files. If you think you might need more than 8 files we should do some testing first to see what the impact is. For example, if you have 8 physical CPUs you may want to start with 2-4 data files and monitor for metadata contention.

· All TempDB data files should be of equal size.

· As with any database, your TempDB performance may improve if you spread it out over multiple drives. This only helps if each drive or mount point is truly a separate IO path. Whether each TempDB will have a measurable improvement from using multiple drives depends on the specific system.

· In general you only need one log file. If you need to have multiple log files because you don't have enough disk space on one drive that is fine, but there is no direct benefit from having the log on multiple files or drives.

· On SQL Server 2000 and more rarely on SQL Server 2005 or later you may want to enable trace flag -T1118.

· Avoid shrinking TempDB (or any database) files unless you are very certain you will never need the space again.
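For illustration, pre-sizing the primary data file and adding a second one might look like this; the logical name tempdev is the SQL Server default, while the sizes, growth increment and T:\ path are placeholders to replace with your own values:

ALTER DATABASE tempdb
MODIFY FILE (NAME = 'tempdev', SIZE = 4096MB, FILEGROWTH = 512MB)
GO
ALTER DATABASE tempdb
ADD FILE (NAME = 'tempdev2', FILENAME = 'T:\TempDB\tempdb2.ndf', SIZE = 4096MB, FILEGROWTH = 512MB)
GO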

Monday, March 28, 2011

[Oracle] Account Expiry

Question: How can we auto-lock a schema or user in Oracle?
Answer: You cannot directly lock a schema, but you can lock a user through a profile. First create a profile as shown below, and then, while creating the user, give that profile name instead of the default.

CREATE PROFILE prof LIMIT
FAILED_LOGIN_ATTEMPTS 4
PASSWORD_LOCK_TIME 30
PASSWORD_LIFE_TIME 90;

ALTER USER mabbas PROFILE prof;

User mabbas can use his password for 90 days before the user (or DBA) is forced to change the password.
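To assign the profile at creation time instead of altering an existing user, a sketch (username and password are placeholders):

CREATE USER newuser IDENTIFIED BY some_password PROFILE prof;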

Friday, March 25, 2011

[T-SQL] Change SQL Server 2005 SA Password

Changing the password in earlier versions of SQL Server was much easier. You just typed the following and the password for the account was changed:

sp_password 'Old','New', Account

But this doesn't work in SQL Server 2005. Initially I thought something was wrong with SQL Server, but later I realized that the syntax is simply different between the two versions. At last I found it, and here it is:

ALTER LOGIN Account_Name WITH
PASSWORD = 'New_Password'
OLD_PASSWORD = 'Old_Password';
GO

Thursday, March 17, 2011

[Informatica] Tutorial Part 4

Here is the fourth part of the Informatica tutorial:


http://www.youtube.com/watch?v=AuCVmPpRvy4

Tuesday, March 8, 2011

[T-SQL] Concurrency Issues and Isolation Level

There are four types of concurrency problems visible in the normal programming.

1) Lost Update – This problem occurs when two transactions are involved and both are unaware of each other. The transaction which occurs later overwrites the changes made by the earlier transaction.

2) Dirty Reads – This problem occurs when a transaction selects data that isn't committed by another transaction, so it reads data that may not exist once the transactions are over.

Example: Transaction 1 changes the row. Transaction 2 selects the changed row. Transaction 1 rolls back the changes. Transaction 2 has now selected a row which never really existed.

3) Nonrepeatable Reads – This problem occurs when two SELECT statements over the same data return different values because another transaction has updated the data between the two SELECT statements.

Example: Transaction 1 selects a row, which is then updated by Transaction 2. When Transaction 1 later selects the row again, it gets a different value.

4) Phantom Reads – This problem occurs when an UPDATE/DELETE is happening on one set of data while an INSERT/UPDATE is happening on the same set of data, leading to inconsistent data in the earlier transaction when both transactions are over.

Example: Transaction 1 is deleting 10 rows which are marked for deletion, while during the same time Transaction 2 inserts a row marked as deleted. When Transaction 1 is done deleting rows, there will still be a row marked for deletion.

When two or more transactions are updating the same data, concurrency is the biggest issue. I commonly see people toying around with isolation levels or locking hints (e.g. NOLOCK) etc., which can very well compromise data integrity and lead to much larger issues in the future.

Here is the quick mapping of the isolation level with concurrency problems:

Isolation Level    | Dirty Reads | Lost Update | Nonrepeatable Reads | Phantom Reads
Read Uncommitted   | Yes         | Yes         | Yes                 | Yes
Read Committed     | No          | Yes         | Yes                 | Yes
Repeatable Read    | No          | No          | No                  | Yes
Snapshot           | No          | No          | No                  | No
Serializable       | No          | No          | No                  | No
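For reference, the isolation level is set per session before the transaction starts; a minimal example (the table name is hypothetical):

SET TRANSACTION ISOLATION LEVEL REPEATABLE READ
GO
BEGIN TRANSACTION
-- reads here hold their locks until commit, preventing nonrepeatable reads;
-- phantom reads are still possible at this level
SELECT * FROM SomeTable
COMMIT TRANSACTION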

Source: sqlauthority.com

Saturday, March 5, 2011

[Informatica] Tutorial Part 3

Here is the third part of the Informatica tutorial:

http://www.youtube.com/watch?v=upwtGPn4pDI

Wednesday, March 2, 2011

[Informatica] Tutorial Part 2

Informatica Tutorial Part 2 for beginners:

http://www.youtube.com/watch?v=oM2d-IHfRUw

Monday, February 28, 2011

[Informatica] Tutorial Part 1

Here is the link to Informatica Tutorial Part 1 for beginners:

http://www.youtube.com/watch?v=ufH_n5exxQw

Friday, February 18, 2011

[Informatica] Configure Power Center Integration Service

Finally we need to configure Integration Service through which we can execute our workflows.
1. In the Administrator tool, click the Domain tab.
2. On the Navigator Actions menu, click New > PowerCenter Integration Service.
The New Integration Service dialog box appears.
3. Enter the service name, license, and primary node.
4. Now provide the information for the repository which you just created; this includes the repository name, username and password.
5. Click Finish.

[Informatica] Configure Power Center Repository Service

After the successful installation of Informatica we need to configure the repository service. These steps are done from a Windows operating system; a browser must be installed to perform them.
1. Open up the browser and type the URL in the format http://DomainName:PortNo/administrator/index.html
Sample: http://informatica-test:6007/administrator/index.html
2. Enter the username and password provided during the installation.
3. In the Administrator tool, click the Domain tab.
4. In the Navigator, select the folder where you want to create the PowerCenter Repository Service. Note: If you do not select a folder, you can move the PowerCenter Repository Service into a folder after you create it.
5. In the Domain Actions menu, click New > PowerCenter Repository Service. The Create New Repository Service dialog box appears.
6. Provide the Repository Service Name and all other database parameters.
7. Click the option to create repository contents.
8. Select the license, node and all other information that need to be provided.
9. Select “UTF-8 encoding of Unicode” in Code page combo box.
10. Click Ok.

[Informatica] Server Installation 9.0.1

Here are steps to install informatica server 9.0.1:

1. Verify that your environment meets the minimum system requirements, and complete the pre-installation tasks as described in steps 3 and 4 of this guide.
2. Log in to the machine with a system user account.
3. Close all other applications.
4. On a shell command line, run the install.sh file located in the root directory. The installer displays the message to verify that the locale environment variables are set.
5. If the environment variables are not set, enter N to exit the installer and set them as required. If the environment variables are set, enter Y to continue.
6. Enter C for console mode installation.
7. Press Enter.
8. Enter 1 to install Informatica 9.0.1. Review the system requirements and pre-installation tasks. Verify that all installation requirements are met before you continue the installation.
9. Press Enter.
10. Enter the path and file name of the Informatica license key.
11. Enter the path for the installation directory or press Enter to use the default directory. The directory names in the path must not contain spaces or the following special characters: @|* $ # ! % ( ) { } [ ] , ; '
12. Review the installation information, and press Enter to continue. The installer copies the Informatica files to the installation directory.
13. Enter 1 to create a domain. If you create a domain, the node on the current machine becomes a gateway node on the domain. The gateway node contains a Service Manager that manages all domain operations.
14. Select whether to set up a secure connection between the Service Manager and the Administrator tool. Press 1 to Enable HTTPS for Informatica Administrator.
15. Select the database to use for the domain configuration repository. Press 1 to select Oracle.
16. Enter the Database User ID and Password.
17. Select how you want to specify the database connection URL. Press 1 for JDBC URL.
18. Now enter the properties at the prompt.
a. Database Address: Enter HostName:PortNo
b. Database service name: Enter SID
c. Configure JDBC parameters: Select yes, enter the parameters or press enter to accept the default.
19. If the database contains a domain configuration repository for a previous domain, select 2 to overwrite the data or set up another database. Press Continue to overwrite.
20. Enter the following information at the prompt:
a. Domain name: Enter Domain Name
b. Domain host name: Enter Host Name
c. Node name: Press Enter for default Node Name
d. Domain port number: Press Enter for the default port number
e. Domain user name: Enter a username; this will be used for the initial setup of Informatica
f. Domain user password: Enter Password
g. Confirm password: Enter Password again
21. Now it would ask whether to display the default ports for the domain and node components assigned by the installer. Press 1 for No.
22. Installation is now complete.
23. Go to the Informatica installation directory and run the command below to bring up the Informatica service.
infaservice.sh startup

[Informatica] How to import workflow in Informatica PC

Here are the steps to import workflow in Informatica Power Centre:

1. Verify that the XML file is present in the source folder.
2. Open Informatica PC Repository Manager.
3. Connect to the repository by providing the relevant username and password.
4. Create a new folder in the repository.
5. Go to that folder and then, under the Repository menu at the top, click Import Objects.
6. Select the file that is present in the Informatica repository folder and click Next.
7. On this screen click the Add All button, as you will need all the objects currently present in the XML file to execute the workflow, then click Next.
8. Select the folder which was created recently and click Next.
9. Now click Next one more time and click Import. It will show you the import status of all the objects.
10. Now you can verify in the folder that the workflow exists.

Tuesday, February 15, 2011

[T-SQL] Database backup procedure - advanced

I have got this script from one of the blogs:

if exists (select * from sysobjects where id = object_id(N'[dbo].[spBackupAllDatabases]') and OBJECTPROPERTY(id, N'IsProcedure') = 1)
drop procedure [dbo].[spBackupAllDatabases]
go

Create Procedure spBackupAllDatabases
@Path varchar(128) ,
@Type varchar(4) -- Full / Log
as
/*
Backup file format
<DBName>_Full_yyyymmdd_hhmmss.bak
<DBName>_Log_yyyymmdd_hhmmss.bak

exec spBackupAllDatabases 'c:\SQLBackups\', 'Full'
*/
/*
drop table DatabaseBackup

Create table DatabaseBackup
(
Name varchar(128) primary key nonclustered ,
BackupFlagFull varchar(1) not null check (BackupFlagFull in ('Y','N')) ,
BackupFlagLog varchar(1) not null check (BackupFlagLog in ('Y','N')) ,
RetentionPeriodFull datetime not null ,
RetentionPeriodLog datetime not null
)
*/
set nocount on
declare @sql varchar(1000)

-- Get all database names
create table #DBName
(
ID int identity (1,1) ,
Name varchar(128) not null ,
RetentionPeriod datetime null
)

insert #DBName
(Name)
select name
from master..sysdatabases

-- Include any new databases in the backup
insert DatabaseBackup
(
Name ,
BackupFlagFull ,
BackupFlagLog ,
RetentionPeriodFull ,
RetentionPeriodLog
)
select #DBName.Name ,
'Y' ,
'N' ,
'7 jan 1900' , -- default one week
'1 jan 1900'
from #DBName
left outer join DatabaseBackup
on DatabaseBackup.Name = #DBName.Name
where DatabaseBackup.Name is null
and lower(#DBName.Name) <> 'tempdb'

-- Remove any non-existent databases
delete DatabaseBackup
where not exists
(
select *
from #DBName
where #DBName.Name = DatabaseBackup.Name
)

delete #DBName

create table #ExistingBackups
(
Name varchar(128) ,
ID int identity (1,1)
)

-- loop through databases
declare @Name varchar(128) ,
@RetentionPeriod datetime ,
@LastBackupToKeep varchar(8) ,
@ID int ,
@MaxID int

insert #DBName
(Name, RetentionPeriod)
select Name, case when @Type = 'Full' then RetentionPeriodFull else RetentionPeriodLog end
from DatabaseBackup
where (@Type = 'Full' and BackupFlagFull = 'Y')
or (@Type = 'Log' and BackupFlagLog = 'Y')

select @MaxID = max(ID) ,
@ID = 0
from #DBName

while @ID < @MaxID
begin
-- get next database to backup
select @ID = min(ID)
from #DBName
where ID > @ID

select @Name = Name ,
@RetentionPeriod = RetentionPeriod
from #DBName
where ID = @ID

-- Delete old backups
delete #ExistingBackups
select @sql = 'dir /B ' + @Path
select @sql = @sql + @Name + '_' + @Type + '*.*'

insert #ExistingBackups exec master..xp_cmdshell @sql

if exists (select * from #ExistingBackups where Name like '%File Not Found%')
delete #ExistingBackups

select @LastBackupToKeep = convert(varchar(8),getdate() - @RetentionPeriod,112)
delete #ExistingBackups where Name > @Name + '_' + @Type + '_' + @LastBackupToKeep

declare @eID int ,
@eMaxID int ,
@eName varchar(128)

-- loop round all the out of date backups
select @eID = 0 ,
@eMaxID = coalesce(max(ID), 0)
from #ExistingBackups

while @eID < @eMaxID
begin
select @eID = min(ID) from #ExistingBackups where ID > @eID
select @eName = Name from #ExistingBackups where ID = @eID

select @sql = 'del ' + @Path + @eName
exec master..xp_cmdshell @sql
end
delete #ExistingBackups

-- now do the backup
select @sql = @Path + @Name + '_' + @Type + '_'
+ convert(varchar(8),getdate(),112) + '_'
+ replace(convert(varchar(8),getdate(),108),':','') + '.bak'
if @Type = 'Full'
backup database @Name
to disk = @sql
else
backup log @Name
to disk = @sql
end

go

[T-SQL] Database backup script

Today I got a request to provide a database backup script that generates a new file name for each database every time it runs. So here is the script:

DECLARE @name VARCHAR(50) -- database name
DECLARE @path VARCHAR(256) -- path for backup files
DECLARE @fileName VARCHAR(256) -- filename for backup
DECLARE @fileDate VARCHAR(20) -- used for file name

SET @path = 'C:\Backup\'

SELECT @fileDate = CONVERT(VARCHAR(20),GETDATE(),112) -- style 112 gives yyyymmdd; style 114 would put colons in the file name, which Windows does not allow

DECLARE db_cursor CURSOR FOR
SELECT name
FROM master.dbo.sysdatabases
WHERE name NOT IN ('master','model','msdb','tempdb')

OPEN db_cursor
FETCH NEXT FROM db_cursor INTO @name

WHILE @@FETCH_STATUS = 0
BEGIN
SET @fileName = @path + @name + '_' + @fileDate + '.BAK'
BACKUP DATABASE @name TO DISK = @fileName
Print @filename
FETCH NEXT FROM db_cursor INTO @name
END

CLOSE db_cursor
DEALLOCATE db_cursor

Friday, January 28, 2011

[Informatica] Software Architecture illustrated

Informatica's ETL product, known as Informatica PowerCenter, consists of 3 main components.

1. Informatica PowerCenter Client Tools:
These are the development tools installed at the developer's end. They enable a developer to:

•Define transformation process, known as mapping. (Designer)
•Define run-time properties for a mapping, known as sessions (Workflow Manager)
•Monitor execution of sessions (Workflow Monitor)
•Manage repository, useful for administrators (Repository Manager)
•Report Metadata (Metadata Reporter)

2. Informatica PowerCenter Repository:
Repository is the heart of the Informatica tools. It is a kind of data inventory where all the data related to mappings, sources, targets etc. is kept. This is the place where all the metadata for your application is stored. All the client tools and the Informatica Server fetch data from the Repository. An Informatica client and server without a repository is the same as a PC without memory/hard disk: it has the ability to process data but no data to process. The repository can be treated as the backend of Informatica.

3. Informatica PowerCenter Server:
Server is the place where all the executions take place. The server makes physical connections to sources/targets, fetches data, applies the transformations mentioned in the mapping and loads the data into the target system.

[SQLServer] Scaling Up Your Data Warehouse with SQL Server 2008 R2

Abstract from the Microsoft Official site:

SQL Server 2008 introduced many new functional and performance improvements for data warehousing, and SQL Server 2008 R2 includes all these and more. This paper discusses how to use SQL Server 2008 R2 to get great performance as your data warehouse scales up. We present lessons learned during extensive internal data warehouse testing on a 64-core HP Integrity Superdome during the development of the SQL Server 2008 release, and via production experience with large-scale SQL Server customers. Our testing indicates that many customers can expect their performance to nearly double on the same hardware they are currently using, merely by upgrading to SQL Server 2008 R2 from SQL Server 2005 or earlier, and compressing their fact tables. We cover techniques to improve manageability and performance at high-scale, encompassing data loading (extract, transform, load), query processing, partitioning, index maintenance, indexed view (aggregate) management, and backup and restore.

Tuesday, January 18, 2011

[T-SQL] DB_ID() and DB_NAME() - PURPOSE

-- Get Current DatabaseID
SELECT DB_ID() DatabaseID;
-- Get Current DatabaseName
SELECT DB_NAME() DatabaseName;
-- Get DatabaseName from DatabaseID
SELECT DB_NAME(4) DatabaseName;
-- Get DatabaseID from DatabaseName
SELECT DB_ID('tempdb') DatabaseID;
-- Get all DatabaseName and DBID
SELECT name,database_id
FROM sys.databases;

[T-SQL] Different methods to find leap year

Method 1 : General method

declare @year int
set @year=2000

select
case
when @year%400=0 then 1
when @year%100=0 then 0
when @year%4=0 then 1
else 0
end as is_leap_year
Method 2 : Find if the day difference between Jan 01 of this year and Jan 01 of next year is 366

declare @year int
set @year=2000

select
case
when datediff(day,DATEADD(year,@year-1900,0),DATEADD(year,@year-1900+1,0))=366 then 1
else 0
end as is_leap_year
Method 3 : Find if the day difference between Feb 01 and Mar 01 of this year is 29

declare @year int
set @year=2000

select
case
when datediff(day,dateadd(month,1,DATEADD(year,@year-1900,0)),dateadd(month,2,DATEADD(year,@year-1900,0)))=29 then 1
else 0
end as is_leap_year
Method 4 : Find if the 60th day of this year falls on the last day of February (Feb 29)

declare @year int
set @year=2000

select
case
when day(dateadd(day,59,DATEADD(year,@year-1900,0)))=29 then 1
else 0
end as is_leap_year

Tuesday, January 4, 2011

[T-SQL] Delete vs Truncate

Here is a good comparison between DELETE and TRUNCATE.

DELETE:

  • DELETE supports a WHERE clause
  • DELETE removes rows from a table row-by-row
  • Because DELETE works row-by-row, it acquires row-level locks
  • Depending upon the recovery model of the database, DELETE is a fully-logged operation
  • Because DELETE works row-by-row, it can fire triggers

TRUNCATE:

  • TRUNCATE does not support a WHERE clause
  • TRUNCATE works by deallocating the data pages of a table
  • TRUNCATE acquires a table-level lock
  • (Because a lock is acquired, and because TRUNCATE can also participate in a transaction, it has to be a logged operation)
  • TRUNCATE is, therefore, a minimally-logged operation; again, this depends upon the recovery model of the database
  • Triggers are not fired when TRUNCATE is used (because individual row deletions are not logged)
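One practical difference the lists above do not mention: TRUNCATE resets an identity column to its seed, while DELETE leaves the counter where it was. A throwaway sketch (table and column names are just examples):

CREATE TABLE dbo.DemoRows (ID INT IDENTITY(1,1), Val VARCHAR(10))
INSERT INTO dbo.DemoRows (Val) VALUES ('a'), ('b')
DELETE FROM dbo.DemoRows
INSERT INTO dbo.DemoRows (Val) VALUES ('c')
SELECT MAX(ID) FROM dbo.DemoRows   -- 3: DELETE did not reset the identity
TRUNCATE TABLE dbo.DemoRows
INSERT INTO dbo.DemoRows (Val) VALUES ('d')
SELECT MAX(ID) FROM dbo.DemoRows   -- 1: TRUNCATE reseeded it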

[T-SQL] Data Cache in Memory

Here is a query with which you can see the data cache in memory:

USE AdventureWorks
GO
SELECT COUNT(*) AS cached_pages_count,
name AS BaseTableName, IndexName,
IndexTypeDesc
FROM sys.dm_os_buffer_descriptors AS bd
INNER JOIN
(
SELECT s_obj.name, s_obj.index_id,
s_obj.allocation_unit_id, s_obj.OBJECT_ID,
i.name IndexName, i.type_desc IndexTypeDesc
FROM
(
SELECT OBJECT_NAME(OBJECT_ID) AS name,
index_id ,allocation_unit_id, OBJECT_ID
FROM sys.allocation_units AS au
INNER JOIN sys.partitions AS p
ON au.container_id = p.hobt_id
AND (au.TYPE = 1 OR au.TYPE = 3)
UNION ALL
SELECT OBJECT_NAME(OBJECT_ID) AS name,
index_id, allocation_unit_id, OBJECT_ID
FROM sys.allocation_units AS au
INNER JOIN sys.partitions AS p
ON au.container_id = p.partition_id
AND au.TYPE = 2
) AS s_obj
LEFT JOIN sys.indexes i ON i.index_id = s_obj.index_id
AND i.OBJECT_ID = s_obj.OBJECT_ID ) AS obj
ON bd.allocation_unit_id = obj.allocation_unit_id
WHERE database_id = DB_ID()
GROUP BY name, index_id, IndexName, IndexTypeDesc
ORDER BY cached_pages_count DESC;
GO

[T-SQL] Plan Cache in Memory

Here is a query with which you can see the plan cache in memory:

USE AdventureWorks
GO
SELECT [text], cp.size_in_bytes, plan_handle
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(plan_handle)
WHERE cp.cacheobjtype = N'Compiled Plan'
ORDER BY cp.size_in_bytes DESC
GO

[T-SQL] DBCC LOGINFO

Did you know that there are multiple small virtual log files inside the large physical log file (LDF)? So now your next question would be: how can we see them?

A very simple method:

DBCC LOGINFO

You will find the file sizes and other relevant information in the result.