
Tuesday, November 29, 2011

[Oracle] How to see the last SQL statements executed

Recently we ran into a scenario where we needed to see which SQL statements were being executed by the activity upload.


So here is an easy way to do this:


select sql_text from v$sql where rownum<=100 and parsing_schema_name='NC_SDA_1_301';


PARSING_SCHEMA_NAME should be the schema whose executed SQL statements you want to see.
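
Note that ROWNUM on its own does not guarantee the most recently executed statements. A minimal sketch that orders by LAST_ACTIVE_TIME first (assuming you have access to V$SQL; the schema name below is just a placeholder) would be:

-- most recently active statements parsed by a given schema
select sql_text, last_active_time
from (
  select sql_text, last_active_time
  from v$sql
  where parsing_schema_name = 'YOUR_SCHEMA'
  order by last_active_time desc
)
where rownum <= 100;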

Tuesday, January 4, 2011

[T-SQL] DBCC LOGINFO

Did you know that the large log file (LDF) is actually made up of multiple smaller virtual log files? So the next question is: how can we see them?

Very simple method:

DBCC LOGINFO

You will find the size of each virtual log file and other relevant information in the result set.
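
As a small sketch (the database name is just a placeholder), run it in the context of the database you are interested in:

USE [YourDatabase];
GO
-- Returns one row per virtual log file (VLF); columns include FileId, FileSize,
-- StartOffset, FSeqNo, Status and Parity (Status = 2 means the VLF is still in use)
DBCC LOGINFO;
GO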

Sunday, November 21, 2010

[T-SQL] How to create new error log file

There are two methods to do this:

1. Restart the server
2. EXEC sp_cycle_errorlog

The first option is not suitable, as we cannot restart the server every time.
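
A minimal sketch of the second method (assuming sp_readerrorlog is available on your instance for reading the logs back):

-- Close the current error log and start a new one
EXEC sp_cycle_errorlog;
GO
-- Read the new current error log (0 = current, 1 = previous archive, and so on)
EXEC sp_readerrorlog 0;
GO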

Hope this post is helpful to many DBAs.

Sunday, September 26, 2010

[T-SQL] How to Stop the Log File from Growing Too Big

In continuation of my last post, "http://muhammadabbas.blogspot.com/2010/09/t-sql-truncate-log-file.html", I am writing this article.

Today I was reading an article that explained how we can stop the log file from growing. Here is the text:

Two different solutions for it:

1) Convert the Recovery Model to Simple Recovery

If you are truncating the transaction logs, you are breaking the T-Log LSN (Log Sequence Number) chain. This means that if disaster strikes, you will not be able to restore your T-Logs and there will be no option for point-in-time recovery. If you are fine with this situation and there is nothing to worry about, I suggest that you change your recovery model to the Simple Recovery Model. This way, you will not have extraordinary growth of your log file.
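
A minimal sketch of the change (the database name is a placeholder):

-- Switch to the SIMPLE recovery model; the log is truncated automatically at each checkpoint
ALTER DATABASE [YourDatabase] SET RECOVERY SIMPLE;
GO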

2) Start Taking Transaction Log Backup

If your business cannot tolerate data loss or requires point-in-time recovery, you cannot afford anything less than the Full Recovery Model. In the Full Recovery Model, your transaction log will grow until you take a backup of it, so you need to take T-Log backups at regular intervals. This way, your log will not grow beyond a certain limit. If you take an hourly T-Log backup, your T-Log will grow for up to an hour, and then the T-Log backup will truncate all the 'committed' transactions. Note that this does not make the physical size of the T-Log file go down much; the space inside it is simply marked as empty for the next hour's transactions to populate.

With this method, you can restore your database to a point in time if a disaster ever happens on your server.
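
For completeness, a point-in-time restore with this model would look roughly like the sketch below (the file names and the STOPAT timestamp are placeholders, assuming a full backup followed by log backups):

-- Restore the last full backup without recovering the database
RESTORE DATABASE [YourDatabase] FROM DISK = N'D:\YourDatabase.bak' WITH NORECOVERY;
-- Apply the log backups up to the required moment in time
RESTORE LOG [YourDatabase] FROM DISK = N'D:\YourDatabase_log.trn'
WITH STOPAT = '2010-09-26 12:30:00', RECOVERY;
GO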

Let us run an example to demonstrate this. In this case, I have done the following steps:

1. Create Sample Database in FULL RECOVERY Model
2. Take Full Backup (a full backup is a must for taking subsequent log backups)
3. Repeat Following Operation:
   1. Take Log Backup
   2. Insert Some Rows
   3. Check the Size of the Log File
4. Clean Up
After a short while, you will notice that the Log file (ldf) will stop increasing but the size of the backup will increase.

If you have an issue with your log file growth, I suggest that you follow either of the above solutions instead of truncating it.

/* FULL Recovery and Log File Growth */
USE [master]
GO
-- Create Database SimpleTran
IF EXISTS (SELECT name FROM sys.databases WHERE name = N'SimpleTran')
BEGIN
ALTER DATABASE [SimpleTran] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE [SimpleTran]
END
GO
CREATE DATABASE [SimpleTran]
GO
-- Set Database backup model to FULL
ALTER DATABASE [SimpleTran] SET RECOVERY FULL
GO
BACKUP DATABASE [SimpleTran] TO DISK = N'D:\SimpleTran.bak' WITH NOFORMAT, NOINIT, NAME = N'SimpleTran-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO
-- Check Database Log File Size
SELECT DB_NAME(database_id) AS DatabaseName,
Name AS Logical_Name,
Physical_Name, (size*8)/1024 SizeMB
FROM sys.master_files
WHERE DB_NAME(database_id) = 'SimpleTran'
GO
-- Create Table in Database with Transaction
USE SimpleTran
GO
IF EXISTS (SELECT * FROM sys.objects WHERE OBJECT_ID = OBJECT_ID(N'[dbo].[RealTempTable]') AND TYPE IN (N'U'))
DROP TABLE [dbo].[RealTempTable]
GO
CREATE TABLE RealTempTable (ID INT)
INSERT INTO RealTempTable (ID)
SELECT TOP 50000 ROW_NUMBER() OVER (ORDER BY a.name) RowID
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
-- Check the size of the Database
SELECT DB_NAME(database_id) AS DatabaseName,
Name AS Logical_Name,
Physical_Name, (size*8)/1024 SizeMB
FROM sys.master_files
WHERE DB_NAME(database_id) = 'SimpleTran'
GO
-- Take Log Backup
BACKUP LOG [SimpleTran] TO DISK = N'D:\SimpleTran.bak' WITH NOFORMAT, NOINIT, NAME = N'SimpleTran-Transaction Log Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO
-- Run following transaction multiple times and check the size of T-Log
INSERT INTO RealTempTable (ID)
SELECT TOP 50000 ROW_NUMBER() OVER (ORDER BY a.name) RowID
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
-- Check the size of the Database
SELECT DB_NAME(database_id) AS DatabaseName,
Name AS Logical_Name,
Physical_Name, (size*8)/1024 SizeMB
FROM sys.master_files
WHERE DB_NAME(database_id) = 'SimpleTran'
GO
/*
Now run the following code multiple times.
You will notice that it will not increase the size of the .ldf file, but it will
increase the size of the log backup.
*/
-- Second Time
-- START
BACKUP LOG [SimpleTran] TO DISK = N'D:\SimpleTran.log' WITH NOFORMAT, NOINIT, NAME = N'SimpleTran-Transaction Log Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO
-- Run following transaction and check the size of T-Log
INSERT INTO RealTempTable (ID)
SELECT TOP 50000 ROW_NUMBER() OVER (ORDER BY a.name) RowID
FROM sys.all_objects a
CROSS JOIN sys.all_objects b
GO
-- Check the size of the Database
SELECT DB_NAME(database_id) AS DatabaseName,
Name AS Logical_Name,
Physical_Name, (size*8)/1024 SizeMB
FROM sys.master_files
WHERE DB_NAME(database_id) = 'SimpleTran'
GO
-- END
--Clean Up
USE MASTER
GO
IF EXISTS (SELECT name FROM sys.databases WHERE name = N'SimpleTran')
BEGIN
ALTER DATABASE [SimpleTran] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE [SimpleTran]
END

[T-SQL] Truncate Log File

A few days back, I was called by a friend who was working on a community database. The log file of his database was growing bigger and he was worried about it. I went there, saw that it was not a big issue, and told him I would prepare a script that he could run once a month:

USE TEST;
GO
-- Truncate the log by changing the database recovery model to SIMPLE.
ALTER DATABASE TEST
SET RECOVERY SIMPLE;
GO
-- Shrink the truncated log file to 1 MB.
DBCC SHRINKFILE (TEST_log, 1);
GO
-- Reset the database recovery model.
ALTER DATABASE TEST
SET RECOVERY FULL;
GO
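
To see the effect, a quick check with DBCC SQLPERF, run before and after the script, shows the log size and the percentage of log space in use for every database:

-- Report transaction log size and space used for all databases
DBCC SQLPERF (LOGSPACE);
GO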

Monday, August 16, 2010

[T-SQL] Shrink log in all your user databases
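
This script takes every user database that is in the FULL recovery model and not read-only, temporarily switches it to SIMPLE, shrinks its log file to 1 MB, and switches it back to FULL. Uncomment the PRINT line to review the generated statements before running them.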

DECLARE @SQL NVARCHAR(MAX)

SELECT @SQL = COALESCE(@SQL + CHAR(13) + CHAR(10),'') + N'
Use ' + QUOTENAME(d.[name]) + ';' + CHAR(13) + '
ALTER DATABASE ' + QUOTENAME(d.[name]) + ' SET RECOVERY SIMPLE;
DBCC SHRINKFILE (' + QUOTENAME(mf.[name],'''') + ', 1);
ALTER DATABASE ' + QUOTENAME(d.[name]) + ' SET RECOVERY FULL;'
FROM sys.databases d
INNER JOIN sys.master_files mf ON [d].[database_id] = [mf].[database_id]
WHERE
d.[database_id] > 4 --no sys dbs
AND d.recovery_model = 1
AND d.is_read_only = 0
AND mf.[type] = 1 --log files
ORDER BY d.name

--print @SQL

EXECUTE (@SQL)