There are times when it is really convenient to gather data in a view and then use that view in one or more stored procs to build a complex dataset. Sometimes, however, that base dataset can be built much more efficiently if you use one or more temp tables to join data. Views can't handle this type of logic, but stored procs can. Historically, it was difficult to use the dataset returned from a stored proc in the logic of other stored procs, but nowadays it is relatively easy using table variables.
Say we have a stored proc that returns a dataset like this:
create proc myBaseData(
@param1 int,
@param2 datetime
)
as
begin
select @param1 as int1,
@param2 as date1
end
go
Now say we want to call that proc and use the dataset in another proc, but we don't want to save the records to a physical table. We can insert the proc results into a table variable like so:
create proc MyCallingProc(
@param1 int
)
as
begin
declare @mytable as table(
col1 int,
col2 datetime
)
declare @d datetime
select @d=convert(varchar,GETDATE(),112)
insert into @mytable
exec myBaseData @param1,@d
select * from @mytable
end
go
We can then execute it as follows:
exec MyCallingProc 1
The big bonus here is that I don't have to create a view and take the performance hit of filtering it every time I join it. The params on the proc can be used to filter the results before joining.
Another bonus is that I can set a primary key (unique index) on the table variable, which is something I can't do in the view.
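To illustrate that last point, here is a minimal sketch (reusing the hypothetical myBaseData proc from above) of the same pattern with a primary key declared on the table variable:
-- Sketch only: same pattern as MyCallingProc, but col1 now carries a primary key,
-- so joins against it can use a unique index.
declare @mytable as table(
col1 int primary key,
col2 datetime
)
declare @d datetime
select @d=convert(varchar,GETDATE(),112)
insert into @mytable
exec myBaseData 1,@d
select * from @mytable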
Did this help you? If so, please leave a comment!
David Jedziniak maintains this blog and posts helpful tips and tricks on SQL Server and Dynamics GP development. The opinions expressed here are those of the author and do not represent the views, express or implied, of any past or present employer. The information here is provided without warranty of fitness for any particular purpose.
Friday, February 8, 2013
Wednesday, September 26, 2012
Table variable tests show that they are much faster than #temp!
Working on a client's data migration recently has given me the chance to benchmark table variables against traditional temp tables in SQL 2008.
The server I used was not especially fast, and tempdb was even stored on the same drive as the rest of the databases. Still, the results showed that table variables were at least 10x faster in every test.
The most telling scenario was as follows:
There was a table with 2.8 million rows. One of the fields of interest was a 500-character comment field.
The application that populated this table added a record for each comment with no regard to keys (there was no unique key). However, I noticed that there was a combination of fields that could be used for a unique key, but there were several thousand instances where I would need to concatenate the comment field from more than one record to make the key unique.
Example:
Key Comment
1 abc
1 def
2 der
3 mmd
3 xyz
These needed to be imported into the new table as:
1 abcdef
2 der
3 mmdxyz
There was a bit more complexity involved, but hopefully, you get the idea.
So I needed to use FOR XML PATH to concatenate the values, like this:
SELECT A.[KEY],
(SELECT COMMENTFIELD AS 'text()'
FROM MYTABLE
WHERE [KEY]=A.[KEY]
ORDER BY [KEY] ASC
FOR XML PATH(''))
FROM MYTABLE A
The problem is that this returns a recordset like this:
1 abcdef
1 abcdef
2 der
3 mmdxyz
3 mmdxyz
No problem, I just need to group them, right? Well there are 2.8M rows here! Time to benchmark!
I first wrote this using #temp tables.
SELECT A.[KEY],
(SELECT COMMENTFIELD AS 'text()'
FROM MYTABLE
WHERE [KEY]=A.[KEY]
ORDER BY [KEY] ASC
FOR XML PATH('')) AS COMMENTFIELD
INTO #MYTEMP
FROM MYTABLE A
The insert and grouping alone took 9.5 hours.
I then rewrote it to use a table variable:
DECLARE @MYTEMP AS TABLE(
[KEY] INT,
COMMENTFIELD VARCHAR(1000)
)
INSERT INTO @MYTEMP
SELECT A.[KEY],
(SELECT COMMENTFIELD AS 'text()'
FROM MYTABLE
WHERE [KEY]=A.[KEY]
ORDER BY [KEY] ASC
FOR XML PATH(''))
FROM MYTABLE A
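For completeness, the final grouping that produces one row per key could be as simple as this (a sketch using the same hypothetical names):
-- Sketch: collapse the duplicated concatenated rows to one row per key.
-- DISTINCT would also work, since every row for a given key carries the same
-- concatenated comment string.
SELECT [KEY], MAX(COMMENTFIELD) AS COMMENTFIELD
FROM @MYTEMP
GROUP BY [KEY]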
This same logic took 21 minutes! That means it only took 4% as long to run!
If that doesn't make the case for using table variables over temp tables, I don't know what will. From now on, I will default to the table variable when writing code and only consider the temp table for special cases where I need to pass the dataset around.
Did this help you? If so, please leave a comment!
Thursday, August 9, 2012
Why you should not store user data on the same drive as SQL files
I am not aware of any official “Best Practice” statement from Microsoft in relation to storing user files on the same drive as SQL database and log files; however, I would recommend against the practice for three reasons:
- Drive space. SQL database and log files can grow quickly without advance notice. Best practice is to keep 15% of drive space free at all times to safeguard against unexpected growth causing the drive to run out of space. Users are much more unpredictable than the database and can fill the drive without warning, causing SQL to crash. (A quick free-space check is sketched after this list.)
- Disk I/O. Each HBA (Host Bus Adapter) has a finite amount of I/O throughput that it can handle. Even if the space concerns above are handled using disk quotas or some other limiting measure, I am not aware of a good way to cap a particular user's I/O. User file operations are typically larger and slower than SQL data operations, so even if users are not filling up the drive, they can easily consume enough I/O throughput to create a choke point for SQL and degrade performance. These kinds of performance hits are very difficult to diagnose and track down, so it is better to avoid them up front.
- Fragmentation. When set up properly, SQL Server grows data and log files by allocating large blocks of contiguous disk space. Because user files are much smaller, they can create thousands of tiny gaps in disk space, and eventually SQL Server cannot find a contiguous block large enough for the next growth operation, so the data and log files become fragmented. There is no safe way to defragment them with Windows tools, and SQL Server maintenance operations can only defragment a SQL file when there is sufficient contiguous disk space; they cannot move the user files. The end result is a performance hit that can become significant over time. This is also one of the reasons that user databases and the tempdb data and log files should not be stored on the OS drive.
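For the drive space point, here is a hedged sketch (using sys.dm_os_volume_stats, available from SQL Server 2008 R2 SP1 onward) that reports free space on the volumes holding database files, so the 15% guideline can be monitored:
-- Sketch: percent free space on each volume that holds a data or log file.
SELECT DISTINCT vs.volume_mount_point,
vs.total_bytes / 1048576 AS total_mb,
vs.available_bytes / 1048576 AS free_mb,
CAST(vs.available_bytes * 100.0 / vs.total_bytes AS DECIMAL(5,2)) AS percent_free
FROM sys.master_files mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) vs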
If this is still not clear, hopefully this analogy will shed some light.
Imagine a swimming pool filled with barrels with a garden hose running to it that can be moved from barrel to barrel.
The swimming pool is the hard drive.
The barrels are the database growth allocation blocks.
The garden hose is the HBA.
The water is the data.
SQL pulls and pushes water through the garden hose, usually at a trickle.
A user storing or reading a file blasts water at high pressure through the hose intermittently. There is nothing stopping the user from blasting so much water in that they overflow the pool. If they blast or suck water through the hose for an extended period of time (seconds is extended), then SQL has to wait for them to finish to resume the trickle.
Users can put water in any barrel, but SQL prefers an empty barrel. If there are no empty ones, SQL must use part of two or more barrels, but must remember which barrels go together.
This quickly becomes laborious to keep track of, and translates into degraded performance.
Did this help you? If so, please leave a comment!
Tuesday, July 3, 2012
Dream SQL Install
I see debates all the time over which way is best to set up the hardware for a particular SQL install. So I decided to lay out what the hardware setup would look like in my ideal GP SQL installation if money was no object.
First, I would buy a Dell PowerEdge and an HP ProLiant and use them as doorstops while the boys carry in a new maxed-out SGI IDC3212-RP4.
Specs here: http://www.sgi.com/products/servers/infinitedata_cluster/configs.html
We may also need step stools later, so I would toss them in the corner. I would then hook up a SAN with a fibre channel backbone. I am only installing TWO in my imaginary world, so I would put in five 2TB SSDs. I would install the OS and SQL Server binaries on SSD1, then the following:
SSD2: MDF files (Except tempdb)
SSD3: LDF files
SSD4: tempdb files
SSD5: BAK files
I would set the initial db size on DYNAMICS and TWO to 250GB to prevent file growth for a while.
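Pre-sizing is just an ALTER DATABASE call; here is a rough sketch for the TWO data file (the logical file name is an assumption, so check sys.master_files for the real one):
-- Sketch: pre-size the TWO company data file to 250GB.
-- 'TWODat.mdf' is an assumed logical file name; verify it first with:
-- SELECT name, physical_name FROM sys.master_files WHERE database_id = DB_ID('TWO')
ALTER DATABASE TWO
MODIFY FILE (NAME = N'TWODat.mdf', SIZE = 250GB)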
I would then connect the entire rig to a 16Tbps connection and replicate the install on the east and west coasts. I would then set up SQL clustering to mirror the entire setup between the systems, and use SQL replication features to allow seamless switching from one instance to another in case of failure.
So there is the upper limit. Anything else is a compromise, and must be weighed against an organization's needs and budget.
I hope you enjoyed my little foray in dreamland.
Tuesday, June 19, 2012
SQL Table variables
How to use table variables (introduced in T-SQL in SQL Server 2000)
First, let's look at the reason we are using a non-permanent table in the first place. Say I have a table with 100,000 rows. I need to join that table to another, and I will only find matches in a small number of records. Additionally, I only need to work with a few columns of my large table. Joining this large table to my query can severely degrade performance. I can often overcome this issue and increase performance by creating a table on the fly in code, loading it with the records from the large table that I will actually be working with, and using that instead.
In prior versions of SQL, the only way to do this was with a temp table.
Here is a sample of a temp table declaration:
CREATE TABLE #MYTEMPTABLE ([object_id] INT,[name] VARCHAR(128),max_length SMALLINT);
This creates a table in the temp database (on disk). I can then load this table like so:
INSERT INTO #MYTEMPTABLE SELECT [object_id],[name],max_length FROM sys.all_columns WHERE max_length>2000;
Note that if I were to just use the all_columns table, my query would have to do a table scan through many thousands of records to find the ones with max_length>2000. This way, we do that scan once, up front, before the join.
I can work with the temp table much like a normal table:
SELECT * FROM sys.columns C INNER JOIN #MYTEMPTABLE T ON C.[object_id]=T.[object_id];
I must remember to drop the temp table when I am done with it. Otherwise it could cause object conflicts.
DROP TABLE #MYTEMPTABLE;
The down side to doing it this way is that the few records I am working with are still being stored in a physical database (tempdb), which is always slower than RAM (though not by much if the database is on an SSD).
This code can be made to run faster (sometimes by orders of magnitude) by using a table variable instead of a temp table.
Here is the same functionality:
DECLARE @MYTEMPTABLE AS TABLE ([object_id] INT,[name] VARCHAR(128),max_length SMALLINT);
INSERT INTO @MYTEMPTABLE SELECT [object_id],[name],max_length FROM sys.all_columns WHERE max_length>2000;
SELECT * FROM sys.columns C INNER JOIN @MYTEMPTABLE T ON C.[object_id]=T.[object_id];
Advantages of table variables over temporary tables:
1. They have well-defined scopes.
2. They're cleaned up automatically at the end of the stored proc in which they're defined
3. They tend to result in fewer recompilations
4. They result in less locking in stored procs and transactions
5. For recordsets smaller than 100,000 rows, they are almost always faster than a temporary table. In 12 years, I have never run into a real-world instance where the table variable was slower than the temporary table as long as the number of rows stored in it was under 100,000. For datasets larger than that, you typically get more benefit from being able to define indexes on the temporary table (see the sketch after this list).
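Here is a rough sketch of that indexing difference (hypothetical names):
-- A table variable can carry constraints declared inline, such as a primary key:
DECLARE @SMALLSET AS TABLE ([object_id] INT PRIMARY KEY, [name] VARCHAR(128));
-- A temporary table can have additional indexes added after creation,
-- which is where it tends to win on large datasets:
CREATE TABLE #LARGESET ([object_id] INT, [name] VARCHAR(128));
CREATE NONCLUSTERED INDEX IX_LARGESET_NAME ON #LARGESET ([name]);
DROP TABLE #LARGESET;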
Did this help you? If so, please leave a comment!
Friday, February 10, 2012
SQL Server Setup and Maintenance Tips
Server Maintenance
- Windows Updates
Run Windows updates manually and frequently. Turning on automatic updates is not advised, based on the history of Windows updates breaking certain products. Before installing updates, check the internet for reports of issues they could cause with your applications.
- Virus Scanners
Exclude MDF, NDF, LDF, BAK, TRN, and BCP file extensions, and make sure scheduled scans don't coincide with other scheduled tasks.
- Backups
Perform a full system backup periodically, and before and after installations are run.
- Defragmentation
SANs and SSDs are less likely to get fragmented in normal operation. If a drive is used only for database files, it is less likely to need defragmentation. Drives that hold BAK files will need frequent defragmentation.
SQL Server Setup Best Practices
- Plan the data storage scheme before installing SQL
Disk 1: OS, page file, SQL Server binaries
Disk 2 or LUN 1: MDF files (except tempdb)
Disk 3 or LUN 2: LDF files
Disk 4 or LUN 3: tempdb files
Disk 5 or LUN 4: BAK files
- Check for physical file fragmentation before creating a database or log file
- Pre-size MDF and LDF files
Revisit periodically and reset the initial size if necessary
- Plan disk sizes so that you can keep free drive space above 15%
- Do not set auto-growth of MDF or LDF files to a percentage. Set it to a value in MB (see the sketch after this list)
- Turn on Instant File Initialization in SQL
- Turn on Auto Statistics
SQL Server Maintenance Best Practices for a Typical GP Instance
- Use the Full backup model
- Weekly (or more frequent) full database backup, transported off site
- Nightly incremental database backup
- Periodic log backups throughout the day
- Use backup compression
- What should you back up?
master
msdb
DYNAMICS
All company databases
NEVER back up tempdb
Model only needs to be backed up when making changes to master (such as when upgrading or patching SQL Server)
- When backing up data, the tasks should be run in this order:
1. Check DB
2. Reorganize indexes
3. Update Statistics
4. Backup Database
- When backing up logs, maintenance tasks are not necessary
SQL Server Manual Periodic Maintenance Tasks
(These tasks could be scheduled, but their results should be checked before allowing users back into the system)
- Index Maintenance
Add missing indexes
Remove unused and duplicate indexes
Monitor indexes for excessive fragmentation
- Msdb Maintenance
Periodically delete backup, restore, job, and maintenance plan history
sp_delete_backuphistory [oldest date]
Use the log file viewer for the others
- Shrinking
Always rebuild indexes after shrinking. If you need to avoid growing the file with a rebuild after shrinking, then at least reorganize indexes. Shrinking leaves indexes nearly 100% fragmented by design.
Never shrink tempdb
Shrinking should be a rare occurrence if the growth settings are tuned properly
- Monitor drive free space
Keep more than 15% free
- Monitor SQL Server logs and Windows event logs for errors and warnings
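As a sketch of two of the items above, here is what auto-growth in MB (instead of a percentage) and the msdb history cleanup might look like; the DYNAMICS logical file name is an assumption, so check sys.master_files for the real one:
-- Sketch: set auto-growth to a fixed size in MB instead of a percentage.
-- 'GPSDynamicsDat.mdf' is an assumed logical file name.
ALTER DATABASE DYNAMICS
MODIFY FILE (NAME = N'GPSDynamicsDat.mdf', FILEGROWTH = 512MB)
GO
-- Sketch: trim backup/restore history older than six months from msdb.
DECLARE @oldest DATETIME
SET @oldest = DATEADD(MONTH, -6, GETDATE())
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @oldest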
Did this help you? If so, please leave a comment!
Tuesday, November 29, 2011
SQL Decryption Paradox
In SQL 2000 and prior, an object that was created using the 'with encryption' option could easily be read out of the comments as clear text (many times with SQL Profiler).
In SQL 2005, Microsoft removed this 'feature' by replacing the comment text with a null value and storing the encrypted SQL as an image value in sysobjvalues. The text is just bit shifted and can be bit shifted back with a little effort. After all, SQL has to decrypt it to re-compile it anyway.
If you spend any time at all supporting real-world environments, you will eventually find the need to debug or even repair an encrypted object that someone else wrote. I found myself in this situation many times. A quick Google search turned up dozens of tools that promised to do this for SQL 2005 and SQL 2008. I tried a couple and was disappointed with the results. I also really needed something that I did not have to install on a customer's server. Ideally, I would be able to connect to the customer's database, decrypt the proc without affecting any of their existing SQL objects, and copy and paste the code into a local editor window to take a look. I found several different discussions from which I gleaned an understanding of how to get the text back using nothing but SQL itself. (Again, logically you would expect this, since SQL decrypts the procs all the time in order to compile and run them.)
Before I go further, let me state that I am not condoning stealing proprietary code. I am also not condoning the use of SQL encryption to protect your proprietary code.
I feel that if your code is super-secret, you should use one of the following methods to protect it:
- Put the logic in the application layer, rather than the data layer.
- Host the databases for your customers so you can control access to the objects
Ok, let me climb down off my soap box and show you what I came up with.
After some experimentation, I created a stored proc that will run on 2005 and 2008 and decrypt the majority of the stored procs I have pointed it at. It is either all or nothing. Either you get the entire stored proc or you get null back.
Edit: On very long procs I have noticed that it returns extra line breaks when reading across pages. These can occur in the middle of words, so some cleanup may be required if you want to execute the output. Some day I may get around to fixing it programmatically. If you get an itch and want to do it yourself, I would welcome your solution in the comments to this post.
As an additional learning experiment, I decided to encrypt the proc and create it so that it could decrypt itself.
You must use a dedicated admin connection or DAC to create and run the proc, since it requires access to sysobjvalues.
Edit: For SQL 2012, you can connect to DAC by closing all connections and windows in SSMS, then click the Database Engine Query button on the toolbar. Login as Admin:xxxxx using SQL authentication and sa user.
Edit: OK, so today I got tired of removing the extra line breaks after each block. They are not easy to fix, so I added an easy-to-find string that I can use to quickly remove those breaks. In SSMS you can do a find/replace on the output using wildcards, like this: \n|||remove|||\n. That will remove the extra line breaks, and the script should compile.
--you must connect to Admin: SQLInstance
DROP PROC DecryptObject
GO
CREATE PROCEDURE DecryptObject (
@schema nvarchar(255),
@ObjectName nvarchar(255))
with encryption
AS
BEGIN
DECLARE @id INT,
@TYPE NVARCHAR(50),
@Statement NVARCHAR(4000),
@IsEncrypted BIT
--GET THE ID FOR THE OBJECT
SELECT TOP 1 @id=object_id
FROM sys.all_objects
WHERE [type] NOT IN ('S','U','PK','F','D','SQ','IT','X','PC','FS','AF')
AND name = @ObjectName
AND SCHEMA_NAME([schema_id]) = COALESCE(@schema,'dbo')
IF @id IS NULL
BEGIN
PRINT @ObjectName + ' not found!'
RETURN
END
--SEE IF IT IS ENCRYPTED
SELECT @IsEncrypted=[encrypted]
FROM syscomments
WHERE [id] = @id
AND colid = 1
IF @IsEncrypted = 0
BEGIN
PRINT @ObjectName + ' not encrypted!'
RETURN
END
IF NOT EXISTS(SELECT 1 FROM sys.dm_exec_connections C INNER JOIN sys.endpoints E ON C.[endpoint_id]=E.[endpoint_id] WHERE E.[name]='Dedicated Admin Connection' AND C.[session_id] = @@SPID)
BEGIN
PRINT 'This procedure can only be run using a DAC Connection'
RETURN
END
--GET THE REST OF THE VALUES
SELECT @ObjectName=[name],
@TYPE= CASE WHEN [type] = 'P' THEN N'PROCEDURE'
WHEN [type] = 'V' THEN 'VIEW'
WHEN [type] IN ('FN','TF','IF') THEN N'FUNCTION'
ELSE [type]
END,
@Statement=CASE WHEN [type] = 'P' THEN N'WITH ENCRYPTION AS'
WHEN [type] = 'V' THEN N'WITH ENCRYPTION AS SELECT 123 ABC'
WHEN [type] IN ('FN','TF','IF') THEN N' () RETURNS INT WITH ENCRYPTION AS BEGIN RETURN 1 END'
ELSE [type]
END
FROM sys.all_objects
WHERE object_id = @id
DECLARE @BlockNumber INT,
@Block NVARCHAR(MAX),
@pos INT,
@DummyBlock NVARCHAR(MAX),
@DummyImage VARBINARY(MAX),
@EncyptedBlock NVARCHAR(MAX),
@EncyptedImage VARBINARY(MAX),
@pos2 INT,
@part1 NVARCHAR(MAX),
@part2 NVARCHAR(MAX),
@DummyImageSQL NVARCHAR(MAX),
@NewText NVARCHAR(4000)
--GET THE ENCRYPTED IMAGE VALUE
SELECT @EncyptedImage = imageval
FROM sys.sysobjvalues
WHERE [objid] = @id
AND valclass = 1
BEGIN TRANSACTION
--BUILD THE SQL FOR THE DUMMY IMAGE
SELECT @pos2 = 1,
@part1= N'ALTER ' + @TYPE + ' ' + ISNULL((@schema+'.'),'') + @ObjectName + ' ' + @Statement
--FILL THE REST OF THE STRINGS AND START THE SQL FOR THE DUMMY IMAGE
SELECT @part1 = @part1 + REPLICATE('-',4000-LEN(@part1)),
@part2 = REPLICATE('-',8000),
@DummyImageSQL = N'EXEC(@part1'
--PROCESS THE ENCRYPTED IMAGE ONE BLOCK AT A TIME
WHILE @pos2 <= CEILING(DATALENGTH(@EncyptedImage) / 8000.0)
BEGIN
--BUILD THE SQL FOR THE DUMMY IMAGE
SET @DummyImageSQL=@DummyImageSQL + N'+@part2'
SET @pos2 = @pos2 + 1
END
--FINISH THE SQL
SELECT @DummyImageSQL = @DummyImageSQL + ')'
--EXEC SQL TO CREATE DUMMY IMAGE
EXEC sp_executesql @DummyImageSQL,
N'@part1 NVARCHAR(4000),@part2 VARCHAR(8000)',
@part1=@part1,
@part2=@part2
--GO GET THE DUMMY IMAGE THAT WAS CREATED
SELECT @DummyImage = imageval
FROM [sys].[sysobjvalues]
WHERE [objid] = @id
and [valclass] = 1
ROLLBACK TRANSACTION
SET @BlockNumber=1
--PROCESS ONE BLOCK AT A TIME
WHILE @BlockNumber<=CEILING(DATALENGTH(@EncyptedImage) / 8000.0)
BEGIN
--GET THE NEXT BLOCK FROM THE ENCRYPTED IMAGE AND THE DUMMY IMAGE
SELECT @EncyptedBlock = SUBSTRING(@EncyptedImage, (@BlockNumber - 1) * 8000 + 1, 8000),
@DummyBlock = SUBSTRING(@DummyImage, (@BlockNumber - 1) * 8000 + 1, 8000)
IF @BlockNumber=1
BEGIN
--BUILD THE CREATE STATEMENT HEADER
SELECT @NewText = N'CREATE ' + @TYPE + ' ' + ISNULL((@schema + '.'),'') + @ObjectName + ' ' + @Statement + REPLICATE('-',4000)
END
ELSE
BEGIN
--FILL IN HALF A BLOCK
SELECT @NewText=REPLICATE('-', 4000)
END
--FILL HALF THE SIZE OF THE ENCRYPTED BLOCK
SELECT @Block = REPLICATE(N'A', (DATALENGTH(@EncyptedBlock) / 2)),
@pos=1
--PROCESS ONE CHARACTER AT A TIME
WHILE @pos<=DATALENGTH(@EncyptedBlock)/2
BEGIN
--USE BITWISE XOR TO DECRYPT THE CHARACTER
SELECT @Block =
STUFF(@Block, @pos, 1,
NCHAR(UNICODE(SUBSTRING(@EncyptedBlock, @pos, 1))
^ (UNICODE(SUBSTRING(@NewText, @pos, 1))
^ UNICODE(SUBSTRING(@DummyBlock, @pos, 1))
)
)
)
--NEXT POSITION
SELECT @pos = @pos+1
END
--PRINT THIS BLOCK BEFORE MOVING TO THE NEXT, THIS WILL HAVE THE EFFECT OF
--GENERATING ONE LONG STATEMENT IN THE RESULTS WINDOW
PRINT @Block
print '|||remove|||'
SELECT @BlockNumber = @BlockNumber + 1
END
END
GO
exec DecryptObject 'dbo','DecryptObject'
Did this help you? If so, please leave a comment!