Tuesday, November 29, 2011

SQL Decryption Paradox

In SQL 2000 and prior, an object that was encrypted using the WITH ENCRYPTION option could easily be read back out of the comments as clear text (often with SQL Profiler).

In SQL 2005, Microsoft removed this 'feature' by replacing the comment text with a null value and storing the encrypted SQL as an image value in sysobjvalues.  The text is only XOR-obfuscated, though, and can be recovered with a little effort.  After all, SQL Server has to decrypt it to re-compile it anyway.
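
If you just want to see where the encrypted text lives, a quick look like the following works.  This is only a sketch, and note that sys.sysobjvalues is visible only over the dedicated admin connection:

--list encrypted modules and the size of their encrypted image (run over the DAC)
SELECT o.name, DATALENGTH(v.imageval) AS encrypted_bytes
FROM sys.sysobjvalues v
     INNER JOIN sys.objects o
          ON o.object_id = v.[objid]
WHERE v.valclass = 1
     AND OBJECTPROPERTY(v.[objid], 'IsEncrypted') = 1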

If you spend any time at all supporting real-world environments, you will eventually find the need to debug or even repair an encrypted object that someone else wrote.  I have found myself in this situation many times.  A quick Google search turned up dozens of tools that promised to do this for SQL 2005 and SQL 2008.  I tried a couple and was disappointed with the results.  I also really needed something that I did not have to install on a customer's server.  Ideally, I would be able to connect to the customer's database, decrypt the proc without affecting any of their existing SQL objects, and copy/paste the code into a local editor window to take a look.  I found several different discussions from which I gleaned an understanding of how to get the text back using nothing but SQL itself.  (Again, logically you would expect this to be possible, since SQL Server decrypts the procs all the time in order to compile and run them.)

Before I go further, let me state that I am not condoning stealing proprietary code.  I am also not condoning the use of SQL encryption to protect your proprietary code.

I feel that if your code is super-secret, you should use one of the following methods to protect it:
  1. Put the logic in the application layer, rather than the data layer.
  2. Host the databases for your customers so you can control access to the objects.
Regardless of what you choose to do, above all, you should understand this: if you are not willing to support your code indefinitely (and no one is), someone else *will* require access to it eventually, and they *will* find a way to get to it.

Ok, let me climb down off my soap box and show you what I came up with.
After some experimentation, I created a stored proc that will run on 2005 and 2008 and decrypt the majority of the stored procs I have pointed it at.  It is all or nothing: either you get the entire stored proc back or you get null.

Edit: On very long procs I have noticed that it returns extra line breaks when reading across pages.  These can occur in the middle of words, so some clean-up may be required if you want to execute the output.  Someday I may get around to fixing it programmatically.  If you get an itch and want to do it yourself, I would welcome your solution in the comments to this post.


As an additional learning experiment, I decided to encrypt the proc itself and create it so that it could decrypt itself.
You must use a dedicated administrator connection (DAC) to create and run the proc, since it requires access to sysobjvalues.

Edit: For SQL 2012, you can connect via the DAC by closing all connections and windows in SSMS, then clicking the Database Engine Query button on the toolbar.  Log in as Admin:xxxxx using SQL authentication and the sa user.

Edit: OK, so today I got tired of removing the extra line breaks after each block.  They are not easy to fix programmatically, so I now print an easy-to-find marker string that can be used to quickly remove them.  In SSMS you can do a find/replace on the output using wildcards, replacing \n|||remove|||\n with nothing.  That will remove the extra line breaks and the script should compile.

--you must connect to Admin: SQLInstance

DROP PROC DecryptObject
GO

CREATE PROCEDURE DecryptObject (
     @schema nvarchar(255), 
     @ObjectName nvarchar(255))
with encryption
AS
BEGIN
     DECLARE @id INT,
          @TYPE NVARCHAR(50),
          @Statement NVARCHAR(4000),
          @IsEncrypted BIT

     --GET THE ID FOR THE OBJECT
     SELECT TOP 1 @id=object_id
     FROM sys.all_objects 
     WHERE [type] NOT IN ('S','U','PK','F','D','SQ','IT','X','PC','FS','AF') 
          AND name = @ObjectName 
          AND SCHEMA_NAME([schema_id]) = COALESCE(@schema,'dbo') 
     IF @id IS NULL
     BEGIN
          PRINT @ObjectName + ' not found!'
          RETURN
     END
     --SEE IF IT IS ENCRYPTED
     SELECT @IsEncrypted=[encrypted]
     FROM syscomments
     WHERE [id] = @id 
          AND colid = 1
     IF @IsEncrypted = 0
     BEGIN
          PRINT @ObjectName + ' not encrypted!'
          RETURN
     END
     IF NOT EXISTS(SELECT 1 
          FROM sys.dm_exec_connections C 
               INNER JOIN sys.endpoints E 
                    ON C.[endpoint_id]=E.[endpoint_id] 
          WHERE E.[name]='Dedicated Admin Connection' 
               AND C.[session_id] = @@SPID)
     BEGIN
          PRINT 'This procedure can only be run using a DAC Connection'
          RETURN
     END     
     --GET THE REST OF THE VALUES
     SELECT @ObjectName=[name],
          @TYPE=     CASE WHEN [type] = 'P' THEN N'PROCEDURE' 
                         WHEN [type] = 'V' THEN 'VIEW' 
                         WHEN [type] IN ('FN','TF','IF') THEN N'FUNCTION' 
                         ELSE [type]
                    END,
          @Statement=CASE WHEN [type] = 'P' THEN N'WITH ENCRYPTION AS' 
                         WHEN [type] = 'V' THEN N'WITH ENCRYPTION AS SELECT 123 ABC' 
                         WHEN [type] IN ('FN','TF','IF') THEN N' () RETURNS INT WITH ENCRYPTION AS BEGIN RETURN 1 END' 
                         ELSE [type]
                    END
     FROM sys.all_objects 
     WHERE object_id = @id
     DECLARE @BlockNumber INT,
          @Block NVARCHAR(MAX),
          @pos INT,
          @DummyBlock NVARCHAR(MAX),
          @DummyImage VARBINARY(MAX),
          @EncyptedBlock NVARCHAR(MAX),
          @EncyptedImage VARBINARY(MAX),
          @pos2 INT,
          @part1 NVARCHAR(MAX),
          @part2 NVARCHAR(MAX),
          @DummyImageSQL NVARCHAR(MAX),
          @NewText NVARCHAR(4000)


     --GET THE ENCRYPTED IMAGE VALUE
     SELECT @EncyptedImage = imageval 
     FROM sys.sysobjvalues 
     WHERE [objid] = @id 
          AND valclass = 1
     BEGIN TRANSACTION
          --BUILD THE SQL FOR THE DUMMY IMAGE
          SELECT @pos2 = 1,
               @part1= N'ALTER ' + @TYPE + ' ' + ISNULL((@schema+'.'),'') + @ObjectName + ' ' + @Statement    
           
          --FILL THE REST OF THE STRINGS AND START THE SQL FOR THE DUMMY IMAGE
          SELECT @part1 = @part1 + REPLICATE('-',4000-LEN(@part1)),
               @part2 = REPLICATE('-',8000),
               @DummyImageSQL = N'EXEC(@part1'
          --PROCESS THE ENCRYPTED IMAGE ONE BLOCK AT A TIME
          WHILE @pos2 <= CEILING(DATALENGTH(@EncyptedImage) / 8000.0)
          BEGIN
               --APPEND ANOTHER 8000-CHARACTER BLOCK TO THE DUMMY IMAGE SQL
               SET @DummyImageSQL=@DummyImageSQL + N'+@part2'
               SET @pos2 = @pos2 + 1
          END
                
          --FINISH THE SQL
          SELECT @DummyImageSQL = @DummyImageSQL + ')'                    
          --EXEC SQL TO CREATE DUMMY IMAGE
          EXEC sp_executesql @DummyImageSQL,
                    N'@part1 NVARCHAR(4000),@part2 VARCHAR(8000)',
                    @part1=@part1,
                    @part2=@part2
          --GO GET THE DUMMY IMAGE THAT WAS CREATED
          SELECT @DummyImage = imageval
          FROM [sys].[sysobjvalues] 
          WHERE [objid] = @id 
               and [valclass] = 1            
     ROLLBACK TRANSACTION
     SET @BlockNumber=1
     --PROCESS ONE BLOCK AT A TIME
     WHILE @BlockNumber<=CEILING(DATALENGTH(@EncyptedImage) / 8000.0)
     BEGIN
          --GET THE NEXT BLOCK FROM THE ENCRYPTED IMAGE AND THE DUMMY IMAGE
          SELECT @EncyptedBlock = SUBSTRING(@EncyptedImage, (@BlockNumber - 1) * 8000 + 1, 8000),
               @DummyBlock = SUBSTRING(@DummyImage, (@BlockNumber - 1) * 8000 + 1, 8000)
          IF @BlockNumber=1
          BEGIN          
               --BUILD THE CREATE STATEMENT HEADER
               SELECT @NewText = N'CREATE ' + @TYPE + ' ' + ISNULL((@schema + '.'),'') + @ObjectName + ' ' + @Statement + REPLICATE('-',4000)               
          END
          ELSE
          BEGIN
               --FILL IN HALF A BLOCK
               SELECT @NewText=REPLICATE('-', 4000)
          END                    
           
          --FILL HALF THE SIZE OF THE ENCRYPTED BLOCK
          SELECT @Block = REPLICATE(N'A', (DATALENGTH(@EncyptedBlock) / 2)),                        
               @pos=1
          --PROCESS ONE CHARACTER AT A TIME
          WHILE @pos<=DATALENGTH(@EncyptedBlock)/2
          BEGIN
               --USE BITWISE XOR TO DECRYPT THE CHARACTER
               SELECT @Block = 
                    STUFF(@Block, @pos, 1, 
                         NCHAR(UNICODE(SUBSTRING(@EncyptedBlock, @pos, 1)) 
                              ^ (UNICODE(SUBSTRING(@NewText, @pos, 1)) 
                                   ^ UNICODE(SUBSTRING(@DummyBlock, @pos, 1))
                                   )
                              )
                         )
               --NEXT POSITION
               SELECT @pos = @pos+1
          END
          --PRINT THIS BLOCK BEFORE MOVING TO THE NEXT, THIS WILL HAVE THE EFFECT OF 
          --GENERATING ONE LONG STATEMENT IN THE RESULTS WINDOW
          PRINT @Block
          PRINT '|||remove|||'
          SELECT @BlockNumber = @BlockNumber + 1
     END   
END
GO
exec DecryptObject 'dbo','DecryptObject'
Did this help you?  If so, please leave a comment!

Thursday, October 27, 2011

Script to update WSRepts table for SRS

--review the current report locations first
SELECT *
FROM WSRepts
--then point the report at its new location
UPDATE WSRepts
SET ReportLocation='http://xpsql08:8080/ReportServer/TWO/WennSoft Service/Service Call Work Order'
WHERE ReportReference='SV_Workorder_1'

Did this help you?  If so, please leave a comment!

Script to view index fragmentation

--Use a script like this to monitor index fragmentation:
SELECT d.name,
     s.object_id,
     s.index_id,
     s.index_type_desc,
     s.avg_fragmentation_in_percent,
     s.avg_page_space_used_in_percent
FROM sys.databases AS d
     INNER JOIN sys.dm_db_index_physical_stats(NULL, NULL, NULL, NULL, 'SAMPLED') AS s
          ON d.database_id = s.database_id
WHERE d.name = 'DYNAMICS'
ORDER BY s.avg_fragmentation_in_percent DESC
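
Once you know which indexes are fragmented, the usual rule of thumb is to reorganize between roughly 5% and 30% fragmentation and rebuild above 30%.  A minimal sketch of acting on the results (the table name is just an example):

--light fragmentation: reorganize in place
ALTER INDEX ALL ON dbo.SV00500 REORGANIZE
--heavy fragmentation: rebuild the index
ALTER INDEX ALL ON dbo.SV00500 REBUILD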



Did this help you?  If so, please leave a comment!

Script to Move tempdb database

--To move the tempdb database files:
SELECT name, physical_name AS CurrentLocation, state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'tempdb')
--tempdb cannot be taken offline; instead, change the file locations in the catalog
--(tempdev and templog are the default logical names - confirm them with the SELECT above)
ALTER DATABASE tempdb MODIFY FILE ( NAME = tempdev, FILENAME = 'V:\tempdb\tempdb.mdf' )
ALTER DATABASE tempdb MODIFY FILE ( NAME = templog, FILENAME = 'V:\tempdb\templog.ldf' )
--then restart the SQL Server service so tempdb is recreated in the new location

Did this help you?  If so, please leave a comment!

SQL Scripts from Synergy - Example of cross join

--A LONG RUNNING QUERY
SELECT 'LET''S CREATE A CARTESIAN PRODUCT'
GO  --WITHOUT THE GO, IT RUNS AS A SINGLE BATCH

SELECT *
FROM SV00400
CROSS JOIN SV00500
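
Since a cross join returns every combination of rows from both tables, the row count multiplies quickly.  A quick sanity check of the damage before running one (a sketch):

--estimate the size of the cartesian product before creating it
SELECT (SELECT COUNT(*) FROM SV00400) * (SELECT COUNT(*) FROM SV00500) AS ExpectedRows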


Did this help you?  If so, please leave a comment!

SQL Scripts from Synergy - example of table variable

--LET'S SAY I NEED THE DISTINCT LIST OF KEY VALUES FROM CONTRACT MASTER AND HISTORY
SELECT CUSTNMBR,ADRSCODE,Contract_Number,WSCONTSQ
FROM dbo.SV00500
UNION
SELECT CUSTNMBR,ADRSCODE,Contract_Number,WSCONTSQ
FROM dbo.SV00501
--I WILL BE DOING DIFFERENT THINGS WITH THE DATA RETURNED
--RATHER THAN RUN THE ABOVE QUERY MULTIPLE TIMES
--I CAN USE A TABLE VARIABLE
DECLARE @CONTRACTS AS TABLE(
 CUSTNMBR CHAR(15),
 ADRSCODE CHAR(15),
 Contract_Number CHAR(11),
 WSCONTSQ INT
 ,UNIQUE(CUSTNMBR,ADRSCODE,Contract_Number,WSCONTSQ)  --THIS TRICKS SQL INTO GIVING ME AN INDEX
 )
INSERT INTO @CONTRACTS
SELECT CUSTNMBR,ADRSCODE,Contract_Number,WSCONTSQ
FROM dbo.SV00500
UNION
SELECT CUSTNMBR,ADRSCODE,Contract_Number,WSCONTSQ
FROM dbo.SV00501
--NOW I CAN USE MY TABLE VARIABLE PRETTY MUCH LIKE A NORMAL TABLE
SELECT CUSTNMBR,ADRSCODE,Contract_Number,WSCONTSQ
FROM @CONTRACTS
SELECT *
FROM dbo.SV00582 S
 INNER JOIN @CONTRACTS C
  ON S.CUSTNMBR=C.CUSTNMBR
   AND S.ADRSCODE=C.ADRSCODE
   AND S.Contract_Number=C.Contract_Number
   AND S.WSCONTSQ=C.WSCONTSQ


Did this help you?  If so, please leave a comment!

Script to turn on Auto Create Statistics

sp_helpdb TWO
GO
ALTER DATABASE TWO SET AUTO_CREATE_STATISTICS ON
GO
ALTER DATABASE TWO SET AUTO_UPDATE_STATISTICS ON
GO
ALTER DATABASE TWO SET AUTO_UPDATE_STATISTICS_ASYNC ON
GO
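
To confirm the settings took effect, you can check sys.databases:

SELECT name, is_auto_create_stats_on, is_auto_update_stats_on, is_auto_update_stats_async_on
FROM sys.databases
WHERE name = 'TWO'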

Did this help you?  If so, please leave a comment!

SQL Scripts from Synergy - examples for execution plans

--BASIC SELECT *
SELECT *
FROM dbo.SV00585

--SPECIFY COLUMNS
SELECT Contract_Number,WSCONTSQ,Equipment_ID
FROM dbo.SV00585
WHERE WSCONTSQ=1

--USING AN INDEX TO GET RID OF THE TABLE SCAN
SELECT T.Contract_Number,T.WSCONTSQ,T.Equipment_ID
FROM dbo.SV00585 T
 INNER JOIN dbo.SV00501 H
  ON T.Contract_Number=H.Contract_Number
   AND T.WSCONTSQ=H.WSCONTSQ
   AND T.ADRSCODE=H.ADRSCODE
   AND T.CUSTNMBR=H.CUSTNMBR
WHERE T.WSCONTSQ=1

--EXAMPLE OF ELIMINATING TABLE SCAN ON DATE FUNCTION
SELECT T.Contract_Number,T.WSCONTSQ,T.Equipment_ID
FROM dbo.SV00585 T
 INNER JOIN dbo.SV00501 H
  ON T.Contract_Number=H.Contract_Number
   AND T.WSCONTSQ=H.WSCONTSQ
   AND T.ADRSCODE=H.ADRSCODE
   AND T.CUSTNMBR=H.CUSTNMBR
WHERE DATEADD(day, 15, T.Schedule_Date) = '01/16/2017'
SELECT T.Contract_Number,T.WSCONTSQ,T.Equipment_ID
FROM dbo.SV00585 T
 INNER JOIN dbo.SV00501 H
  ON T.Contract_Number=H.Contract_Number
   AND T.WSCONTSQ=H.WSCONTSQ
   AND T.ADRSCODE=H.ADRSCODE
   AND T.CUSTNMBR=H.CUSTNMBR
WHERE T.Schedule_Date = DATEADD(day, -15, '01/16/2017')


--EXAMPLE OF LIKE OPERATOR
SELECT A.Equipment_ID
FROM dbo.SV00400 A
WHERE A.Equipment_ID LIKE '%A%'

--IMPLICIT CONVERSIONS
DECLARE @NCHAR NCHAR(15),
  @CHAR CHAR(15)
SELECT @NCHAR='MAIN OFFICE',
  @CHAR='MAIN OFFICE'
SELECT T.Contract_Number,T.WSCONTSQ,T.Equipment_ID,T.CUSTNMBR,T.ADRSCODE,H.*
FROM dbo.SV00585 T
 INNER JOIN dbo.SV00501 H
  ON T.Contract_Number=H.Contract_Number
   AND T.WSCONTSQ=H.WSCONTSQ
   AND T.ADRSCODE=H.ADRSCODE
   AND T.CUSTNMBR=H.CUSTNMBR
WHERE H.ADRSCODE=@NCHAR
SELECT T.Contract_Number,T.WSCONTSQ,T.Equipment_ID,T.CUSTNMBR,T.ADRSCODE,H.*
FROM dbo.SV00585 T
 INNER JOIN dbo.SV00501 H
  ON T.Contract_Number=H.Contract_Number
   AND T.WSCONTSQ=H.WSCONTSQ
   AND T.ADRSCODE=H.ADRSCODE
   AND T.CUSTNMBR=H.CUSTNMBR
WHERE H.ADRSCODE=@CHAR

Did this help you?  If so, please leave a comment!

SQL Scripts from Synergy - view compiled execution plans

--Query to view compiled execution plans
SELECT [cp].[refcounts]
, [cp].[usecounts]
, [cp].[objtype]
, [st].[dbid]
, [st].[objectid]
, [st].[text]
, [qp].[query_plan]
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text ( cp.plan_handle ) st
CROSS APPLY sys.dm_exec_query_plan ( cp.plan_handle ) qp
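
On a busy server that query returns a lot of rows, so it helps to filter on the statement text.  A sketch (the search pattern is just an example):

SELECT cp.usecounts, st.[text], qp.query_plan
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
WHERE st.[text] LIKE '%SV00500%'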


Did this help you?  If so, please leave a comment!

Monday, April 25, 2011

Backup Strategies for SQL Server

References:
http://msdn.microsoft.com/en-us/library/ms191164.aspx
http://msdn.microsoft.com/en-us/library/ms190217.aspx

There are two recovery models in SQL Server:

Simple: Used when some data loss after recovery is acceptable.  The absence of separate log backups simplifies managing backup and restore.  However, a database can be restored only to the point of the most recent backup.

Full: (Full is really a misnomer here, since both are full db backups.  A better name would be Complex) The full recovery model uses log backups to minimize data loss.  The advantage of using log backups is that they enable point-in-time recovery.  Assuming you can back up the active log after a disaster occurs, you can restore the database up to the point of failure without data loss. The disadvantages of using log backups are that they require storage space and increase restore time and complexity.

For most Dynamics GP installations, the Simple recovery model will be sufficient. 
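
Switching a database to the simple model is a one-liner (the database name is just an example):

ALTER DATABASE TWO SET RECOVERY SIMPLE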

The first step is for the client to determine how much data loss is acceptable
For example, in the event of a drive failure, how many days of work would it be OK to have to re-key?  Keep in mind that the shorter the interval, the more expensive and labor intensive the backup solution will be.

Set the backup interval of the full or differential backup to a frequency that will keep data loss to the required amount
For example, if the client determines that 1 day is acceptable, set the backup frequency to nightly. 

If the frequency must be less than 24 hours, the full recovery model is a better fit. 
In that case we set the full database backup to nightly and take transaction log backups more frequently. 
The log backups add some complexity to the restore process, but they are less resource intensive during business hours and take up less drive space.
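
As a sketch, that pattern boils down to two scheduled jobs like these (paths and database name are examples, not a prescription):

--nightly full backup
BACKUP DATABASE TWO TO DISK = 'G:\Backups\TWO_full.bak' WITH INIT
--transaction log backup, scheduled as often as the acceptable data loss window requires
BACKUP LOG TWO TO DISK = 'G:\Backups\TWO_log.trn' WITH NOINIT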

Backups should be stored on a different physical drive than the application, database, and log files.

Backup files should be periodically shipped off site
There are a variety of acceptable ways to accomplish this.  One acceptable disk storage solution would be:
Drive C:  Single drive for Applications
Drive D:  RAID 5 disk for data files
Drive E:  RAID 5 disk for log files
Drive F:  Single drive for tempdb
Drive G:  RAID 1 disk for backups

While much of the backup process can be automated, there is a certain amount of labor required. 
Let's say a company uses the above drive configuration with the simple recovery model, weekly full backups, and nightly differentials.  The weekly full backups are transferred to one or more portable drives (or the RAID 1 slave drive is swapped out) that are then secured off site.  Thirteen months of history will be kept this way. 

In this scenario, the files will continue to collect on Drive G.  Someone will have to retrieve the portable drive, attach it to the network, transfer the weekly backup files, delete files older than 13 months from Drive G and the portable drives, and take the portable drives back off site.  Changing the procedure to copy files to a remote location only removes the labor associated with transporting the drives and it substantially increases the cost.

The most important labor process, however, is a yearly disaster recovery simulation
If this is skipped, the entire backup plan is put in jeopardy. 
There must be a written plan for disaster recovery and someone who knows how to carry it out. 
A good disaster recovery plan will have every conceivable scenario covered.  Some examples of scenarios to plan for:
1. A sudden event (such as a fire or theft) destroys the hardware.
2. A slow event (such as a hurricane or flood) threatens the hardware.
3. An event (such as discovery of asbestos) makes the current location uninhabitable for an extended period, but the hardware is intact. (In this case, there is no reason any data needs to be lost)
4. External malicious intrusion destroys or corrupts data.
5. Internal malicious event destroys or corrupts the data.
6. Internal non-malicious event destroys or corrupts data.

Each of these events should have a plan spelled out, and in most cases note who is responsible for carrying it out.  It should also spell out what is to be done after the disaster to recover.  The responsible parties should be given a chance to practice the plan at least once a year. 

To assist in writing each part of the plan, I have included some questions that the plan should address.
1. A sudden event (such as a fire or theft) destroys the hardware.
Where is the hardware inventory kept?  Who maintains it?
Who will be responsible for determining what is missing/damaged?
Who will notify management?
If the SQL Server is damaged, who will restore it?  To what point?
Who will notify users?

2. A slow event (such as a hurricane or flood) threatens the hardware.
How long before event impact will emergency procedures begin?  Who makes this call?  Who notifies users?
Who makes the call to evacuate?
What will be done to protect user hardware?  Who is responsible?
What will be done to protect network infrastructure?  Who is responsible?
How long before event impact will final backups be made?  What will be backed up?  Who is responsible?
What hardware will be evacuated?  How long before event impact?  Who will prep it?  Who will transport it?
Who will assess damages after the event?
Who will make the call to restore operations?
What procedures will be used to restore the network infrastructure?  Who is responsible?
What procedures will be used to restore user hardware?  Who is responsible?

3. An event (such as discovery of asbestos) makes the current location uninhabitable for an extended period, but the hardware is intact.
Will the hardware be moved? (If so, ask questions from #2)  Who makes this call?
Will the network be replicated elsewhere or will the company access the current network remotely?
Same questions as #2

4. External malicious intrusion destroys or corrupts data.
5. Internal malicious event destroys or corrupts the data.
6. Internal non-malicious event destroys or corrupts data.
Who will notify management?
What will be done in the short term to prevent further intrusion/corruption?  Who is responsible?
Who will notify users?
Who will be responsible for hardening the system after the intrusion?
If the SQL data is damaged, who will determine what point to restore to?  Who will restore the data to this point?
What procedures will be followed to restore to that point in time?

For example, in Florida there are hurricanes yearly.  This is a slow event.  The recovery plan should state what hardware/data should be taken off site during evacuation, and who is responsible for doing this.  It should also state what procedures should be followed before abandoning the hardware site.  Once a year, the responsible parties should be permitted (during down time) to actually practice following the procedures and removing the hardware/data (going so far as to loading it for transport), then bringing it back in and reintegrating/restoring (in a test environment if necessary).

Here is a partial example of a recovery plan (notice that the SQL backups are on a RAID 1 disk, so they would be taken off site):
Disaster Recovery Plan
1. Fire, theft, or other sudden event

a. The IT director will keep an updated inventory of all infrastructure hardware and will use this list to determine hardware loss.
b. ...procedures for notification of management team and users
c. In the event that the SQL Server machine or storage drives have been damaged:
- ...Procedures for procuring new hardware or using existing
- If necessary, SQL Server will be installed
...same drive configuration as above
- The portable drive will be retrieved from off-site and the last weekly backup will be restored.
- If the G Drive is intact, the last nightly or differential backup will be restored.
d. ...procedures for other parts of the network

2. Hurricane or other event where warning is given.
  a. The IT director will monitor the likelihood of the disaster affecting infrastructure once the disaster moves to within 3 days and will send daily updates to the management team.  The management team may make a decision to evacuate before the government orders a mandatory evacuation.
  b. Upon issuance of a mandatory evacuation order, or upon direction from the management team, the IT director will issue a 2 hour warning to all employees via email before beginning safeguarding procedures.
  c. All employees will be responsible for backing up their own local data to a portable device to take with them. Employees should also take their laptops and portable projectors with them as they evacuate.
  d. At the end of the 2 hour warning, IT staff will execute all weekly backup jobs.
  e. IT staff will be responsible for ensuring that all user hardware is shut down and unplugged.
  f. Once the jobs are complete, all network infrastructure will be shut down and physically disconnected from power sources.
  g. The IT director will keep an updated inventory of all infrastructure hardware.  The IT director will evacuate with this list and all RAID 1 slave drives and the portable off-site backup drives.
  h. At the earliest possible time after the disaster has passed, the IT director will return to the premises with the management team to assess the damages.
  i. Once the management team gives the direction to bring the systems back online, the IT team will replace the RAID 1 slave drives and power up the network infrastructure.  Once all systems are online and verified, the IT team will reconnect the employee hardware to power.


For the purposes of SQL Server recovery, it is important for the IT staff to practice step 1.c above
Practice will not only bring up questions about how to install and recover, but it will also bring attention to any areas of the backup plan that are failing (such as a bad backup job, a failing drive, or a missed database).  It is not necessary to spend a ton of money replicating your entire server rack.  Simply adding a few cheap drives to the existing network can be enough to provide an area where staff can practice restoring hardware.  Remember, hardware and off-the-shelf software are easily replaced.  It is your data that is valuable.  Consider how many man-hours are spent generating that data each year and you will quickly see that it is much cheaper to spend a little money to plan ahead than to regenerate that data.


Did this help you?  If so, please leave a comment!

Tuesday, April 12, 2011

Find missing records in SQL

I recently had a client that lost data from some tables in SQL.  Since it was accidental, they knew some of the tables that were affected, but did not know for sure all of the tables affected.

They did, however, have a fairly recent backup.  All we needed to do was determine which records were present in the backup that were no longer present in the current database.

First I restored the backup to a new database.

Then I wrote this handy script to determine the differences.  Since it was a bit much to look at in SQL Management Studio, I used BCP to write the results to a text file that I could review (and keep as a backup of the changes made).

--set this to the production database
USE TWO

--To get xp_cmdshell to work (it is an advanced option, so expose those first):
sp_configure 'show advanced options',1
go
reconfigure
go
sp_configure 'xp_cmdshell',1
go
reconfigure
go

--we need all these to generate the sql statements
declare @PROD varchar(5),
@TEST varchar(5),
@TABLE varchar(128),
@select varchar(2000),
@from varchar(2000),
@join varchar(2000),
@where varchar(2000),
@statement varchar(2000),
@col varchar(255),
@colnum int,
@order varchar(2000),
@rows int

--this controls which two databases we are checking
select @PROD='TWO', @TEST='TEST'
exec master..xp_cmdshell 'del C:\temp.txt'
exec master..xp_cmdshell 'del C:\results.txt'

--this controls which tables we compare.  The following compares all tables that start with UPR
--not advisable to remove the like restriction unless you have a LOT of memory
declare c_tables cursor for
select name from sysobjects where name like 'UPR%' and type='U'

open c_tables
fetch next from c_tables into @TABLE
while @@FETCH_STATUS=0
begin

--here is how we get all the primary key columns for a table
declare c_col cursor for
select COLUMN_NAME
from INFORMATION_SCHEMA.CONSTRAINT_COLUMN_USAGE a
INNER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS b
ON a.CONSTRAINT_NAME=b.CONSTRAINT_NAME
where b.CONSTRAINT_TYPE='PRIMARY KEY'
AND a.TABLE_NAME =@TABLE

--start building the statements
select @select='select ''' + @TABLE + ''', '
select @from=' FROM ' + @TEST + '..' + @TABLE + ' A ' + ' FULL OUTER JOIN ' + @PROD + '..' + @TABLE + ' B '
select @join=' ON '
select @where=' WHERE '
select @order=' ORDER BY '

open c_col
fetch next from c_col into @col
select @colnum=1
while @@FETCH_STATUS=0
begin
IF @colnum=1
BEGIN

--put the column names in the statements
select @select=@select + 'ISNULL(CONVERT(VARCHAR(2000),A.' + @col + '),' + '''MISSING''' + ') AS TEST,ISNULL(CONVERT(VARCHAR(2000),B.' + @col + '),' + '''MISSING''' + ') AS PROD'
select @join=@join + ' A.' + @col + '=' + 'B.' + @col
select @where=@where + 'A.' + @col + ' IS NULL OR B.' + @col + ' IS NULL '
select @order=@order + 'A.' + @col + ',B.' + @col

END
ELSE
BEGIN

--if there is more than one column in the primary key, handle that
select @join=@join + 'AND A.' + @col + '=' + 'B.' + @col
select @where=@where + 'OR A.' + @col + ' IS NULL OR B.' + @col + ' IS NULL '

END

select @colnum=@colnum+1
fetch next from c_col into @col

end

close c_col
deallocate c_col

--now put the statement together and execute it to see if we get any records back
select @statement=@select + @from + @join + @where
exec( @statement)
--@@ERROR would only tell us the statement ran; @@ROWCOUNT tells us if it returned rows
select @rows=@@ROWCOUNT
if @rows>0
BEGIN

--since we know we have records, add the bcp commands to the statement to write the results to temp file
select @statement='bcp "' + @select + @from + @join + @where + @order + '" queryout "C:\temp.txt" -c -T'
exec master..xp_cmdshell @statement

--now append the temp file to the results file
exec master..xp_cmdshell 'type C:\temp.txt >> C:\results.txt'

--append a blank line for readability

exec master..xp_cmdshell 'echo. >> C:\results.txt'

--get rid of the temp file
exec master..xp_cmdshell 'del C:\temp.txt'

END
ELSE
BEGIN

--we have no records that are different, write a line to the results file anyway so we know we checked the table
select @statement='echo 0 differences found for table: ' + @TABLE + ' >> C:\results.txt'
exec master..xp_cmdshell @statement
exec master..xp_cmdshell 'echo. >> C:\results.txt'

END

fetch next from c_tables into @TABLE

end

close c_tables
deallocate c_tables
go
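
To make the moving parts easier to follow, here is roughly what the generated comparison looks like for a single-key table (UPR00100 and its key column are shown for illustration only):

select 'UPR00100', ISNULL(CONVERT(VARCHAR(2000),A.EMPLOYID),'MISSING') AS TEST,
     ISNULL(CONVERT(VARCHAR(2000),B.EMPLOYID),'MISSING') AS PROD
FROM TEST..UPR00100 A
     FULL OUTER JOIN TWO..UPR00100 B
          ON A.EMPLOYID=B.EMPLOYID
WHERE A.EMPLOYID IS NULL OR B.EMPLOYID IS NULL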


Did this help you?  If so, please leave a comment!

Wednesday, March 16, 2011

Converting SSRS 2008 reports to SSRS2005

If you are reading this, you are probably aware of the fact that you cannot deploy SSRS 2008 reports to an SSRS 2005 server.  SSRS reports are housed in RDL files, which are basically XML files that follow a predetermined schema.  The schemas for 2005 and 2008 are very different.  The objects used on the same report can be radically different between the two versions, and Microsoft does not support a downgrade path between them.

There are probably not many reasons why you would end up in this situation, but if you find yourself there, here are your options:
1. Upgrade the existing server to SQL Server 2008
2. Deploy the reports on another server that is already running SQL Server 2008
3. Set up a new server on SQL Server 2008
4. Rewrite the 2008 reports in 2005 manually
5. Attempt to use a conversion tool to downgrade the reports automatically

There are a couple of unsupported "tools" out there that individuals have written for their specific circumstances, so the conversion CAN be done.  However, it would not be easy due to new objects like the tablix. Options 1-3 would most likely be more cost effective than paying to have each report downgraded and tested.

If you are interested in trying, here are some links that might help:

This person claims to have written a tool that they will sell you - amarsale@gmail.com


RDL spec for 2008 - http://download.microsoft.com/download/6/5/7/6575f1c8-4607-48d2-941d-c69622e11c32/RDL_spec_08.pdf


RDL spec for 2005 - http://download.microsoft.com/download/c/2/0/c2091a26-d7bf-4464-8535-dbc31fb45d3c/rdlNov05.pdf


Did this help you?  If so, please leave a comment!

Wednesday, March 9, 2011

Error: Attempted to Read or Write Protected Memory

This error is displayed when trying to open or close a window with a .NET addin on it.  It is usually followed by the error
BAD_MEM_HANDLE

or simply the dreaded "The application has to close" error

Then GP crashes.

The causes are data buffer related.  Either there is a table buffer that has been opened in .NET that has not been closed, or a reader has not been closed.

The reader error can be hard to detect when reading the code.  For example, the following code will result in the reader being left open.

OdbcDataReader reader = oCmd.ExecuteReader();

if (reader.Read())
{
    return reader.GetString(0);
}
else
{
    return string.Empty;
}

 
The reader can't be closed before the return statement, since the reader is needed for the return value.
 
To fix it, we have to take the long way around and store the return value before closing the reader.
 
OdbcDataReader reader = oCmd.ExecuteReader();

string result = string.Empty;
if (reader.Read())
{
    result = reader.GetString(0);
}
if (!reader.IsClosed)
{
    reader.Close();
}
return result;

 
So the lesson learned here is that when working with readers, it is a good idea to store your reader results and close the reader before operating on them.  That way you won't be tempted to take a shortcut like the one in the first example.

Did this help you?  If so, please leave a comment!

Monday, January 10, 2011

Handy zip code table

Thank goodness for the US Census!  The 1999 Census contained public domain info on zip codes (info that businesses usually pay the postal service a pretty penny for).

I was able to use this data to create a SQL table with some useful zip code info in it like:
City
State
Zip code class (tells you whether it is a unique zip code or PO box only)
Lat/Lon (for mapping applications)
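
For reference, a sketch of what such a table might look like (these column names and types are my assumptions, not necessarily what is in the download):

CREATE TABLE dbo.ZipCodes (
     ZipCode CHAR(5) NOT NULL PRIMARY KEY,
     City VARCHAR(64) NOT NULL,
     [State] CHAR(2) NOT NULL,
     ZipClass CHAR(1) NULL,       --unique zip code or PO box only
     Latitude DECIMAL(9,6) NULL,  --for mapping applications
     Longitude DECIMAL(9,6) NULL
)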

I used SQL 2008's great Generate Scripts Wizard to export the table structure, index definitions, and all the data to a single script file.  You can find the zip file here (I just know there is a pun in there somewhere).

I will keep an eye out for a newer census report, but zip codes don't change all that often, so this list should be pretty accurate.


Did this help you?  If so, please leave a comment!

Friday, January 7, 2011

SRS default parameters

I ran into an issue today where I had a report parameter default that I couldn't seem to get rid of.

In BIDS, I deleted the defaults for all parameters, then deployed the report, choosing the overwrite option.

In BIDS, the defaults were gone when I ran the report.  However, when opening it from the SRS report viewer window, one of the parameter defaults was still there.

After a bit of troubleshooting and digging around, I found out that the properties that are set up through the SRS report manager in the browser are not always reset when a report is re-deployed.  That is what was happening here.  Even though all the other changes, and even the clearing of the defaults on the other parameters, flowed down when I deployed the report, this one parameter default was somehow remaining in the SRS catalog tables.
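
If you want to see what the server actually has stored, the report metadata (including parameter defaults) lives in the ReportServer catalog.  A hedged peek, assuming the default ReportServer database name and using a placeholder report name:

--Type = 2 filters to reports; the Parameter column holds the stored parameter XML
SELECT Name, Parameter
FROM ReportServer.dbo.Catalog
WHERE Type = 2 AND Name = 'MyReport'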

So I used the report manager to delete this parameter's default too, and we are in business.

I still do not know what caused the issue, so I am adding a step to my personal best-practice for SRS report deployment - open the report manager and review the properties.

Happy Coding!



Did this help you?  If so, please leave a comment!
