Tuesday, January 27, 2015

Reset GP passwords after copying database

After copying a GP database from one GP instance to another, the user IDs typically come along for the ride, but the encrypted passwords do not.  Here is a handy script to reset the passwords.


/*
                1. Run this script in a SQL Server Management Studio window to create output list of users
                2. Copy and paste the output list into a new SQL Server Management Studio window
                                A. Review and remove from the list any users you do not want to reset
                                B. execute the script

                                FYI: list will look like this:
                                                ALTER LOGIN djedziniak ENABLE
                                                ALTER LOGIN djedziniak WITH PASSWORD = ''
                                                ALTER LOGIN rgeorge ENABLE
                                                ALTER LOGIN rgeorge WITH PASSWORD = ''


                3. On their next logon to GP, users should
                                A. Enter USERID
                                B. Click OK    (leave password field empty)
                                C. Click YES to the "Blank passwords are not allowed. Do you want to change your password now?" prompt
                                D. Enter New Password > Tab > re-enter new password > click OK
                                E. Logon to GP using the new password

C/O: Ricky George

*/
USE master
GO
SET NOCOUNT ON

DECLARE @USER_Name varchar(30)

DECLARE dbname_cursor CURSOR FOR
SELECT USERID
FROM DYNAMICS.dbo.SY01400
WHERE USERID NOT IN ('LESSONUSER1', 'sa', 'DYNSA', 'LESSONUSER2')
ORDER BY USERID

OPEN dbname_cursor
FETCH NEXT FROM dbname_cursor INTO @USER_Name
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT 'ALTER LOGIN ' + RTRIM(@USER_Name) + ' ENABLE'
    PRINT 'ALTER LOGIN ' + RTRIM(@USER_Name) + ' WITH PASSWORD = '''''
    FETCH NEXT FROM dbname_cursor INTO @USER_Name
END
CLOSE dbname_cursor
DEALLOCATE dbname_cursor

Thursday, January 15, 2015

Creating a good set of SQL maintenance plans

Here is an example of a good set of SQL maintenance plans for a server hosting the following:
- Dynamics GP production and test databases
- SSRS databases

First, check the database properties.
Determine which databases you will take log backups on.  These need their recovery model set to Full.  All other system and user databases need the recovery model set to Simple to prevent runaway log file growth.

Usually:
DYNAMICS and all production company databases are set to Full.
Mobiletech databases (RESCO databases) should be set to Simple (recommendation from WennSoft)
ReportServer databases are set to Simple
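
You can check or change the recovery model from the database properties window or with T-SQL.  A quick sketch (the database names are just the examples above):

USE master
GO
-- list the current recovery model for every database
SELECT name, recovery_model_desc FROM sys.databases ORDER BY name
GO
-- set the model per database
ALTER DATABASE DYNAMICS SET RECOVERY FULL
ALTER DATABASE ReportServer SET RECOVERY SIMPLE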

Next, determine which databases need to be backed up and how often.
Do not back up tempdb.  It is rebuilt when SQL Server restarts, so you would never restore it anyway.  When I refer to system databases, I mean only master, model, and msdb.

Remember to back up ReportServerTempDB.  Unlike tempdb, it is not rebuilt automatically, and you will need it if you ever have to restore ReportServer.  You may not necessarily need the contents, but you will need the structure.

A typical schedule would be:
system databases - monthly - these databases rarely change
SSRS and non-production databases - weekly - SSRS databases do not change significantly unless new reports are deployed, and test databases are typically more tolerant of data loss in a recovery situation.  Much of the time, test databases do not need backups at all as long as they are regularly refreshed from production, which also keeps the log file from growing too large.
DYNAMICS and production databases - nightly - these databases should also be Full recovery model with log backups throughout the day.

Counting the log backups, that is 4 schedules, so you will need 4 plans.


Next, set up the plans

monthly - scheduled for 1st Sunday of the month at 10pm
1. Check database integrity task - no point in backing up a corrupt database
 - specify all system databases
2. Backup database task
- backup type Full
- specify all system databases
- choose create backup file for each database
- specify the backup folder
- set backup extension to bak
- check verify backup integrity
3. Rebuild Index Task
- specify all system databases
- take the defaults
4. Update Statistics Task
- specify all system databases
- take the defaults
5. Maintenance Cleanup Task
- choose backup files
- choose search folder
- specify the backup folder
- set backup extension to bak
- check delete files based on age
- set age to 3 months
6. History Cleanup Task
- select all options
- set age to 3 months

weekly - scheduled for Sundays at 11pm
1. Check database integrity task - no point in backing up a corrupt database
 - specify all non-system databases (you may be able to omit test databases)
2. Backup database task
- backup type Full
 - specify all non-system databases  (you may be able to omit test databases)
- choose create backup file for each database
- specify the backup folder
- set backup extension to bak
- check verify backup integrity
3. Rebuild Index Task - we will do this for all databases weekly
 - specify all non-system databases (you may be able to omit test databases)
- take the defaults
4. Update Statistics Task
 - specify all non-system databases (you may be able to omit test databases)
- take the defaults
5. Maintenance Cleanup Task
- choose backup files
- choose search folder
- specify the backup folder
- set backup extension to bak
- check delete files based on age
- set age to 12 weeks


daily - scheduled for Mon thru Sat at 11pm
1. Check database integrity task - no point in backing up a corrupt database
 - specify all non-system databases that are set to Full 
2. Backup database task
- backup type Differential
 - specify all databases that are set to Full
- choose create backup file for each database
- specify the backup folder
- set backup extension to dif
- check verify backup integrity
3. Reorganize Index Task 
 - specify all  non-system databases (you may be able to omit test databases)
- take the defaults
4. Update Statistics Task
 - specify all  non-system databases (you may be able to omit test databases)
- take the defaults
5. Maintenance Cleanup Task
- choose backup files
- choose search folder
- specify the backup folder
- set backup extension to dif
- check delete files based on age
- set age to 2 weeks


log backup - scheduled daily every 4 hours from 8am to 8pm
1. Backup database task
- backup type Transaction Log
- specify all databases that are set to Full
- choose create backup file for each database
- specify the backup folder
- set backup extension to trn
- check verify backup integrity

If a database is set to the Full recovery model, you NEED TO set up transaction log backups.  If you forget this, the log file will grow until it hits its maximum size or fills up the drive.  If you do not want to do transaction log backups on a database, set it to the Simple recovery model.
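
For reference, each Backup Database task ultimately issues a BACKUP statement.  A single hand-written log backup looks roughly like this (the database name and path are placeholders):

BACKUP LOG DYNAMICS
TO DISK = N'D:\SQLBackups\DYNAMICS_log.trn'
GO
-- roughly what the "verify backup integrity" option does afterward
RESTORE VERIFYONLY FROM DISK = N'D:\SQLBackups\DYNAMICS_log.trn'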


Manually run the plans in the following order:
monthly
weekly
daily
logs

This is important, not just to test them, but to establish the full backups that the differential and log backups build on.
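
An easy way to confirm each plan ran (and that the log backups are actually happening) is to check the backup history in msdb with a query along these lines:

SELECT d.name,
  MAX(CASE WHEN b.type = 'D' THEN b.backup_finish_date END) AS last_full,
  MAX(CASE WHEN b.type = 'I' THEN b.backup_finish_date END) AS last_diff,
  MAX(CASE WHEN b.type = 'L' THEN b.backup_finish_date END) AS last_log
FROM sys.databases d
LEFT JOIN msdb.dbo.backupset b ON b.database_name = d.name
GROUP BY d.name
ORDER BY d.name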


Monday, January 12, 2015

Fun with Pivot tables

Update:
Here is a link to a much better explanation of Pivot Tables


Wow, it has been almost a year since my last post!  Well, back to it.  This is an update to the original post below.  Additional info added to the end:

Original post 2012.04.19:
I continue to get questions on how to generate pivot tables.  Although the concept seems complicated, the process can actually be quite simple.

Say I have appointment history data where each row holds the appointment ID, service call, technician, appointment status, and the date/time the row was last modified, so the same appointment appears on multiple rows.  I want it pivoted to one row per appointment, with a column for each status holding the latest modified date/time for that status.

You can accomplish this with a pivot query.

First, let's look at the SQL that generates the data set you are working with.
SELECT
  WS_Appointment_ID,
  Service_Call_ID,
  Appointment,
  Technician,
  Appointment_Status,  --this will be our pivot column
  (MODIFDT + Modified_Time) as ModDateTime
FROM
  SV30301
WHERE
  WS_Appointment_ID > 3000

The first step to pivoting the data is to clearly define the base query (above).  Any concatenations or case statements need to be done here.  It is important that you identify which column you will pivot on.  That is the column you want to group horizontally.  In this case, it is the status.  You will then need to identify all possible values in this column.
BROADCAST
RECEIVED
ACCEPTED
IN ROUTE
ON SITE
PARTS HOLD
COMPLETE
UNASSIGNED

These values will become your column names.
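
If you are not sure of the complete list, you can pull it straight from the table (this is the same idea the dynamic version at the end of this post is built on):

SELECT DISTINCT RTRIM(Appointment_Status) AS Appointment_Status
FROM dbo.SV30301
WHERE RTRIM(Appointment_Status) != ''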

Next, you should determine what data you want to aggregate by these groupings.  Any aggregate function applies here, so you can use functions like SUM, AVG, MAX, and COUNT.  (see full list of aggregate functions)  In this case, we want to get the max date for each status.

So we first change our original query to prepare for adding the pivot logic:

SELECT
  WS_Appointment_ID,
  Service_Call_ID,
  Appointment,
  Technician
FROM (
  SELECT
    WS_Appointment_ID,
    Service_Call_ID,
    Appointment,
    Technician,
    Appointment_Status,
    (MODIFDT + Modified_Time) as ModDateTime
  FROM
    SV30301
  WHERE
    WS_Appointment_ID > 3000
) AS source


Notice that we did not include the pivot column or the aggregate column in the outer select.  Those two columns will become the pivoted columns in the next step.

Next, we add the pivot logic:
SELECT
  WS_Appointment_ID,
  Service_Call_ID,
  Appointment,
  Technician,
  ISNULL(BROADCAST, GETDATE()) AS BROADCAST,
  RECEIVED,
  ACCEPTED,
  [IN ROUTE],
  [ON SITE],
  [PARTS HOLD],
  COMPLETE,
  UNASSIGNED
FROM (
  SELECT
    WS_Appointment_ID,
    Service_Call_ID,
    Appointment,
    Technician,
    Appointment_Status,
    (MODIFDT + Modified_Time) as ModDateTime
  FROM
    dbo.SV30301
) AS source
PIVOT(
  MAX(ModDateTime)
  FOR Appointment_Status
  IN (
    [BROADCAST],
    [RECEIVED],
    [ACCEPTED],
    [IN ROUTE],
    [ON SITE],
    [PARTS HOLD],
    [COMPLETE],
    [UNASSIGNED]
  )
) as pvt


In the pivot code, we first perform the aggregate operation on the aggregate column.  We then issue the FOR keyword followed by the pivot column and the list of all possible values in that column, each surrounded by [] (square brackets).  Finally, we list those values as column names in the outer select statement.  Standard select logic applies there: you can rename the columns or perform other operations on them (the ISNULL on BROADCAST above is an example).
--------------------------------------

Update 20150112:

So the above method is great if you already know the values in the pivot column.  But what if you don't know the values and still need to pivot?  In that case, you will need to build the query dynamically, which means using a stored procedure (or an ad hoc batch) instead of a view.

--recall that we are pivoting on the appointment status
--first we need to declare a variable and fill it with the possible values from the table
DECLARE @STATUS VARCHAR(MAX)
SELECT @STATUS=(SELECT DISTINCT '[' + RTRIM(Appointment_Status) + '],' FROM dbo.SV30301 WHERE RTRIM(Appointment_Status)!='' FOR XML PATH(''))
SELECT @STATUS=SUBSTRING(@STATUS,1,LEN(@STATUS)-1)

--now @STATUS will replace the "IN" portion and some of the fields in the select
--we must place the query in a variable as a string and then execute it

DECLARE @SQL  VARCHAR(MAX)
SELECT @SQL='SELECT
  WS_Appointment_ID, 
  Service_Call_ID, 
  Appointment, 
  Technician,' + @STATUS + ' FROM (
  SELECT
    WS_Appointment_ID, 
    Service_Call_ID, 
    Appointment, 
    Technician, 
    Appointment_Status,
    (MODIFDT + Modified_Time) as ModDateTime
  FROM
    dbo.SV30301

) AS source
PIVOT(
  MAX(ModDateTime)
  FOR Appointment_Status
  IN (' + @STATUS + ')
) as pvt'

EXEC (@SQL)

--that's it. 
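
A slightly hardened variation of the same idea (just a sketch against the same table): QUOTENAME takes care of the square brackets around each status, STUFF trims the leading comma, and sp_executesql runs the string.

DECLARE @STATUS NVARCHAR(MAX), @SQL NVARCHAR(MAX)

SELECT @STATUS = STUFF((
  SELECT DISTINCT ',' + QUOTENAME(RTRIM(Appointment_Status))
  FROM dbo.SV30301
  WHERE RTRIM(Appointment_Status) != ''
  FOR XML PATH('')), 1, 1, '')

SET @SQL = N'SELECT
  WS_Appointment_ID,
  Service_Call_ID,
  Appointment,
  Technician, ' + @STATUS + '
FROM (
  SELECT
    WS_Appointment_ID,
    Service_Call_ID,
    Appointment,
    Technician,
    Appointment_Status,
    (MODIFDT + Modified_Time) as ModDateTime
  FROM
    dbo.SV30301
) AS source
PIVOT(
  MAX(ModDateTime)
  FOR Appointment_Status
  IN (' + @STATUS + ')
) as pvt'

EXEC sp_executesql @SQL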



Did this help you?  If so, please leave a comment!

Monday, February 17, 2014

The progression of inefficiency

I have thought about the irony of this from time to time, so I thought I would write it down:

1970’s: The promise: Workers save 20 min every morning by driving their own car to work rather than waiting on a bus or walking.
The reality: Workers waste an hour in traffic each morning.

1980’s: The promise:  Computers will save time by letting us type it once and print it multiple times.
The reality:  Workers waste an hour a day dealing with paper jams in dot matrix printers.

1990’s:  The promise:  Microsoft Office will save us time by eliminating spelling errors and providing all the formatting options available from outside print houses.
The reality:  Workers waste an hour per document trying to fix small formatting errors.  An entire generation is born that will never know how to spell receive or which homonym to use.

2000’s:  The promise:  More features in the OS and larger hard drives will give us a paperless office, saving time looking for lost papers.
The reality:   Workers waste 20 minutes each morning waiting for their bloated OS to boot up and hours each day looking for lost documents in their huge hard drives.

2010’s:  The promise:  The cloud will save us time by allowing us to save our data and use our programs from anywhere.
The reality:  Slow transfer rates make FedEx faster than large file transfers.  Losing your internet connection means your entire system is down.


If you think of any others, please leave a comment!

Link to post on SSRS report parameters not working

http://chrisblog.betterithosting.com/?p=38

Sunday, February 16, 2014

SSRS divide by zero annoyance

I have this formula in a textbox on my report:

= IIF(Fields!CallInvoiceAmount.Value=0,0.00,
  Fields!CallProfit.Value/Fields!CallInvoiceAmount.Value
 )


Basically, I am just trying to calculate a percentage and I need to handle the cases where the denominator is zero to avoid a divide by zero error.

Now if all of the fields above are int fields, the formula works.  However, they are numeric fields (money values).  SSRS throws a #Error every time the CallInvoiceAmount value is 0.

After much aggravation trying to get this to work, I finally found some info on message boards that spelled out the problem: SSRS evaluates both the true and false parts of an IIF before it evaluates the condition, so with these numeric values the division is attempted even when CallInvoiceAmount is zero.

Now isn't that just illogical enough to raise Spock's eyebrow!

The prevailing suggestion was to embed code in the report, write a custom function to handle it, and then call that.  However, there are issues involved with that approach when working with aggregates, and I also need the following to work:

= IIF(sum(Fields!CallInvoiceAmount.Value)=0,0.00,
  sum(Fields!CallProfit.Value)/sum(Fields!CallInvoiceAmount.Value)
 )


The answer turned out to be simple, if still a workaround.  I added a very tiny amount to the denominator.  This also allows me to remove the IIF statement entirely:


= sum(Fields!CallProfit.Value)/(sum(Fields!CallInvoiceAmount.Value)+0.00001)

Of course, if the denominator really is zero you will get a huge percentage, but that is fitting since the real answer is undefined.
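
Another common workaround, if you would rather not skew the result at all, is to guard the denominator itself with a nested IIF so the division never sees a zero, and let the outer IIF return 0 in that case:

= IIF(Sum(Fields!CallInvoiceAmount.Value) = 0, 0,
   Sum(Fields!CallProfit.Value) /
   IIF(Sum(Fields!CallInvoiceAmount.Value) = 0, 1, Sum(Fields!CallInvoiceAmount.Value))
  )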
 


Did this help you?  If so, please leave a comment!

Monday, February 10, 2014

Converting a string to Numeric in SQL

I have run into this again and had to spend time compiling the answer, so I am posting it for future use.

I needed to convert a string to numeric in SQL.  Seems pretty straightforward, right?

select convert(numeric(19,5),'0.0')

So what happens when my string looks like this?

select convert(numeric(19,5),'abc0.0')

Error converting data type varchar to numeric.
Ok, so I need to make sure it is a number first.

select case isnumeric('abc0.0') when 1 then convert(numeric(19,5),'abc0.0') else 0 end

All done, right?  Not so fast.  All of the following return 1 (true):
select isnumeric('$0.0')

select isnumeric('123,000.0')

But the following result in an error:
select convert(numeric(19,5),'$0.0')

select convert(numeric(19,5),'123,000.0')

So we need to make some basic decisions about what we consider to be valid in our numeric value and strip out the offending characters before converting it.

select case isnumeric(replace(replace(replace(replace('123,000.0',',',''),'$',''),'+',''),'-',''))
  when 1 then convert(numeric(19,5),replace(replace(replace(replace('123,000.0',',',''),'$',''),'+',''),'-',''))
  else 0
end
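
If you are on SQL Server 2012 or later, TRY_CONVERT is a cleaner option: it returns NULL instead of raising an error when the conversion fails, so you can drop the ISNUMERIC check entirely (you may still want to strip commas and currency symbols first):

select try_convert(numeric(19,5), '$0.0')   -- returns NULL instead of an error
select isnull(try_convert(numeric(19,5), replace('123,000.0',',','')), 0)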



Did this help you?  If so, please leave a comment!
