A question came up recently on how to relate taxes in tax history (TX30000) to the service performed (SV000815).
To understand how the tables fit together, we must first understand how the records get into those tables and what they represent. (The list below is not exhaustive; it covers just the tables we are interested in for this question.)
SV00500 holds the contract information
SV00400 holds the equipment information
SV00403 holds the information on equipment assigned to contracts
SV00560 holds the main task information (such as cost and billable amounts)
SV00582 holds the information on tasks assigned to a contract and also holds information on cost and pricing
When a service call is created, there are typically 3 tables that hold the data.
SV00300 holds the main information about the call.
SV00301 holds the information on the appointments assigned to the call.
SV00302 holds the information on the tasks assigned to the call.
When a service invoice is created, information from the 8 tables above is pulled together to create records in the following:
SV00700 is the service invoice header. The table holds up to 4 tax amounts for the invoice.
SV000810 is the service call cost information. The table also holds the task code and billable amount, if applicable.
RM10301 is the Receivables Header.
RM10101 is the Receivables distributions.
RM10601 is the Receivables taxes.
When the RM tables above are populated, the TX00102 and TX00201 tables are read based on the billable customer's tax schedule, and RM10601 is populated. Whatever tax is calculated there is then added to the RM10101 distributions, summed up, and added to SV00700.
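As a quick sanity check, the tax detail in RM10601 should tie out to the header amounts on SV00700. Here is a sketch, assuming SV00700 carries the same RMDNUMWK and RMDTYPAL document keys that SV00701 does:
select h.RMDNUMWK,
h.Tax_Amount1 + h.Tax_Amount2 + h.Tax_Amount3 + h.Tax_Amount4 as HeaderTax,
sum(t.TAXAMNT) as DetailTax
from dbo.SV00700 h -- assumes SV00700 exposes RMDNUMWK/RMDTYPAL like SV00701
inner join dbo.RM10601 t on t.DOCNUMBR = h.RMDNUMWK and t.DOCTYPE = h.RMDTYPAL
group by h.RMDNUMWK, h.Tax_Amount1, h.Tax_Amount2, h.Tax_Amount3, h.Tax_Amount4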
When the service invoice is posted, the following records are moved:
SV000810 to SV000815
RM10301 to RM20101
SV00700 to SV00701
RM10601 is copied to TX30000, but RM10601 records remain
When the receivable is moved to history, the following records are moved:
RM10101 to RM30301
RM10601 to RM30601
RM20101 to RM30101
TX30000 is removed.
So depending on what point in the workflow the transaction is in, there might not be a record in TX30000 or in SV000815.
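If you need to find where a given invoice currently sits, a quick probe of each tax table will tell you (a sketch; the document number is hypothetical):
declare @invoice varchar(21) = 'SRVCE000123' -- hypothetical document number
select 'RM10601 (open)' as tax_location from dbo.RM10601 where DOCNUMBR = @invoice
union all
select 'TX30000 (posted)' from dbo.TX30000 where DOCNUMBR = @invoice
union all
select 'RM30601 (history)' from dbo.RM30601 where DOCNUMBR = @invoice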
If the TX30000 record is related to a service invoice, the SV00701 Tax_Amount1 thru Tax_Amount4 fields should hold the values from the first 4 lines of TX30000.TAXAMNT for the invoice.
The tables can be joined in a one-to-many relationship as follows:
SV00701 >> TX30000
on RMDNUMWK=DOCNUMBR
and RMDTYPAL=DOCTYPE
and 3=SERIES
SV00701 >> SV000815
on Call_Invoice_Number=Call_Invoice_Number
and CUSTNMBR=CUSTNMBR
and ADRSCODE=ADRSCODE
(Note that I did not include Service_Call_ID. This is because on grouped invoices, the SV00701 Service_Call_ID will be 'GROUPED' while the SV000815 records will show the actual call IDs.)
So if you need to get from TX30000 to SV000815, you have to go through SV00701:
TX30000 << SV00701 >> SV000815
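Putting the join conditions above together, here is a sketch that walks from tax history to the service performed. Keep in mind that both joins are one-to-many, so an invoice with multiple tax lines and multiple calls will fan out to one row per combination:
select s.Service_Call_ID, h.Call_Invoice_Number, t.TAXAMNT
from dbo.TX30000 t
inner join dbo.SV00701 h
on h.RMDNUMWK = t.DOCNUMBR
and h.RMDTYPAL = t.DOCTYPE
and t.SERIES = 3
inner join dbo.SV000815 s
on s.Call_Invoice_Number = h.Call_Invoice_Number
and s.CUSTNMBR = h.CUSTNMBR
and s.ADRSCODE = h.ADRSCODE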
Friday, August 10, 2018
T-SQL performance killers
These are some performance killers (in no particular order) that are easily avoided. Nonetheless, I see them all the time in production code.
1. Forgetting the schema name for the table.
select col1, col2, col3 from table1
While this looks correct at first glance, it is a sneaky performance killer. You will find code written like this all over the internet. The reason is that people posting on the internet usually don't know what your schema name will be and maybe not even which type of database you are using.
SQL Server uses a query optimizer to build a plan for how it will execute a query. This plan generation takes time. Even a simple join or pivot can take a few seconds to generate a plan. Once the plan is generated it is stored in a cache and SQL can re-use it for subsequent calls to that same query. The catch is that in order to reuse the plan, SQL needs to ensure that the query is actually the same query the next time. One of the ways it does this is by storing the schema that the plan was generated against. If the schema is not specified for *all* the tables in a query, the plan is still generated but is not stored in a way that it can ever be re-used.
For example, consider the following query. Let's say this query gets run 100 times a day by various users.
select a.col1 from table1 a inner join table2 b on a.x=b.x
Let's say that SQL takes 2 seconds (an eternity, but a real-world number) to generate and store the plan for this query, and another .5 seconds to bring back the results. The first time a user runs the query it will take 2.5 seconds for the results to return. The next time a user runs that same query it will still take 2.5 seconds. Over the course of a day, that is 250 seconds (2.5*100)!!!
Here is the same query with the schema names.
select a.col1 from dbo.table1 a inner join dbo.table2 b on a.x=b.x
It will still take 2.5 seconds the first time a user runs it. But each subsequent run will re-use the plan and run in .5 seconds. Over the course of a day that is (99*.5)+2.5=52 seconds. Just over 1/5 the time! And that is just one query! This can add up to a huge performance difference quickly!
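If you want to see this on your own system, the plan cache use counts tell the story (a sketch; adjust the LIKE filter to match your query text):
select cp.usecounts, cp.objtype, st.text
from sys.dm_exec_cached_plans cp
cross apply sys.dm_exec_sql_text(cp.plan_handle) st
where st.text like '%table1%'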
2. Select *
This is a lazy way of typing the query. It saves the developer time and that is where the advantage ends. Unless you really need to use *all* of the columns in a table, the query will always perform better if the columns are explicitly listed. How much better will depend on the size and number of columns in the table.
Where I see this crop up most often is in a statement such as this:
if exists(select * from dbo.table)
This forces SQL to expand the * to all column names. This admittedly only takes a fraction of a second, but when we are talking about a production accounting system, those fractions add up quickly. This brings us to the next item.
3. Improper use of if exists statements
The best practice is:
if exists(select 1 from dbo.table where [condition])
This selects the constant 1 rather than any actual column, so SQL has nothing to expand. A developer reading this instantly understands that we don't care at that point *what* is in the table, only *whether* there is a record.
I also often see non-inner joins in an exists statement:
if exists(select 1 from dbo.table1 left outer join dbo.table2 on table1.x=table2.x)
This forces SQL to build a plan for, and return the results of, an outer join before evaluating the EXISTS. It makes no sense to add this overhead; a left outer join cannot eliminate rows from table1, so it cannot change the result of the EXISTS.
Here is the equivalent, much more efficient statement:
if exists(select 1 from dbo.table1)
Another example of inefficient code is:
if(select count(1) from dbo.table1)!=0
Logically, this is the same as the if exists statement, but it does not perform as well because SQL must count all of the rows in the table before evaluating the condition, whereas EXISTS can stop at the first row it finds. If the table is large, this can add more than a second to the processing time each time it runs.
4. Unnecessarily using the WITH RECOMPILE option in a procedure.
This should generally only be used in a proc that builds dynamic SQL and executes it. In that case, you don't want SQL to assume that the execution plan it built last time for the proc itself will work this time, because the parameters are different and thus the statement that is built is different. Even then, you should generally try your best to write the proc in a way that does not require dynamic SQL and remove the WITH RECOMPILE option.
I have never seen any other real-world case in which this option did not hurt performance. Indeed, it hurts performance in ALL cases; it is just necessary in those very specific ones.
5. Unnecessarily using dynamic SQL
Dynamic SQL should be (and in almost all cases CAN be) avoided in procedures. Most times that I see it used, it is just a lazy way to do something that should have been done in the application layer anyway. In many of those cases there are ways to avoid it just by refactoring the SQL proc. Here are some real-world examples of its use:
SELECT @exestring = @curcompany + '.dbo.smCleanupPostingJournalEntries'
EXEC @exestring
This statement will perform better if the calling application gets the value for @curcompany and calls the correct proc in the first place. Doing it this way also increases the chance of errors that do not bubble back up to the application properly, which is what happens if the proc named in the dynamic SQL fails or doesn't exist.
CREATE PROC dbo.SomeProc (@TempTableName VARCHAR(255))
AS
BEGIN
DECLARE @sqlString VARCHAR(2000)
SELECT @sqlString = 'UPDATE a set a.Myfield=b.Somefield FROM dbo.MyTable a inner join dbo.' + @TempTableName + ' b on a.Keyfield=b.Keyfield'
EXEC sp_sqlexec @sqlString
END
In this case the application is building a temp table and then passing in its name. This makes zero sense if the table is not a #temp table. Even if it is a #temp table, it will always perform better if you build the temp table inside the proc and then use it normally, as in the sketch below. If you find a case where it is otherwise, I would be very interested to see it.
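Here is a minimal sketch of that refactor, with the rows staged inside the proc instead of passed in by name (the source table is hypothetical):
CREATE PROC dbo.SomeProc
AS
BEGIN
-- Build the temp table inside the proc...
CREATE TABLE #KeyValues (Keyfield INT, Somefield INT)
INSERT INTO #KeyValues (Keyfield, Somefield)
SELECT Keyfield, Somefield FROM dbo.SourceTable -- hypothetical source
-- ...then use it normally, with no dynamic SQL in sight.
UPDATE a
SET a.Myfield = b.Somefield
FROM dbo.MyTable a
INNER JOIN #KeyValues b ON a.Keyfield = b.Keyfield
END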
Thursday, April 19, 2018
Update SSRS report path
Here is a handy little script to help update SSRS report paths.
IF EXISTS (
SELECT 1
FROM INFORMATION_SCHEMA.ROUTINES
WHERE SPECIFIC_SCHEMA = N'dbo' AND SPECIFIC_NAME = N'cssp_Upsert_SSRS_Path'
)
DROP PROCEDURE dbo.cssp_Upsert_SSRS_Path
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE dbo.cssp_Upsert_SSRS_Path (
@ReportReference VARCHAR(128), --the existing report reference in WSRepts table, or the new one you want to add
@ReportName VARCHAR(255), --the name of the report without the extension (as it appears in the url)
@Product INT -- 1=Job Cost, 2=Service, 3=EMS, 4=shared
)
AS
/*
KEY2ACT, Inc.
History:
20180419 djedziniak initial version
*/
BEGIN
BEGIN TRY
SET NOCOUNT ON
DECLARE @DbName AS VARCHAR(128),
@ReportServerUrl AS VARCHAR(512),
@RptServerRef AS VARCHAR(255),
@lastSlashIndex AS INT,
@slashIndex AS INT,
@productfolder AS VARCHAR(255)
SELECT @DbName = DB_Name(),
@RptServerRef = coalesce((
SELECT Report_Server_URL
FROM DYNAMICS.dbo.SY40800
WHERE SITENAME = 'Company'
), '')
IF (LEN(RTRIM(@RptServerRef)) > 0)
BEGIN
/* Only when a Report Server reference exists will any Reporting Services records be inserted. */
SELECT @lastSlashIndex = 0,
@slashIndex = Charindex('/', @RptServerRef);
/* First, find the last forward slash */
WHILE (@slashIndex > 0)
BEGIN
IF (@slashIndex > @lastSlashIndex)
SELECT @lastSlashIndex = @slashIndex;
SELECT @slashIndex = Charindex('/', @RptServerRef, @slashIndex + 1);
END
IF (@lastSlashIndex > 0)
SELECT @ReportServerUrl = LEFT(@RptServerRef, @lastSlashIndex);
END
SELECT @productfolder = CASE @Product
WHEN 1
THEN '/Signature Job Cost/'
WHEN 2
THEN '/Signature Service/'
WHEN 3
THEN '/Signature Equipment/'
WHEN 4
THEN '/Signature Shared/'
ELSE ''
END
IF EXISTS (
SELECT NULL
FROM dbo.WSRepts
WHERE ReportReference = @ReportReference
)
UPDATE dbo.WSRepts
SET ReportLocation = coalesce(@ReportServerUrl + @DbName + @productfolder + @ReportName, '')
WHERE ReportReference = @ReportReference AND LEN(RTRIM(ReportLocation)) <= 0;
ELSE
INSERT INTO dbo.WSRepts (
ReportReference,
ReportLocation
)
SELECT @ReportReference,
coalesce(@ReportServerUrl + @DbName + @productfolder + @ReportName, '')
END TRY
BEGIN CATCH
DECLARE @ErrorMessage NVARCHAR(4000);
DECLARE @ErrorSeverity INT;
DECLARE @ErrorState INT;
SELECT @ErrorMessage = ERROR_MESSAGE(),
@ErrorSeverity = ERROR_SEVERITY(),
@ErrorState = ERROR_STATE();
SELECT @ErrorMessage = 'ERROR IN cssp_Upsert_SSRS_Path ' + @ErrorMessage
RAISERROR (
@ErrorMessage, -- Message text.
@ErrorSeverity, -- Severity.
@ErrorState -- State.
);
END CATCH;
END
GO
GRANT EXECUTE
ON dbo.cssp_Upsert_SSRS_Path
TO DYNGRP
GO
PRINT 'CREATED cssp_Upsert_SSRS_Path'
GO
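Usage is straightforward. For example, to register a hypothetical Service report (both values below are made up):
EXEC dbo.cssp_Upsert_SSRS_Path
@ReportReference = 'SERVICE_INVOICE', -- hypothetical reference in WSRepts
@ReportName = 'Service Invoice', -- hypothetical report name
@Product = 2 -- 2 = Service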
Thursday, February 8, 2018
Changing outlook time increments
Have you noticed that when you create an appointment in Outlook, it initially only lets you choose times in half-hour increments? So if I need to schedule something from 10:15 to 10:30, I must do the following:
1. Click on 10:00 to get the appointment
2. Manually change the start time to 10:15 (which causes Outlook to change the end time to 10:45)
3. Manually change the end time back to 10:30
After all that, the visual for the appointment will overlap the 10:30 slot, so it appears at first glance that there is a conflict.
This has annoyed me for many years, so today I decided to find a solution.
The solution is buried behind several layers of option windows in a place you would not expect.
1. Right-click on an empty time slot on your calendar and choose View Settings
2. Click Other Settings
3. Change the Time Scale drop down to 15 minutes
4. Make sure the Font is set to 8pt
5. Click OK to close the windows
Now when you schedule an appointment, Outlook will assume 15-minute increments.
This also changes the display of the calendar a bit. Since the calendar window was optimized by someone short-sighted enough to think everyone would be happy with 30-minute increments, your entire work day may not be visible without scrolling.
So it is a question of which is more annoying. Personally, I am going with the 15-minute increments and will live with a little scrolling.
Tuesday, November 14, 2017
Why modifying a product at the dexterity level is rarely a good thing
I get requests to customize Dynamics GP and/or Signature all the time. Over my 17 years of doing this I have tried to pay attention to what works and what doesn't. Here are my observations:
Above all, keep it simple. Process trumps code every time. Many times the process can be simplified and at the same time avoid customization altogether. Left to their own devices, people tend to complicate processes. There are many reasons for this, which I will not debate here. The bottom line is that each business is not as unique as their employees like to think they are. If hundreds of service companies of a similar size to yours are able to use a piece of software with no modifications, and you think your company absolutely needs that modification, you need to take a hard look at why.
Invariably, there are differences and unique situations that bring about the need for custom code. In those cases, you should use a minimalist approach, while taking into account long term maintenance. A customization that you can't afford to maintain over the long haul is not a solution. The same goes for one that is overly complicated and fragile.
I get a lot of requests where the partner assumes Dexterity is the best fit. That is almost never the case. Here is how dexterity mods work:
*** Dynamics GP allows dexterity mods to the core product. These are called 3rd party products and appear in the dynamics.set file with a number issued by Microsoft. Signature is basically one great big dexterity mod.
*** Any time Microsoft changes anything about Dynamics GP, or the 3rd party that has been modified changes anything (service packs, version changes, end-of-year payroll updates, etc.), the dexterity mods have to be re-evaluated and potentially changed to continue working with the new core Dynamics GP product. If a dex mod has been done to a 3rd party product, you must wait for the 3rd party dictionary to be re-evaluated, changed, tested, and released. Then you must start with the new code base and completely re-code the dex mod.
*** The ability of 3rd party dictionaries to talk to each other is limited.
So if you are building a product with functionality that needs to integrate tightly with Dynamics GP, dexterity may be the best solution. If you are customizing a product, it is always a last resort.
Here are some ways that customizations can be accomplished, and the pros and cons of each. I have listed them in order of what is typically lowest cost to highest cost.
Modifier:
This tool is packaged with Dynamics GP and can be used on 3rd party products also.
Pros:
- Relatively cheap to purchase
- With minimal training, one or more of your own employees can make and maintain these types of modifications.
- Visual changes can be made to forms and reports
Cons:
- You can't make any code changes with this
- Report changes are limited
Modifier with VBA:
This tool is packaged with Dynamics GP and can be used on 3rd party products also.
Pros:
- Same as Modifier
- Code can be added
- Long term maintenance costs are low
Cons:
- Although you can easily add code, changing the behavior of the dex code is limited and problematic
- Report changes are limited
- Can be buggy in a multi-user environment
SSRS
Dynamics GP and many 3rd party products have the capability of calling SSRS reports rather than Dexterity reports. The SSRS report can be customized.
Pros:
- Easy to customize. Many minor customizations can be done by power users.
- With some care in naming conventions, they can be easy to maintain over time.
- Long term maintenance costs are low.
Cons:
- May need to create an Addin to call a custom SSRS report if the SSRS report does not already exist in the product, or if parameters need to be passed.
- Requires SQL language skills and table structure knowledge to build reports from scratch.
.NET Addins
This tool requires Visual Studio in order to use it. It can be used on 3rd party products also. It involves placing custom DLLs in the Addins folder in the GP client.
Pros:
- Robust UI and code ability
- Long term maintenance costs are low.
- Service packs to the product usually require no change at all to the addin
- Can be coded and implemented quickly
- Can easily override and replace most dex logic
- Can easily create entirely new custom windows
Cons:
- Can't override absolutely everything in dex.
Dexterity Mods
This involves creating a custom dexterity dictionary that is added to the dynamics.set file. The pros and cons below assume you are making changes to a 3rd party.
Pros:
- Can directly and surgically change dexterity code.
- Results in the tightest possible integration to GP.
Cons:
- Difficult or impossible to communicate between 3rd party dictionaries, meaning you have to create a custom version of the 3rd party dictionary in order to make changes.
- Difficult to test and deploy
- Rapidly shrinking pool of proficient dexterity programmers means higher costs and longer wait times.
- Very high long term maintenance costs. Basically you are re-coding the entire thing each time you do a service pack, meaning you get to pay full price for it over and over as long as you need it.
Thursday, June 15, 2017
Editing table structure in SQL
By default, SSMS will not let you insert a column in the middle of a table and save the changes. This is because it requires dropping and recreating the table.
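For context, this is roughly the kind of script SSMS generates behind the scenes for such a change (a sketch with a hypothetical table; real generated scripts also re-create indexes, constraints, and permissions):
BEGIN TRANSACTION
CREATE TABLE dbo.Tmp_MyTable (col1 INT, newcol INT NULL, col2 INT)
IF EXISTS (SELECT 1 FROM dbo.MyTable)
INSERT INTO dbo.Tmp_MyTable (col1, col2)
SELECT col1, col2 FROM dbo.MyTable
DROP TABLE dbo.MyTable
EXEC sp_rename 'dbo.Tmp_MyTable', 'MyTable'
COMMIT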
However, if you are OK with doing that, you don't need to do it manually. You can just shut off this validation in SSMS.
Tools > Options > Designers > Table and Database Designers
Clear the Prevent saving changes that require the table to be re-created check box.
It's that easy.
Here are all the changes this validation catches:
- Changing the Allow Nulls setting of a column.
- Rearranging the columns of the table.
- Changing the data type of a column.
- Adding a new column.
https://support.microsoft.com/en-us/help/956176/error-message-when-you-try-to-save-a-table-in-sql-server-saving-changes-is-not-permitted
Monday, April 17, 2017
Custom Date and Time functions in SQL
Many of you are probably aware that you can create a function in SQL that returns a value and then use it like a column in a SQL statement.
For instance,
select dbo.WS_DateOnly(getdate())
will return a single column containing the date portion of the current date (truncating off the time).
In GP, these types of time truncations are critical because the columns contain either a date or a time, but not both.
I understand the need for using functions to keep formatting and such the same. They can also simplify SQL statements by removing much of the calculations from the statement.
However, in the case of date and time truncation, format and calculation do not come into play at all. The following statements are equivalent.
select dbo.WS_DateOnly(getdate())
select convert(varchar,getdate(),102)
and
select dbo.WS_TimeOnly(getdate())
select convert(varchar,getdate(),114)
The formats for the dates do not really matter, other than 102 being a time-less date and 114 a date-less time.
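As an aside, on SQL Server 2008 and later you can skip both the function and the varchar conversion by casting to the native date and time types:
select cast(getdate() as date)
select cast(getdate() as time)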
Here are some reasons that burying this type of calculation in a function is a bad idea:
1. SQL creates a dependency for each place the function is used. This adds to overhead and erodes performance.
2. A developer reading the statement that uses the function does not know exactly what the function is doing unless they go open it, slowing down troubleshooting.
3. SQL traces see the function call as a separate event, cluttering up traces.
The most painful issue of all is when the function gets added to a check constraint on a table. This creates a dependency that will crash the upgrade utilities for GP and requires manual correction.
So the takeaways are:
1. Never use functions to mask simple SQL functions like date truncation
2. Use functions sparingly
3. Never use a function in a table constraint on a GP database.