Wednesday, October 26, 2016

Special Order Parts Business Alert Example

We are having a great time here at Synergy 2016!

Here is the example from the Workflow session we did yesterday.

This is a trigger-based business alert that is designed to email the technician and a dispatch mail group whenever a part is received that is associated with a Service Call.


Here is the stored procedure that constructs and sends the email:
CREATE PROCEDURE dbo.cssp_SpecialOrderPartsReceipt (@ITEMNMBR VARCHAR(31), @ITEMDESC VARCHAR(101), @Service_Call_ID VARCHAR(17))
AS
/*
This is an example for educational purposes. You should test all code for its intended use before placing it in production. No warranty is implied. If you find any issues with the code, please post in comments and I will endeavor to correct it.
20161025 djedziniak initial version
*/
BEGIN
    SET NOCOUNT ON

    DECLARE @Technician VARCHAR(11), @CUSTNMBR VARCHAR(15), @ADRSCODE VARCHAR(15), @TECHEMAIL VARCHAR(41), @emails VARCHAR(MAX), @THISbody VARCHAR(MAX)

    -- Pull the technician and the customer/address from the service call header
    SELECT @Technician = Technician, @CUSTNMBR = CUSTNMBR, @ADRSCODE = ADRSCODE
    FROM dbo.SV00300
    WHERE Service_Call_ID = @Service_Call_ID

    -- The technician's email address is stored in the pager PIN field
    SELECT @TECHEMAIL = RTRIM(SV_Pager_PIN)
    FROM dbo.SV00115
    WHERE Technician = @Technician

    SELECT @emails = @TECHEMAIL + ';dispatch@key2act.com'
        ,@THISbody = 'Item: ' + RTRIM(@ITEMNMBR) + ' ' + RTRIM(@ITEMDESC) +
            ' Customer: ' + RTRIM(@CUSTNMBR) + ' Address: ' + RTRIM(@ADRSCODE)

    EXEC msdb.dbo.sp_send_dbmail @profile_name = 'Business Alert'
        ,@recipients = @emails
        ,@body = @THISbody
        ,@subject = 'Special Order Part Received';
END


Here is the trigger that calls the proc.
CREATE TRIGGER dbo.cstr_POP10310_INSERT ON dbo.POP10310 AFTER INSERT
AS
/*
This is an example for educational purposes. You should test all code for its intended use before placing it in production. No warranty is implied. If you find any issues with the code, please post in comments and I will endeavor to correct it.
20161025 djedziniak initial version
*/
BEGIN
    SET NOCOUNT ON;

    DECLARE @JOBNUMBR VARCHAR(17), @ITEMNMBR VARCHAR(31), @ITEMDESC VARCHAR(101)

    -- inserted can contain multiple rows, so cursor through each receipt line,
    -- acting only on lines tied to a job (service call)
    DECLARE C_INSERTED CURSOR FOR
    SELECT JOBNUMBR, ITEMNMBR, ITEMDESC
    FROM inserted
    WHERE RTRIM(JOBNUMBR) != ''

    OPEN C_INSERTED
    FETCH NEXT FROM C_INSERTED INTO @JOBNUMBR, @ITEMNMBR, @ITEMDESC
    WHILE @@FETCH_STATUS = 0
    BEGIN
        EXEC dbo.cssp_SpecialOrderPartsReceipt @ITEMNMBR, @ITEMDESC, @JOBNUMBR
        FETCH NEXT FROM C_INSERTED INTO @JOBNUMBR, @ITEMNMBR, @ITEMDESC
    END
    CLOSE C_INSERTED
    DEALLOCATE C_INSERTED
END

You will have to configure Database Mail on your SQL Server in order for the send mail part to work.  However, you could replace that call with some other action, like writing a log record or calling an eConnect proc.
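If Database Mail is already set up, you can sanity-check the profile before wiring up the trigger. A minimal check, assuming the profile is named 'Business Alert' as in the proc above:

-- List the Database Mail profiles configured on this server
EXEC msdb.dbo.sysmail_help_profile_sp;

-- Send a test message through the profile the alert will use
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'Business Alert',
    @recipients = 'dispatch@key2act.com',
    @subject = 'Database Mail test',
    @body = 'Test message from the special order parts alert setup.';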

Enjoy!


Friday, October 21, 2016

Script to auto-roll calls forward in Signature

DECLARE @ROLLDATE DATETIME,
    @STATUS INT

-- CONVERT with style 102 strips the time portion, so calls roll to midnight today
SELECT @ROLLDATE = CONVERT(VARCHAR, GETDATE(), 102)

EXEC SMS_Auto_Call_Roll_Forward @ROLLDATE, 'sa', @STATUS OUT
GO

Wednesday, September 14, 2016

Steps to change the name of a VM running SQL and SSRS

1. Open File Explorer

2. Right-click This PC and choose Properties, then Advanced system settings

3. On the Computer Name tab, click Change

4. Change the name and restart now

5. Launch SSMS and log in (you may need to use the new server name to log in) and run the following:
sp_dropserver 'old_physical_server_name\GP2015'
GO

sp_addserver 'new_name\GP2015', local
GO


6. Restart SQL Server (a verification query follows these steps)

7. Close SSMS

8. Open SSRS configuration

9. Choose Database then change database and follow the wizard to locate the ReportServer database.

10. Restart SSRS
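After the SQL Server restart in step 6, you can confirm the rename took with a quick query. Both values should show the new name (the instance name here is from the example above):

SELECT @@SERVERNAME AS sql_configured_name,
    SERVERPROPERTY('ServerName') AS os_reported_name;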




Get VS Tools for Dynamics to work with Visual Studio 2015

1. Install VS Tools for Dynamics 2013.

2. Copy the folder C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\ProjectTemplates\CSharp\Dynamics GP
and paste to
C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE\ProjectTemplates\CSharp\

3. Open a command prompt as administrator

4. Run the following commands:
cd C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\IDE
devenv /installvstemplates

Tuesday, September 13, 2016

Set up Auto Logon in Windows 2012 R2

  1. Run regedit.exe
  2. Navigate to
    HKLM\Software\Microsoft\Windows NT\CurrentVersion\winlogon
    
  3. Set or create the following values (note that AutoAdminLogon is a String value, not a DWORD)
    (String) AutoAdminLogon = 1
    (String) DefaultUserName = Your user name
    (String) DefaultPassword = Your password

Monday, June 20, 2016

New SQL Server 2016 features and enhancements

With the release of SQL Server 2016, there is a 215-page book of new features and enhancements, many of them way over the average SQL administrator's head.

This post is not intended to cover all of them, just the ones I see as most useful to know about.

Think of it as CliffsNotes on SQL Server 2016 features for mortals.


Always Encrypted

Description:
This new feature is useful for storing SQL data in an encrypted format.  This is done by generating a column master key and a column encryption key.  The SQL server has no ability to decrypt the data.  Any application with access to the SQL Server can query the data, but only those applications with the encryption key will be able to make sense of it.  No word yet on when Dexterity will take advantage of this, but .NET Addins may be able to use it now.  A sketch of the syntax appears after the pros and cons below.

Pros:
- SQL Server never sees the unencrypted version of the data.
- The data is not transferred unencrypted.  
- You can choose to encrypt only some of the columns in a table.
- If you encrypt a column using a deterministic encryption type, you can make it a key column or use it for compare operations such as joins and where clauses.

Cons:
- Only the .NET framework 4.6 and JDBC 6.0 drivers work with this so far.
- Simple ad-hoc tools like SSMS cannot be used to review or change data.
- Out of the box, SSRS will not be able to decrypt the data for reporting
- LIKE and sort operations are not supported on encrypted columns
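For reference, here is roughly what an encrypted column looks like in a table definition. This is a sketch only; it assumes a column encryption key named CEK1 (and its column master key) has already been created through SSMS or PowerShell:

CREATE TABLE dbo.Patients (
    PatientID INT IDENTITY(1,1) PRIMARY KEY,
    -- Deterministic encryption allows equality joins and WHERE comparisons,
    -- but requires a BIN2 collation on string columns
    SSN CHAR(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK1,
            ENCRYPTION_TYPE = DETERMINISTIC,
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'),
    -- Randomized encryption is stronger but supports no operations at all
    BirthDate DATE
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = CEK1,
            ENCRYPTION_TYPE = RANDOMIZED,
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256')
);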


Row level security

Description:
This new feature allows you to control which rows of data a particular user has access to select, insert, update, or delete.  This is done by adding a column for the user credentials and then creating inline table functions for the tables you wish to protect.  You then create a security policy on the table to perform the filter.  There does not appear to be any reason this will not work with Dexterity tables, but I have not yet tested it.

Pros:
- The database handles the data access, reducing repetitive coding across multiple applications.
- Eliminates security access code on the client side.  
- Works out of the box with SSRS and even SSMS
- You can silently block data modification operations or return an error

Cons:
- Only intended for middle tier applications, since it is not immune to side-channel attacks.
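A minimal sketch of the moving parts, using hypothetical names (an Orders table with a SalesRep column holding the database user name):

-- Inline table function that returns a row only when the filter passes
CREATE FUNCTION dbo.fn_SalesRepPredicate (@SalesRep AS sysname)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS fn_result
    WHERE @SalesRep = USER_NAME();
GO

-- Security policy that applies the predicate to the table
CREATE SECURITY POLICY dbo.SalesFilter
ADD FILTER PREDICATE dbo.fn_SalesRepPredicate(SalesRep)
ON dbo.Orders
WITH (STATE = ON);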


Dynamic Data Masking

Description:
This feature allows you to mask all or part of a column's data.  This is different from encryption, where the value can be decrypted to get the actual data.  Masking replaces the data with zeros or X's.
This is done using the MASKED WITH option in the table definition (a sketch follows the pros and cons below). There does not appear to be any reason this will not work with Dexterity tables, but I have not yet tested it.

Pros:
- Only the returned data is changed.  There is no change to the underlying data in the table.
- You can grant the UNMASK permission to a role to allow certain users to see the actual values.
- Removes repetitive code from applications.
- Helps keep sensitive data from being reproduced (such as on reports)
- More secure than encryption, since the masked value cannot be used to get the real value in any way.

Cons:
- Granting UNMASK only works for the entire database, not a single table.
- Does not work with encrypted columns
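Here is roughly what the syntax looks like; the table and the masking function arguments are illustrative:

CREATE TABLE dbo.Employees (
    EmployeeID INT IDENTITY(1,1) PRIMARY KEY,
    -- email() exposes the first letter and masks the rest
    Email VARCHAR(100) MASKED WITH (FUNCTION = 'email()'),
    -- partial() keeps 2 leading and 2 trailing characters
    Phone VARCHAR(12) MASKED WITH (FUNCTION = 'partial(2,"XXX-XXX-",2)'),
    -- default() fully masks: 0 for numeric types, XXXX for strings
    Salary MONEY MASKED WITH (FUNCTION = 'default()')
);

-- Let a role see the real values (database-wide, per the con above)
GRANT UNMASK TO payroll_role;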



Autoconfigured data files for TempDB

Description:
This one is the best kind.  It is actually a performance fix of sorts that requires you to do nothing!
Prior to this version, during install, the TempDB was created using a single data file.  In 2011, Microsoft began recommending that in high-use scenarios there be one data file per logical processor to prevent contention.  Unfortunately, this word did not get disseminated very well, and white papers on the subject likely flew over most mortals' heads.

So Microsoft changed the install to automatically detect the number of logical processors and create the recommended number of data files for TempDB.  This does not help if you later change hardware, but adding more data files is relatively simple, as shown below.
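If you do add processors later, each additional file is a one-liner. A sketch; the name, size, and path should match your existing TempDB files:

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev5,
    FILENAME = 'D:\SQLData\tempdev5.ndf',
    SIZE = 1024MB,
    FILEGROWTH = 256MB);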


Query Store

Description:
This is a new feature that helps find and fix performance issues.  When turned on, it persists execution plans and makes several new dashboards available for query performance.  This can be used in conjunction with plan forcing to make SQL use the most efficient execution plan all the time.
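Enabling it and forcing a plan are both short commands. A sketch; the query and plan IDs come from the Query Store dashboards or its catalog views:

ALTER DATABASE [YourDbName] SET QUERY_STORE = ON;
GO

-- Once a good plan is identified, pin it to its query
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;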


Stretch Database

Description:
This feature allows you to store an effectively unlimited amount of "cold" data in the SQL Azure cloud while maintaining warm and hot data in an on-premises installation.  This requires no changes to application code to implement, and data transmitted through the cloud can be encrypted.  This feature is of limited use with Dynamics installations, however, due to the heavy use of check constraints in Dexterity tables.  Still, it is worth mentioning for disk hogs such as tables containing images and text columns.


Temporal Tables

Description:
This feature allows you to create tables that let you see the state of the data at any point in the past.  It is really two tables - current and history.  It is not compatible with FILESTREAM or INSTEAD OF triggers.  It is also not recommended for use with blobs due to storage space issues.  Queries on this type of table are identical except for the addition of the FOR SYSTEM_TIME clause if you are getting historical records.  You can combine this with Stretch Database to track history indefinitely.  The jury is still out on whether we can use this with Dexterity tables, since we must add hidden period columns (thus changing the table structure).  Sounds like it needs a test :)
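Here is a sketch of a system-versioned table and a point-in-time query, with hypothetical names:

CREATE TABLE dbo.Customer (
    CustomerID INT NOT NULL PRIMARY KEY,
    CustomerName VARCHAR(65) NOT NULL,
    -- Hidden period columns maintained automatically by SQL Server
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL,
    ValidTo DATETIME2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.CustomerHistory));

-- State of the data as of a past point in time
SELECT *
FROM dbo.Customer
FOR SYSTEM_TIME AS OF '2016-06-01T00:00:00';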


JSON support

Description:
The new FOR JSON clause makes exporting data to JSON a breeze, and other new functions such as OPENJSON, ISJSON, JSON_QUERY, and JSON_MODIFY help "bake in" JSON support. Using computed columns, you can even create table indexes off JSON properties inside the column data.
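A few of the new constructs in one place; the tables and JSON shapes are illustrative:

-- Export rows as a JSON array
SELECT CustomerID, CustomerName
FROM dbo.Customer
FOR JSON PATH;

-- Shred a JSON document into rows
SELECT *
FROM OPENJSON('[{"id":1,"name":"Acme"},{"id":2,"name":"Contoso"}]')
WITH (id INT '$.id', name VARCHAR(50) '$.name');

-- Index a JSON property through a computed column
ALTER TABLE dbo.Orders
ADD CustomerName AS JSON_VALUE(OrderJson, '$.customer.name');
CREATE INDEX IX_Orders_CustomerName ON dbo.Orders (CustomerName);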


Reporting Enhancements

Mobile Report Publisher (https://www.microsoft.com/en-us/download/confirmation.aspx?id=50400): This is a new tool that works with SSRS to allow you to design and publish reports for mobile devices.  It has many dashboard-style controls to help conserve screen space.

KPI Development: A new KPI screen lets you create your own KPIs directly from the SSRS Report Manager web portal page.

Export Directly to PowerPoint from SSRS.


Wednesday, June 8, 2016

Well-behaved triggers

I frequently hear programmers lamenting the use of SQL triggers.  To some they seem to be an evil to be avoided at all costs, and while I certainly agree that their use should be minimized, there are still instances where a trigger is the best approach.

In my experience, much of the fear of triggers seems to stem from poorly written triggers.  There is definitely a "right" way to do it and a bunch of wrong ways to do it.  Unfortunately, the standard trigger templates that ship with SSMS tend to lean toward the wrong way.

So let's start with the basic trigger script from SSMS.

CREATE TRIGGER dbo.MyTrigger
ON dbo.MyTable
AFTER UPDATE
AS
UPDATE dbo.MyTable
SET ModifiedDate = GETDATE()
FROM inserted i
WHERE i.CustomerID = MyTable.CustomerID
GO


This looks simple and innocuous enough.  Ponder the following considerations, however:
1. Who wrote this and why?
2. What happens when 1000 records are bulk updated in MyTable?
3. What if I only need to update MyTable when CustomerID is one of the fields that was updated?
4. How do I stop my update inside the trigger from triggering the update trigger again?
5. Since this trigger could slow down performance, how do I prevent this trigger from running from certain other procs?

Here is a much better script:

CREATE TRIGGER dbo.cstr_MyTable_UPDATE
ON dbo.MyTable
AFTER UPDATE
AS
/*
Company

Description:

History:
20160608 djedziniak initial version
*/
BEGIN
    SET NOCOUNT ON;
    IF (UPDATE({COLUMN}))
    BEGIN
        DECLARE C_INSERTED CURSOR FOR
        SELECT {columns}
        FROM inserted
        WHERE {WHERE CLAUSE}

        OPEN C_INSERTED
        FETCH NEXT FROM C_INSERTED INTO {VARIABLES}
        WHILE @@FETCH_STATUS = 0
        BEGIN

            FETCH NEXT FROM C_INSERTED INTO {VARIABLES}
        END
        CLOSE C_INSERTED
        DEALLOCATE C_INSERTED
    END
END
GO


The line IF(UPDATE({COLUMN})) checks whether the column was assigned a value by the statement that fired the trigger.  If it was, this function returns true (note that it returns true even if the new value happens to equal the old one).  This is how we can prevent our logic from running unless certain data was updated.

Notice that there is a cursor defined here.  That is because the inserted table could have many records in it.  A common mistake is assuming that it only contains a single record.  When possible, you should avoid using a cursor.  In our case, we can eliminate the cursor and just do a bulk update.

CREATE TRIGGER dbo.cstr_MyTable_UPDATE
ON dbo.MyTable
AFTER UPDATE
AS
/*
Company

Description:

History:
20160608 djedziniak initial version
*/
BEGIN
    SET NOCOUNT ON;
    IF (UPDATE(CustomerID))
    BEGIN
        UPDATE dbo.MyTable
        SET ModifiedDate = GETDATE()
        FROM inserted i
        WHERE i.CustomerID = MyTable.CustomerID
    END
END
GO


Now if I want to stop this trigger from running itself (and potentially recursing until SQL Server aborts the nesting), I need to add context info.

CREATE TRIGGER dbo.cstr_MyTable_UPDATE
ON dbo.MyTable
AFTER UPDATE
AS
/*
Company

Description:

History:
20160608 djedziniak initial version
*/
BEGIN
    SET NOCOUNT ON;

    -- Bail out if this trigger (or a cooperating proc) already set the flag
    IF Context_Info() = 0x55555 RETURN
    SET CONTEXT_INFO 0x55555

    IF (UPDATE(CustomerID))
    BEGIN
        UPDATE dbo.MyTable
        SET ModifiedDate = GETDATE()
        FROM inserted i
        WHERE i.CustomerID = MyTable.CustomerID
    END

    SET CONTEXT_INFO 0x00000
END
GO


The Context_Info() check at the top is examining a session-scoped variable for a binary value.  If that value is set, the trigger returns without doing anything.  This check can also be used to prevent the trigger from running when an update is made from some other object: that object would first set the context info, do the update, then clear it.
The two SET CONTEXT_INFO lines set the variable before the update and clear it after the update.

So now this trigger is much safer than the default basic trigger template.
I am handling bulk operations, restricting my operations based on certain fields being updated, and using context info to prevent the trigger from calling itself.

If you have any other tips for making triggers more well-behaved, post them here!


Friday, June 3, 2016

Table of GP menus for use with Menus for VST

Top Level Menu      Submenu     Form                 Command
Tools               *           Command_System       CL_Tools
Tools >> Setup                  Command_System       CL_Setup
                    System      Command_System       CL_System_Setup
                    Company     Command_System       CL_Company_Setup
                    Posting     Command_System       CL_Posting_Setup
                    Financial   Command_Financial    CL_Financial_Setup
                    Sales       Command_Sales        CL_Sales_Setup
                    Purchasing  Command_Purchasing   CL_Purchasing_Setup
                    Inventory   Command_Inventory    CL_Inventory_Setup
                    Payroll     Command_Payroll      CL_Payroll_Setup
Tools >> Utilities              Command_System       CL_Utilities
                    System      Command_System       CL_System_Utilities
                    Company     Command_System       CL_Company_Utilities
                    Financial   Command_Financial    CL_Financial_Utilities
                    Sales       Command_Sales        CL_Sales_Utilities
                    Purchasing  Command_Purchasing   CL_Purchasing_Utilities
                    Inventory   Command_Inventory    CL_Inventory_Utilities
                    Payroll     Command_Payroll      CL_Payroll_Utilities
Tools >> Routines               Command_System       CL_Routines
                    Company     Command_System       CL_Company_Routines
                    Financial   Command_Financial    CL_Financial_Routines
                    Sales       Command_Sales        CL_Sales_Routines
                    Purchasing  Command_Purchasing   CL_Purchasing_Routines
                    Inventory   Command_Inventory    CL_Inventory_Routines
                    Payroll     Command_Payroll      CL_Payroll_Routines
Transactions                    Command_System       CL_Transactions
                    Financial   Command_Financial    CL_Financial_Transactions
                    Sales       Command_Sales        CL_Sales_Transactions
                    Purchasing  Command_Purchasing   CL_Purchasing_Transactions
                    Inventory   Command_Inventory    CL_Inventory_Transactions
                    Payroll     Command_Payroll      CL_Payroll_Transactions
Inquiry                         Command_System       CL_Inquiry
                    System      Command_System       CL_System_Inquiry
                    Financial   Command_Financial    CL_Financial_Inquiry
                    Sales       Command_Sales        CL_Sales_Inquiry
                    Purchasing  Command_Purchasing   CL_Purchasing_Inquiry
                    Inventory   Command_Inventory    CL_Inventory_Inquiry
                    Payroll     Command_Payroll      CL_Payroll_Inquiry
Reports                         Command_System       CL_Reports
                    System      Command_System       CL_System_Reports
                    Company     Command_System       CL_Company_Reports
                    Financial   Command_Financial    CL_Financial_Reports
                    Sales       Command_Sales        CL_Sales_Reports
                    Purchasing  Command_Purchasing   CL_Purchasing_Reports
                    Inventory   Command_Inventory    CL_Inventory_Reports
                    Payroll     Command_Payroll      CL_Payroll_Reports
Cards                           Command_System       CL_Cards
                    System      Command_System       CL_System_Cards
                    Financial   Command_Financial    CL_Financial_Cards
                    Sales       Command_Sales        CL_Sales_Cards
                    Purchasing  Command_Purchasing   CL_Purchasing_Cards
                    Inventory   Command_Inventory    CL_Inventory_Cards
                    Payroll     Command_Payroll      CL_Payroll_Cards

Wednesday, May 25, 2016

Slick trick for doing a Choose() in C#

Back in my VB days, I frequently found the Choose() function handy.
I frequently used it for converting integer values to boolean for checkboxes and such.

Dim i As Integer
i = 1
Checkbox1.Checked = Choose(i, False, True)

Nice and clean.

However, in C# I have been relegated to doing this:

int i = 1;
if (i == 1)
{
    Checkbox1.Checked = true;
}
else
{
    Checkbox1.Checked = false;
}

Much messier and longer.  If I have a dozen checkbox fields, this becomes excessively long.

So today I finally had enough and searched for a better way until I found this trick.  I just declare an array of boolean values on the fly and access the one I want.

int i = 1;
Checkbox1.Checked = new[] { false, true }[i];

There we go!  Back to a one-liner that is easy to read.  Just note that Choose() is 1-based while the C# indexer is 0-based, and an out-of-range index throws an exception rather than returning Nothing, so guard the value if it can fall outside the list.

Thursday, May 12, 2016

Shrinking a database

Most companies running Dynamics GP are using the full recovery model.  This means that they are backing up the log files more frequently than the database.

When on this model, it should not be necessary to shrink the data or log file very often.

However, there are instances where you would want to do this, such as immediately after archiving (removing) a large amount of data from the database.

Here are the commands to get the files shrunk back down. You should do this at a time when no users or other processes are accessing the database.

--1. Create a full backup of the database.  Since this will involve changing the recovery model, we will be breaking any log chains that exist.

--2. Set the database to single user mode.  This will prevent anything from connecting to the database while we are working on it.

ALTER DATABASE [YourDbName] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO

--3. Set the recovery model to simple.  This will allow us to quickly shrink the files.
ALTER DATABASE [YourDbName] SET RECOVERY SIMPLE;
GO

--4. Shrink the data and log files.  The number here represents 10% free space after shrinking.  This can take a while to run.

DBCC SHRINKDATABASE ([YourDbName], 10);
GO

--5. Shrinking trashes the indexes, so now we reorganize them.  This can take a very long time to run.

USE [YourDbName]
GO

DECLARE @TableName VARCHAR(255),
@sql NVARCHAR(500)

DECLARE TableCursor CURSOR FOR
-- QUOTENAME protects table names that need bracketing
SELECT QUOTENAME(OBJECT_SCHEMA_NAME([object_id])) + '.' + QUOTENAME(name) AS TableName
FROM sys.tables

OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @TableName

WHILE @@FETCH_STATUS = 0
BEGIN
SET @sql = 'ALTER INDEX ALL ON ' + @TableName + ' REORGANIZE;'
EXEC (@sql)
FETCH NEXT FROM TableCursor INTO @TableName
END
CLOSE TableCursor
DEALLOCATE TableCursor
GO

--6. Set recovery model back to full

ALTER DATABASE [YourDbName] SET RECOVERY FULL;
GO

--7. Set Database back to multiuser mode

ALTER DATABASE [YourDbName] SET MULTI_USER WITH ROLLBACK IMMEDIATE;
GO

--8. Remember to take a new full backup after you are done so that the full recovery model can start a new log chain.





Wednesday, May 11, 2016

SmartConnect can't see my custom proc

I have repeatedly been frustrated when trying to install custom nodes in eOne SmartConnect.
There is little documentation on the web and this has been a pain for several versions, so I have decided to maintain this post to list ways in which we have gotten it to work.

First, let's detail the steps for installing a node in SmartConnect (assuming the stored proc is already installed; let's say the stored proc name is MyEconnectProc).
1. Launch Smartconnect as an administrator
2. Click the Maintenance Tab and then choose Node Maintenance.

Note:  If you do not see a list of nodes in the window, you need to stop here and follow the SmartConnect installation documentation to install the default GP nodes before proceeding.  Make sure you restart SQL Server after this is done, and then restart SmartConnect.

3. Navigate to the point in the tree where you want your node to appear.  I want this one to appear under Payables > Vendors, so I will right-click on that node and choose Add New Node.

4. In the window that pops up, I need to enter the name of my proc in the Technical Name field.  It is important that it be capitalized the same as the proc in the database.  Press Tab.  The Technical Name should gray out.  If you get an error, see the troubleshooting steps below.

5. Enter the Display Name

6. Check the parameters that should be required.

7. Click Save.

You can now create maps and direct SmartConnect to call this proc as an eConnect proc.



So here is the scenario when it goes wrong:

1. I create a custom eConnect proc.  For the sake of this example I am NOT using Node Builder or any other tool.  I am just writing a SQL proc that is eConnect compliant.

2. I install this proc on a Dynamics GP company database.

3. I launch SmartConnect as an admin and go to Node Maintenance to add the node.  For seemingly no reason, SmartConnect frequently decides that the proc does not exist.


Things to check before pulling out the old sledgehammer and reinstalling SmartConnect:

1. Make sure the proc is eConnect compliant.  Spelling on the output parameter names is important.
@O_iErrorState INT=0 OUTPUT,
@oErrString VARCHAR(255)='' OUTPUT

2. Try installing the custom proc on all company databases and the DYNAMICS database before adding the node.  SmartConnect doesn't give us a way to tell it WHERE the proc is, so this sometimes works to get the node added.  You can then drop the proc from the databases where you don't need it.

3. Make sure the proc does not have any funky characters in the name (like - (dash)).  SmartConnect does not seem to like that.

4. Make sure there is a Pre and Post proc present, even if you are not using them.

5. Make sure all parameters start with @I_v except the two output parameters at the end (a skeleton follows this list).

6. Try restarting SQL server if possible, then restarting SmartConnect
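Putting those rules together, here is a minimal skeleton that satisfies the checks above.  A sketch only; everything except the two output parameters is hypothetical:

CREATE PROCEDURE dbo.MyEconnectProc
    @I_vCUSTNMBR VARCHAR(15),
    @I_vCUSTNAME VARCHAR(65) = '',
    @O_iErrorState INT = 0 OUTPUT,
    @oErrString VARCHAR(255) = '' OUTPUT
AS
BEGIN
    SET NOCOUNT ON

    SELECT @O_iErrorState = 0, @oErrString = ''

    -- Business logic goes here; on failure, set the outputs and exit, e.g.
    -- SELECT @O_iErrorState = 1, @oErrString = 'Vendor does not exist'

    RETURN (@O_iErrorState)
END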


If all else fails, you may have to use SQL Stored Procedure as the destination instead of eConnect Proc.


Thursday, March 3, 2016

Why is my database application running so slowly?

There are many reasons that a database application can be slow.  In large ERP systems like Dynamics GP, there are often complex operations happening behind the scenes when batches are being generated or posted.  All too often, we throw more hardware at a performance problem, when the issue could be as simple as a poor join or a missing index.

So how do we figure out what to do about a performance problem?

First, I would make sure the SQL server is set up in a manner that follows best practices.  Here are some links to posts on setup:
Why you should not store user data on a sql server.
SQL Server setup and maintenance tips
Dream SQL Install
Creating a good set of SQL maintenance plans

If SQL is installed in a virtual environment, start here:
Keys to SQL on VMWare
Essential Guide to SQL Server Virtualization

Let's say I have done all that and I still have performance issues with posting a large batch.  The users normally use the application on a terminal server, so I am not sure if this is a network, terminal server, or application issue.

If you have a good tool, such as LiteSpeed, turn it on and post the batch.  In other words, use your tool to figure out why it is slow and make changes to fix it.  If you still have issues, or if you only have the tools that install with SQL Server, here is what I would do:
1. Get a baseline
- Restore the production database to a test database on the same server
- Post the batch from the terminal server while running a SQL Profiler trace
   * Time the process
   * Save the trace
- Restore prod to test (so we keep using the same data set)
2. To see if the terminal server is the issue, run the process from a fat client while running a trace
   * Time the process
   * Save the trace
- Restore prod to test (so we keep using the same data set)
3. To rule out network issues, run the process from the SQL server while running a trace
   * Time the process
   * Save the trace
- Restore prod to test (so we keep using the same data set)

At this point you will know whether it is a network issue, terminal server issue, or SQL issue, based on the difference in run times.
If it is a SQL issue:
- Use the trace results to identify the SQL objects involved and manually generate a list of those objects.

   * Tables: Check row count and explore options for purging or archiving records on large tables.  Check index fragmentation.

   * Views: Check the execution plan for missing indexes.  Eliminate poor joins.  Eliminate function calls that can be easily handled with inline statements.  Ensure that the schema (e.g. dbo) is referenced on every table operation to allow caching of the execution plan.

   * Stored Procedures: Check the execution plan for missing indexes.  Eliminate poor joins.  Eliminate function calls that can be easily handled with inline statements.  Ensure that the schema (e.g. dbo) is referenced on every table operation to allow caching of the execution plan.  Try switching between #temp tables and @table variables where used.  Look for opportunities to reduce the size of local or temp table record sets early in the processing.  Eliminate cursors where possible.

After all that, now it is time to look deeper into SQL.
Check the performance counters for PLE (Page Life Expectancy) and RAM in use to help determine if more RAM is needed. A good rule of thumb is that if PLE regularly falls below (75 * (GB of RAM)), more RAM is needed. A quick way to spot-check PLE is shown below.
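This reads the instance-level value straight from the DMV.  A minimal sketch:

SELECT [object_name], cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
    AND [object_name] LIKE '%Buffer Manager%';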

Check the performance counters for disk reads and writes per second to help determine if a faster hard drive is needed.

This should get you started.  I will add to this post as I find tricks that help.

Thursday, February 18, 2016

Why don't any images display on my ASPX page?

Over the years, I have had more than one occasion to wonder what planet the developers of IIS are from.  It seems that every version of IIS has had at least one default setting that made no sense at all... probably not even on their planet.

Today I installed an ASPX application on a shiny new Windows 2008 R2 / IIS 7.5 environment.

When I ran the application, all the image tags were broken.  I verified that the images were in the correct file location and that IUSR had read access to that folder.  So why would NONE of the images display?

From the people who brought you Windows ME:
The answer is another poor decision by some IIS developer.  By default, the Static Content role service is not installed.  Installing it fixes the issue.



I hereby issue a challenge to anyone who wishes to take it.  Go try to find a single web site running on IIS that does not have at least one image AND does not use CSS.  You will fail, because there isn't one... anywhere... I bet you would have trouble faking one.  The whole point of a website is to render content, not just text.

So I am left to ponder the extreme depth of this obtuseness.... and dream of visiting this other planet.

Tuesday, February 16, 2016

Did my trigger fire from an update, insert, or delete?

I recently got a question on a trigger and I could see that the coder was trying to handle two operations in a single trigger and perform separate actions based on the operation.

Now, we all know that we could create a separate trigger for each operation.  However, it is sometimes helpful for maintaining code if we use a single trigger.


Here is a simple example of how to handle the different operations in a single trigger.

create table test (
    myfield varchar(1)
)
go

create trigger cstr_test
on test
after insert, update, delete
as
begin
    -- deleted has rows for UPDATE and DELETE; inserted has rows for UPDATE and INSERT
    if exists (select 1 from deleted)
    begin
        if exists (select 1 from inserted)
        begin
            print 'update'
        end
        else
        begin
            print 'delete'
        end
    end
    else
    begin
        print 'insert'
    end
end
go


insert into test select 1

update test set myfield=2 where myfield=1

delete from test



I hope this helps someone!

Monday, February 8, 2016

Add items to list object in Dex using VBA

Let's say you have a drop-down list box on your Dynamics GP window and you want to add an item to it.  Unfortunately, the list items are being set by Dexterity code when the window opens, so this seems impossible.

However, using a little VBA, we can add our own items to the list.

We do this using SanScript!

Here is an example of how to accomplish this:

Private Sub Window_BeforeOpen(OpenVisible As Boolean)
    Dim CompilerApp As Object
    Dim CompilerMessage As String
    Dim CompilerError As Integer
    Dim CompilerCommand As String

    ' Create link without having reference marked
    Set CompilerApp = CreateObject("Dynamics.Application")

    CompilerCommand = ""
    CompilerCommand = CompilerCommand & "add item str(2099) to 'Year' of window MyWindow of form MyForm;"

    ' Execute SanScript
    CompilerApp.CurrentProductID = 131
    CompilerApp.CurrentProduct = CompilerApp.CurrentProduct & "!Modified"
    CompilerError = CompilerApp.ExecuteSanscript(CompilerCommand, CompilerMessage)
    If CompilerError <> 0 Then
        MsgBox CompilerMessage
    End If

End Sub
