Tuesday, September 13, 2016
Set up Auto Logon in Windows 2012 R2
- Run regedit.exe
- Navigate to HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon
- Set or create the following values:
(DWORD) AutoAdminLogon = 1
(String) DefaultUserName = your user name
(String) DefaultPassword = your password
Monday, June 20, 2016
New SQL Server 2016 features and enhancements
With the release of SQL Server 2016 comes a 215-page book of new features and enhancements, many of them way over the average SQL administrator's head.
This post is not intended to cover all of them, just the ones I see as most useful to know about.
Think of it as Cliff's Notes for SQL Server 2016 features for mortals.
Always Encrypted
Description:
This new feature is useful for storing SQL data in an encrypted format. This is done by generating a column master key and a column encryption key. SQL Server itself has no ability to decrypt the data: any application with access to the server can query the data, but only applications with the encryption key will be able to make sense of it. No word yet on when Dexterity will take advantage of this, but .NET Addins may be able to use it now.
Pros:
- SQL Server never sees the unencrypted version of the data.
- The data is not transferred unencrypted.
- You can choose to encrypt only some of the columns in a table.
- If you encrypt a column using a deterministic encryption type, you can make it a key column or use it for compare operations such as joins and where clauses.
Cons:
- Only the .NET Framework 4.6 and JDBC 6.0 drivers work with this so far.
- Simple ad-hoc tools like SSMS cannot be used to review or change data.
- Out of the box, SSRS will not be able to decrypt the data for reporting.
- LIKE and sort operations are not supported on encrypted columns.
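As a minimal sketch (assuming a column encryption key named MyCEK has already been created through the SSMS wizard or CREATE COLUMN ENCRYPTION KEY), the encryption is declared right in the table definition. Note that deterministic encryption requires a BIN2 collation:

CREATE TABLE dbo.Patients
(
    PatientID INT IDENTITY PRIMARY KEY,
    PatientName NVARCHAR(60) NOT NULL,
    SSN CHAR(11) COLLATE Latin1_General_BIN2
        ENCRYPTED WITH (COLUMN_ENCRYPTION_KEY = MyCEK,
            ENCRYPTION_TYPE = DETERMINISTIC,
            ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256') NOT NULL
);
GO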
Row level security
Description:
This new feature allows you to control which rows of data a particular user can select, insert, update, or delete. This is done by adding a column for the user credentials, creating an inline table-valued function that performs the check, and then creating a security policy that applies the function to the tables you wish to protect. There does not appear to be any reason this will not work with Dexterity tables, but I have not yet tested it.
Pros:
- The database handles the data access, reducing repetitive coding across multiple applications.
- Eliminates security access code on the client side.
- Works out of the box with SSRS and even SSMS.
- You can silently block data modification operations or return an error.
Cons:
- Only intended for middle tier applications, since it is not immune to side-channel attacks.
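Here is a minimal sketch, assuming a hypothetical dbo.Orders table with a SalesRepUser column holding the login that owns each row. A filter predicate silently filters reads; a block predicate returns an error on writes:

CREATE FUNCTION dbo.fn_OrderFilter (@SalesRepUser AS SYSNAME)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS allowed WHERE @SalesRepUser = USER_NAME();
GO
CREATE SECURITY POLICY dbo.OrderPolicy
    ADD FILTER PREDICATE dbo.fn_OrderFilter(SalesRepUser) ON dbo.Orders,
    ADD BLOCK PREDICATE dbo.fn_OrderFilter(SalesRepUser) ON dbo.Orders AFTER INSERT
    WITH (STATE = ON);
GO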
Dynamic Data Masking
Description:
This feature allows you to mask all or part of a column's data. This is different from encryption, where the value can be decrypted to get the actual data; masking simply replaces the data with zeros or X's.
This is done using the MASKED WITH option in the table definition. There does not appear to be any reason this will not work with Dexterity tables, but I have not yet tested it.
Pros:
- Only the returned data is changed. There is no change to the underlying data in the table.
- You can grant the UNMASK permission to a role to allow certain users to see the actual values.
- Removes repetitive code from applications.
- Helps keep sensitive data from being reproduced (such as on reports).
- Unlike encryption, the masked value cannot be reversed to recover the real value.
Cons:
- Granting UNMASK only works for the entire database, not a single table.
- Does not work with encrypted columns.
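A minimal sketch, using hypothetical table and role names; the masking functions shown (default(), email(), partial()) are the built-in ones:

CREATE TABLE dbo.Customers
(
    CustomerID INT IDENTITY PRIMARY KEY,
    Email VARCHAR(100) MASKED WITH (FUNCTION = 'email()') NULL,
    Phone VARCHAR(12) MASKED WITH (FUNCTION = 'default()') NULL,
    CreditCard VARCHAR(19) MASKED WITH (FUNCTION = 'partial(0,"XXXX-XXXX-XXXX-",4)') NULL
);
GO
GRANT UNMASK TO AccountingRole; -- database-wide in SQL 2016, per the con above
GO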
Autoconfigured data files for TempDB
Description:
This one is the best kind: it is a performance fix of sorts that requires you to do nothing!
Prior to this version, the install created TempDB with a single data file. In 2011, Microsoft began recommending that in high-use scenarios there be one file per logical processor to prevent contention. Unfortunately, this word did not get disseminated very well, and the white papers on the subject likely flew over most mortals' heads.
So Microsoft changed the install to automatically detect the number of logical processors and create the recommended number of data files for TempDB (capped at eight). This does not help if you later change hardware, but adding more data files is relatively simple.
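For example, adding a file after a hardware change is one statement. The file name, path, and sizes below are assumptions; match them to your existing TempDB files:

ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev5, FILENAME = 'D:\SQLData\tempdb5.ndf',
        SIZE = 1024MB, FILEGROWTH = 256MB);
GO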
Query Store
Description:
This is a new feature that helps find and fix performance issues. When turned on, it persists execution plans and makes several new dashboards available for query performance. This can be used in conjunction with plan forcing to make SQL Server use the most efficient execution plan all the time.
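Turning it on is one statement, and forcing a plan you found in the dashboards is one procedure call. The query and plan IDs below are examples you would read from the Query Store reports:

ALTER DATABASE [YourDbName] SET QUERY_STORE = ON;
GO
EXEC sp_query_store_force_plan @query_id = 42, @plan_id = 7;
GO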
Stretch Database
Description:
This feature allows you to store an effectively unlimited amount of "cold" data in the SQL Azure cloud while maintaining warm and hot data in an on-premises installation. This requires no changes to application code, and data transmitted to the cloud can be encrypted. This feature is of limited use with Dynamics installations, however, due to the heavy use of check constraints in Dexterity tables. Still, it is worth mentioning for disk hogs such as tables containing images and text columns.
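As a sketch, with dbo.AuditLog standing in for a hypothetical cold table (the database-level Azure credentials are set up through the Stretch wizard):

EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;
GO
ALTER TABLE dbo.AuditLog
    SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));
GO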
Temporal Tables
Description:
This feature allows you to create tables that let you see the state of the data at any point in the past. It is really two tables - current and history. It is not compatible with FILESTREAM or INSTEAD OF triggers, and it is not recommended for blobs due to storage space issues. Queries on this type of table are identical to normal queries except for the addition of the FOR SYSTEM_TIME clause when you want historical records. You can combine this with Stretch Database to track history indefinitely. The jury is still out on whether we can use this with Dexterity tables, since we must add hidden period columns (thus changing the table structure). Sounds like it needs a test :)
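A minimal sketch with a hypothetical customer table; the period columns and the history table are the parts temporal tables require:

CREATE TABLE dbo.Customer
(
    CustomerID INT NOT NULL PRIMARY KEY CLUSTERED,
    CustomerName NVARCHAR(100) NOT NULL,
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.CustomerHistory));
GO
-- what did this customer look like on June 1?
SELECT * FROM dbo.Customer FOR SYSTEM_TIME AS OF '2016-06-01' WHERE CustomerID = 1;
GO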
JSON support
Description:
The new FOR JSON clause makes exporting data to JSON a breeze, and other new functions such as OPENJSON, ISJSON, JSON_QUERY, and JSON_MODIFY help "bake in" JSON support. Using computed columns, you can even create table indexes off JSON properties inside the column data.
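A few hedged examples; dbo.Customers and the Orders.OrderJson column are hypothetical stand-ins:

-- export rows as JSON
SELECT CustomerID, CustomerName FROM dbo.Customers FOR JSON PATH;
GO
-- shred a JSON string into rows
DECLARE @json NVARCHAR(MAX) = N'[{"CustomerID":1,"CustomerName":"Acme"}]';
SELECT * FROM OPENJSON(@json) WITH (CustomerID INT, CustomerName NVARCHAR(100));
GO
-- index a JSON property via a computed column
ALTER TABLE dbo.Orders ADD CustomerName AS JSON_VALUE(OrderJson, '$.Customer.Name');
CREATE INDEX IX_Orders_CustomerName ON dbo.Orders (CustomerName);
GO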
Reporting Enhancements
Mobile Report Publisher (https://www.microsoft.com/en-us/download/confirmation.aspx?id=50400): This is a new tool that works with SSRS to allow you to design and publish reports for mobile devices. It has many dashboard-style controls to help conserve screen space.
KPI Development: A new KPI screen lets you create your own KPIs directly from the SSRS Report Manager web portal page.
Export Directly to PowerPoint from SSRS.
Wednesday, June 8, 2016
Well-behaved triggers
I frequently hear programmers lamenting the use of SQL triggers. To some they seem to be an evil to be avoided at all costs, and while I certainly agree that their use should be minimized, there are still instances where a trigger is the best approach.
In my experience, much of the fear of triggers seems to stem from poorly written triggers. There is definitely a "right" way to do it and a bunch of wrong ways to do it. Unfortunately, the standard trigger templates that ship with SSMS tend to lean toward the wrong way.
So let's start with the basic trigger script from SSMS.
CREATE TRIGGER dbo.MyTrigger
ON dbo.MyTable
AFTER UPDATE
AS
UPDATE dbo.MyTable
SET ModifiedDate = GETDATE()
FROM inserted i
WHERE i.CustomerID = MyTable.CustomerID
GO
This looks simple and innocuous enough. Ponder the following considerations, however:
1. Who wrote this and why?
2. What happens when 1000 records are bulk updated in MyTable?
3. What if I only need to update MyTable when CustomerID is one of the fields that was updated?
4. How do I stop my update inside the trigger from triggering the update trigger again?
5. Since this trigger could slow down performance, how do I prevent this trigger from running from certain other procs?
Here is a much better script:
CREATE TRIGGER dbo.cstr_MyTable_UPDATE
ON dbo.MyTable
AFTER UPDATE
AS
/*
Company
Description:
History:
20160608 djedziniak initial version
*/
BEGIN
    SET NOCOUNT ON;
    IF (UPDATE({COLUMN}))
    BEGIN
        DECLARE C_INSERTED CURSOR FOR
        SELECT {columns}
        FROM inserted
        WHERE {WHERE CLAUSE}

        OPEN C_INSERTED
        FETCH NEXT FROM C_INSERTED INTO {VARIABLES}
        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- {per-row logic goes here}
            FETCH NEXT FROM C_INSERTED INTO {VARIABLES}
        END
        CLOSE C_INSERTED
        DEALLOCATE C_INSERTED
    END
END
GO
The line IF(UPDATE({COLUMN})) checks whether that column was assigned by the UPDATE statement that fired the trigger. If it was, the function returns true. This is how we can prevent our logic from running unless certain data was updated.
Notice that there is a cursor defined here. That is because the inserted table could have many records in it. A common mistake is assuming that it only contains a single record. When possible, you should avoid using a cursor. In our case, we can eliminate the cursor and just do a bulk update.
CREATE TRIGGER dbo.cstr_MyTable_UPDATE
ON dbo.MyTable
AFTER UPDATE
AS
/*
Company
Description:
History:
20160608 djedziniak initial version
*/
BEGIN
    SET NOCOUNT ON;
    IF (UPDATE(CustomerID))
    BEGIN
        UPDATE dbo.MyTable
        SET ModifiedDate = GETDATE()
        FROM inserted i
        WHERE i.CustomerID = MyTable.CustomerID
    END
END
GO
Now if I want to stop this trigger from running itself (and potentially creating a Cartesian product), I need to add context info.
CREATE TRIGGER dbo.cstr_MyTable_UPDATE
ON dbo.MyTable
AFTER UPDATE
AS
/*
Company
Description:
History:
20160608 djedziniak initial version
*/
BEGIN
    SET NOCOUNT ON;
    IF CONTEXT_INFO() = 0x55555 RETURN
    SET CONTEXT_INFO 0x55555
    IF (UPDATE(CustomerID))
    BEGIN
        UPDATE dbo.MyTable
        SET ModifiedDate = GETDATE()
        FROM inserted i
        WHERE i.CustomerID = MyTable.CustomerID
    END
    SET CONTEXT_INFO 0x00000
END
GO
The line IF CONTEXT_INFO() = 0x55555 RETURN checks the session's context info for a binary value. If that value is set, the trigger returns without doing anything. The same check can be used to prevent the trigger from running when an update is made from some other object: that object would first set the context info, do the update, then clear it.
The two SET CONTEXT_INFO lines set the value before the update and clear it after the update.
So now this trigger is much safer than the default basic trigger template.
I am handling bulk operations, restricting my operations based on certain fields being updated, and using context info to prevent the trigger from calling itself.
If you have any other tips for making triggers more well-behaved, post them here!
Friday, June 3, 2016
Table of GP menus for use with Menus for VST
Top Level Menu | Submenu | Form | Command
Tools* | | Command_System | CL_Tools
Tools >> Setup | | Command_System | CL_Setup
| System | Command_System | CL_System_Setup
| Company | Command_System | CL_Company_Setup
| Posting | Command_System | CL_Posting_Setup
| Financial | Command_Financial | CL_Financial_Setup
| Sales | Command_Sales | CL_Sales_Setup
| Purchasing | Command_Purchasing | CL_Purchasing_Setup
| Inventory | Command_Inventory | CL_Inventory_Setup
| Payroll | Command_Payroll | CL_Payroll_Setup
Tools >> Utilities | | Command_System | CL_Utilities
| System | Command_System | CL_System_Utilities
| Company | Command_System | CL_Company_Utilities
| Financial | Command_Financial | CL_Financial_Utilities
| Sales | Command_Sales | CL_Sales_Utilities
| Purchasing | Command_Purchasing | CL_Purchasing_Utilities
| Inventory | Command_Inventory | CL_Inventory_Utilities
| Payroll | Command_Payroll | CL_Payroll_Utilities
Tools >> Routines | | Command_System | CL_Routines
| Company | Command_System | CL_Company_Routines
| Financial | Command_Financial | CL_Financial_Routines
| Sales | Command_Sales | CL_Sales_Routines
| Purchasing | Command_Purchasing | CL_Purchasing_Routines
| Inventory | Command_Inventory | CL_Inventory_Routines
| Payroll | Command_Payroll | CL_Payroll_Routines
Transactions | | Command_System | CL_Transactions
| Financial | Command_Financial | CL_Financial_Transactions
| Sales | Command_Sales | CL_Sales_Transactions
| Purchasing | Command_Purchasing | CL_Purchasing_Transactions
| Inventory | Command_Inventory | CL_Inventory_Transactions
| Payroll | Command_Payroll | CL_Payroll_Transactions
Inquiry | | Command_System | CL_Inquiry
| System | Command_System | CL_System_Inquiry
| Financial | Command_Financial | CL_Financial_Inquiry
| Sales | Command_Sales | CL_Sales_Inquiry
| Purchasing | Command_Purchasing | CL_Purchasing_Inquiry
| Inventory | Command_Inventory | CL_Inventory_Inquiry
| Payroll | Command_Payroll | CL_Payroll_Inquiry
Reports | | Command_System | CL_Reports
| System | Command_System | CL_System_Reports
| Company | Command_System | CL_Company_Reports
| Financial | Command_Financial | CL_Financial_Reports
| Sales | Command_Sales | CL_Sales_Reports
| Purchasing | Command_Purchasing | CL_Purchasing_Reports
| Inventory | Command_Inventory | CL_Inventory_Reports
| Payroll | Command_Payroll | CL_Payroll_Reports
Cards | | Command_System | CL_Cards
| System | Command_System | CL_System_Cards
| Financial | Command_Financial | CL_Financial_Cards
| Sales | Command_Sales | CL_Sales_Cards
| Purchasing | Command_Purchasing | CL_Purchasing_Cards
| Inventory | Command_Inventory | CL_Inventory_Cards
| Payroll | Command_Payroll | CL_Payroll_Cards
Wednesday, May 25, 2016
Slick trick for doing a Choose() in C#
Back in my VB days, I frequently found the Choose() function handy.
I frequently used it for converting integer values to boolean for checkboxes and such.
Dim i As Integer
i = 1
Checkbox1.Checked = Choose(i + 1, False, True) ' Choose is 1-based, so shift a 0/1 flag up by one
Nice and clean.
However, in C# I have been relegated to doing this:
int i = 1;
if (i == 1)
{
    Checkbox1.Checked = true;
}
else
{
    Checkbox1.Checked = false;
}
Much messier and longer. If I have a dozen checkbox fields, this becomes excessively long.
So today I finally had enough and searched for a better way until I found this trick. I just declare an array of boolean values on the fly and access the one I want.
int i = 1;
Checkbox1.Checked = new[] { false, true }[i];
There we go! Back to a one-liner that is easy to read.
Thursday, May 12, 2016
Shrinking a database
Most companies running Dynamics GP are using the full recovery model. This means that they are backing up the log files more frequently than the database.
When on this model, it should not be necessary to shrink the data or log file very often.
However, there are instances where you would want to do this, such as immediately after archiving (removing) a large amount of data from the database.
Here are the commands to get the files shrunk back down. You should do this at a time when no users or other processes are accessing the database.
--1. Create a full backup of the database. Since this will involve changing the recovery model, we will be breaking any log chains that exist.
--2. Set the database to single user mode. This will prevent anything from connecting to the database while we are working on it.
ALTER DATABASE [YourDbName] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
GO
--3. Set the recovery model to simple. This will allow us to quickly shrink the files.
ALTER DATABASE [YourDbName] SET RECOVERY SIMPLE;
GO
--4. Shrink the data and log files. The number here represents 10% free space after shrinking. This can take a while to run.
DBCC SHRINKDATABASE ([YourDbName], 10);
GO
--5. Shrinking trashes the indexes, so now we reorganize them. This can take a very long time to run.
USE [YourDbName]
GO
DECLARE @TableName VARCHAR(255),
@sql NVARCHAR(500)
DECLARE TableCursor CURSOR FOR
SELECT QUOTENAME(OBJECT_SCHEMA_NAME([object_id])) + '.' + QUOTENAME(name) AS TableName
FROM sys.tables
OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @TableName
WHILE @@FETCH_STATUS = 0
BEGIN
SET @sql = 'ALTER INDEX ALL ON ' + @TableName + ' REORGANIZE;'
EXEC (@sql)
FETCH NEXT FROM TableCursor INTO @TableName
END
CLOSE TableCursor
DEALLOCATE TableCursor
GO
--6. Set recovery model back to full
ALTER DATABASE [YourDbName] SET RECOVERY FULL;
GO
--7. Set Database back to multiuser mode
ALTER DATABASE [YourDbName] SET MULTI_USER WITH ROLLBACK IMMEDIATE;
GO
--8. Remember to take a new full backup after you are done so that the full backup model can start a new log chain.
Wednesday, May 11, 2016
SmartConnect can't see my custom proc
I have repeatedly been frustrated when trying to install custom nodes in eOne SmartConnect.
There is little documentation on the web, and this has been a pain for several versions, so I have decided to maintain this post to list the ways in which we have gotten it to work.
First, let's detail the steps for installing a node in SmartConnect (assuming the stored proc is already installed; let's assume the stored proc name is MyEconnectProc).
1. Launch SmartConnect as an administrator.
2. Click the Maintenance Tab and then choose Node Maintenance.
Note: If you do not see a list of nodes in the window, you need to stop here and follow the SmartConnect installation documentation to install the default GP nodes before proceeding. Make sure you restart SQL Server after this is done, and then restart SmartConnect.
3. Navigate to the point in the tree where you want your node to appear. I want this one to appear under Payables > Vendors, so I will right click on that node and choose Add New Node.
4. In the window that pops up, I need to enter the name of my proc in the Technical Name. It is important that it be capitalized the same as the proc in the database. Press Tab, and the Technical Name should gray out. If you get an error, see the troubleshooting steps below.
5. Enter the Display Name
6. Check the parameters that should be required.
7. Click Save.
You can now create maps and direct SmartConnect to call this proc as an eConnect proc.
So here is the scenario when it goes wrong:
1. I create a custom eConnect proc. For the sake of this example I am NOT using Node Builder or any other tool. I am just writing a SQL proc that is eConnect compliant.
2. I install this proc on a Dynamics GP company database.
3. I launch SmartConnect as an admin. Go to node maintenance to add the node. For seemingly no reason, SmartConnect frequently decides that the proc does not exist.
Things to check before pulling out the old sledgehammer and reinstalling SmartConnect (see the sketch after this list):
1. Make sure the proc is eConnect compliant. Spelling on the output parameter names is important.
@O_iErrorState INT=0 OUTPUT,
@oErrString VARCHAR(255)='' OUTPUT
2. Try installing the custom node on all company databases and the DYNAMICS database before adding the node. SmartConnect doesn't give us a way to tell it WHERE the proc is, so this works sometimes to get the node added. You can then drop the proc from the databases where you don't need it.
3. Make sure the proc does not have any funky characters in the name (like - (dash)). SmartConnect does not seem to like that.
4. Make sure there is a Pre and Post proc present, even if you are not using them.
5. Make sure all parameters start with @I_v except the 2 output parameters at the end.
6. Try restarting SQL server if possible, then restarting SmartConnect
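To make the checklist concrete, here is a minimal sketch of an eConnect-compliant proc skeleton. The input parameters and the Pre/Post proc names are hypothetical examples, not eOne's documented API:

CREATE PROCEDURE dbo.MyEconnectProc
    @I_vVENDORID CHAR(15),                 -- inputs start with @I_v (hypothetical examples)
    @I_vVENDNAME CHAR(65),
    @O_iErrorState INT = 0 OUTPUT,         -- output parameters spelled exactly as above
    @oErrString VARCHAR(255) = '' OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT @O_iErrorState = 0, @oErrString = '';
    -- proc logic goes here; set @O_iErrorState and @oErrString on failure
END
GO
-- per item 4, matching Pre and Post procs (e.g., dbo.MyEconnectProcPre and
-- dbo.MyEconnectProcPost) should exist even if their bodies do nothing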
If all else fails, you may have to use SQL Stored Procedure as the destination instead of eConnect Proc.