
D365FO pu20 union query problem: “There is a field mismatch in the union query. Field [Field Name] is not compatible with field .”



I found an exciting and obscure bug (? – “feature?”) in Dynamics 365 for Finance and Operations. It doesn’t seem to rear its head in platform update 15, but does in platform update 20. If you get the run time error message (notice the intentional space that looks like a missing word before the period at the end):

There is a field mismatch in the union query. Field [Field Name] is not compatible with field .

…then you might have made the same mistake we made.

If you are creating a union query and you’ve set “Derived Table,” I think it was ignored in the past. However, as of platform update 20, if you set it, and there is a mismatch within the fields of a data source, you will get no error when you compile/build, but you WILL get the above error at run time.

Here’s an example showing the mistake we had:

(Screenshot: the union query’s field list and “Derived Table” properties in the Visual Studio designer.)

Look closely. MORE closely. For most of the columns under ProjEmplTransactionsView in XskProjVendTransQuery, the “Derived Table” is set to ProjEmplTransactionView. (No pluralizing “s.”) It is ProjEmplTransactionsView (with the “s”) for XskVendInvoiceTransRecId.

I’m not sure that “Derived Table” is even needed (what does it do?), but once it was consistently ProjEmplTransactionsView for every column where it was set at all, the run time errors went away.
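Our query was built in the designer, but the field-compatibility rule is easier to see in code. Here’s a minimal, hypothetical X++ sketch of a union query built at run time (the tables and field are standard ones, but the query itself is purely illustrative, not our XskProjVendTransQuery): each data source has to contribute a field list that lines up with the first one in count and type, or you get the same run time error.

private Query buildIllustrativeUnionQuery()
{
    Query                query = new Query();
    QueryBuildDataSource qbdsEmpl;
    QueryBuildDataSource qbdsCost;

    // Make this a union query instead of a joined query.
    query.queryType(QueryType::Union);

    // First data source: an explicit (non-dynamic) field list.
    qbdsEmpl = query.addDataSource(tableNum(ProjEmplTrans));
    qbdsEmpl.unionType(UnionType::UnionAll);
    qbdsEmpl.fields().dynamic(false);
    qbdsEmpl.fields().addField(fieldNum(ProjEmplTrans, ProjId));

    // Second data source: its field list must line up with the first one
    // in count and type, or the "field mismatch in the union query" error
    // is raised when the query runs.
    qbdsCost = query.addDataSource(tableNum(ProjCostTrans));
    qbdsCost.unionType(UnionType::UnionAll);
    qbdsCost.fields().dynamic(false);
    qbdsCost.fields().addField(fieldNum(ProjCostTrans, ProjId));

    return query;
}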

AXUG / D365 User Group

This will be a less focused and problem/solution oriented blog post than usual for me. I’m currently taking a short break while attending the Dynamics AX / 365 User Group Summit. Midway through, we’ve heard the keynote, I’ve hit a few sessions, and I’ve gotten an idea how this thing works and what to expect next year (in Orlando, Florida, if you want to register for next year).

Attendance seems to be roughly one-third IT people, with enough developers to fill sessions pretty well.

There have been sessions focused on X++ coding, but mostly at a higher level (you can only do so much in an hour-long session). The best session I’ve attended so far went over the difference between “overlayering” (what you’d call “customization” in older versions) and “extensions,” which I already knew, but it included a useful explanation of the new-ish Chain of Command functionality. There was also a great (but again high-level) session on automated testing, something I’ve never looked at as a newer developer; I now have a rough idea how to set up Unit Tests and Type Provider / Integration Tests, which I look forward to diving into.

Common Data Model is big. (Is this the same as Common Data Service? I’m sure Microsoft doesn’t mind any confusion. See also renaming VSTS to “Azure DevOps,” a confusing name that I really hope does not stick.) Power BI and Flow are huge topics. Although these tools are ostensibly aimed at end users, developers should not ignore them. You WILL be integrating Power BI into D365 as a developer, especially if there are custom reports, and you’ll find Flow helpful with some integrations.

There is no shortage of resources for learning those tools, at least. Microsoft mentioned https://microsoft.com/learn during their keynote (in between basically demoing the whole family of products and how easily they talk to each other, in a perfect world at least…), which should be a good starting point for learning many things… still no real path for X++ programming, though. The expectation still seems to be that either you’ve been working in AX 2012 and earlier and just need refreshers, or you’re coming from C++ and learning on the job from a generous employer; third parties are starting to fill the gap by offering training, but that is your only real option if you can’t get on-the-job training.

Importing an Azure BACPAC to a development VM

As a Dynamics 365 for Finance and Operations developer, you will probably find it easier to develop, test, and debug problems if you have a recent database available in your development VM. Although general documentation on this process exists, as of this writing I’ve found it less than ideal for developers: it has a couple of minor errors, some extra steps, and it lacks a few tips for problems you might run into.

Prerequisites

For the sake of this post, I’m going to assume that a system administrator has already provided you with an exported BACPAC file, and that you are able to copy it to your development VM.

You should install the latest version of SQL Server Management Studio (or at least some 17.x version), which is an easy way to get the updated SQL tools you will need to work with BACPAC files from current versions of Azure SQL. Honestly, I recommend this as part of the initial VM setup.

If you’ve been running Dynamics 365 for Finance and Operations for a while, your VM hard drive might not be big enough, and you might need to make the virtual drive larger (see “Expanding the hard drive on your development VM” below).

Import the database

You need to run a command prompt as administrator. If you don’t, you will probably get an error like “Unable to connect to master or target server …” At the administrator command prompt, use the following:

cd "C:\Program Files (x86)\Microsoft SQL Server\140\DAC\bin"
SqlPackage.exe /a:import /sf:C:\TEMP\BackupFilename.bacpac /tsn:localhost /tdn:AxDB_YYYYMMDD /p:CommandTimeout=1200

You should note three parts of these commands:

  1. If the first command gives you an error, and/or you only have a “C:\Program Files (x86)\Microsoft SQL Server\130\DAC\bin” directory, and/or you get an error similar to “Error importing database:Could not read schema model header information from package. The model version ‘3.5’ is not supported.“: you should, as mentioned in the prerequisites, install the latest version of SQL Server Management Studio (or at least some 17.x version), which is an easy way to get the updated SQL tools you need.
  2. For the second command, you need to change the /sf: switch to point to the BACPAC file.
  3. For the second command, you also need to change the /tdn: switch to whatever temporary name you want to use to import the new database. You will change it later, but the name you choose will determine the filenames used for the database, so it’s best to choose something you haven’t used before. I recommend an AxDB_YYYYMMDD format. If importing gives you an error about the filename being in use, the easiest thing to do is pick a different database name (or see the query below to check which names are taken). Remember, it’s just temporary!
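If you do hit the file-name-in-use error and want to see which physical file names are already taken, a quick query in SSMS (plain SQL Server metadata, nothing D365-specific) lists every database’s files:

SELECT DB_NAME(database_id) AS database_name, name AS logical_name, physical_name
FROM sys.master_files
ORDER BY database_name;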

Once you have it running, be prepared to wait a while. Like, a couple of hours is very possible.

Update the database

This is pretty much just running code in a query window of SQL Server Management Studio (SSMS). I offer some minor tweaks (and comments) to what Microsoft does.

-- Following is the SQL script you use after importing the database.
-- You might want to run it one "chunk" (up to each "go" statement) at a time.
-- You need to change the next line to use the "temporary" database name you picked.
use AxDB_YYYYMMDD
go


CREATE USER axdeployuser FROM LOGIN axdeployuser
EXEC sp_addrolemember 'db_owner', 'axdeployuser'
go

CREATE USER axdbadmin FROM LOGIN axdbadmin
EXEC sp_addrolemember 'db_owner', 'axdbadmin'
go

-- might error - deprecated or retail?
CREATE USER axmrruntimeuser FROM LOGIN axmrruntimeuser
EXEC sp_addrolemember 'db_datareader', 'axmrruntimeuser'
EXEC sp_addrolemember 'db_datawriter', 'axmrruntimeuser'
go

CREATE USER axretaildatasyncuser FROM LOGIN axretaildatasyncuser
EXEC sp_addrolemember 'DataSyncUsersRole', 'axretaildatasyncuser'
go

CREATE USER axretailruntimeuser FROM LOGIN axretailruntimeuser
EXEC sp_addrolemember 'UsersRole', 'axretailruntimeuser'
EXEC sp_addrolemember 'ReportUsersRole', 'axretailruntimeuser'
go


CREATE USER [NT AUTHORITY\NETWORK SERVICE] FROM LOGIN [NT AUTHORITY\NETWORK SERVICE]
EXEC sp_addrolemember 'db_owner', 'NT AUTHORITY\NETWORK SERVICE'
go


-- Not everybody needs this, but it doesn't hurt.
UPDATE T1
SET T1.storageproviderid = 0
 , T1.accessinformation = ''
 , T1.modifiedby = 'Admin'
 , T1.modifieddatetime = getdate()
FROM docuvalue T1
WHERE T1.storageproviderid = 1 --Azure storage
go


-- It is very unlikely you need this in a development environment. Microsoft includes it, but I do not recommend it for developers.
ALTER DATABASE CURRENT SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 6 DAYS, AUTO_CLEANUP = ON)
GO


-- Begin Refresh Retail FullText Catalogs
-- If you do not use retail components, you do not need this section, but it doesn't hurt.
DECLARE @RFTXNAME NVARCHAR(MAX);
DECLARE @RFTXSQL NVARCHAR(MAX);
DECLARE retail_ftx CURSOR FOR
SELECT OBJECT_SCHEMA_NAME(object_id) + '.' + OBJECT_NAME(object_id) fullname FROM SYS.FULLTEXT_INDEXES
 WHERE FULLTEXT_CATALOG_ID = (SELECT TOP 1 FULLTEXT_CATALOG_ID FROM SYS.FULLTEXT_CATALOGS WHERE NAME = 'COMMERCEFULLTEXTCATALOG');
OPEN retail_ftx;
FETCH NEXT FROM retail_ftx INTO @RFTXNAME;

BEGIN TRY
 WHILE @@FETCH_STATUS = 0 
 BEGIN 
 PRINT 'Refreshing Full Text Index ' + @RFTXNAME;
 EXEC SP_FULLTEXT_TABLE @RFTXNAME, 'activate';
 SET @RFTXSQL = 'ALTER FULLTEXT INDEX ON ' + @RFTXNAME + ' START FULL POPULATION';
 EXEC SP_EXECUTESQL @RFTXSQL;
 FETCH NEXT FROM retail_ftx INTO @RFTXNAME;
 END
END TRY
BEGIN CATCH
 PRINT error_message()
END CATCH

CLOSE retail_ftx; 
DEALLOCATE retail_ftx; 
go
-- End Refresh Retail FullText Catalogs


-- Microsoft does not tell you to do this, but in a development environment, it is very likely that you want to change the database "recovery model."
-- Otherwise, you need to set up backups and/or clean up the transaction log; otherwise the database can grow at an unmanageable rate, eating up your whole VM drive.
ALTER DATABASE CURRENT SET RECOVERY SIMPLE;
go

-- You might want to shrink the size of the transaction log, just to keep room free on your VM's drive.
-- You can do it in the SSMS GUI, or use the following statement (with a change to reflect your DB name).
DBCC SHRINKFILE(AxDB_YYYYMMDD_Log, 50);
go

Re-provision the target environment & Reset the Financial Reporting database

I do not use Retail components or Financial Reporting, especially not in my development environment, so for these steps I have nothing to add; just follow the Microsoft documentation exactly.

Start to use the new database

To switch to using the new database, you must first stop these three services (the Microsoft documentation is out of date on the service names):

  • Management Reporter 2012 Process Service
  • Microsoft Dynamics 365 for Operations – Batch Management Service
  • World Wide Web Publishing Service

Then rename the databases so your new one is named AxDB (the old one can be named AxDB_ORIG or AxDB_old or whatever; eventually you will want to delete it to save space). Then you can restart the three services above, and you are good to go!
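If you would rather script the stop/start than click through the Services console, net stop and net start accept display names; this is only a sketch, so copy the display names exactly as they appear in services.msc on your VM:

net stop "Management Reporter 2012 Process Service"
net stop "Microsoft Dynamics 365 for Operations - Batch Management Service"
net stop "World Wide Web Publishing Service"
REM ...rename the databases (see the T-SQL below), then:
net start "World Wide Web Publishing Service"
net start "Microsoft Dynamics 365 for Operations - Batch Management Service"
net start "Management Reporter 2012 Process Service"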

You can rename the databases using the SSMS GUI (in Object Explorer) or by using T-SQL commands similar to:

USE master;
go
ALTER DATABASE AxDB MODIFY NAME = AxDB_OLD;
ALTER DATABASE AxDB_YYYYMMDD MODIFY NAME = AxDB;
go

Build/synchronize the new database

You should launch Visual Studio, go to Dynamics 365 > Build Models, and do a full build (with the “Synchronize Database” option) of all your customized and/or extended models.

After that, you are ready to go!

Calling an X++ method on a Data Entity through OData

Today I learned of a documented but little-known (well, to me) way to put an X++ method on a data entity, and make it available via an OData call. This is a great way to build more powerful integrations through Flow or other external tools and programs.

I won’t belabor it since it is documented, but something like this:

[SysODataActionAttribute("CalculateMaintenanceDuration", true)]
public int CalculateMaintenanceDuration()
{
    // The first attribute parameter is the action name exposed through OData
    // (kept in sync here with the URL below); the second parameter indicates
    // the action is bound to an entity instance.
    // do something
    return 0;
}

…will let you make a method available through an OData call. You’d use a URL that looks something like this to call the method:

https://MYCOMPANY.operations.dynamics.com/data/MYDATAENTITY.CalculateMaintenanceDuration

This needs to be done as a POST request, so you can’t easily test it in Internet Explorer the way you can when simply getting the data entity (which returns a JSON document full of data). Calling and using OData from outside Dynamics 365 for Finance and Operations is outside the scope of this blog post… and, honestly, outside the scope of my expertise at this point. You’ll want to look at the OData standards (where the methods you can call are usually called “actions”) and/or documentation for your language. Tools like Flow can handle all of this in the background for you, just letting you browse available “actions.”

Data Entities “versus” OData

The language and visibility of Data Entities and OData can be a little confusing for users who are not sure what they can see in the Dynamics 365 for Finance and Operations web GUI versus what is available to them via OData (in Power BI, Excel, etc.). I made this graphic to explain it, if it helps you.

(Graphic: which Data Entities are visible in the web GUI versus what is exposed through OData.)

Expanding the hard drive on your development VM (“onebox”)

In some development shops, the development VMs will be cloud-hosted and centrally managed. But I try to help out others who are managing their own development VMs; and if you’re restoring copies of a growing production database in order to debug a tricky issue, you might find that the “as shipped” VM doesn’t have a big enough hard drive. Here are the steps to help you expand it.

  1. In Hyper-V Manager, stop the VM and delete any checkpoints. (Unfortunately, you can’t expand the drive for a VM with checkpoints.)
  2. In Hyper-V Manager, in the “Settings” for the VM, go to “Hard Drive” and find the “Edit” button under the virtual hard disk:
    Edit virtual hard drive in Hyper-V Manager
  3. Use the wizard to “Expand” to the desired size. (Unfortunately, it’s hard to know exactly how much space you’ll need; a database BACPAC is significantly compressed, and you know better than I do how much space you are likely to need.)
  4. Start & connect to the VM. You will not see the new space available yet.
  5. In the VM, run Disk Management. It is in Control Panel, or will come up if you type diskmgmt in the Start menu:
    diskmgmt
    Typing “diskmgmt” in the Start menu should bring up something that looks like this.

     

  6. In Disk Management, right-click the OsDisk (C:) partition and choose “Extend Volume.”
    Extend volume in Disk Management
  7. You should be able to just keep clicking “Next” through the wizard without changing any defaults; it will expand to fill the newly available space.
  8. That’s all, you’re done! When you are done restoring your BACPAC and building it (or whatever else you wanted to expand the disk for), don’t forget that you’ve deleted all your checkpoints, so you may want to take a fresh one.

 

Although this isn’t a process specific to Dynamics 365, if you aren’t used to managing a VM, I hope it helps you get back to developing a little sooner.

Installing Application X++ Updates Still Sucks (part two)

PART TWO: IT’S NOT PARANOIA IF THEY’RE REALLY OUT TO GET YOU (OR, STEP-BY-STEP INSTRUCTIONS INCLUDING DISASTER MITIGATION)

This post walks you through applying X++ application updates for Dynamics 365 for Finance and Operations. As of this writing, there is a Microsoft wiki page documenting how to do so, but it’s slightly out of date… and, more significantly, it doesn’t emphasize the preventative measures you can use to avoid hoarking your development environment (or, worse, your source control) if you need to yank the hotfix back out.

Before reading this, you should understand what application update hotfixes (versus binary update hotfixes) are, and the general documented steps for applying them. You should also be aware that things can go wrong, and sometimes you need to remove them from your development VM, or pull them back out of source control even after they’ve been deployed. Part one of this blog post series covers some of these subjects, but I assume basic competence with source control.

Step “Zero”: Have Microsoft application code in source control

There might be some disagreement with this, and I welcome discussion in the comments. As time has gone on, I’ve become less convinced that there is a need for this. It definitely is not “best practice.”

But, as you are probably aware, every developer VM has a gigabyte or more (depending on what you count) of code, in tens of thousands of files, in Microsoft-controlled models like Application Suite. Since you can’t directly change it, and you know it’s already on every shipped “onebox,” it probably seems like a waste of time and space to check all that in.

But, when you apply an application hotfix, you are changing some of that code. The changed files will be checked in; that’s how they get built and distributed. If you discover a need to roll back that hotfix, and you want it to roll back to an older version (instead of doing a deletion)… it’s way easier if the known good previous version was already there.

This is what the “prepare” step (which I’ll discuss below) is for, by the way. (Something that took me about a year to figure out. Yes, it’s documented. Yes, I miss things sometimes.) But what if you forget or have a problem with the “prepare” step?

So, if you are starting a brand new project (or have a time machine), you might consider doing this as your very first changeset: Add, at minimum, the contents of ApplicationSuite/Foundation to source control.
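If you go that route, the command-line version looks something like this, run from a Developer Command Prompt on the dev VM (assuming the standard onebox layout and an existing workspace mapping; the checkin comment is just an example):

cd /d C:\AOSService\PackagesLocalDirectory
tf add ApplicationSuite\Foundation /recursive
tf checkin ApplicationSuite\Foundation /recursive /comment:"Baseline: as-shipped ApplicationSuite/Foundation"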

Possibly a better idea: at least make a copy of the “pristine” ApplicationSuite/Foundation somewhere. Or keep an “as shipped” onebox available. Basically, somewhere to get the original code when it is overwritten.

Step One: Download the hotfixes you need

I think Microsoft would love it if we all kept up on hotfixes, constantly applying them as soon as they came out. Given the effort involved, I’m quite impressed if anybody manages to do that. Seriously… if you do this, please comment below and talk to me about how many hours you spend each week on doing so.

Although most Microsoft documentation talks about the “tiles” in the environment pages of Lifecycle Services, in my experience, we have been using “Issue Search” (possibly with guidance from direct contacts at Microsoft) to identify hotfixes that address specific, pressing problems.

For what it’s worth, as of this writing, going through the “tiles” sometimes lets you see the dependent hotfixes for a given hotfix; it also lets you download multiple hotfixes in a single package. I don’t know that the latter is always desirable; I find hotfixes problematic enough that I prefer to apply them one at a time, with each hotfix isolated to an individual changeset, in order to narrow down problems. Using the “tiles” does, however, make sure the hotfixes you see are applicable to your application version; if you go through Issue Search, you need to check the noted “Release.”

Either way, you need to download and unzip these in your development VM. I like to keep these downloads in a specific directory, each in a subdirectory that has the hotfix KB number in its name. Like so:

hotfix directory structure

Step Two: Have a fallback plan for your development VM (“onebox”)

There is a small, but non-zero, chance that the hotfix will fail and be very difficult to scrape off your dev VM. So: after you download the hotfix locally, and “get” the latest dev branch changesets from source control, and you’re about ready to apply… stop.

If possible, checkpoint your VM. This is the fastest and easiest way to get back to a good state if things go south. Checkpoints are not meant as backups, and you can’t keep a long string of them without eating a ton of disk space, so, you’ve got to regularly delete your old checkpoints… but before applying a hotfix, you should have a current checkpoint you can go back to.

If a checkpoint is not possible, I offer two other suggestions: first, make a copy of PackagesLocalDirectory (that idea is inspired by this nice blog post). Second, have a second development VM at the same version on standby, from which you can copy out files.
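For the PackagesLocalDirectory copy, plain robocopy does the job; the destination path here is just an example, and expect it to take a while and a fair amount of disk space:

REM /E copies all subdirectories (including empty ones); /MT:8 copies with 8 threads.
robocopy "C:\AOSService\PackagesLocalDirectory" "C:\Backup\PackagesLocalDirectory_PreHotfix" /E /MT:8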

Step Three: Prepare and check in

To be very clear about the point of the “prepare” step: this makes it easy to check in all the files that the hotfix will change, in their current (pre-hotfix) state, to make it easy to roll the hotfix back. If you didn’t go with “step zero” above (or if you did but didn’t include every single Microsoft model), this step is crucial if you need to roll back the hotfix later. I’ve needed to do this multiple times. Don’t blow off this step.

You have two choices for your tool here: command line or GUI.

Choice 1: Prepare with SCDPBundleInstall (command line)

I’ve had trouble with SCDPBundleInstall. Others think it is more reliable than the VS addin. If you choose this command line tool, you’ll use a command like this:

SCDPBundleInstall.exe -prepare -packagepath=C:\AXHotfixes\MicrosoftDynamicsAX_KB4058584\HotfixPackageBundle.axscdppkg -metadatastorepath=c:\AOSService\PackagesLocalDirectory -tfsworkspacepath=c:\AOSService\PackagesLocalDirectory -tfsprojecturi=https://myaccount.visualstudio.com/defaultcollection

I couldn’t find thorough documentation for SCDPBundleInstall, but there’s a little in the official instructions for hotfixes.

Choice 2: Prepare in Visual Studio (GUI addin)

If nothing else, this tool is a nice way to see which hotfixes are already installed. If you go the GUI route, go to Dynamics 365 > Addins > Apply hotfix; browse to the hotfix; and click “Prepare.” Do not click “Apply” yet!

Prepare hotfix

For large hotfixes, this might take a little time; as of this writing, there are no progress bars or indicators, so you pretty much just have to wait until the GUI is responsive again.

ALWAYS: Check the “prepare” files in

Whether you used SCDPBundleInstall or the Visual Studio GUI addin, you need to check the prepared files in BEFORE you “apply” the hotfix. Otherwise, trying to roll the hotfix back will be a nightmare.

I won’t walk you through this. If you can’t manage a VSTS checkin with a sensible comment, you are not ready to manage hotfixes.

Step Four: Apply/Install

Once again, you choose between SCDPBundleInstall or the Visual Studio GUI addin.

Choice 1: Install with SCDPBundleInstall (command line)

If you choose this command line tool, you’ll use a command like this:

SCDPBundleInstall.exe -install -packagepath=C:\AXHotfixes\MicrosoftDynamicsAX_KB4058584\HotfixPackageBundle.axscdppkg -metadatastorepath=c:\AOSService\PackagesLocalDirectory -tfsworkspacepath=c:\AOSService\PackagesLocalDirectory -tfsprojecturi=https://myaccount.visualstudio.com/defaultcollection

Choice 2: Apply in Visual Studio (GUI addin)

If you used this tool for the “Prepare” step, this is pretty intuitive. Just click “Apply” instead of “Prepare” this time.

Apply hotfix

NEVER: DON’T CHECK IN YET!

Unlike the “Prepare” step, you do not check files in immediately after you Apply/Install the hotfix.

However, you should look at your “Pending Changes” and make sure that any changed files it lists were checked in during the “Prepare” step. There might be a couple of files under “AxUpdate” that you can ignore (they are used by the system to track which hotfixes have been installed); but anything else should be an “Edit,” not an “Add.”

Step Five: Refresh models, resolve conflicts, and build

Occasionally Microsoft might include new models in a hotfix. It can’t hurt to start by going to Dynamics 365 > Model Management > Refresh Models before you continue.

If you have any customizations/overlays on the objects changed by the hotfix, there might be conflicts. Use Dynamics 365 > Addins > Create project from conflicts and check every VAR, ISV, etc. model where you’ve got custom code. (See my blog post on resolving conflicts for a few tips.) If you’re feeling like a cowboy, you might try just doing a build first, counting on that to raise an error if there are unresolved conflicts. (Later versions of D365 will eliminate customizations, but as of this writing, we’re not all there yet.)

Customizations/overlays or not, I have seen hotfixes break extensions. I have also seen them have dependencies on other hotfixes that are not included. You should ALWAYS do a build that includes every model the hotfix touched; and, probably, you should throw in all your in-house extension models as well. If you don’t know how to determine what models the hotfix touched, or if you are unsure and the hotfix contains more than one or two files, consider taking the time to do a full build of ALL models.

If this raises errors, hopefully you have the expertise to deal with them. But if you need to back out… this is why I recommended a fallback plan in step two. You can probably just undo the changes… probably. But it’ll feel good to have insurance that you can just restore a checkpoint.

Step Six: Check in, etc.

If/when you get a clean build on your dev VM, you are ready to check in. If you’re managing hotfixes, you should know how to do that. If you’re responsible for builds and/or deployments, know that those application hotfixes will be included just like any other custom code you write. It is, hopefully, smooth sailing at this point.

Step Seven: Oh Crap Something Went Wrong

Hopefully you rarely/never need this step. But, maybe you checked something in and it broke the build for some reason; or it has some awful side effect when deployed. Either way, you need to get it back out.

If you did step “zero” or step three correctly, and all the hotfix files have checked-in “known good” versions before the hotfix, you do a rollback like you would any other code. (Don’t know how to do a rollback? You probably shouldn’t be in charge of this stuff. You’ll need some remedial VSTS help for your job.)
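For reference, the command-line route from a Developer Command Prompt inside your mapped metadata folder looks something like this (the changeset number is made up):

cd /d C:\AOSService\PackagesLocalDirectory
REM Creates pending changes that undo changeset C1234; review them and check them in.
tf rollback /changeset:C1234 . /recursive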

Of course, if you applied multiple hotfixes simultaneously instead of doing them one at a time, I hope you’re rolling them all back simultaneously. Otherwise… honestly, you might want to roll them all back, and re-apply the ones you want to keep. Separately, one at a time, this time around.

If you did not do step “zero” or step three, you’ve probably ruined the rest of your day. Probably your evening, and tomorrow, too. But, I can share some hard-won tricks with you. There might be other ways, and I welcome feedback in the comments.

Step 7.1: Stop other devs from doing a “get”

Until you finish cleaning this up, warn other developers to avoid doing a “get” from source control, or they’ll have a ruined day too.

Step 7.2: If you don’t have a checkpoint, get a baseline

Rolling back this hotfix will probably hoark your dev VM.

If you don’t have a checkpoint, what you need to do is get your hands on a good baseline of the code before the hotfix. Your best bet for this is to set up a new clean “onebox” dev VM (if you don’t have one on standby) and “get” up to the changeset right before the hotfix.
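Once the new box has a workspace mapping, “getting up to” a point in time looks something like this (again, the changeset number is made up; use the last changeset before the hotfix):

cd /d C:\AOSService\PackagesLocalDirectory
tf get . /version:C1233 /recursive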

Of course, if you’re doing all that work, you might want to just abandon your old VM and switch to a new one. Your call on what will ruin your day the least.

Step 7.3: Roll back the hotfix

Start by rolling back the changeset containing the bad hotfix(es). This will give you a “delete” for most of the files in it. For any file that did not exist before you applied the hotfix, that is just what you want.

If a file DID exist before applying the hotfix, but it wasn’t checked in, the “delete” in this rollback will wipe it off of your dev VM. If any other developer did a “get” of the changeset with the hotfix, then later does a “get” of the deletion, I’m not sure what happens; but it might wipe the file off their dev VM too, ruining their day as well. That’s why we try to stop them from doing a “get” until we’ve cleaned up.

Step 7.4: Check in the rollback

Getting the rollback checked in means that developers can “get” again. As long as they “get” the rollback changeset in the same action as the hotfix(es) changeset, it doesn’t delete the system files off their VM.

Step 7.5: Apply your checkpoint OR replace “previously existing” files changed in the hotfix

Ideally you have a pre-hotfix checkpoint. Or can just switch to a different dev VM. Otherwise…

Using the baseline copy of the code you acquired in Step 7.2, copy any files that the hotfix changed into your dev VM. You won’t need to overwrite existing files; this just replaces the ones the rollback wiped off your system.

Whatever you did, you probably want to “get” the latest, maybe verify with a “compare,” and probably do a full build of all models to make sure your dev VM is in a working state.

And next time, I’m sure you’ll remember to take care during the “Prepare” step.

Here’s hoping you won’t need to revisit this blog post (or at least this step) again soon!