For the purposes of this series of posts, I’m going to assume that you are creating your build images using MDT 2013, but the concepts and code here will morph easily into any other OS build tooling.
Basically, I’m going to cover an overview of the problem, some concepts about how best to handle it and, finally, some actual solutions. I’ll be producing code that’ll support anything from Windows XP/2003 (although if you are still producing builds for these, times must be desperate) all the way up to Windows 10/2016.
The code included with these posts will be in VBScript initially, as this is the only real way to deal with legacy OS support, and later on there’ll be a PowerShell version for Windows 10 and the modern Windows Server versions.
Now, let’s open with a bit of a problem statement.
As someone whose job involves creating build images for the Windows operating system family, you’ll already know that builds come in two basic forms:
- Empty Builds – Build images that contain nothing except for some management client (SCCM/Radia/etc.) that will install everything other than the basic OS.
- Almost Complete Builds – Builds that include most of the components required by the end user, especially anything that takes a long time to install, such as Office or XenApp.
Now, from my experience, most companies employ the “Almost Complete” option in order to reduce network bandwidth and the total install time. I recently saw one of my end customers building a laptop using the “Empty Build” method and it took nearly four hours to complete. I think that most organisations find that unacceptable, and installing as much as you can in the build image can drop the deployment time to under an hour.
Whatever build methodology you are using, the one thing that is essential in any large organisation is build release management, i.e. if I create a build image called XXX v1.0.1 and deploy it to one machine, I should be able to go back to that build at any point to test things. Imagine that we have now progressed to v1.0.3 and you get reports that the Finance team have tried to run an end-of-year reporting tool and it does not work. You probably want to check the basic infrastructure first, but you may later decide that changes made to the build have affected the application. Therefore, you need to be able to reproduce v1.0.1 exactly to see if the application did previously work. This would enable you to look at the problem in terms of the delta between v1.0.3 and v1.0.1. As a fallback, you should be able to give the Finance team a machine with v1.0.1 to work on whilst you solve the problem (with the fix hopefully going to be called v1.0.4).
Now, what has that got to do with “Windows Updates”? Well, the patching level of your build is part of your build version. Perhaps it is a security update or a .NET Framework release that has broken the application. If your build process adds all of the latest patches at the end of the build process, then a re-created v1.0.1 could be as broken as v1.0.3.
Herein lies the problem: how do I produce builds with a set collection of patches? MDT 2013 (and the earlier versions) includes task sequence items to handle updates:
In this sample task sequence you can see the second item and the last item call the update scripting. These are built-in scripts for MDT and are very good. However, if I created a build next week, I could not make it the same as a build I created last week, because it would install all of the patches for today, not the ones for last week.
Now, 99% of the time this isn’t an issue; in fact it’s a benefit, because otherwise the end user has to have the patches deployed later on. However, large organisations need something better than this. These organisations NEED to get back to v1.0.1 when it’s required. The problem may be affecting 50,000 users and someone will get fired in 36 hours if the problem isn’t solved.
So, how do I create a build with all of the patches from when I last created it? Well, the simplest way is to seal the build on the day into a single WIM file and not apply any updates beyond that point until the build goes live. But if you have several build images (e.g. a developer build, 32-bit versions for accessibility tools, one for Project 2016 users, etc.), then you’ll have to create all of these builds on the same day at the same time to get the patching to match across build versions.
Therefore, the solution I use, and one I’ve put into quite a few companies at this point, is a script that detects the missing patches using Microsoft’s offline scanning file (called WSUSSCN2.CAB) and downloads them to a repository. This file is updated every time Microsoft release new patches and is publicly downloadable from here: http://go.microsoft.com/fwlink/?LinkID=74689
This file is quite large (well over 100MB last time I looked). However, it is a point-in-time definition of all the Windows patches. If in a year’s time I re-create this build using the file I downloaded today, then the build will include only the patches that were relevant at the time this definition file was released.
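To give a flavour of what the scripting in the next part builds on, here is a minimal VBScript sketch of an offline scan against WSUSSCN2.CAB using the Windows Update Agent API (the same mechanism Microsoft documents for offline scanning). The path to the .cab file is an assumption – point it at wherever you have archived your copy.

```vbscript
' Minimal offline patch scan via the Windows Update Agent API.
' Assumes the archived WSUSSCN2.CAB lives at C:\Build\Patches\ (hypothetical path).
Set UpdateSession        = CreateObject("Microsoft.Update.Session")
Set UpdateServiceManager = CreateObject("Microsoft.Update.ServiceManager")

' Register the point-in-time definition file as a scan source.
Set UpdateService = UpdateServiceManager.AddScanPackageService( _
    "Offline Sync Service", "C:\Build\Patches\wsusscn2.cab")

Set UpdateSearcher = UpdateSession.CreateUpdateSearcher()
UpdateSearcher.ServerSelection = 3   ' ssOthers: use the service registered above
UpdateSearcher.ServiceID       = UpdateService.ServiceID

' List every applicable patch that is not yet installed on this machine.
Set SearchResult = UpdateSearcher.Search("IsInstalled=0")
For i = 0 To SearchResult.Updates.Count - 1
    WScript.Echo SearchResult.Updates.Item(i).Title
Next
```

Because the results are driven entirely by the .cab file you supply, running this against the same archived file always reports the same patch set, which is exactly the reproducibility property we’re after.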
In the next part of this post, we’ll examine some practical scripting to use this idea.
Now, I’m hungry and thinking of some kind of lunch…