24 May

Improve your Windows Deployment Strategy

We recently hosted a webcast discussing best practices and changes IT departments can implement to simplify and streamline deployment projects. As technology has evolved, so has Windows deployment. Whether you’re involved with new employee onboarding, PC break-fix, or Windows migration projects, your organization should examine its existing procedures to be sure the current deployment strategy makes the best use of IT resources.

Here are a few best practices to consider:

Reference computers

Organizations often have many hardware and software configurations and create a different image for each one. This can lead to a large image library to maintain, as well as additional expense, overhead, and confusion. Here are a few tips to avoid this:

Create a plan and formally document your imaging strategy, then document the contents of each image. This gives you and your team a reference for what is needed when creating new images. You might find that you don’t need as many images as you thought, or that a minor adjustment in your approach can greatly simplify ongoing image management.

Keep it clean. Don’t add unnecessary drivers or files to the reference computer from which you will create your image. Start with a pristine baseline and make every action on that reference computer deliberate. For example, don’t visit any websites or launch any applications unless you want that baked into the image.

Use a virtual machine instead of a physical computer – it’s handy and portable, simple to update, and easy to experiment with and roll back.

Image Strategy and Driver Management

A common image management approach is to create and maintain a separate image for each device model, but this approach is incredibly labor intensive. It creates administrative overhead you don’t need and an ongoing maintenance challenge, plus unnecessary bandwidth and storage consumption. It is also difficult to update all of those images and keep track of them over time.

Organizations sometimes take an alternative approach, the “blob method,” either through a third-party tool or on their own: they take every driver they can imagine needing and include it in the image. This approach is a problem for two reasons. First, you are adding a significant number of unnecessary files, which consume storage both on the network and on each deployed device, and waste bandwidth if you deploy over the network. Second, a host of unnecessary drivers will be deployed to each computer. The machine will carry dozens, if not hundreds, of drivers it doesn’t need, yet it still may not receive the drivers it does need, leaving devices with buttons that don’t work or, worse, reliability problems.

This is why the best practices we espouse include the following:

  • Architect an image management plan to utilize and maintain the fewest number of images possible
  • Separate drivers from the image. Use intelligent tools to add and modify drivers, preferably in a way that is completely independent of the system image.
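The driver-separation idea can be sketched in a few lines. This is a minimal illustration, not any particular product’s implementation: it assumes a hypothetical driver library organized into one folder per hardware model, plus a generic fallback, so the image itself never carries per-model drivers. The folder names and layout are illustrative assumptions.

```python
from pathlib import Path

# Hypothetical driver library kept outside the image:
# drivers/<model>/ holds only that model's drivers.
DRIVER_LIBRARY = Path("drivers")

def select_driver_folder(model: str, library: Path = DRIVER_LIBRARY) -> Path:
    """Return the driver folder matching a device's model string.

    Falls back to a generic folder when no exact match exists, so a
    single hardware-independent image can serve every model.
    """
    candidate = library / model
    if candidate.is_dir():
        return candidate
    return library / "generic"
```

At deployment time, a task would query the device’s model (for example, from firmware/SMBIOS data) and inject only the matching folder’s drivers into the freshly imaged system.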

Image Updates

Imaging can play an ideal role in the break-fix process, dramatically accelerating return-to-productivity (RTP) time and keeping IT worker frustration to a minimum. Using imaging as part of the IT break-fix process is new to many people. Teams often don’t think to use imaging for break-fix, in part because their imaging solutions aren’t reliable, and re-deployment can cause further problems rather than accelerating RTP time. The other challenge is that images typically are not updated very frequently, because of the time required to perform updates and the continual growth in device diversity within an organization. So re-deployment, even when successful, is met with a long cycle of updates and software deployment.

An effective, reliable deployment solution and good habits can make re-deployment a break-fix dream come true. But you have to see to the following:

  • Make a schedule – You may have a de facto schedule, dictated by when you lose patience with how old the image is and the number of updates required post-deployment; but you should create a schedule of quick, regular updates to the OS and applications. If you’ve gotten the image architecture and strategy right, this will be easy: you will have a minimal set of unique images that can run properly on any device in your infrastructure. If it takes so long to make image updates that you only do it once a year (or less), consider a different strategy or find a different set of tools that will make it easier.
  • Find a non-destructive process – Some strategies and processes are destructive, meaning that the methods used eventually render resources, like the reference machine, useless and force you to start over completely. For example, most Windows deployment processes rely on the Microsoft System Preparation tool (Sysprep) to prepare a configured computer for imaging, which limits the number of times you can use that machine as an image source.
  • Make it efficient – Look for ways to minimize impact on the network and infrastructure. Minimize bandwidth and storage by using technologies that support deduplication or delta images.
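To make the efficiency point concrete, here is a minimal sketch of the idea behind deduplication: split data into fixed-size chunks, hash each chunk, and store each unique chunk only once. Real dedup and delta-image technologies are far more sophisticated (variable-size chunking, compression, metadata); this toy version only illustrates why two similar images cost much less to store than two full copies.

```python
import hashlib

def dedupe_chunks(images: list[bytes], chunk_size: int = 4096) -> dict[str, bytes]:
    """Store each unique chunk once across a set of images.

    Images that differ only slightly share most of their chunks, so the
    store grows far more slowly than the sum of the image sizes.
    """
    store: dict[str, bytes] = {}
    for image in images:
        for i in range(0, len(image), chunk_size):
            chunk = image[i:i + chunk_size]
            store[hashlib.sha256(chunk).hexdigest()] = chunk
    return store
```

With two 8 KB “images” that share their first 4 KB, the store holds only two unique chunks (8 KB) instead of four (16 KB) – the same effect that makes monthly image updates cheap to keep when your tooling supports dedup or deltas.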

Accelerate RTP Time

Return to productivity, or “RTP,” time is a key consideration for IT endpoint tasks like deployment when it is used for desktop break-fix support. The time required to deploy an image is largely based on image size. It simply takes more time to move more data over the network, or from media to a hard disk. But you also add to RTP time by having a lengthy post-deployment task sequence to install applications and updates. RTP time is lost work time, and although redeployment may shortcut hours of troubleshooting, you and your team look better if that redeployment is quick. A few basic tactics can decrease return-to-productivity times for workers:

  • Include applications in the image – This approach is contrary to convention. But the convention only exists because of 1) the typically destructive processes driving you to make updates only occasionally to avoid starting over, and 2) per-device image requirements making image updates a very time-consuming process. Following the other best practices mentioned earlier in this article – using a non-destructive process, using a VM as the reference computer, and taking a smart approach to hardware independence – can obviate the long-held philosophy of image minimalism. Adding applications to the baseline image may add some time to the imaging process, but the system will be available with minimal post-imaging tasks and people can get right back to work.
  • Keep the image updated – An up-to-date image not only decreases the number of updates required post-imaging, it can also be more secure. And if you follow our guidance on image updates, they can be quick and easy for you to perform.
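The RTP math above is easy to rough out. The following back-of-the-envelope estimate is a sketch under simple assumptions – it models only transfer time plus post-deployment tasks, and ignores disk speed, protocol overhead, and reboots; all the numbers in the usage note are illustrative, not benchmarks.

```python
def estimate_rtp_minutes(image_gb: float, link_mbps: float,
                         post_task_minutes: float) -> float:
    """Rough return-to-productivity estimate: image transfer time plus
    post-deployment task time. Ignores disk and protocol overhead."""
    # GB -> gigabits -> megabits, divided by link speed in Mbps = seconds
    transfer_seconds = image_gb * 8 * 1024 / link_mbps
    return transfer_seconds / 60 + post_task_minutes
```

With these illustrative numbers, a larger image that already includes applications (40 GB, 5 minutes of post-imaging tasks) finishes in roughly 10.5 minutes on a gigabit link, while a lean 15 GB image followed by an hour of application installs takes over 62 – which is the argument for baking applications into the image.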

About the Author

Aaron Suzuki
Aaron has spent his entire career as an IT consultant. Rising at the age of 26 to the role of President of a regional Internet application development firm, Aaron led the company successfully through the economic downturn of the early 2000s. From there, he moved to a broader technology business opportunity, taking on the revival of an ailing Seattle-based IT firm, where he acted as the Director of Business Development. Aaron co-founded Prowess in 2003 and co-founded SmartDeploy in 2009. As the CEO, he helps create and instill process in production and management. He is responsible for the ongoing operations of the business, including day-to-day management. Aaron drives the strategic direction of the company, and he is the primary liaison to the Advisory Board.
