Running on Visual Studio 2012 Pre-RTM bits
Like many of you already know, tomorrow Visual Studio 2012 RTM will be available on MSDN for everyone to download. Some of you might already be planning how to upgrade to this version and are getting things in order. For many years now, Info Support has been a regular member of the Technology Adoption Program (TAP) for the various versions of Visual Studio. Through these TAPs we gain a lot of insight and knowledge about what is coming around the corner, and we have an open discussion with Microsoft about what improvements can be made to make the product even better. Since one only really experiences a product when using it, part of the TAP is committing to going live with the pre-RTM bits.
Going live with pre-RTM bits of any product always has its risks. So what kind of precautions did we take for our upgrades?
- Have the support of your supplier. Going live on pre-RTM bits without any support from the supplier is never a good idea. While the Beta and RC of Visual Studio 2012 came with a go-live license, that doesn't imply that guaranteed 24/7 support is already available. Even if support is only offered as a courtesy, you still need a way in to get issues fixed when they are critical to your business. Our first countermeasure is therefore, of course, being part of the TAP program. Thanks to our TAP champs throughout the years, we could always ask them for help whenever we ran into serious issues.
- Limit the impact of a failure, but don't test too small. In our case we went live with the pre-RTM bits within the software development teams of our R&D department. That is a reasonably sized development environment to get useful insight from, but not so large that we have major issues when the environment goes down for a day. If your testing group is too small, you might not hit the edge cases you will see in larger environments. For example, we found a small issue with our TFS when people checked in at 3:00 AM; that's not something you will find with a 10-person team only testing during the day.
- Perform upgrade tests. Make a clone and/or get a backup of your environment and test the upgrade on that. We used a migration upgrade for our TFS server and a rollout of the (base) images for the build servers and workstations. The reason we chose a migration upgrade instead of creating an exact clone of the TFS server is that it tests one's backup/restore plan as well. After the test upgrade we performed a few checks on the TFS server to see if everything was sound: a get-latest of the complete server compared against the migrated one (see the first sketch after this list), rebuilding the cube and comparing the totals, and running a complete build of our product in the cloned environment.
- Keep an eye on patches and make use of your automated builds to save time. Since pre-RTM bits usually aren't part of the testing matrix for general fixes and security patches, some patches might have unexpected results in combination with pre-RTM software. Since we don't have the manpower and time to test all the (Windows) updates and their effect on the build artifacts before installing them, we make use of our automated builds to detect problems. As you might know, automated builds are a quality-improvement tool which compile, integrate and test software in an automated fashion to catch defects early. Given we are in the Netherlands, people are usually not at work on Saturday and Sunday, so the code churn is much lower than during the week. This means that on Sunday usually the same code is compiled and tested as on Saturday. Most people therefore think it's wise not to trigger the Sunday build, since it would be the same code anyway and produce the same build results. When, however, you install the patches between the Saturday and Sunday builds, any difference in the Sunday build can only reflect the changes made by a patch, showing its impact crystal clear (see the second sketch after this list). This saves you a lot of time hunting down what impact a patch had. One example of this is the dreaded KB982168, which broke the WCF stack on Windows 2003. Using our automated builds we knew that patch was the culprit (and saved some production servers in the process, by the way).
- Hope for the best, prepare for the worst. For our R&D department we have a minimum of five days of backups available at a moment's notice, and older backups on tape. We use five days because if someone makes a mistake on Friday, it would be noticed on Monday. However, since that Monday is spent figuring out what is going on, the decision to restore the backup usually falls on Tuesday. That means that on Tuesday I need the backup of Thursday (from before the mistake), hence five days back (the last sketch after this list illustrates the arithmetic).
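To illustrate the upgrade-test checks mentioned above: below is a minimal sketch (in Python, purely for illustration; the folder paths are made up) that compares a get-latest from the original TFS server with one from the migrated server by hashing every file in both trees.

```python
# Compare two local folders that each hold a full get-latest of the source
# tree: one from the original TFS server, one from the migrated server.
# The paths are hypothetical placeholders.
import hashlib
from pathlib import Path

def hash_tree(root):
    """Map each file (relative to root) to a SHA-1 hash of its contents."""
    return {
        str(path.relative_to(root)): hashlib.sha1(path.read_bytes()).hexdigest()
        for path in root.rglob("*") if path.is_file()
    }

original = hash_tree(Path(r"C:\verify\tfs-original"))
migrated = hash_tree(Path(r"C:\verify\tfs-migrated"))

missing = original.keys() - migrated.keys()   # files lost in the migration
extra = migrated.keys() - original.keys()     # files that appeared unexpectedly
changed = {f for f in original.keys() & migrated.keys() if original[f] != migrated[f]}

for label, files in (("missing", missing), ("extra", extra), ("changed", changed)):
    for name in sorted(files):
        print(label, name)
print("OK" if not (missing or extra or changed) else "differences found")
```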
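For the patch detection with automated builds, the idea boils down to comparing the Saturday results with the Sunday results. Here is a minimal sketch, assuming each build writes the names of its failing tests to a plain text file, one per line (the file names and format are hypothetical):

```python
# Compare the failing tests of the Saturday and Sunday builds. With no code
# churn over the weekend, any failure that is new on Sunday is almost
# certainly caused by the patches installed in between.
def failing_tests(path):
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

saturday = failing_tests("build-saturday-failures.txt")
sunday = failing_tests("build-sunday-failures.txt")

new_failures = sunday - saturday
if new_failures:
    print("Likely patch impact:")
    for test in sorted(new_failures):
        print("  " + test)
else:
    print("No new failures - the patches look harmless for this code base.")
```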
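And the five-day backup arithmetic, spelled out with an example week (the dates are just an illustration):

```python
# A mistake made on Friday is noticed on Monday; the decision to restore is
# taken on Tuesday, and the backup we need is Thursday's (the last good day).
from datetime import date, timedelta

mistake = date(2012, 8, 10)                     # Friday: the mistake is made
decision = date(2012, 8, 14)                    # Tuesday: decision to restore
last_good_backup = mistake - timedelta(days=1)  # Thursday evening's backup

days_back = (decision - last_good_backup).days
print("Backups needed on hand:", days_back, "days")  # prints 5
```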
Your migration will very likely be to the RTM version, to which most of these risks don't apply. These measures might therefore be overkill for you, but some elements might be beneficial for your upgrade as well.