Ramblings From The Litter Tray of Life

Archive for June, 2008

Where’s my MSCONFIG gone?!

Posted by graycat on 24 June 2008

[dodgy sci-fi voiceover]

There exists an application.

So powerful that it keeps the load-up in line.

It has existed since the dawn of XP.

But now it has …… gone!

[/dodgy voiceover]

Ok, it’s nothing quite that amazing or worthy of its own show with a special effects budget, but it is very useful.

MSCONFIG is an application in Windows XP that is used to troubleshoot the start-up process of a machine. With it you can see what is starting up when, what applications are launching in the background, what services are set to start or not and a few other things.

The most common use I’ve seen for it is to simply speed up the boot process by cutting down what is launched in the background.

When you launch this from the command line (type msconfig and hit go) you’ll be presented with a number of tabs. These are:

  • General – gives you options on what mode to start Windows in (safe mode etc)
  • System.ini – is where old versions of Windows list system files to be loaded, such as drivers and fonts
  • Win.ini – is pretty much redundant nowadays but pertains to old versions of Windows
  • Boot.ini – provides some extra boot options
  • Services – controls which Windows services are enabled at start-up
  • Startup – deals with what applications are launched at start-up.

There’s also a “Tools” tab, but that’s only in Vista so I’m not going to cover it here. Oh, whilst msconfig is native to XP, you can copy it over to a 2000 Pro machine and it’ll run fine from there. Handy if you’ve got a mixed environment.

Now more often than not this little app is just there and saves a lot of time and effort. However, on the odd occasion simply typing msconfig in the run box will produce the dreaded “command not recognised” error message.

But worry not! It is still there, it’s just not been registered. To quickly find it you need to go to “c:\windows\pchealth\helpctr\binaries” and lo, you will find what you seek.

Personally, if a machine slides sufficiently that I need to use msconfig and it’s not there, then it instantly jumps up the queue for being rebuilt à la the ghosting process. However, if that’s not possible and you want to fix it so you can just type msconfig at the run box, you can correct the issue in the registry: the path is declared under HKLM\Software\Microsoft\Windows\CurrentVersion\App Paths\msconfig\
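If you’d rather script the fix than click through regedit, a .reg file along these lines should do it. This is a sketch only – the exact key name and binary path can vary, so check them against a working XP machine before importing:

```reg
Windows Registry Editor Version 5.00

; Re-register msconfig under App Paths so the run box can find it.
; The path below assumes the default XP location - verify it on your machine.
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\App Paths\MSCONFIG.EXE]
@="C:\\WINDOWS\\PCHealth\\HelpCtr\\Binaries\\MSConfig.exe"
```

Save it as fix-msconfig.reg and double-click it (or run regedit /s fix-msconfig.reg) on the affected machine.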

Posted in Applications, IT | Tagged: , , | Leave a Comment »

The Trouble With WSUS

Posted by graycat on 18 June 2008

Now I love WSUS 3.0 and think it’s a great service to add to even a small business. In fact, once I’d got a test version going, I couldn’t believe how good it was and quickly rolled it out across the company.

But there is a flaw. The one single issue I have with it ….. is that it’s a complete bugger to troubleshoot when it doesn’t work properly! This is especially annoying as it is usually so easy to install, configure and get running. It’s just that when it dies, it’s a nightmare.

Ok, well maybe that’s not entirely true. There is a pretty robust logging side to it so you can see what’s going on and hopefully where it’s failing. However, when you get beyond this then you’re in for an interesting time. And I don’t mean involving 4 rolls of cling film, two midget strippers and a waffle iron!

My most recent run-in with a wayward WSUS system was when I was called in to consult for a small business that was having some networking issues, and whose WSUS seemed to be playing up too. As I knew these systems pretty well already, I wasn’t overly concerned and set about my usual recon to see the lay of the land. Unfortunately I had an internet connection that made Amy Winehouse look stable and well balanced, so this took some time. After a while I tracked down a few DNS, DHCP and AD replication issues, if not their causes. The WSUS issue was a great one though – for all intents and purposes the whole damn thing had disappeared!

After much searching it was kind of located, and I set about trying to connect to it through the console as per usual, but no dice. After some more time searching I decided it was time to reinstall the application (but leave the database and files) and see if I could breathe some life into it.

My messiah-like skills failed me and the server did not rise like JC on Easter Day.

The situation was now that the application would install, but just as the configuration wizard was about to kick off it would die with a nice message saying “the console can’t connect to the server” and basically asking me to make sure it was even there. Not all that much use, but at least I could say the application was installed.

Google highlighted that this was not an uncommon situation and that there are numerous causes, with solutions to match. After running through the first few and discarding those that didn’t apply to this install, I was left pointing the finger of blame towards .NET runtime 2 causing some issues with file permissions. A quick reinstall of .NET 2 and a reset of the local file permissions, and I was feeling good.

Until the console would still not connect!

A palm to the forehead, and I realised I’d have to restart the services at the very least. So a few clickety-clicks later the services were reset and the console was firing up….. all the way into a successful connection to the WSUS server.

Simple in hindsight, but all the possibilities meant that everything had to be crawled through one step at a time. In future, just remember that anything to do with websites (that’s how the updates are served, I believe) is going to involve .NET of some kind, plus the networking service if it’s permissions-based too.
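For reference, restarting the relevant services from a command prompt saves hunting through the services console. The service name below is what a standard WSUS 3.0 install on Server 2003 registers as I recall it – run net start first to check what yours is actually called:

```cmd
rem Restart IIS, which serves the update content and the web services
iisreset

rem Restart the WSUS service itself ("Update Services" in the services console)
net stop WsusService
net start WsusService
```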

Ok, not sure if any of that made sense but it’s heading towards 1am and I can definitely hear my bed calling me. The rest of it can wait until the morning.

Posted in IT | Tagged: , , | Leave a Comment »

Going Virtual!

Posted by graycat on 16 June 2008

Well it seems like it’s the hot topic at the moment in the server arena and I can finally say that we are dipping our toes into the virtual server paddling pool and I’ve brought my costume too!

My reasoning for a long time has been that we are very, very server heavy, to the tune of one server for every 3-5 people. Now that is heavy, even if he is my brother. This is partly due to the geographical spread of our sites, with each site requiring certain local resources and offsite DR capabilities. However, I think the real reason we’ve grown to so many servers is the fear of putting eggs into a single basket – no matter how nicely it’s woven, something is going to get broken eventually. Over time this has meant that we’ve got servers that do just this one job, or these few jobs, or this and this but nothing else, mainly because the servers were unstable and when, not if, they went down, you lost all the roles at once. Not good, as they say. This sad state of affairs was, in my opinion, exacerbated by buying “custom” or white box servers and not maintaining them sufficiently for them to function for a long and happy life. More often than not the failures would be hardware related, so the theory went “if you can’t trust that bit of kit, best we get another one and put this role on it over there”.

Not a bad theory and it did work.

However, we’re now in a situation where the sheer importance of the IT infrastructure has been acknowledged throughout the upper echelons of the company, and we are allowed to implement “care and support” of the servers as it should be. Things that seem no-brainers now – separate comms rooms from the general office space, run and standby cooling (actually, cooling of any type was a major fight in some places), not to mention redundancy in power and backups. Because of this shift in attitude it is now very rare that we have any hardware failures.
When this is combined with an agreed rolling replacement of servers and an eye to consolidating to fewer physical machines, virtualising servers becomes very attractive.

From a support point of view, I’ve found that the one thing that causes the most outages is the OS itself. The hardware is now very high quality and redundant within each server, so if you do fail a drive, the server just keeps on rocking. However, if you kill the OS …. you’re basically buggered. On this basis alone, why buy over-spec’d servers that will at most be 20% used?

So after floating the idea a few times over the last few months (usually to be confronted with a big “Not going to happen. We need all these servers”), I’ve got approval to run a test case and consolidate three existing servers onto one VMWare rig as a proof of concept. If all goes well, we’ll look to expand the rig with more RAM and hard drives and move more servers over as things are needed.

Our initial build will be an HP ML380 with two quad-core 3GHz CPUs and 12GB of RAM, with ~1.5TB of external storage on a SCSI array. This can easily be expanded to over 20GB of RAM and another 1.5TB external array, or even directly on to a SAN if required ….. but that’s a long way off yet.

The servers we’re looking to consolidate have a variety of roles, but the main ones are an existing “IT” server, a print and application server, and a SharePoint test machine. All of which are well over 3 years old, and most likely beyond 5 years too, if memory serves me correctly. Our initial plan is to do a “physical to virtual” cloning of these servers and see how things go. However, once the dust has settled I’m planning on separating the application server into separate virtual servers for each application.

Why, you might ask? Won’t that just make more servers for you? Well, yes and no. The issue we have is not so much the number of servers but the number of physical boxes that all need looking after. By making them virtual we will be able to separate out those tricky applications that, even after almost a decade of working with them, I still consider to be just so much black magic and voodoo. If we have a completely clean OS install with just the application on top, then it can be customised to suit and completely sandboxed from other interference. Trust me – for some of these applications, which still won’t work even after you apply an update and then remove it as per the guidelines, this is a great bonus!

So it looks like I’m going to be getting a crash course on VMWare Enterprise in the very near future. Not that I’ve got nothing else to do (anyone need a job? Got a helper spot or two going!) but I’m actually really looking forward to this …. and it looks like I’ve been given it as my baby too so fingers crossed! I’ll write about anything interesting I come across. Probably.

Posted in IT | Tagged: , , , | Leave a Comment »

Migrating DHCP Servers

Posted by graycat on 15 June 2008

We’re consolidating our servers at present, so are shuffling a few roles about to their optimum places. One of these is moving the networking and AD roles in one office onto one server rather than spread over a few. Migrating to a brand new DHCP server is quite often as simple as copying the database from one to the other and restarting the service. However, if you want to move a live one with reservations, leases etc, it becomes a bit trickier.

Microsoft very kindly provide information and even a “How To” for migrating between different versions (here). After reading it thoroughly I found it really useful, but unfortunately it doesn’t cover my situation adequately. If you’re migrating from an NT4 / 2000 / 2003 member server to a 2003 member server then it is absolutely spot on. If you’re doing something else, it becomes less so.

Anyway, before we go into the situation I tackled tonight (it is late Saturday night / early Sunday morning after all!), here’s a brief outline of the MS page for those who are migrating between two 2003 member servers:

  1. Compact the source database using the jetpack command
  2. Export the database using netsh dhcp server export c:\dhcp.txt all
  3. Import the database on the new server using netsh dhcp server import c:\dhcp.txt all
  4. Authorise the new server if you haven’t already, and away you go
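Strung together, the whole sequence looks something like this. Treat it as a sketch – the file paths are just examples, jetpack wants the DHCP service stopped while it compacts, and the import runs on the destination server:

```cmd
rem -- On the source server --
rem jetpack needs the DHCP service stopped while it compacts the database
net stop dhcpserver
cd /d %windir%\system32\dhcp
jetpack dhcp.mdb temp.mdb
net start dhcpserver
netsh dhcp server export c:\dhcp.txt all

rem -- On the destination server, after copying c:\dhcp.txt over --
netsh dhcp server import c:\dhcp.txt all
```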

As I said, the Microsoft step by step guide is really good for member server to member server migration. Unfortunately for me, I had to migrate from a member server to a domain controller and this proves a little trickier.

My first attempt resulted in an error message stating something like:

COMMAND FAILED: Unable to access audit file path as specified

Understandably this was a bit off-putting, but nothing I couldn’t deal with.
Initially I put it down to a file path issue, as the source server has the OS installed on H:\ (no idea why, it just is, ok?) and the new server has only the C:\ partition. My first attempt was to rejig the source setup to point the backup and database paths at the C:\ partition and try again. No change though, so I tried a few more things, like having the services started or stopped at the various export / import phases, but I still came back to the same message.

After reading the Microsoft document really in-depth, I spotted an almost throwaway line regarding importing onto a DC. The gist is that you need to be explicitly a member of the local administrators group, and as there are no local user accounts on a DC, this could prove tricky.
The Microsoft article mentions, in about half a line, that you need to restart the server into Directory Services Restore Mode and then use that local administrator account to import the database. This is a great idea ….. if you’re onsite and have physical access to the server. If, on the other hand, you are like me and are sat on the sofa watching a movie, having a glass of wine and working over a VPN, then this really isn’t going to work all that well for you. Well, unless you happen to have either the server in your lounge or are sleeping at work. Again.

Worry not though! I found a trick that worked so smoothly I had to give myself a high-five. Sad, I know but it was the thing to do at the time.

My thinking at the time was: if I can’t log on in Directory Services Restore Mode, what’s the highest local account I could access? Well, as the domain admin account I was using was probably second only to The Administrator account for admin rights, I was a bit stuck. Until it hit me – I’m using a domain account but I need a local account on a machine that doesn’t have any ….. but all machines have a system account! So using the age-old trick to kick off a command line box as the local system account (details upon request if you don’t already know it), I ran through the import phase again…. and it worked a dream.
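For the curious, the classic way to get a SYSTEM command prompt on 2000 / XP / 2003 – assuming that’s the trick meant here – abuses the fact that the Task Scheduler runs jobs as LocalSystem:

```cmd
rem Pick a time a minute or so in the future (24-hour clock) and schedule
rem an interactive cmd.exe; the scheduler launches it as LocalSystem.
at 23:59 /interactive cmd.exe

rem In the new window that pops up at that time, the shell is running as
rem NT AUTHORITY\SYSTEM, and the netsh dhcp import can be run from there.
```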

So in the end it turned out to be a permissions issue, and it could be resolved remotely by using the system account to do the final import. All that’s left for me to do tonight is clean the two servers up and deactivate the old scope before unauthorising the old server. Tomorrow I’m going to check in on the new server a few times to make sure it’s leasing correctly and all the settings have stuck after the transfer. To be honest, I’m 90% certain it’s going to work, but there’s no point in risking it with a whole live network, is there?

Right, time for some more wine and a chill-out I think. Enjoy your weekend.

Posted in IT | Tagged: , , | 2 Comments »

Deploying Visio 2007

Posted by graycat on 3 June 2008

……. or how to force Visio 2007 to deploy kind of like 2003.

Now if you’re new to Office 2007, you may not be aware that the deployment side of things has completely changed from the 2003 release ……. and it’s not for the better, in my opinion.

With the previous version you would make an admin install point on a share, then create the GPO and apply an msp to customise the settings and / or the parts you want deploying. Using this method you could have one install point and use msps to deploy different versions. For example, you can make an admin install point using Office 2003 Professional and then use msps to cut the installed applications down to just those that are needed for each of the other versions, such as Basic, Small Business, Standard etc.
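For reference, the 2003-era admin install point was built with Windows Installer’s administrative install switch. The package name below is the Office 2003 Professional one – yours may differ depending on edition:

```cmd
rem Extract the full Office 2003 product to a share, ready for GPO deployment
setup.exe /a PRO11.MSI
```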

This was the approved and recommended method for deploying the Office suite of applications, but no more. Microsoft are now pushing people towards using their System Management Server (SMS) – or System Center Configuration Manager 2007, as it is now called – to roll out software, and as such have seriously crippled the GPO route by removing the ability to use msps to customise your install.

That’s the bad side of things but if you think about it, how much do you actually customise your Office installs?

One improvement with 2007 is that the admin install point is created by just copying the contents of the CD onto a share. Unfortunately, when you combine this with the lack of support for the msp transforms, it means you’re going to have to have a copy of the CD for each version you’re going to deploy. Annoying, but not overwhelmingly difficult.

The only way to customise the install is via the config.xml file located in the same directory as the msi files. About the only useful things you can put in there are the install location, software key, licensed user and company. All useful things, but that’s as far as it goes really.
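A minimal config.xml covering those four settings looks something like this. The Product attribute and the key are placeholders – check the element names against the config.xml that ships on your own CD, as they vary by edition:

```xml
<Configuration Product="Visio">
  <!-- Run silently with no completion notice -->
  <Display Level="none" CompletionNotice="no" />
  <!-- Placeholders: substitute your own volume licence key and details -->
  <PIDKEY Value="XXXXXXXXXXXXXXXXXXXXXXXXX" />
  <USERNAME Value="Licensed User" />
  <COMPANYNAME Value="Example Company Ltd" />
  <INSTALLLOCATION Value="%ProgramFiles%\Microsoft Office" />
</Configuration>
```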

My basic process for rolling out the 2007 office applications is this:

  1. Copy the CD up on to the deployment share (for us this is a DFS site that is replicated round the company)
  2. Configure the config.xml file with the required info
  3. Create a GPO with the msi
  4. Apply as required and filter as needed

Pretty simple, huh? I’ll let you know how it goes with our roll out 😉

Posted in Applications, IT | Tagged: , , , | 1 Comment »