Fitting AMD X2 4600+ Dual Core

Since Intel have released their latest Dual Core range, AMD have slashed many of their prices. So I thought this was a good time to replace my year old (!) AMD 64 3200+ with a shiny new AMD 64 X2 4600+.
 
The processor arrives in a neat little box with a fairly hefty heatsink: fins, copper pipe and pre-applied TIM (Thermal Interface Material). Reading the instructions it all looked pretty simple, so I set about uninstalling the existing processor.
 
Sweat #1. The retaining bracket of the existing heatsink is held in place with a large plastic arm, and unlocking this arm proved to be the first thing to make me sweat. Pulling the arm up is supposed to free the bracket; however, the arm started to bend rather than turn. After inspecting the bracket I managed to free its bottom end, which allowed the arm to swing free easily.
 
Sweat #2. The existing heatsink came away but became stuck on the second bracket location point. I'm not sure why; my view of this point was hindered by the PSU, but with a bit of fiddling it eventually came free.
 
Sweat #3. The last processor I removed by hand was a 486SX, so this was all pretty much new to me, although it seemed simple enough. Pulling up the processor retaining arm was easy. I tried to pull the processor out but it wouldn't budge. I wasn't convinced the retaining arm was fully up, so I pulled it a bit harder. Suddenly the processor bay moved down a couple of mm <gulp>. OK, it looks like it should do that, since the processor was now easy to lift out. Hurray, stage 1 complete…uninstall successful.
 
Sweat #4. After carefully removing the new processor from its packaging, whilst hanging off the grounded radiator, I carefully positioned the CPU over the socket. "Hmm, how am I going to line this up correctly with the…plop". The CPU went straight into the socket with zero fuss. Retaining arm down, job done.
 
Sweat of sweats #5. So far so good; now came the biggest battle…fitting the new heatsink. I carefully positioned the heatsink on the processor, conscious of getting the square of TIM to line up with the processor. It fitted on nice and easy. Now for retaining clip #1, "click". Retaining clip #2, "come on you s**", "come on…", motherboard creaking, "come on!". No joy. I took the heatsink back off and tried again at least another three times, each time looking despairingly at the smudged TIM. For some reason I just couldn't get both clips on. Eventually I resorted to brute force (and a fair dollop of ignorance) and finally "click"…hurray!
 
I've been reading a few stories about people having to re-install XP after fitting a dual core, so with some trepidation I switched the machine back on. The BIOS startup showed the correct processor and continued to load XP (still holding breath). XP started fine, a 'new hardware' dialog popped up, installed a new driver, and everything was fine…or was it?
 
First things first, I took a look at the CPU temperature probe. "32C", ok. "35", huh. "40", hmm. "47", gulp. "50", eek. "47, 45, 47". OK, stabilised in the high 40s; even for a dual core that seems a tad hot for doing very little. So I ran a performance benchmark and it seemed to be about 12% quicker – ho hum. The really odd thing was that the Cool'n'Quiet utility was showing the processor as constantly maxed out. After a little forum searching I found that I was running the Microsoft rather than the AMD drivers.
 
Sweat #6. Tried to install the latest drivers: "you must uninstall previous version". OK, uninstalled the previous version. Installed the new version: "Fault, could not find necessary file". Oh no, so I've uninstalled some driver I didn't install in the first place and the new ones won't install. Rebooted, and everything seemed fine. Tried to install the new drivers again and it worked. Rebooted and Cool'n'Quiet (CnQ) started working; the usual 4/5 second startup pause in Windows was gone too. So it now seems like XP understands the dual core better, and the core temp was now stable at 32C (although 47C under server load) – better. However, re-running the performance test with CnQ on, the machine was now slower than it was with the 3200! I'm a little puzzled by that; perhaps the performance tests don't enjoy having CnQ on.
[Edit: remember to switch the Power Options scheme to "Minimal Power Management" to get CnQ to work]
 
So overall upgrading from a 3200 to a 4600 was OK; the CPU temp has risen under load but the rest of the temps have remained about the same. The performance doesn't look good in the benchmarking tool 'PerformanceTest', but the machine does feel quicker to load the initial drivers and the like. Plus there isn't the usual start-menu lag while another application is opening, so it feels much more like a true multitasking environment rather than a time-sliced one. One of the main reasons for going dual core was to use Microsoft Virtual Server. It does seem to work better with dual core: still not as good as having a proper server, but it doesn't seem to "drop out" quite as much as it did before. For purely scientific reasons I now need to load up a few FPS games, just to see if there is any performance difference, you understand.
 
 
 
 

Western Digital MyBook Premium

I decided that I needed some "offsite" storage so I bought an external drive from Western Digital. There were so many choices, but I finally settled on the WD MyBook range because:
  1. OK price per GB
  2. Uses an external (or rather non-bus) power supply, so it won't suck the juice from a laptop or (more importantly) won't require a USB port to itself
  3. Understands when the host machine is on and off and acts accordingly
  4. Quiet to run
  5. Doesn't look ugly

I plumped for the Premium rather than the Express because of its "Capacity Gauge" – a little coloured ring that shows how full (or not) it is. They also do a Pro, but that seems more about connectivity.

So how is it? Well, first impressions are pretty good. I plugged it into both USB and FireWire without any problem, which is great 'cause I've got lots of USB devices and no FireWire, so it won't take another device's place on the hub. It's not the fastest drive in the world, and the backup software was frankly odd to use. It seems to run in one of two modes: a) all documents/pictures/music etc., where it scans all the folders on the drives you tell it to; or b) you tell it which folders to back up. I'd really want a combination of the two, but there you go. Plus it took about 5-6 hours to back up the documents I requested, yet it only took about 30 minutes to copy the files over manually. So I've gone back to simply copying the files I want to the drive. OK, I won't be able to do incremental backups, but I've always been wary of those.
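Since I've fallen back on plain copying, a one-line batch file does the job. The drive letter and folder names here are examples rather than my actual layout:

```bat
@echo off
rem Copy My Documents to the MyBook (assumed here to be drive F:)
rem /E  copies subfolders, including empty ones
rem /D  only copies files newer than the copy already on the destination
rem /I  assumes the destination is a folder if it doesn't exist yet
rem /Y  suppresses the overwrite prompts
xcopy "C:\Documents and Settings\%USERNAME%\My Documents" "F:\Backup\My Documents" /E /D /I /Y
```

The /D switch gives a poor man's incremental copy without any backup software involved.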

Is it noisy? No, actually it's pretty quiet, and it will shut itself down when not in use, so I'm very pleased with that. It does vibrate when sitting on my desk, but putting a magazine under it was enough to stop that, so that says more about the quality of my desk than the drive! It also stays pretty cool, so environmentally it's very good.

So it's all rosy? Not quite: the reason for me upgrading to a Premium doesn't seem to work – the capacity ring. I've gone through the KB and uninstalled/reinstalled, but nothing has convinced it to start working. I've posted to WD, so I'll see what their support is like.

 [Edit]

Over a week has passed and I’ve not received a single response from WD. So I’ve sent yet another "Question" to them. So far I’m not very impressed with their support team. I’ve also connected the MyBook to another PC and still no joy viewing the capacity.

[Edit]

Guess what, still no response. To be honest I’m pretty disgusted with them. Well I’ll give them another few days but so far I can’t honestly recommend buying any WD disk if this is the level of service you can expect.

Remote Desktop, Restarting in Console Mode

Remote Desktop is a fantastic way of using a remote Windows machine. The biggest problem with it is using Visual Studio on a Windows 2003 machine: for some reason, probably something to do with clashes with the console session, Visual Studio runs very slowly. The workaround is to use the /console flag when starting Remote Desktop; this behaves like XP and takes over the machine's desktop console session rather than creating a new "background" session. However, the problem here is that it then seems almost impossible to ask the machine to restart or shut down. The only way I've found of doing this is to issue shutdown -i from another machine (which could be your Remote Desktop client) and make sure you don't have "warn users" checked; that seems to do the trick.
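For reference, the client-side commands look like this (the machine name is a placeholder, and on newer RDP clients /console was renamed /admin):

```bat
rem Connect to the console session instead of a new background session
mstsc /v:myserver /console

rem shutdown -i pops up the interactive dialog described above; the
rem non-interactive equivalent restarts the remote box directly:
rem   -r restart, -m target machine, -t 0 no delay, -f force apps to close
shutdown -r -m \\myserver -t 0 -f
```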
 
[Edit] To use console mode when using a Mac as the Remote Desktop client, you must hold the Apple key down whilst pressing Connect.

URI or Windows file format?

I've been writing a component that streams a file to the user based upon a supplied path. The path can either be a Uniform Resource Identifier (URI) or a standard Windows path, e.g. C:\My Documents\MyDoc.doc
 
The problem is that the majority of the streaming-enabled .net components accept either a URI or a file path, but not both. Therefore my code has to work out whether the passed-in string contains a URI or a file path. My first port of call was Uri.IsWellFormedUriString, which returns true if the string contains a URI. So if it's a valid URI then I use the URI streaming components, otherwise I use the FileStream components…however, what if they've supplied a URI but have simply messed up the encoding? The check would fail and my code would then hand a dodgy URI to the file streaming code. I thought I was going to have to do some exception catching (yuk), but to my surprise File.Exists doesn't seem to care about the format of the file path: either it can see the file or it can't, regardless of what rubbish you enter. So I can happily assume a file path for any invalid URI, and providing I check that the file exists I don't have to do any nasty exception handling.
 
I also encountered some slightly quirky behaviour in Uri.TryCreate, which creates a URI or fails without raising an exception. I started to use this but discovered that it always seemed to successfully create a URI (using the file:// scheme) no matter what invalid text I threw at it. So be careful when using that method.
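A sketch of the detection logic described above; the class and method names are mine for illustration, not a standard API:

```csharp
using System;
using System.IO;

static class PathHelper
{
    // Anything that isn't a well-formed absolute URI is treated as a
    // Windows file path. Uri.TryCreate is deliberately avoided here:
    // it happily builds a file:// URI out of almost any string.
    public static bool IsUri(string path)
    {
        return Uri.IsWellFormedUriString(path, UriKind.Absolute);
    }

    // Returns a stream for either a URI or a plain file path.
    public static Stream Open(string path)
    {
        if (IsUri(path))
            return OpenFromUri(new Uri(path)); // hand off to the URI streaming code

        // File.Exists shrugs off malformed paths (including botched URIs),
        // so no exception handling is needed for rubbish input.
        if (!File.Exists(path))
            throw new FileNotFoundException("Not a URI and not an existing file", path);

        return File.OpenRead(path);
    }

    static Stream OpenFromUri(Uri uri)
    {
        // Stand-in for the URI streaming components
        return System.Net.WebRequest.Create(uri).GetResponse().GetResponseStream();
    }
}
```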
 
 

Macbook arrived

Finally moved into the dizzy world of OSX from OS8 with a bottom of the range Macbook. So what’s it like?
 
Macbook Laptop
As a laptop the Macbook is very nice: small and compact with a nice screen. The keyboard has a very nice (and quiet) action. The Magna-plug thingy is odd; it almost tears your arm off (ok, it pulls a bit) when the plug gets near the socket. The screen is very nice, a good 13.3" widescreen size which, although reflective, produces rich colours. I'm also very impressed with the two-finger scrolling on the trackpad, great idea.
Now for the problems…
US keyboard!!! OK, it's not the biggest problem in the world (I've suffered with Sun systems in the past and yes, my aging G3 is US too), but come on Apple, get your head out of your xenophobic butts and give us a UK keyboard.
The tiny built-in web cam is great, but it's positioned at exactly the point you use to open the lid, so I dare say there'll be lots of thumb prints on the lens soon.
I've read reports about the machine getting too hot and also of it discolouring (probably related). Yes, it does get hot, but is it a lot hotter than my Dell? Hmm, perhaps it is. I guess only time will tell if it has an adverse effect on the components.
OSX
Ok I’m a Windows user for most of the time so I do tend to be all "fingers and thumbs" when it comes to using a new OS, a recent excursion into Linux confirmed this. Overall it’s fine.
The initial setup process was annoying. The first confusing choice was "US or GB keyboard". Erm, well, I want a GB keyboard but this is an Apple laptop so I don't have the choice, you little *****! Next was the networking… I've got a wireless network but didn't have the encryption key to hand, and it was a real nightmare trying to persuade the setup to move past that. OK, there was an option not to use wireless, but I didn't know how difficult it would be to persuade it to use wireless again (turns out it's easy). Fortunately my neighbours don't bother with secure wireless (another blog on that later) so I happily piggy-backed onto their unsecured network.
Machine name: I've yet to suss this one out. Currently it's named after me or, according to my router, HOST1. Hmm, annoying.
Now, context menus. I know Windows has had a second mouse button since the year dot, and Sun had at least 20 (or was it 3), and Apple have, until very recently, refused to acknowledge this, but it's so much easier to right-click rather than command-click. Come on Apple, we want a second mouse button on the laptops; you know it makes sense, so swallow some of that pride and get on with it.
The dock…what on earth is this horribly over-large blob taking up a third of my screen? Yes, look at the funny bobbling icons, yuk…OK, it may appeal to people who still stare at planes with wonder, but come on. So after right-clicking (yes, I'm calling it that) I got the dock down to a decent size.
"I'm doing something" indicators. Normally with any computer you get some indication that it's doing something; for me, that usually means a hard disk LED. Apple have always been quick to get rid of ugly things that shouldn't be needed, great. However, there have been a number of times that I've launched some application and been faced with a completely blank screen: no animated beach ball, no hour-glass, nothing but a normal pointer. With no disk LED I've simply no idea what's going on. Sure, this is the OS's fault, but a little support from the hardware wouldn't go amiss here.
.Mac account
You get some, now standard, applications with OSX, such as iChat. But wait, you can't simply use it: oh no, you've got to subscribe to a .Mac account for £70 per year! Now I'm all for paying a little extra for extra features, such as on-line file sharing, but really…paying for an instant messenger account is just too much; in fact, paying for web space these days is a cheek.
Interop with Windows
So far so good: I connected to Windows shares without a problem, and downloaded and ran Microsoft's Remote Desktop Client, so I've managed to develop on a PC from the Macbook without too much fuss…apart from the right-click de-selecting text before displaying the context menu. The other way around is a bit more of a problem. I installed VNC for OSX, which works OK. The speed in no way matches the RDP of Remote Desktop; it's fine for the odd bit of work, but you can't use it to work remotely on the machine full time…well, not if you've got my patience.

Serializable Dictionary

Whilst writing a WinForm with a TreeView I came across ye olde problem of using components that don't support serialization. In my case I was shadowing the TreeNodes with my own dictionary object, actually a generic myDictionary<string, myObject>, only to discover that Dictionary doesn't support XML serialization either.
 
So faced with writing some custom serialization code I quickly reached for Google and found…
 
So now I had a serializable dictionary, I just had to sort out my components. The first annoying problem was that my component contains a field of type TreeNode. So, like a good .net developer, I added the NonSerialized attribute to the field. But when I went to serialize the component it complained it couldn't find out what a TreeNode was, grrr. So the mere fact of exposing a public property of TreeNode, even though the resulting serialization won't use it, is enough to fail the XmlSerializer. I've not thought too much about a solution to this since I only use this class internally, so I changed the property from public to internal and that did the trick…although what if I did need to expose it as a public property? Anyway, the code now happily writes my dictionary of objects to disk.
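Thinking about it since: the attribute XmlSerializer actually honours is [XmlIgnore]; NonSerialized only applies to fields and only to the binary/SOAP formatters, which would explain why it didn't help. So something like this might have kept the property public. An untested sketch (the class and property names are made up, not my actual component):

```csharp
using System.Windows.Forms;
using System.Xml.Serialization;

public class NodeShadow
{
    private TreeNode node;
    private string key;

    // XmlSerializer walks every public read/write property, so a
    // TreeNode-typed property trips it up even though it never makes
    // it into the output. [XmlIgnore] tells XmlSerializer to skip
    // the property entirely.
    [XmlIgnore]
    public TreeNode Node
    {
        get { return node; }
        set { node = value; }
    }

    public string Key
    {
        get { return key; }
        set { key = value; }
    }
}
```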

Talking about Visual Studio tips and tricks

This helps with one of my greatest peeves: writing too much code on one line. In this age of huge monitors and people who use freakishly small fonts, long lines of code can quickly become unreadable for many users.

 

Quote

Visual Studio tips and tricks

Column Guides in Visual Studio

A lot of coding guidelines specify the maximum length for a line of code. For instance in the CLR, Microsoft like to keep lines of code under 110 characters long. Visual Studio has a feature which lets you display a vertical line at the column of your choosing to help visually see when a line is getting too long. This does involve mucking in the registry so the usual disclaimers apply.

To enable this feature, set:

[HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\8.0\Text Editor]

"Guides"="RGB(192,192,192) 110"

The values passed to the RGB function let you specify the color of the line, and the number following tells Visual Studio at what column to display it.
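Rather than editing the registry by hand, the same setting can be saved as a .reg file and double-clicked; colour and column here are as per the quote above:

```reg
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\VisualStudio\8.0\Text Editor]
"Guides"="RGB(192,192,192) 110"
```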

Snippets and auto-stubs

A few more tips for getting the most out of Visual Studio.
 
  • Snippets from IntelliSense. When you start to type in the editor you'll get the trusty IntelliSense drop-down. If there is a snippet available then it will also be shown; for example, as you start to type t..r..y you'll see the "try" snippet. But how do you invoke it? After much trial and error: you need to double-tab, i.e. with the try snippet highlighted, press Tab twice in quick succession.
  • Auto-stubs. When you're inside a function and you realise you're going to have to write another function and call it, write the call to the yet-to-exist function and you'll see a little block appear under the new function name. Selecting that will automatically create the skeleton code for your new function, using the arguments (and data types) from the call.
  • Auto-add "using". If you add a call to a component you have referenced, but don't specify the full namespace and haven't yet added it to the "using" statements, then a little block will appear under the component name. Selecting that will auto-create the correct "using" statement.
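For example, typing a call to a function that doesn't exist yet and selecting the auto-stub option produces a skeleton along these lines. The names here are made up, and the exact generated wording varies by Visual Studio version:

```csharp
// You write the call first...
int total = SumValues(prices, taxRate);

// ...and the auto-stub creates this, with the argument names, argument
// types and return type all inferred from the call site:
private int SumValues(List<decimal> prices, decimal taxRate)
{
    throw new Exception("The method or operation is not implemented.");
}
```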

Toolstrips and Application Settings

I’ve not really spent any serious time writing a Windows Forms application in .net so I decided to see what’s changed.

The first thing that struck me, and subsequently consumed all of my time, was the seemingly simple concept of a menu and toolbar, or in .net terms MenuStrip and ToolStrip. Back in the day, creating a menu was a basic affair: you said you wanted a menu and one would appear at the top of the window, which you could change using editors of varying capabilities. This time I double-clicked the menu control and the menu embedded itself within a container halfway down the page. Now, I'd also been playing with docking and anchoring, so try as I might I couldn't persuade the menu to appear at the top; it was always below another container. At this point I realised that I needed to get my head around the ToolStrip business, and so follows my tiny guide to using menus and toolbars.

1. The Tool Strip Container

Start off by dropping a Tool Strip Container on the form; this allows the user to move any menus or toolbars to whatever edge of the window they desire.

2. The Status bar

Drop a status bar control inside the Tool Strip Container and dock it to the bottom of the container

3. Menu & Toolbar Strips

Drop a Menu Strip inside the Tool Strip Container, followed by a Tool Strip. You may want to use the smart tag to automatically insert the standard menus and buttons. I'd also recommend setting the menu's “GripStyle” to Visible.

4. Dock the Tool Strip Container

Set the docking mode of the container to “Fill”.

You should now have the basis for a standard form where the user can drag the menu or tool strip to any of the edges of the window. The next trick is getting the application to remember where the user has left their toolstrips, which brings into play the concept of Application Settings. In Visual Basic 6.0, saving user preferences was done via the SaveSetting and GetSetting APIs. These vanished with .net, so I wrote a little wrapper for Isolated Storage to do pretty much the same thing. However, .net 2.0 introduces the idea of Application Settings and User Settings. It's a good concept: not only can you ask a form to save its settings, you can also ask it to reload the last set, or even reset to the default values. Powerful stuff. So I can finally throw my Isolated Storage wrapper away, but I was still faced with the tedious job of storing the location of these ToolStrips. Fortunately, before I dived headlong into grabbing lots of location details, I discovered a static class called ToolStripManager. This exposes LoadSettings/SaveSettings, which do exactly what I wanted: remember all the toolstrip location details on a per-user basis. So just add:

ToolStripManager.LoadSettings(this);

ToolStripManager.SaveSettings(this);

 

To your Form constructor and FormClosing handler respectively, and the user preferences will be adhered to. However, ToolStripManager doesn't support the Form's Application Settings concept of Reload or Default. Reloading isn't too difficult: since the save takes place only when the form is closed, you can simply recall LoadSettings and it resets to the last saved settings. “Default” is a bit trickier; you need to put the ToolStrips back the way they were when the user first opened the application. As the application author I knew where they should go, but how do you tell that to the application? I guessed that the trick was to add the controls to the ToolStripContainer panel's Controls property, or in my case:
toolStripContainer.TopToolStripPanel.Controls.Add

This almost worked: the ToolStrips would all appear in the top container where they started life, but not quite as they began. The order of the ToolStrips was seemingly random; no amount of reordering or setting of indexes would provide a consistent result. Reading a bit more, I discovered that I should be using Join and not Add…it's so obvious(?). Join allows you to specify the row of the container in which you wish the ToolStrip to appear, therefore allowing me to consistently display the menu before the toolbar.
toolStripContainer1.TopToolStripPanel.Join(toolStrip, row);
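Putting the pieces together, the form ends up looking something like this. The control names are the designer defaults; yours will differ:

```csharp
using System.Windows.Forms;

public partial class MainForm : Form
{
    public MainForm()
    {
        InitializeComponent();
        // Restore the per-user toolstrip layout saved last time
        ToolStripManager.LoadSettings(this);
    }

    private void MainForm_FormClosing(object sender, FormClosingEventArgs e)
    {
        // Persist the current layout for this user
        ToolStripManager.SaveSettings(this);
    }

    private void ResetLayout()
    {
        // "Default": put the strips back where they started life.
        // Join (not Controls.Add) lets you specify the row, so the
        // menu consistently appears above the toolbar.
        toolStripContainer1.TopToolStripPanel.Join(menuStrip1, 0);
        toolStripContainer1.TopToolStripPanel.Join(toolStrip1, 1);
    }
}
```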

 

NB. Development Gotcha

During development of the menu, my SaveSettings kicked in and saved my new menu in the wrong position, so whenever it loaded the settings back my menu would be incorrectly laid out. To fix this you have to navigate into your own settings store and alter the settings XML by hand, usually located somewhere like:

<drive>:\Documents and Settings\<account>\Local Settings\Application Data\<application name>

 

Better performance by using a local var vs param in SQL Server?

I'd forgotten about this “issue” until I read on the ASP.NET forum about someone else having the same problem. I can't vouch for what happened to that user, but I can explain what I saw. Basically, I had a very simple stored procedure that took an Id value in as a parameter and then used it in a query, something like this (not the actual query, for customer IP reasons):
Create Proc MyProc(@Id int)
As
Select Col1, Col2 from MyTable Where
Col3=@Id
 
The proc worked fine until the database data was scaled-up. Suddenly it performed very poorly and no amount of statistics updates would fix it. So I went through the usual steps of breaking the inner query out of the procedure and hard-coding the values:
 
Select Col1, Col2 from MyTable Where Col3=3
 
It worked great, so I started to add bits back until it went wrong again:
Declare @MyLocalId int
Set @MyLocalId = 3
Select Col1, Col2 from MyTable Where
Col3=@MyLocalId
 
Worked great…
Create Proc MyProc(@Id int)
As
Declare @MyLocalId int
Set @MyLocalId = 3
Select Col1, Col2 from MyTable Where
Col3=@MyLocalId
 
Worked great…starting to get concerned…
Create Proc MyProc(@Id int)
As
Declare @MyLocalId int
Set @MyLocalId = 3
Select Col1, Col2 from MyTable Where
Col3=@Id
 
Poor again…hmm…
Create Proc MyProc(@Id int)
As
Declare @MyLocalId int
Set @MyLocalId = @Id
Select Col1, Col2 from MyTable Where
Col3=@MyLocalId
 
Worked great!!!
What just happened? All I've done is de-reference the parameter into a local variable and used that. It worked, and since I was under a lot of pressure at the time I forgot all about it. However, since reading that at least one other person is seeing the same issue, I've decided to dig a little deeper. My first clue came from TechNet, which stated that by using a local variable the query wouldn't use statistics. Now I'm not sure about that, but maybe that's something to do with it. The reason it's plausible is that at low data volumes the procedure worked fine, but with different statistics it started to perform badly. Although uncommon, it's not impossible for the Query Optimizer to get confused and produce a poor plan. If you remove its ability to use statistics, it falls back to a sort of worst-case algorithm. In this situation it may be that the default worst-case plan is better than its optimised attempt; in SQL 2005 you could then force the “correct” plan. It's still a theory at the moment, but I've dug out a Kimberly Tripp webcast on how parameters work, so hopefully that may shed some light on the matter. More posts soon; in the meantime, if anyone else has any ideas then please post.
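For completeness, the usual name for this behaviour seems to be “parameter sniffing”, and SQL Server 2005 offers query hints that tackle it directly. I haven't tested these against my actual procedure, but the shape is (using the simplified names from the examples above):

```sql
-- Recompile the statement on every execution, so the plan is built
-- for the actual @Id each time (at the cost of a compile per call)
CREATE PROC MyProc(@Id int)
AS
SELECT Col1, Col2 FROM MyTable
WHERE Col3 = @Id
OPTION (RECOMPILE)

-- Alternatively, build one plan optimised for a representative value,
-- regardless of whatever value happens to be sniffed at compile time:
-- SELECT Col1, Col2 FROM MyTable
-- WHERE Col3 = @Id
-- OPTION (OPTIMIZE FOR (@Id = 3))
```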