Beware of duplicate cookie values

Came across a strange problem today that I thought I should record, since I’ve already wasted enough time investigating it. A web site was writing a setting to a cookie (e.g. Preference1) each time a page was unloaded; any page could then read the cookie and take appropriate action. However, for some reason the preference was not read correctly on any other page. Puzzled, I examined the document.cookie property. For some strange reason the name was duplicated, e.g. Preference1=hello; Preference2=bla; Preference1=goodbye. So when the page wrote to the cookie the 2nd copy changed, but when the cookie was read the 1st copy was taken. Once I cleared the cookies for the domain everything went back to normal. The site only has one domain, no sub-domains or anything. Very strange behaviour.
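
For the record, here’s a minimal sketch of the sort of cookie helpers involved (the function names, expiry date and path are mine, not the site’s). A first-match parse like this is exactly why the stale duplicate wins on read:

    // Returns the value of the FIRST cookie matching 'name' -- so if
    // the name appears twice, the first (stale) copy wins.
    function readCookie(name) {
        var pairs = document.cookie.split('; ');
        for (var i = 0; i < pairs.length; i++) {
            var eq = pairs[i].indexOf('=');
            if (pairs[i].substring(0, eq) == name) {
                return pairs[i].substring(eq + 1);
            }
        }
        return null;
    }

    // Re-setting a name only replaces one of the duplicates. Two cookies
    // can legitimately share a name if they were set with different path
    // or domain attributes, which is one possible cause of what I saw.
    function writeCookie(name, value) {
        document.cookie = name + '=' + value +
            '; expires=Fri, 01 Jan 2010 00:00:00 GMT; path=/';
    }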

So there it is, an aide-mémoire for me, but if anyone can provide an explanation I would be grateful. I should mention this was all client-side JavaScript.

Internet Explorer (IE6 & IE7) fault in unknown module

Just been through a very frustrating couple of days with IE. A few days ago the machine had used an old dial-up connection to FTP to a site when the machine crashed. After the crash IE refused to connect to any web site; it just sat there doing nothing. Looking at the Event Log, it reported a fault in an unknown module. After installing and running many anti-spyware tools nothing was found. So maybe it was an IE6 problem. I upgraded to IE7, only for it to have the same problem. I upgraded to XP SP3: still the same problem. I reset all the IE options, disabled all add-ins, nothing. Interestingly, the settings in Network Connections also seemed to be a bit dodgy.

I rebooted in ‘safe mode with networking’ and it worked!? So I disabled the same set of services and used Sysinternals’ Autoruns to remove all the start-up applications, but still nothing. Something odd was going on. To make things even more confusing, IE7 would *sometimes* work if you typed in a URL somewhere else, e.g. Windows Explorer or Start->Run, so it didn’t seem to be the core IE components. I then tried to open an XML file on my disk and again IE crashed, but this time, rather than freezing, it displayed the old ‘send error information’ dialog. In desperation I tried to send. Then the old dial-up dialog popped up; ah, that’s odd. I turned off all the dial-up settings from the dialog. After a reboot IE now seems to be working. I’m not sure the dial-up settings were the cause, but they did seem to bracket the start and end of the problems.

Browser speed wars, it’s not all about rendering

After reading a quick performance guide to the various new browser engines, it struck me that it missed another aspect, one that I just happen to have been experimenting with…that of caching. A current customer I’m working with has issues with poorly performing and unreliable download speeds. With this in mind I’ve examined how three of the common browsers deal with caching; after all, if you can cache a resource then the download speed becomes far less important. Using IIS6 I created a simple HTML page containing a page of text, a single jpg and a link to a stylesheet. The page expiration is set to a year in the future, and I tested with ETags both on and off. The results were a little surprising.
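
For reference, the test page was nothing more than this sort of thing (the file names are mine):

    <html>
      <head>
        <link rel="stylesheet" type="text/css" href="styles.css">
      </head>
      <body>
        <img src="picture.jpg">
        <p>…a page of text…</p>
      </body>
    </html>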

Internet Explorer (6, 7, 8) is determined not to be caught out by changes on the server. First off I do a hard refresh and watch the network traffic. Every resource is returned to the browser "as new" from GETs, with lots of "200 OK" responses from the server. With every subsequent refresh IE asks the server if the resource has changed, resulting in a "304 Not Modified". So IE makes a server roundtrip which is small, but is it necessary? I’ve specifically provided an expiration date, so why is it ignoring me? Interestingly, if you shut the server down, IE will ask for the resource, realise the server isn’t there and just serve up the cached version. So what happens if I change any of the resources? Well, since IE asks the server each time, IE reflects the changes immediately. This is good news if you change your "static" content but not great for my customer!
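
To make that concrete, here’s roughly what the exchange looks like on the wire (headers trimmed; the dates and file name are illustrative). With ETags switched on you see an ETag/If-None-Match pair doing the same job:

    First request (hard refresh):

        GET /cachetest/picture.jpg HTTP/1.1
        Host: localhost

        HTTP/1.1 200 OK
        Last-Modified: Wed, 21 May 2008 10:00:00 GMT
        Expires: Thu, 21 May 2009 10:00:00 GMT

    Every subsequent refresh in IE, despite the Expires header:

        GET /cachetest/picture.jpg HTTP/1.1
        Host: localhost
        If-Modified-Since: Wed, 21 May 2008 10:00:00 GMT

        HTTP/1.1 304 Not Modified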

Firefox (3) provides a very nice compromise. As with IE, the hard refresh returns all the resources. The next two requests see Firefox continue to behave like IE, asking the server for changes. However, after the 2nd refresh FF seems to accept that the resources are not changing and stops making server roundtrips until the item hits its expiry date. To be absolutely accurate, FF did seem to occasionally check for changes, but very, very infrequently. If the server drops, FF continues to serve up the resource from the cache. This is good news for my customer, since the server roundtrips are almost cut to zero.

Safari (3.1.1) works pretty much like IE, except that when the server is down no page is rendered…boo.

To give you some idea of the difference in speed this caching/non-caching behaviour makes, I ran my tests against the BBC’s main news page, news.bbc.co.uk. The total time to return all the resources for a hard refresh in IE and FF was ~3.5s, consisting of some 87 requests. The next two refreshes remained at 87 requests but were mainly the "has it changed?" kind, resulting in better performance, ~2.5s. However, the next refresh saw FF shine. IE remained at 87 requests and ~2.5s, whereas FF, relying on its cache, only made 17 requests, taking a mere 700ms. OK, OK, so this doesn’t take into account rendering speed, but rendering speed does not improve your download speed; caching can fake that improvement. So at the moment I have FF ahead of IE. I’ve yet to test Opera, and I have to say that Safari does render very quickly, but sort out your caching, WebKit!

NB. If this is very important to your business, please run the tests yourself. I found the following to be useful tools: Firebug’s Net tab, Fiddler2 and Wireshark.

getElementById gotcha

I was looking, and I can’t stress this enough, at someone else’s HTML/JavaScript. The code looked something like:

<a onMouseOver="elementid.className='New'" />…<img id="ElementId" src=…>…<div id="elementid">…

So I was asked to find out why this wasn’t working in a number of browsers. Well, the first thing I spotted was the non-standard way of accessing the className in onMouseOver. So I changed it to document.getElementById('elementid').className='New'; No errors, but the style change (which was working in IE with the above code) stopped working in IE. It took a fair bit of head scratching, but as you can probably tell from the HTML I’ve added but have yet to talk about, the problem was the img tag. In the first example, directly setting the className is case-sensitive and correctly alters the style of the DIV. However, and this was a surprise to me, IE’s getElementById is case-insensitive and was returning the img tag rather than the DIV! Therefore, and it comes as no real surprise, don’t rely on case when creating unique IDs.
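
Here’s a minimal repro of the gotcha, assuming the same pair of IDs as above (the alert and the image file name are mine, just to show which element comes back):

    <img id="ElementId" src="picture.jpg">
    <div id="elementid">some text</div>
    <script type="text/javascript">
        // IE6/7: alerts "IMG" -- the lookup ignores case and takes the
        // first match in document order.
        // Firefox/Safari: alerts "DIV" -- the lookup is case-sensitive.
        alert(document.getElementById('elementid').tagName);
    </script>

The fix is simply to rename one of the elements so the IDs differ by more than case.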

IE7 doesn’t launch in Vista

I’m currently seeing an odd problem with one of my machines running Vista. Quite often when I launch IE7 it simply doesn’t appear: I can see it in the process list, but no browser window. I’ve tried killing the process off and relaunching, but that doesn’t help. The only thing that seems to work is to kill off the explorer.exe (Windows Explorer) process, which kills the desktop, and restart that. If it turns out to be some antivirus plugin I won’t be amused!

Safari for Windows

I must admit to raising an eyebrow when I saw the release of Safari 3 Beta for Windows. I then read a very negative opinion of it in Mac User. The premise of the argument was that by giving Safari to the PC (i.e. the masses, but a Mac-ite would never admit to that) this would encourage the dreaded Hacker to target Safari and therefore the Mac. I think there is some merit to that argument, but I don’t really believe it is a massive deal (I bet I’ll be eating my words in the future now) because:

  1. The browser had better be sandboxed, and besides, the underlying OSes are so different you’d have to target the Mac specifically, although granted, once you break in on one platform the same code would probably work on the other.
  2. Mac users need to take security seriously. Apple were so ***** stupid with their advertising about having no viruses; I hope someone sues them when they get a virus.
  3. Many web sites fail in Safari because they are aimed at the masses, and a large number of developers simply don’t have access to a Mac (and therefore Safari) to test the site on. It is for that reason that I only ever use Firefox on the Mac.

So the bottom line for me is: if Apple want me to use Safari then I need more site support for it. That means making it standards-compliant and allowing site developers to test against it. Therefore shipping a Windows version makes a lot of sense to me.

Testing for Safari when you don’t have OSX

Developing using Microsoft technologies can make it expensive to test your site on other browsers and platforms. IE, Firefox and Opera make it easy (or maybe that just makes things harder) to test your site. However, the big bugbear is OS X’s Safari. Well, until the OSX86 project manages to make it legal to run OS X on any PC, I think it boils down to…
  1. Buy a Mac – nice if you can afford it
  2. Use a screenshot service – a pain to use if your site has any kind of dynamic behaviour; let’s face it, they’re a pain to use full stop
  3. Use another KDE-based browser

Option 3 is the one I’m currently recommending. Safari’s WebKit engine started life as KDE’s KHTML, so why not use another browser built on the same fundamental rendering engine, e.g. Konqueror? Well, the first problem is that there currently isn’t a version for good ole’ Windows. The answer is to turn to Linux, well, sort of. My advice is to get hold of the VMware Player with a downloaded image of your favourite flavour of Linux; mine is Ubuntu (if only for the name). Install Konqueror and off you go: Safari-like browsing without OS X. It’s not a 100% guarantee, but you’ll iron out the most obvious problems.