Counting the occurrences of a character in a column

I needed to count the number of periods in an nvarchar column and, as usual, there is no specific help in SQL Server's string library. I didn't want to go the CLR route as it's a maintenance script and I can't add CLR components. So here comes a bit of SQL gymnastics…or a hack, if you will…
LEN(column) - LEN(REPLACE(column, '.', ''))
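For what it's worth, the same length-difference trick works in most languages. Here's a quick JavaScript sketch of the idea (countChar is just an illustrative name):

```javascript
// Count occurrences of a character by comparing the length before and
// after removing it -- the same idea as the LEN/REPLACE trick in SQL.
function countChar(text, ch) {
  return text.length - text.split(ch).join('').length;
}

console.log(countChar('192.168.0.1', '.')); // 3
```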

Remote Desktop Console feature after Vista SP1

Remote Desktop allows the user to connect to the remote machine's console session. This is very useful in side-stepping some issues with Visual Studio (perhaps they've fixed those now?). However, after installing Vista SP1 I noticed that even though I was specifying the /Console flag I was getting a standard session. Apparently this has changed and you must now supply /admin instead, e.g. mstsc /v:myserver /admin.

Using Classic ASP to avoid performance problems with ASP.NET Dynamic controls

First off, I must admit that I'm not the biggest fan of ASP.NET and you'll often find me accusing it of attempting to shoe-horn an old VB6-style event model onto the web. However, I do concede that it is a very good attempt at doing just that, and it takes away much of the plumbing work that was required to write a decent classic ASP application. One of those plumbing issues was 'solved' by viewstate: love it or hate it, viewstate is an easy way to maintain the values of the HTML form elements between post-backs. So everything is OK then? Well, I'm conveniently ignoring the size issues in order to talk about a common pattern: using a single page and changing the view of the page by using dynamically created controls. Typically the single page has no content of its own, but some stimulus, such as the query string, allows the code-behind to create completely different views of data. I'll stop short of saying it's a way to implement the MVC/MVP patterns, but you could do it.

An Example

So as an example, consider creating an application that allows the user to search for employee details by selecting from a set of criteria. Once selected, the user presses the "show me the results" button and the page takes the criteria and displays the details. The user can then change the details and update them, or move back to the criteria page. This introduces our first problem: what details should be displayed for each employee?

Displaying details that are only known at run-time

OK, so it is a bit contrived, but let's say that the number of text boxes changes according to the type of employee. The code runs a fairly expensive SQL query, one that takes 30 seconds to complete, in order to discover the employee details to display. But we need to be careful about where we run this query in order to correctly construct the controls for ASP.NET to use. The standard place in the page life-cycle to create dynamic controls is the OnInit override. Here are some snippets; I know there is code in there I've not explained, but hopefully later you'll see how I've populated those…

protected override void OnInit(EventArgs e)
{
    base.OnInit(e);

    // Add dynamic controls here
    CreateView();
}


private void CreateView()
{
    if (this.lastView == this.currentView)
    {
        // do nothing; rely on the viewstate populating the fields
        return;
    }

    // not the same view, so trash whatever went before
    // (probably nothing yet, but just to be safe)
    this.PlaceHolderDynamicContent.Controls.Clear();

    if (this.currentView == 1)
    {
        CreateView1Controls();
    }
    if (this.currentView == 2)
    {
        // Warning: expensive discovery query in here
        CreateView2Controls();
    }
}

OK, so we've created the dynamic controls, but how do you read the changes the end-user has entered?

Reading changes made to dynamic controls

Leaving the well-trodden road of static controls can be tricky. To read data from dynamically created controls in a post-back you must do so after ASP.NET has: a) created the control hierarchy used to render the previous page, and b) inserted the values into those controls from the viewstate and the post data. The natural place to read the data is the Page_Load event.

protected void Page_Load(object sender, EventArgs e)
{
    // Read saved data from dynamic controls here
    SaveLastView();
}

So we can create a view that was unknown at design time and read the data from those dynamic controls, so what's the problem?

Running expensive discovery queries on post-back

As we've seen, in order to read the data from dynamic controls we have to help ASP.NET out by creating the initial set of controls during post-back. However, to do that we'll have to re-run that expensive discovery query. If the user has changed some details then it's an expense we'll have to put up with (or use some other caching mechanism). However, what if the user hasn't made any changes and wants to return to the criteria view? Currently we'd blindly run the discovery query and incur a 30-second hit only to throw away all the controls and create the control set for the criteria…which seems a bit of a waste. So how can we know that the user has navigated away from the view, when we can only read the data in Page_Load, and that happens after the page Init and therefore after we've run the discovery query? Well, this is where classic ASP can come to the rescue.

Classic ASP rides to the rescue

The ASP.NET page life-cycle isn't magic: the browser posts data to the server, and ASP.NET processes the data and transforms it into the event-based model. There is a lot of smoke and mirrors going on, but the underlying process hasn't changed from classic ASP; the Request object still contains the user's posted data. So if we have a navigation control called MyButtonView1 then you can fish directly into the Request object and get the value via Request.Form["MyButtonView1"]. This means that in the Init event we can know if the user is navigating away, and therefore we don't have to run the discovery query for the details. Hurray, all the problems solved? No. What happens if the user has made some changes and then navigated away? I knew you'd ask that. Well, this is where it becomes irritating, because you have to write more and more code to support the dynamic controls, reaching a point where you may as well write classic ASP from the off. Oh well, here is one way to do it: add a client-side onchange to the dynamic controls that updates a single "HiddenFieldNeedsToSave" control, and then in the Init you can check this too. So finally we've got a mechanism to support dynamic controls without needlessly re-running expensive discovery queries.

protected override void OnInit(EventArgs e)
{
    this.lastView = Convert.ToInt32(Request["HiddenFieldView"]);
    if (Request["ButtonView1"] == "View1")
    {
        this.currentView = 1;
    }

    if (Request["ButtonView2"] == "View2")
    {
        this.currentView = 2;
    }

    if (Request["ButtonSave"] == "Save")
    {
        this.isSaving = true;
        this.currentView = this.lastView;
    }

    base.OnInit(e);

    // Add dynamic controls here
    if (this.isSaving)
    {
        CreateLastView();
    }
    else
    {
        CreateView();
    }
}


protected void Page_Load(object sender, EventArgs e)
{
    if (this.isSaving)
    {
        // Read saved data from dynamic controls here
        SaveLastView();
        this.isSaving = false;
        CreateView();
    }
    this.HiddenFieldView.Value = Convert.ToString(this.currentView);
}
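The client-side half of the trick, wiring onchange handlers to the hidden "needs to save" field, is plain JavaScript. Here's a minimal sketch; makeMarkDirty and the fake hidden object are hypothetical stand-ins so the idea runs without a browser, but HiddenFieldNeedsToSave is the control named above:

```javascript
// Mark the hidden field dirty whenever a dynamic control changes.
// In the real page you'd wire this up as an onchange attribute on
// each dynamic control and grab the hidden input from the DOM.
function makeMarkDirty(hiddenField) {
  return function markDirty() {
    hiddenField.value = '1'; // the server-side Init checks Request["HiddenFieldNeedsToSave"]
  };
}

// Simulated hidden input; in the browser this would be
// document.getElementById('HiddenFieldNeedsToSave').
const hidden = { value: '' };
const markDirty = makeMarkDirty(hidden);

markDirty();               // the user edits a text box
console.log(hidden.value); // '1' -- the post-back now knows there are unsaved changes
```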

Hopefully I’ve missed something and some nice person can show me the error of my ways, but until then my way of solving this ASP.NET problem is to turn to classic ASP…or just switch to using the MVC project 😉
   

Where to add dynamic controls in the ASP.NET page life cycle?

There are some things in software development that I just keep having to re-read. One such subject that refuses to stick in my mind is where in the ASP.NET page life-cycle I should: a) add dynamic controls, b) set the properties of those controls, and c) read user-saved values from those controls. So, as an aide-mémoire:

OnInit – create the dynamic controls. This is the standard place to create controls, without getting into the whole re-running of the cycle when adding child controls.
Page_Load – read user-saved values and set control values. This works because, provided the dynamic controls have already been created (see above), the viewstate and post-back mechanisms will have loaded the correct user-set values.

There, why is that so hard to remember?

How multiple web sites can share one binary folder

I answered a post about how you can share a single binary folder between many web sites. I can certainly see the advantage of having configurable sites (different web.configs) that share the same binary folder to make maintenance easier. My first thought was that you could use virtual folders, but unfortunately the assembly probing won't see the virtual folder…shame. So I looked into the ye olde Unix (*nix) style of symbolic links; think of a shortcut on steroids. My first idea was to create a 'hardlink' of the bin folder itself. E.g.

ActualBinaries\Bin

Web1\Bin (really a hardlink to ActualBinaries\Bin)

Web2\Bin (really a hardlink to ActualBinaries\Bin)

Although the CLR did indeed go to the correct folder, it failed to load the assemblies in there, complaining that the format of the path was wrong. So, sensing that I was close to something, I tried creating a real bin folder and hardlinking the individual files. E.g.

ActualBinaries\bin\MyComponent.dll

Web1\bin\MyComponent.dll (really a hardlink to ActualBinaries\bin\MyComponent.dll)

Web2\bin\MyComponent.dll (really a hardlink to ActualBinaries\bin\MyComponent.dll)

Now that worked! So how do you create one of these links? Well, it's actually quite straightforward:
fsutil hardlink create "C:\Web2\bin\MyComponent.dll" "C:\ActualBinaries\bin\MyComponent.dll"

Apparently PowerShell can easily create them too, although I've yet to try that. So is it worth the effort? I can see that running one MSI would be handy, especially when you've got COM registrations going on. However, if you've only got .NET components and no extra registrations then I doubt it's worth the extra effort.

Problems using a Blend project after upgrading to Silverlight Beta 2

I just ran into an annoying issue after upgrading to Silverlight Beta 2. Time for a story…
The machine had Blend pre-beta 2 and had created a SL application.
Along came beta 2, so Blend was uninstalled and the latest version of Blend installed.
The old Blend project was manually copied and pasted (see previous post on Blend and Beta 2) into a new Blend project, built and tested.
The .xap and default.html were copied over to the web server.
When default.html was opened the page displayed the Get Silverlight banner, even though the browser had the correct Silverlight version. Clicking Get led to a site saying that the page was using an old version of Silverlight.

So I took the source files from Blend and opened them on a machine with Visual Studio + the beta 2 toolkit. This told me that the project needed upgrading (eh, why?), so I let it, and built the project. Deployed it to the web site and everything worked fine! Not sure what happened there, but I'm suspicious of only having Blend on a machine. I'm not ruling out some manual mess-up, but why would the Blend build not raise the problem? Oh well, I'll put it down to beta fun.

Browser speed wars, it’s not all about rendering

After reading a quick performance guide to the various new browser engines, it struck me that it missed another aspect that I just happen to have been experimenting with…that of caching. A customer I'm currently working with has some issues with poor and unreliable download speeds. With this in mind I've examined how three of the common browsers deal with caching; after all, if you can cache a resource then the download speed becomes far less important. Using IIS 6 I created a simple HTML page containing a page of text, a single JPG and a link to a stylesheet. The page expiration is set to a year in the future, and I tested with ETags both on and off. The results were a little surprising:

Internet Explorer (6, 7, 8) is determined not to be caught out by changes on the server. First off I do a hard refresh and watch the network traffic. Every resource is returned to the browser "as new" from GETs, with lots of "200 OK" responses from the server. With every subsequent refresh IE asks the server if the resource has changed, resulting in a "304 Not Modified". So IE makes a server roundtrip which is small, but is it necessary? I've specifically provided an expiration date, so why is it ignoring me? Interestingly, if you shut the server down, IE will ask for the server, realise it's not there, and just serve up the cached version. So what happens if I change any of the resources? Well, since IE asks the server each time, IE reflects the changes immediately. This is good news if you change your "static" content, but not great for my customer!

Firefox (3) provides a very nice compromise. As with IE, the hard refresh returns all the resources. The next two requests see Firefox continue to behave like IE, asking the server for changes. However, after the second refresh FF seems to accept that the resources are not changing and stops making server roundtrips until the item hits its expiry date. To be absolutely accurate, FF did seem to occasionally check for changes, but very, very infrequently. If the server drops, FF continues to serve up the resource from the cache. This is good news for my customer, since the server roundtrips are almost cut to zero.

Safari (3.1.1) works pretty much like IE, with the exception that when the server is down no page is rendered…boo.

To give you some idea of the difference in speeds because of this caching/non-caching behaviour, I ran my tests against the BBC's main news page, news.bbc.co.uk. The total time to return all the resources for a hard refresh for IE and FF was ~3.5s, consisting of some 87 requests. The next two refreshes remained at 87 requests but were mainly the "has it changed" requests, resulting in better performance, ~2.5s. However, the next refresh saw FF shine: IE remained at 87 requests and ~2.5s, whereas FF, relying on its cache, made only 17 requests, taking a mere 700ms. OK, so this doesn't take into account the rendering speed, but rendering speed does not improve your download speeds; caching can fake that improvement. So at the moment I have FF ahead of IE. I've yet to test Opera, and I have to say that Safari does render very quickly, but sort out your caching, WebKit!

NB. If this is very important to your business, please run the tests yourself. I found the following to be useful tools: Firebug's Net performance tab, Fiddler2 and Wireshark.
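The browser differences above boil down to one decision: when a cached resource has a future expiry date, do you serve it straight from cache or revalidate with a conditional GET? Here's a rough JavaScript sketch of that decision (cacheDecision is my own simplification, not the real HTTP caching algorithm):

```javascript
// Decide what to do with a cached resource, roughly mirroring the
// behaviours observed above. This simplifies the real HTTP caching rules.
function cacheDecision(expires, now, alwaysRevalidate) {
  if (alwaysRevalidate) {
    return 'conditional GET (expect 304)'; // IE-style: always ask the server
  }
  return now < expires
    ? 'serve from cache'                   // FF-style: trust the expiry date
    : 'conditional GET (expect 304)';      // stale: revalidate with the server
}

const expires = new Date('2009-06-01');
console.log(cacheDecision(expires, new Date('2008-06-01'), true));  // IE: roundtrip anyway
console.log(cacheDecision(expires, new Date('2008-06-01'), false)); // FF: no roundtrip
console.log(cacheDecision(expires, new Date('2010-06-01'), false)); // expired: roundtrip
```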

Installing Kaspersky antivirus on Parallels running Vista

Parallels "comes with" Kaspersky anti-virus software, and since I'd had a few issues with AVG on my Vista host I decided that at least Kaspersky must have been tested to run on Parallels. Why do I make these assumptions? 😉 Anyway, to install it you simply select the menu item from Parallels. It tells you it will reboot, and sure enough it does. But then nothing else happens. I tried this a couple of times with no sign of a Kaspersky tool anywhere. So I tried it again, but this time I didn't allow the autorun to…run. I then right-clicked on the setup exe (kissexe or something) and chose 'Run as administrator'. That rebooted, but this time it actually installed. Hurray…good grief.

 

Virus checkers, the good and bad

I've previously written that I have grave doubts about virus checkers. Well, this weekend my g/f's laptop (XP SP2) found itself under attack from the malware known as 'XP Antivirus 2008' – yes, it's a genuine virus/malware masquerading as an anti-virus tool. Fortunately Norton Anti-Virus spotted it pretty quickly and did just enough to stop it causing too much damage. Although it didn't get rid of it completely, it stopped it 'working', and with the additional help of Malwarebytes the system was soon cleaned. So 1:0 to the virus checkers. However, the previous week my copy of Vista running on the Mac via Parallels started to act very oddly. Visual Studio wouldn't compile properly, Task Manager wouldn't go away, Windows Update wouldn't…update. I quickly narrowed it down to AVG anti-virus not correctly updating. Every time I stopped it attempting to update, everything else worked fine. So I tried to uninstall it; the uninstaller just stopped after about an hour of doing very little. I rebooted into safe mode and uninstalled it in an instant. Good riddance. Finally I can get back to using Vista…1:1.

Problem connecting to an instance of a SQL Server Cluster

I was asked to try and find out why a web server could not communicate with the database. After checking all the usual network configurations I was beginning to despair. Together with the sys-admin I started firing off telnet connections, pings, etc.; all worked, but still the SQL clients failed to connect. Finally, by luck more than judgement, the sys-admin attempted to connect to the database without the SQL instance name. Voila, it worked. It seems that when you create a clustered virtual server it forms an alias from both the server name *and* the instance name, i.e. MyServer\MyInstance simply becomes MyClusterServer. We hadn't hit this problem on previous builds because we normally just use the default instance.