Counting the occurrences of a character in a column
LEN(column) - LEN(REPLACE(column, '.', ''))
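The same length-difference trick translates to most languages. Here is an illustrative JavaScript analogue (the function name `countChar` is made up for the example):

```javascript
// Count occurrences of a character by removing every instance of it
// and comparing the lengths -- the same idea as the SQL expression above.
function countChar(s, ch) {
  return s.length - s.split(ch).join("").length;
}

countChar("192.168.0.1", "."); // 3
```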
So, as an example, consider creating an application that allows the user to search for employee details by selecting from a set of criteria. Once selected, the user presses the "show me the results" button and the page takes the criteria and displays the details. The user can then change the details and update them, or move back to the criteria page. This introduces our first problem: what details to display for the employees?
Ok, so it is a bit contrived, but let's say the number of text boxes changes depending on the type of employee. The code runs a fairly expensive SQL query, taking around 30 seconds to complete, in order to discover which employee details to display. But we need to be careful where we run this query in order to correctly construct the controls for ASP.NET to use. The basic place in the page life-cycle to create dynamic controls is the OnInit override. Here are some snippets; I know there is code in there I've not explained, but hopefully later you'll see how I've populated those fields…
protected override void OnInit(EventArgs e)
{
    base.OnInit(e);

    // Add dynamic controls here
    CreateView();
}
private void CreateView()
{
    if (this.lastView == this.currentView)
    {
        // Do nothing; rely on the view state populating the fields
        return;
    }

    // Not the same view, so trash whatever went before
    // (probably nothing yet, but just to be safe)
    this.PlaceHolderDynamicContent.Controls.Clear();

    if (this.currentView == 1)
    {
        CreateView1Controls();
    }

    if (this.currentView == 2)
    {
        // Warning: expensive discovery query in here
        CreateView2Controls();
    }
}
Ok so we’ve created the dynamic controls, but how do you read the changes the end-user has entered?
Leaving the well-trodden road of static controls can be tricky: to read data from dynamically created controls in a post-back you must do so after ASP.NET has a) created the control hierarchy used to render the previously rendered page, and b) inserted the values into the controls from the view state and post data. The basic place to read the data is the Page_Load event.
protected void Page_Load(object sender, EventArgs e)
{
    // Read saved data from dynamic controls here
    SaveLastView();
}
So we can create a view that was unknown at design time and read the data from those dynamic controls, so what's the problem?
As we've seen, in order to read the data from dynamic controls we have to help ASP.NET out by re-creating the initial set of controls during the post-back. However, to do that we'll have to re-run that expensive discovery query. If the user has changed some details then it's an expense we'll have to put up with (or use some other caching mechanism). But what if the user hasn't made any changes and wants to return to the criteria view? Currently we'd blindly run the discovery query and incur the 30-second hit, only to throw away all the controls and create the control set for the criteria…which seems a bit of a waste. So how can we know that the user has navigated away from a view, when we can only read the data in Page_Load, which happens after the page's Init and therefore after we've run the discovery query? Well, this is where classic ASP can come to the rescue.
The ASP.NET page life-cycle isn't magic: the browser posts data to the server, and ASP.NET processes that data and transforms it into the event-based model. There is a lot of smoke and mirrors going on, but the underlying process hasn't changed from classic ASP; the Request object still contains the user's posted data. So if we have a navigation control called MyButtonView1 then we can fish directly into the Request object and get the value via Request.Form["MyButtonView1"]. This means that in the Init event we can know whether the user is navigating away, and therefore we don't have to run the discovery query for the details. Hurray, all the problems solved? No. What happens if the user has made some changes and then navigated away? I knew you'd ask that. Well, this is where it becomes irritating, because you have to write more and more code to support the dynamic controls, reaching a point where you may as well write classic ASP from the off. Oh well, here is one way to do it: add a client-side onchange handler to the dynamic controls that updates a single "HiddenFieldNeedsToSave" control, then in OnInit you can check this too. So finally we've got a mechanism to support dynamic controls without needlessly re-running expensive discovery queries.
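The client-side half of that idea might look something like the sketch below. This is an assumption about the wiring, not code from the original page: the hidden input's id and the handler name are made up, and in the code-behind you would attach the handler to each dynamic control (for example via control.Attributes["onchange"]).

```javascript
// Sketch: flag the page as "dirty" when any dynamic control changes.
// The hidden field is posted back with the form, so the server's OnInit
// can check Request["HiddenFieldNeedsToSave"] before deciding whether
// the expensive discovery query is needed.
function markNeedsToSave(hiddenField) {
  hiddenField.value = "1";
}

// Each dynamic control would be given something like:
//   onchange="markNeedsToSave(document.getElementById('HiddenFieldNeedsToSave'))"
```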
protected override void OnInit(EventArgs e)
{
    this.lastView = Convert.ToInt32(Request["HiddenFieldView"]);

    if (Request["ButtonView1"] == "View1")
    {
        this.currentView = 1;
    }

    if (Request["ButtonView2"] == "View2")
    {
        this.currentView = 2;
    }

    if (Request["ButtonSave"] == "Save")
    {
        this.isSaving = true;
        this.currentView = this.lastView;
    }

    base.OnInit(e);

    // Add dynamic controls here
    if (this.isSaving)
    {
        CreateLastView();
    }
    else
    {
        CreateView();
    }
}
protected void Page_Load(object sender, EventArgs e)
{
    if (this.isSaving)
    {
        // Read saved data from dynamic controls here
        SaveLastView();
        this.isSaving = false;
        CreateView();
    }

    this.HiddenFieldView.Value = Convert.ToString(this.currentView);
}
Hopefully I’ve missed something and some nice person can show me the error of my ways, but until then my way of solving this ASP.NET problem is to turn to classic ASP…or just switch to using the MVC project 😉
OnInit – Create the dynamic controls. This is the basic place to create controls, without getting into the catch-up of life-cycle events that happens when child controls are added later in the cycle.
Page_Load – read user-saved values and set control values. This works because, provided the dynamic controls have already been created (see above), the view-state and post-back mechanisms will have loaded the correct user-set values.
There, why is that so hard to remember?
ActualBinaries\Bin
Web1\Bin (really hardlink to ActualBinaries\Bin)
Web2\Bin (really hardlink to ActualBinaries\Bin)
Although the CLR did indeed go to the correct folder, it failed to load the assemblies in there, complaining that the format of the path was wrong. Sensing that I was close to something, I tried creating a real bin folder and hardlinking the individual files. E.g.
ActualBinaries\bin\MyComponent.dll
Web1\bin\MyComponent.dll (really hardlink to ActualBinaries\bin\MyComponent.dll)
Web2\bin\MyComponent.dll (really hardlink to ActualBinaries\bin\MyComponent.dll)
Now that worked! So how do you create one of these links? Well, it's actually quite straightforward:
fsutil hardlink create "C:\Web2\bin\MyComponent.dll" "C:\ActualBinaries\bin\MyComponent.dll"
Apparently PowerShell can easily create them too, although I've yet to try that. So is it worth the effort? I can see that running one MSI would be handy, especially when you've got COM registrations going on. However, if you've only got .NET components and no extra registrations then I doubt it's worth the extra effort.
So I took the source files from Blend and opened them on a machine with Visual Studio plus the beta 2 toolkit. This told me that the project needed upgrading (eh, and why?), so I let it, and built the project. Deployed it to the web site and everything worked fine! Not sure what happened there, but I'm suspicious of only having Blend on a machine. I'm not ruling out some manual mess-up, but why would the Blend build not raise the problem? Oh well, I'll put it down to beta fun.
Internet Explorer (6, 7, 8) is determined not to be caught out by changes on the server. First off I do a hard refresh and watch the network traffic. Every resource is returned to the browser "as new" from GETs, with lots of "200 OK" responses from the server. With every subsequent refresh IE asks the server if the resource has changed, resulting in a "304 Not Modified". So IE makes a server roundtrip which is small, but is it necessary? I've specifically provided an expiration date, so why is it ignoring me? Interestingly, if you shut the server down, IE will ask for the resource, realise the server's not there, and just serve up the cached version. So what happens if I change any of the resources? Well, since IE asks the server each time, IE reflects the changes immediately. This is good news if you change your "static" content, but not great for my customer!
Firefox (3) provides a very nice compromise. As with IE, the hard refresh returns all the resources. For the next two requests Firefox continues to behave like IE, asking the server for changes. However, after the second refresh FF seems to accept that the resources are not changing and stops making server roundtrips until the item hits its expiry date. To be absolutely accurate, FF did seem to occasionally check for changes, but very, very infrequently. If the server drops, FF continues to serve up the resource from its cache. This is good news for my customer, since the server roundtrips are almost cut to zero.
Safari (3.1.1) works pretty much like IE, except that when the server is down no page is rendered…boo.
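For reference, the "has it changed" roundtrip described above is just HTTP's conditional GET. A minimal sketch of the server-side decision (illustrative only, not any particular framework's API; timestamps are milliseconds):

```javascript
// Decide between a full 200 response and a 304 for a conditional GET.
// The browser sends If-Modified-Since; if the resource hasn't changed
// since then, the server answers 304 and the browser reuses its cache.
function respondToConditionalGet(lastModified, ifModifiedSince) {
  if (ifModifiedSince !== undefined && lastModified <= ifModifiedSince) {
    return { status: 304 }; // Not Modified: no body, cache is reused
  }
  return { status: 200 };   // changed (or no validator sent): full body
}
```

The cost my customer cared about is that even a 304 still costs a roundtrip; only an unexpired Expires/Cache-Control lifetime, honoured the way Firefox honours it, avoids the request entirely.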
To give you some idea of the difference in speeds because of this caching/non-caching behaviour I ran my tests against the BBC’s main news page, news.bbc.co.uk. The total time to return all the resources for a hard refresh for IE and FF was ~3.5s consisting of some 87 requests. The next two refreshes remained at 87 requests but were mainly the "has it changed" request resulting in better performance ~2.5s. However, the next refresh saw FF shine. IE remained at 87 requests and ~2.5s whereas FF, relying on its cache, only made 17 requests taking a mere 700ms. Ok ok so this doesn’t take into account the rendering speed, but rendering speed does not improve your download speeds, caching can fake that improvement. So at the moment I have FF ahead of IE. I’ve yet to test Opera and I have to say that Safari does render very quickly but sort out your caching Web Kit!
NB. If this is very important to your business please run the tests yourself; I found the following to be useful tools: Firebug's Net panel, Fiddler2 and Wireshark.