Mocking services in Unit Tests

I recently posted the following: Unit Test Mocking…


I’m currently looking at several mocking tools to try and break various component dependencies when unit testing. So for example I want to unit test MyObject.SayHello(arg1), but that uses MyDataService.GetData(Request). When I unit test I don’t actually want to invoke MyDataService but simply assume a return value. That way I can unit test only MyObject and not worry about any incorrect behaviour (or setup problems) from MyDataService.

Looking at the options, the majority of the mocking tools work in the following fashion:

MyUnitTest()

MockObject("MyDataService.GetData", "MyInputValue", "AssumedReturnValue");

MyObject.SayHello("Hello");

…..

What I really dislike about this approach is that to write the unit test I need to know a lot of information about the services MyObject is going to call. I think this is wrong for a couple of reasons:
  1. I wouldn’t assume the author of the unit tests knows anything about the services that will be called, especially when using Extreme Programming and/or when someone other than the component’s author is writing the tests.
  2. If the service changes you have a maintenance headache of revisiting all the unit tests to change their expected values.

I’ve got some ideas about alternatives:

a) having a centralised map of in’s and out’s (maybe with special codes to invoke the real components)

<MyObject><UnitTestTypeA><MyDataService/>..

<MyDataService><UnitTestTypeA><in>bla</in><out>blabla</out>…

b) decorating the methods with example values

[UnitTestTypeA; In=bla;out=blabla]

MyDataService.GetData

c) the service components mock their own objects and the clients asks for all the mocks, so the code looks something like

MyUnitTest()

MyObject.GetAllMocks("UnitTestTypeA")

MyObject.SayHello("Hello");

i.e. it would require reflection or interfaces and extra code in the components solely for mocking

d) some combination of the above


I had an interesting reply from Joe Rohde, who seems to have some inside knowledge at Microsoft, that they are looking into producing something similar for a future release of Visual Studio. That’s given me the push to look at implementing (c), probably using something like reflection rather than interfaces; that way it should be possible to maintain some form of map association with the actual component rather than implement code inside the component. The first hurdle I can see is how to mock objects that are not simple types and do not support serialization.
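As a rough illustration of option (c), here’s a minimal sketch in Python (the real thing would of course be in .NET, and the names and the MOCK_MAP structure are purely hypothetical): each service publishes its own canned mocks keyed by a unit-test type, so the test author never needs to know the service’s inputs and outputs.

```python
from unittest.mock import MagicMock

class MyDataService:
    # The service author maintains the example in/out map alongside the code,
    # so tests don't hard-code the service's expected values.
    MOCK_MAP = {"UnitTestTypeA": {"MyInputValue": "AssumedReturnValue"}}

    @classmethod
    def get_mock(cls, test_type):
        # Build a mock whose GetData answers come from the service's own map.
        mock = MagicMock(spec=["GetData"])
        mock.GetData.side_effect = lambda request: cls.MOCK_MAP[test_type][request]
        return mock

class MyObject:
    def __init__(self, data_service):
        self.data_service = data_service

    def SayHello(self, arg1):
        return f"{arg1}: {self.data_service.GetData('MyInputValue')}"

# The test only names the test type; the service supplies its own mock.
obj = MyObject(MyDataService.get_mock("UnitTestTypeA"))
print(obj.SayHello("Hello"))
```

The point of the sketch is that changing MyDataService’s behaviour only means updating MOCK_MAP next to the service, not every unit test.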

Google Site Map (beta)

I’ve recently been playing with Google Site Maps; it’s a fairly simple mechanism to "help" Google understand your site. At its core is an XML file that allows you to list all the pages you wish to have indexed by Google, although it still doesn’t guarantee that they will be indexed. It also allows you to specify how frequently each page will change, and you can provide a page priority, presumably to help order your pages in a search result.
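For the curious, a minimal map looks something like the following (the URL and date are placeholders, and the beta-era schema namespace shown is my understanding of the current version):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.google.com/schemas/sitemap/0.84">
  <url>
    <loc>http://www.example.com/</loc>
    <lastmod>2005-12-01</lastmod>
    <changefreq>weekly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```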
 
One of the first things I wanted to do was produce a map for a friend’s Web Site Design site. But rather than produce it by hand, I had a look at the utilities/code out there to produce the map automatically. The majority of the utilities need to be run on the site hosting the web pages. That certainly has many advantages, however it also has a few drawbacks:
  1. An auto-generated site map will show pages I don’t want to be indexed
  2. I only have FTP access to the site and I don’t have any execute-style rights on the site
  3. I tend to work on a local site and then publish all the changes to the live site in one go; this should include the site map
  4. I’ve got "sub sites" that are not linked from my default page that I wish to include in the index – not easy for web crawlers to detect.

With these problems in mind I’ve set about writing a little Windows utility that you can point at your local site to generate an editable site map (see the GoogleSiteMap picture). I’ve got a little bit of tidying up to do, but if anyone wants a copy then please post and I’ll see if I can post the installer somewhere. It seems to work well and includes a "validate" button, so if you do make some changes to the XML you can check that you’ve not broken the schema.
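The core of such a utility can be sketched in a few lines. This Python version is purely illustrative (the real utility is a Windows program; the folder layout, base URL and schema namespace here are assumptions): walk the local copy of the site and emit a map entry per page.

```python
import os
import xml.etree.ElementTree as ET

def build_sitemap(local_root, base_url, extensions=(".html", ".htm")):
    """Walk a local copy of a site and return a sitemap XML string."""
    ns = "http://www.google.com/schemas/sitemap/0.84"  # beta-era namespace
    urlset = ET.Element("urlset", xmlns=ns)
    for folder, _dirs, files in os.walk(local_root):
        for name in sorted(files):
            if not name.lower().endswith(extensions):
                continue  # skip images, CSS, etc.
            rel = os.path.relpath(os.path.join(folder, name), local_root)
            url = ET.SubElement(urlset, "url")
            ET.SubElement(url, "loc").text = base_url + "/" + rel.replace(os.sep, "/")
            ET.SubElement(url, "changefreq").text = "monthly"
    return ET.tostring(urlset, encoding="unicode")
```

Crucially this runs against the local copy, so "sub sites" not linked from the default page are picked up too, and the output can be hand-edited before publishing.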

 

Lightweight Transactions and Connection Pooling

System.Transactions, together with an appropriate data provider such as the one for SQL Server 2005, provides a mechanism known as the Lightweight Transaction Manager (LTM). The basic principle is simple: if you open a transaction and only talk to SQL Server 2005 then it won’t bother enlisting the full-blown Distributed Transaction Coordinator (DTC), and therefore you won’t incur all the nasty overheads that entails. Good news. However, there is a problem. Consider the following pseudo code…
Using(System.Transactions.TransactionScope)
{
    Func 1()
    Func 2()
}
 
Func 1()
    Connection.Open("MyDb")
    DoTransactionWork
    Connection.Close()
 
Func 2()
    Connection.Open("MyDb")
    DoMoreTransactionWork
    Connection.Close()
What happens here is that the LTM gets involved in Func 1 and happily does the transactional work without involving the DTC. However, when we run Func 2 it will promote the transaction to the DTC! This is frustrating because for years we’ve been told the benefits of connection pooling and that we should use a connection and then throw it away ASAP because connection pooling will save us. However, in the case of the LTM it works against us: even though we’ve used exactly the same connection details, the second call to open the connection will promote the transaction to the DTC. What I’d want to see is more collaboration between the LTM and connection pooling. If you ask for a connection that is exactly the same as one already enlisted with the LTM then do NOT promote the transaction to the DTC. I assume the problem is really with SQL Server, or at least the provider, but I can’t believe it would be a difficult problem to solve.
 
[Edit] One MSDN Forum entry, from someone purporting to be on the Microsoft test team, stated that this was down to be fixed…so who knows, perhaps this problem won’t be around much longer.
[Edit] I’ve raised a bug report, please vote for it and hopefully it will get fixed sooner rather than later.
 

Problems moving to System.Transactions

I’ve hit a couple of surprising issues when porting code from Enterprise Services (COM+) transactions to System.Transactions.

  1. The loss of the transactional component “Supports”.
  2. Nested transactions must have exactly the same isolation level.

 

“Supports” says to the transaction coordinator: if there is a transaction then enlist in it, otherwise I don’t want to run in a transaction at all. It looks like Microsoft have considered this obsolete, since if you don’t want to run in a transaction then don’t ask for one. The problem is that I want to write a function that receives the transaction option as an argument, but the client now has no way of saying “Supports”. The alternatives are “Required” or “Suppress”. If they say “Required” and no transaction is running then they’ll create a new transaction when they didn’t want one. Conversely, if they pass in “Suppress” and one is running it will opt out of the transaction, again not the correct behaviour. The workaround is to write an If…condition; OK, not a big problem, but “Supports” was a far more elegant solution, so why remove it?
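To make the workaround concrete, here is the If…condition as a small Python sketch (the option names mirror System.Transactions’ TransactionScopeOption values, but the function itself is purely illustrative, not any real API):

```python
def effective_option(requested, ambient_transaction_running):
    """Emulate the old COM+ "Supports" on top of only Required/Suppress.

    "Supports" means: join a transaction if one is running,
    otherwise run without one.
    """
    if requested == "Supports":
        return "Required" if ambient_transaction_running else "Suppress"
    # Required, RequiresNew, Suppress etc. pass straight through.
    return requested
```

With this mapping, "Required" never creates an unwanted transaction and "Suppress" never opts out of a wanted one, which is exactly what "Supports" used to guarantee.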

 

The second point is far more annoying; I’m almost tempted to say it’s a bug. One of the fundamental ideas of transactions, since “Transaction Server” in the late 90’s, was that your code could ask for a transaction blissfully unaware of whether it should be enlisted in a currently running transaction or not. This *was* a good model since it allowed you to easily use/reuse components safe in the knowledge that the transaction coordinator would take the strain. The only caveat was that you could not ask for a transaction at a higher isolation level than the one currently running; annoying but sensible. So what have Microsoft done now? Well, when you ask for a transaction it must have exactly the same isolation level as the one running. Complete madness. I’ve no idea why they’ve done this; I’m sure there is a good reason and I’m awaiting a reply. In the meantime I’ve written a wrapper for System.Transactions that examines the isolation level of System.Transactions.Transaction.Current and automatically alters the isolation level passed into the wrapper to match the current value. But again, why am I having to write these workarounds?
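The wrapper idea can be sketched like this (illustrative Python, not the actual System.Transactions API; the AmbientTransaction class merely stands in for Transaction.Current):

```python
class AmbientTransaction:
    current = None  # stands in for System.Transactions.Transaction.Current

    def __init__(self, isolation_level):
        self.isolation_level = isolation_level

def begin_scope(requested_level):
    """Start a scope, silently matching any ambient isolation level.

    Nested scopes must use exactly the same isolation level as the
    ambient transaction, so override the caller's request to match.
    """
    ambient = AmbientTransaction.current
    if ambient is not None and ambient.isolation_level != requested_level:
        requested_level = ambient.isolation_level
    scope = AmbientTransaction(requested_level)
    if ambient is None:
        AmbientTransaction.current = scope  # become the ambient transaction
    return scope
```

The caller keeps the old "ask and forget" model: an inner component can request ReadCommitted inside a Serializable transaction and the wrapper quietly enlists it at Serializable instead of throwing.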

Hosting Domain Specific Languages

I’ve just discovered today that Microsoft are only going to allow languages created with the DSL Tools to be hosted within Visual Studio. I’m very disappointed by this news. The general push behind DSLs is to encourage Software Factories and generally improve the practices used to create applications. The big trick I believe they’ve missed is that there are many applications that allow the user to customise their experience. Why should we ignore our application users and let them live outside of good software development practices? Even if you only consider such users to be casual developers, getting people involved in good practices at any level can only be a good thing. There are many professional developers who started life customising existing applications, and a "get them while they’re young" approach would seem a good idea. Don’t get me wrong, the Microsoft Tools division has as much right to make money as anyone at Microsoft, but I think forcing DSLs to be created only in Visual Studio while allowing them to be deployed and used freely would be a much better strategy. Even the Windows Workflow Foundation (or Workflow Foundation as they prefer) team have produced, in essence, a DSL for workflows which they allow to be hosted without Visual Studio, so why not custom DSLs? Very disappointed.

Navigation shortcuts

Quick tips for navigating around code in Visual Studio 2005.
 
When you select "Go To Definition" you can navigate back to the line in the calling function using one of the following…
1. CTRL + - … move to my last position
2. CTRL + * … move to the last calling function
 
The difference between the two is that (1) will move back through every step, so if you page down twice it will page up twice for you, whereas (2) will simply take you straight back to the previous function.

Inconsistent behaviour of FOR XML AUTO?

I’ve recently had to fix what seemed like a very strange problem with SQL Server, where very occasionally the system would stop working but would eventually fix itself. The system in question runs under a single identity; all access to the database goes via this "gateway" identity. Under particular circumstances the users of the system were getting incorrect results. The underlying stored procedure producing the incorrect results uses FOR XML AUTO to send back an XML result set. However, when I profiled the SQL and re-ran the same query in Management Studio it produced the correct results. I believe the problem was based upon the following facts:
  1. When the system runs it runs under identity A, therefore any stored procedure executed will have a specific plan created for identity A – "plan A". When I run the query as myself I get – "plan Me"
  2. The query uses FOR XML AUTO but does not have an ORDER BY
  3. FOR XML AUTO relies on the order to create the parent/child relationships
  4. Plans change depending upon the exact circumstances of the specific call, i.e. statistics, calling arguments, user id, etc

So what happened? Well, I believe the problem was that "plan A" was constructing its query in a way that resulted in a different ordering of the results; I’ve certainly seen this happen in other queries (especially when attempting to CAST data). "Plan Me" was then using the natural (and more typical) order in which the data was added. This would explain why the XML results were different. It would also go a long way to explaining why, after a restart of SQL Server or a DBCC FREEPROCCACHE, the system would start working again: the plans would have to be re-created and I’d see the correct results, since normally "plan A" would be exactly the same as "plan Me" and therefore the system would work.

 

So the lessons learnt:
  1. Always supply an ORDER BY when using FOR XML AUTO
  2. When you’re testing for inconsistent results, use the same user id

Southwest Trains

I live about a 1 hour train ride from London. This train "service" is provided by South West Trains, who charge me £25 for a day return. My penultimate journey took nearly 2.5 hours (probably about as fast as a decent bike ride) each way because of engineering works, and they compensated me with…nothing. My last journey actually started off well; I got to London in the advertised 1 hour, so it can be done. However, the journey home took 2 hours and I had to sit on the floor. Better still, after several "electrical faults" whilst stopped at the station (incurring a 30 min’ delay) we eventually got moving…only to be told half way through the journey that because of the delay we would no longer be stopping at my station! So I was forced to either get off at the main station after my stop and get a taxi back (at my own expense) or change four stops early and hope to pick up a connection – which they said would be another 10 min’ away. So I got off and started the walk to the relevant platform (5), which is a fair walk away, but I had 10 min’ so it shouldn’t have been a problem. "The next train arriving at platform 5 is due in 3 mins", OK, time for a jog. Eventually I got home 1 hour late.
 
Now I dare say this is a familiar tale to those poor souls who use Britain’s railways on a daily basis, but I’d just like to make two very simple points:
  1. "Give up my car", they say. Not a chance given the consistent level of bad service I’ve experienced. Even if they did manage to put on a service that would get me to work from my home without taking 3 times as long as the car journey, I very much doubt it would run consistently enough for me to ever make a meeting on time.
  2. What other service on earth gets away with charging you a fee for a service they do not deliver? If someone asked me to deliver some software in a week and I agreed, took their money, then delivered it 3 weeks late, do you think they’d want some compensation? How is it that train companies get away with ripping us off? It’s a total disgrace.

 

 

Converting VBScript to .net

I’ve recently been faced with the prospect of supporting a number of VBScripts in a product written almost exclusively in C#. So rather than continue supporting VBScript, I wondered how difficult it would be to convert the VBScript to .NET.
 
So how to convert the code? The obvious answer is to either write a parser or use a commercial converter or parser generator. Well, I couldn’t find anything (at least not cheaply) that would automatically convert the code. I also rejected the idea of writing a parser since I know from experience that the parser is easy; it’s implementing all the language rules that is tricky. Now that sounds like a good reason, but in reality I knew that the VBScripts I’d be supporting are all roughly the same and would only represent a small subset of the available VBScript lexicon, so I didn’t really want to spend the time implementing a fully blown solution; what I needed was a fairly quick and simple answer. I decided that it was about time I learnt regular expressions, and it seemed to me that this would provide me with a conversion mechanism.
 
The next problem was which .NET language to choose. I spend most of my day using C#, but business developers don’t tend to like that – again, another excuse. The real reason for choosing VB.NET is that it is *very* forgiving and shares a number of functions and keywords with VBScript, so it should make the conversion easier. So the converter was going to go from VBScript to VB.NET and use RegExs to do the donkey work.
 
So, using the Regex classes in .NET, I set about producing a converter. The tasks, including the patterns used:
NB. As you’ll see, I’m a RegEx newbie, so the patterns used tended to change as I found new ways of doing things, but hey, they work.
  1. Ensure Option Explicit is removed – I don’t need it so I’m dropping it from the VBScript – (?i)\s*Option Explicit
  2. VBScript Functions and Subs – pesky devils. The problem here is that functions need an "As Object" return type, and all routines really should (in my case) have their arguments prefixed with ByRef to be compatible with the VBScript.
    Find those routines – (?i)(Function|Sub)\s*[a-z_][a-z_0-9]*\s*[(](([\w\d]*)|(\s*,\s*[\w\d]*))*[)]
    Remember the name of the routine – (?i)(?<=((Function|Sub)\s*))\w*
    Get the arguments – (\((([\w\d]*)|(,\s*[\w\d]*))*)|(,\s*(([\w\d]*)|(,\s*[\w\d]*))*)
  3. So we’ve converted the routine declaration, but VBScript doesn’t (typically) use brackets when calling a routine and VB.NET requires them, so it’s "lucky" we remembered the names of the routines in step 2.
    Find any caller to a function that isn’t already using brackets and isn’t simply the assignment of the function result – string.Format(@"(?i)(?<!(Function|Sub)\s*){0}(?!(\s*=)|(\s*\())", routineName)
  4. Adding Namespaces – this one caught me out at first, it was fairly obvious that I needed to import "system" but it took a couple of scratched heads to include "Microsoft.VisualBasic", seems obvious now!
  5. The next problem was general differences in keywords and types, etc. The most common one was replacing "now()" with "DateTime.Now". This proved an interesting problem since a number of scripts contained code such as "now()-1", so converting that to "DateTime.Now-1" didn’t cut it since .NET can’t correctly cast the integer. So (for some reason) I chose to replace those with DateSerial(0,0,x) – (?i)(?<=DateTime\.Now\s*)(-|\+)\d
    The other common problem was VBScript variables called "Return", so they needed replacing; again nothing fancy, I just guessed at a unique name. NB Return isn’t a valid exit in VBScript –
    replace (?i)\bReturn\b with ReturnVarX
  6. "Set" and "Let" – so Microsoft has finally killed these off, who knew? 😉 – (?i)(?<=\s)set\s*
  7. All done!

So there you go: as long as you’re not guaranteeing 100% VBScript conversion and, like me, have a finite (if large) number of scripts to convert, you can probably convert them all with the minimum of fuss and a few regular expressions.
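To give a flavour of the approach, here’s a cut-down sketch of a few of the substitutions redone with Python’s re module (the original used .NET’s Regex class; these patterns are simplified illustrations rather than the exact ones listed above):

```python
import re

def convert(vbscript):
    """Apply a handful of VBScript-to-VB.NET substitutions."""
    # 1. Drop Option Explicit
    code = re.sub(r"(?i)^\s*Option Explicit\s*$", "", vbscript, flags=re.M)
    # 6. Drop Set/Let, which VB.NET no longer needs
    code = re.sub(r"(?i)(?<=\s)(set|let)\s+", "", code)
    # 5. now() -> DateTime.Now
    code = re.sub(r"(?i)\bnow\(\)", "DateTime.Now", code)
    # 5b. Rename variables called Return, which is reserved in VB.NET
    code = re.sub(r"(?i)\bReturn\b", "ReturnVarX", code)
    return code
```

The real converter chains a dozen or so of these substitutions; the trick, as noted above, is ordering them so later patterns don’t re-match text produced by earlier ones.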

Testing for Safari when you don’t have OSX

 


Developing using Microsoft technologies can make it expensive to test your site on other browsers and platforms. IE, Firefox and Opera are easy enough to install and test against on Windows. However, the big bugbear is OSX’s Safari. Well, until the OSX86 project manages to make it legal to run OSX on any PC, I think it boils down to…
  1. Buy a Mac – nice if you can afford it
  2. Use a screen-shot service – a pain to use if your site has any kind of dynamic changes; let’s face it, they’re a pain to use full stop
  3. Use another KHTML-based browser

Option 3 is the one I’m currently recommending. Safari is based upon KDE’s KHTML browser engine, so why not use another browser that uses the same fundamental rendering engine, e.g. Konqueror? Well, the first problem is that if you’re running Windows there currently isn’t a version for good ole’ Windows. The answer is to turn to Linux, well, sort of. My advice is to get hold of the VMware Player with a downloaded image of your favourite flavour of Linux; mine is Ubuntu (if only for the name). Install Konqueror and off you go: Safari-like browsing without OSX. It’s not a 100% guarantee, but you’ll iron out the most obvious problems.

 

 Well, I’ve since learnt that this isn’t as useful as it first seems. I’ve now had two occasions when Konqueror has done something odd (or not worked at all) and Safari has been fine. So…accept no substitute!