@Media 2008 (atmedia2008) Day 2

Professional front-end engineering – Nate Koechley

Although large parts of this session are essentially the same as Nate has presented before, it was still good. It’s no surprise that the content is similar since it comes from Yahoo’s best practices, which are unlikely to change that quickly. A few points I thought I’d highlight:

- Progressive Enhancement rather than Graceful Degradation
- JavaScript verification (JSLint) and front-end unit testing (YUI Test)
- JSDoc for documentation
- Firebug to test form/POST attacks
- Save CSS/JS files as UTF-8
- Pre-load new assets before a product launch to avoid complaints about slow start-up while the new assets are not yet cached
- Try to keep to fewer than 256 colours so you can use PNG8
- The iPhone caches only ~20 items
- Avoid using @import
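The pre-loading tip is easy to sketch in JavaScript. This is my own illustration rather than Yahoo code, and the file names are invented: after the current page has loaded, quietly fetch the next release’s images so they are already in the browser cache on launch day.

```javascript
// Hypothetical sketch of pre-loading a new release's assets so they're
// cached before launch day. File names are invented for illustration.
function preloadImages(urls) {
  var images = [];
  for (var i = 0; i < urls.length; i++) {
    var img = new Image(); // fetches the file without displaying it
    img.src = urls[i];
    images.push(img);      // keep references so they aren't collected early
  }
  return images;
}

// In the page, wait until the current page has finished before fetching:
// window.onload = function () {
//   setTimeout(function () {
//     preloadImages(["/v2/sprite.png", "/v2/logo.png"]);
//   }, 2000); // small delay so we don't compete with the current page
// };
```

The delay matters: kicking the downloads off immediately would compete with the page the user is actually looking at.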

I had a quick chat with Nate and discussed .NET server controls vs. JavaScript libraries, and using the server to overcome some of the multi-file issues. Nate told me about the YUI.net project that is working on combining the two, so I’ll have to take a look at that. The discussion of using post-processing, or streaming multiple files as a single file stream, was interesting. I was concerned that, given Yahoo’s extensive testing of seemingly every permutation of downloading CSS/JS, there was some problem with the server-side approach. Nate assured me that there wasn’t any specific issue and that Yahoo teams were free to choose. Of course he explained again that the use of Yahoo’s compression techniques would help alleviate these problems.

I enjoyed this presentation although it was a little rushed and I had seen a fair bit of this before.

Rating – 0 doodles – 8/10

Building on the shoulders of giants – Jonathan Snook

The basic premise was to use the libraries out there, since they’ve been through the rigour of testing, browser problems, etc. I liked Jonathan, but I thought the point was a bit too laboured and one hour was too much. I did like the timeline demo, and the idea of a small amount of code creating a good site is appealing, but it’s a fairly easy point to make.

Rating – 10 doodles – 6/10.

PS. I really liked Jonathan’s responses in the later panel discussion. They were mature and well-rounded and didn’t pander like the frankly crowd-pleasing answers from a couple of the other panellists! He is clearly a smart man and I’ll certainly be adding him to my RSS reader.

The why & which of JavaScript libraries – John Resig

John, the originator of the jQuery library, gave a great talk about the alternative libraries. I must confess that I’m almost 100% professionally in the Microsoft world, but I do have a little experience of playing with one of those libraries, Yahoo’s YUI, so I did find the various pros and cons of the libraries interesting. I also liked John’s honest look at why libraries are good, and also why they might be bad. The libraries John concentrated on were:

Rating – 0 doodles, 8/10

WAI-ARIA – It’s easy – Steve Faulkner

I admit I thought this was going to be about a tool to check your site, but I’ve obviously been out of the accessibility loop for too long. Steve explained that WAI-ARIA is a set of attributes that help describe a page’s widgets to (usually) a screen reader. Of particular interest were the idea of roles and the pre-canned/pre-localised explanations of how to use controls. It looks good, but the one area I found difficult to understand was compliance. If you add new attributes but don’t change the underlying schema rules, then validation of the page will break. The reasoning was that you’d use JavaScript to inject the ARIA attributes. As someone pointed out, that relies on JavaScript, but what worried me is that it still produces invalid mark-up. Changing the DOM representation of the mark-up to be essentially invalid doesn’t sit well with me. It would surprise me if a compliant browser were to simply refuse non-standard attributes, and personally I’d hope it wouldn’t. Avoiding changing the underlying schemas is a convenient hack; I think it would have been better to create spin-off schemas, or to use the modular nature of schemas to add a new set.
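To make the injection approach concrete, here is a rough sketch of my own (not from Steve’s slides) of decorating a custom slider widget with ARIA attributes at runtime: the mark-up stays free of the new attributes, and script adds them for assistive technology.

```javascript
// Sketch: inject ARIA attributes into a custom slider widget at runtime.
// The role and aria-* names come from the WAI-ARIA drafts; the function
// and its wiring are hypothetical illustration, not a library API.
function applyAria(slider, min, max, current) {
  slider.setAttribute("role", "slider");
  slider.setAttribute("aria-valuemin", String(min));
  slider.setAttribute("aria-valuemax", String(max));
  slider.setAttribute("aria-valuenow", String(current));
  slider.setAttribute("tabindex", "0"); // make the widget keyboard reachable
}

// In a page you might run, assuming a <div id="volume"> styled as a slider:
// applyAria(document.getElementById("volume"), 0, 100, 50);
```

The validator never sees the attributes, which is exactly the hack discussed above: the served mark-up validates, the live DOM doesn’t.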

Back to the presentation: Steve did have more than his fair share of presentation gremlins, which did derail the flow.

Rating – 0 doodles – 7/10

Global Design: Characters, Language, and more – Richard Ishida

I really enjoyed Richard’s presentation last year, and this one was going well, for me, until I think Richard felt he was losing the audience. In this presentation Richard delved into the technical reasons why you would choose Unicode and encode with UTF-8. He gave some useful tips, such as turning off the dreaded BOM wherever possible, not bothering with the language meta-tag, using both language attributes, and not using the XML declaration for IE6 because it pushes the browser into quirks mode (one I did know). As a developer I was enjoying the technical stuff, but Richard seemed to realise that this wasn’t what all the audience wanted, so he quickly took the level back up again and then rushed through the remaining slides.

Rating – 0 doodles – 7/10

The core of the talk can be found in the following W3C Unicode tutorial

PS. I was a little disappointed that Richard didn’t respond to one of the panel questions (he was in the audience) about why data is important to accessibility. The panel struggled, and I kept thinking, "we just heard Richard tell us for an hour about the importance of encoding to localising/globalising a site" – a form of accessibility.

@Media 2008 (atMedia2008)

I’m currently attending @Media 2008 (atMedia2008), so I thought I’d add a very quick blog about the first day; I’ll expand on the topics later…

Designing our way through Data – Jeffrey Veen

Jeffrey talked about his work at Google and what inspired some of it. A very interesting talk about how to visualise data, although I’ve seen big chunks of it before. I won’t talk about the content since you can view it in the above link. A great start, and for a keynote it fulfilled the task of being inspirational.

Rating – 0 doodles – 9/10 (point deducted for reproducing some parts)

Mental Models, sparking creative thinking through empathy – Indi Young

A good technique for discovering what the end user really wants (in this case from a site). I thought this was especially useful for marketing/planning teams looking to create a roadmap. A lot of the ideas will be familiar to anyone who captures user stories, but it looks like a good tool to aid that process. What I liked about the process was the focus on the end user and the ability to create a graphical representation of both the user ‘wants’ and the mechanisms that either support them or are tabled to support them.

As with user stories, the idea is to capture the user’s needs at an (initially) high level. So rather than, "I want to be able to drop down a set of schools in the area", you’d expect to capture, "when I’m looking at houses in an area I am concerned about the quality of schools". It prevents people leaping ahead and designing the page/application before really understanding the requirements; anything that promotes that thinking can’t be wrong! Indi talked about such a house search, which was a great example, and also about the ‘in the corridor’ technique, where you try to imagine what a person would be thinking as they walk down a corridor. E.g. they’re more likely to be thinking, "I need to get that report out by 2pm" rather than, "I need a web page that allows me to select the xyz report…". However, a number of the other examples basically made the same point and were only interesting to people in that domain; I think they could have been left out, as they took some of the momentum away from the presentation.

I enjoyed the talk, but I do think it would be better to cover the ‘why’ before the ‘what’, as it was very difficult to appreciate the goals until you knew the why. I’ll certainly look out for her book.

Rating – 1 doodle – 7/10

Getting your hands dirty with HTML5 – James Graham & Lachlan Hunt

The 5 minutes spent talking about HTML5 examples were interesting, so why bother with the other 55? James and Lachlan are obviously clever people and are capable presenters, although I found the style a little… academic. I’ve given this a poor rating because it dwelt on the why rather than the now. Although I think it’s right to give a little history, the majority of the presentation was this rather than the details of HTML5, which given the title is disappointing. On a personal note, I really hate mailing lists, as I find them antiquated and dripping in ivory-tower academia, so it did grate on me that they championed this method; that just sets a bit of the scene, really.

Rating – 10 doodles – 3 nodding offs = 3/10

[Edit] Oh, and it looks like they’re just as snobby as me ;)…

  1. # [15:11] <annevk> http://pdkm.spaces.live.com/blog/cns!D1DDEC9FF002FB8C!872.entry didn’t like the talk at least… complaining about mailing lists and academia
  2. # [15:11] <annevk> but spaces.live.com prolly says enough
  3. # [15:12] <hsivonen> Philip`: occasionally the audience pointedly disagrees with the presenter
  4. # [15:12] <jgraham_> Yeah, I think we spent too long on design principles
  5. # [15:12] <hsivonen> Philip`: happened at XTech with Steven Pemberton’s talk
  6. # [15:13] <Philip`> hsivonen: Disagreeing with presenter doesn’t make it a bad presentation – it’s good if things are thought-provoking and get people interested and involved 🙂
  7. # [15:13] <jgraham_> So even being on spaces.live.com it isn’t an entirely unfair criticism 🙂
  8. # [15:13] <hsivonen> a Flickr guy made notes in the audience during the presentation and then flushed his counter-arguments at the end
  9. # [15:14] <jgraham_> (I should note for posterity that it is in no way Lachy’s fault that we sepnt too long on that section)

Underpants over my trousers – Andy Clarke

A fun and interesting talk about getting inspiration from comic layouts – a different way to describe how to move the reader’s eye around the page. Andy also talked about how comics use the size (or even lack) of a frame to indicate the amount of time the reader should spend reading that section, and about creating several templates for the "same" dynamic page to accommodate data of different sizes – a nice idea. A typically entertaining talk from Andy, and I find myself very envious of his position. Andy has a reputation for producing designs that work for the target rather than sweating over all the backward-compatibility issues (e.g. the lovely use of transparent PNGs… I wonder what that looks like in IE6). I find myself sitting on the fence about that, but I can’t deny that I like his designs and like the idea of being able to choose customers based on what I want to do… must be nice. You also have to love his anti-Americanisms <wink> – at least we have that in common!

Rating – 0 doodles – 9/10

Designing User Interfaces: Details make the difference – Dan Rubin

I would sum this talk up with "the devil is in the detail". Dan showed the level of scrutiny needed to create good-looking sites. Some ideas I found particularly interesting were: getting dynamic data to avoid widows by adding a non-breaking space, using -1 letter-spacing on big headers, and, for me as a non-designer, the proportional spacing rules… thank you Dan.
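The widow trick is simple to sketch. This little helper (my own naming, not Dan’s code) joins the last two words of a dynamic string with a non-breaking space before it is written into the page, so the final word never wraps onto a line of its own.

```javascript
// Sketch of the widow-avoidance trick: replace the final space in a
// dynamic string with a non-breaking space (U+00A0) so the last word
// always wraps together with the word before it.
function preventWidow(text) {
  var lastSpace = text.lastIndexOf(" ");
  if (lastSpace === -1) return text; // single word, nothing to do
  return text.slice(0, lastSpace) + "\u00A0" + text.slice(lastSpace + 1);
}
```

You would run dynamic headlines or pulled-in titles through this just before injecting them into the page.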

Rating – 0 doodles – 8/10

More to follow….

Silverlight Deep Zoom

I’ve just had my first try of Deep Zoom, the collection of tools and user controls that allow the user to zoom into what looks like a single image. The idea is that as a developer (or publisher) you can collate a number of images together using the Deep Zoom Composer to form one Gestalt-style super image. So what can you use it for? Well, IMO there are two main uses:
1. A scrollable collage of pictures, where you publish just the one Deep Zoom picture and the user scrolls around, zooming in to each picture, with each picture retaining its original resolution. BBC Radio 1 and the Hard Rock Cafe both have examples of this. Is it useful? Hmm, well, it’s a way of showing pictures, and I’m sure there are benefits in reducing the bandwidth spent on high-res pictures that the user never looks at, but for me it’s not very exciting.
2. Continuous zooming. This is the one that interested me. The ability to zoom in on an area and keep zooming to the tiniest detail sounds very useful. Not unlike the way Google/Live maps work… or, for me, like Blade Runner!
 
To try Deep Zoom out I first downloaded the relevant tools from www.Silverlight.net and used the Deep Zoom Composer to create my image. I wanted the continuous-zoom idea, and to do that you need to take a series of pictures of something, each time zooming in on a specific area. Then in the Composer you resize and position them on top of each other, carefully lining them up. This is a difficult process on two counts: first, taking the pictures is tricky – you need good lighting and very careful positioning of the camera; second, the Composer doesn’t (seem to) allow you to set an opacity on the picture you’re trying to place, which makes it difficult to line the two pictures up correctly.

Once you’re satisfied with the picture you export it into a Silverlight project. In my case this project didn’t work – I just got the dreaded catastrophic exception, or, to those who develop Silverlight, the standard XAML exception. I followed the instructions in http://www.codeproject.com/KB/silverlight/DeepZoom.aspx, created a separate Silverlight project and copied the assets over. That worked.

So what is the result? Well, yes, it works: the user can zoom into the picture’s detail pretty easily. But is it any use outside a bit of fun? The problems of creating/composing the image make it difficult to recommend for everyday use. For one-off imagery it is powerful – launching a new car maybe, perhaps medical applications – but these are quite specialist. Coupled with an excellent image detection/mapping/stitching library, though, it could become a powerful tool for applications that display images.
 
 

The trouble with RTS’

I do like to play Real Time Strategy (RTS) games, but I’m getting a little bored with the AI in them. One of my biggest bugbears is the use of mission triggers. For example, in Company of Heroes you must take a town hall from the Nazis. As soon as you take the hall, the mission changes to holding the town hall: you then have a few minutes to set up a defence before being attacked by a large number of enemy tanks. So the trick is not to take the town hall. Destroy everything, but don’t take it. You are then free to cover every approach with a carpet of land-mines, snipers, anti-tank guns, etc. Then, when you’re sure there isn’t enough room for a gnat to get to the town hall, you take it. The enemy now has no chance of getting near the town hall and you win easily.

The same strategy works on numerous missions, and on pretty much every other RTS. An example from Act of War is where you need to destroy a training camp, at which point all hell breaks loose and you’re attacked from the ground and the air, with missile strikes – the works. So you use the same tactic: don’t destroy the last building in the camp, but go after all the airfields, heli-pads, etc. The AI just sits there waiting for the trigger of losing the camp before it attacks. Again, once you’ve covered the area with turrets, wiped out every air threat and have snipers behind every tree, you destroy the last building. I know it’s not entering into the spirit of the games, but the AI is so stuck in its mission-trigger state that it never reacts to the threat you pose. Come on, game developers – please try to build reactive AI into your games.
 

Another reason for a different query plan

While investigating a poorly performing query, a colleague realised that although it ran quickly in SQL Workbench it was causing timeouts from .NET clients. He used a nice trick of running the same query using OSQL (the ADO client) and simply waiting until the query ran. Why does this work? Well, I believe the problem is that the query is too complicated for the plan to be created before the command executing the query times out. But why doesn’t bootstrapping the plan in SQL Workbench help? I’ve struggled to answer this before, but I may (may) have found the answer in SET options. Query plans are cached per group of SET options, so there is a fair chance that the default ADO SET options differ from those used by SQL Workbench. Although that in itself is interesting, it still doesn’t explain why one set of options produces plans faster than another. It sounds a little odd to me, but it’s worth investigating further.

The Bozo bit

If there is one anti-pattern that I really need to keep reading about, it is this one: http://en.wikipedia.org/wiki/Bozo_bit
I’m posting it here in an effort to remind myself not to do it.
 

Profiling Web Projects in VS2008

I had a number of problems using VS2008’s profiler today, so I thought I’d share my findings to lessen the suffering of others!

I’m working on a solution with a large number of projects all starting from a web project. All of the projects are strongly named. I wanted to use ‘instrumentation’ in order to get the most accurate statistics I could gather.

Problem 1.
When I first tried to launch the profiler it complained that my project was signed. The problem is that instrumentation requires the profiler to inject code into my DLL, thereby changing the DLL and thus breaking the code’s signature. Happily, it tells you that you can use an instrumentation step to re-sign the DLL.

Problem 2.
OK, luckily I’d been here before (and blogged how), so I added the re-signing step and tried again. It launched (using IIS rather than Cassini), I navigated around the site and finished the session. However, the report was gibberish, containing hex numbers and GUIDs where function and DLL names should be – not good. It looked like a problem with the PDBs. After setting a number of PDB locations and ensuring the PDBs were serialized correctly, I finally decided that the injection of code was upsetting the PDBs, so I switched off signing for every project.

Problem 3.
Still gibberish. Odd. I went to the ASP.NET temporary files and deleted the lot. Re-ran it and, hurrah, something that looks like a profile.

Hard Disk noise

I’ve spent a fair bit of time trying to stop my PC sounding noisy. I could feel that the hard disks were vibrating, and that was probably the cause. The disks had been pretty close to silent (through careful buying), but for some reason they were now noisy. So I bought the Nexus DiskTwin hard disk cooler and vibration damper. It’s pretty standard fare: you install your 3.5" disk into a 5.25" bay, with a large chunk of rubber and a metal heat sink to make up the difference.

But which of my 3 disks should get the 5.25" treatment? I took the power out of all the disks and put them back, trying out all the permutations. The odd thing was that when 2 of the 3 disks were installed it was pretty quiet; with all 3, the vibration came back. So I can only assume that the combination of disks is just enough to catch some sort of reinforced feedback and cause the vibration. I decided to keep the two Seagates together, since they use the same hardware and hopefully create the same frequency of noise, and to isolate the Samsung.

Installing the Nexus was easy enough, but it wasn’t without an annoying problem. My case uses runners to easily fit disks, but the screw positions are not flush to the end of the drive like you’d expect from a 5.25" drive. Consequently the disk won’t fit into the rail guides, and I was forced to remove 2 drive bays to make room for the one disk. Was it worth it? Well, the horrible vibration has died down a bit, and it’s close to silent when one of the drives goes to sleep, so overall it was worth it to reduce the noise even if it hasn’t cured it. At least I’ve got a good idea what I need to do now… get rid of one of the drives!
 
 

Set Focus plea to web developers

This is a plea to you web developers out there, and that includes those responsible for this site too! If you write a page – typically I’m thinking of a logon page – and you want to set the focus to a particular text box, please write it correctly. It is very annoying that, as the page renders, the text box is available, so I start to enter my password, only for the document load event to eventually fire and the code to either reset the password text and/or place the insertion point at the start of the text box. This is very annoying: check what’s happening to the text box before assuming it is empty!
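For what it’s worth, here is a minimal sketch of the polite version (the element id and function name are my own invention): separate the guard from the DOM wiring so the rule is explicit – only move focus if the box is still empty.

```javascript
// Guard for polite focus-setting: only steal focus when the user
// hasn't already started typing into the box.
function shouldFocus(currentValue) {
  return currentValue === "";
}

// Browser wiring (hypothetical id; runs on the load event):
// window.onload = function () {
//   var box = document.getElementById("password");
//   if (shouldFocus(box.value)) {
//     box.focus(); // safe: the box is untouched
//   }
//   // otherwise leave the user's half-typed password alone
// };
```

The point is simply that the check happens at load time, against what the user has already typed, instead of blindly calling focus() and resetting the box.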

The wacky world of parallel SQL

I was asked to look at a SQL Server problem where a single user running a single batch query was getting a deadlock, which on the face of it is a pretty neat trick. The error stated: ‘was deadlocked on lock | communication buffer resources with another process and has been chosen as the deadlock victim’. What piqued my interest was the phrase ‘communication buffer’. The query was pretty complex, with a large number of joins and some casts on the join conditions, but I was reassured that the code had worked fine on the development servers, just not on the pre-live server.

So what is a communication buffer in this context? I made a leap and assumed it was where the plan has split the work into its various stages, and the communication is where the data is processed and combined as the plan is executed. But still, why would that cause a deadlock… ah well, the title of the blog probably gives it away ;). The pre-live server is a pretty big beast with more cores than the average orchard, and with only one user (no stress) it was likely that SQL was going to attempt to use a number of them. So, in the great tradition of using a hammer to crack a nut, we switched the max degree of parallelism on the server from 0 (i.e. let SQL decide) to 1, i.e. don’t parallelise. The query ran fine. I probably should report this as a bug with SQL 2005, but if you do run into this problem then I’d suggest you use the maxdop hint in your query… or turn it off at the server; after all, what other nasties could this feature cause?