@Media 2008 (atmedia2008) Day 2

Professional front-end engineering – Nate Koechley

Although large parts of this session were essentially the same as Nate has presented before, it was still good. It’s no surprise that the content is similar since it comes from Yahoo’s best practices, which are unlikely to change that quickly. A few points I thought I’d highlight:

- Progressive Enhancement rather than Graceful Degradation
- JSLint, a JavaScript verifier
- front-end unit testing with YUI Test
- JSDoc
- using Firebug to test form/POST attacks
- save CSS/JS files as UTF-8
- pre-load new assets before a product launch, so users don’t complain about slow start-up while the new assets are still uncached
- try to keep to fewer than 256 colours so you can use PNG8
- the iPhone caches only ~20 items
- avoid using @import

I had a quick chat with Nate and discussed .net server controls vs. JavaScript libraries, and using the server to overcome some of the multi-file issues. Nate told me about the YUI.net project that is working on combining the two, so I’ll have to take a look at that. The discussion of using post-processing or streaming of multiple files as a single file stream was interesting. Given Yahoo’s extensive testing of seemingly almost every permutation of downloading CSS/JS, I was concerned that there was some problem with the server-side approach. Nate assured me that there wasn’t any specific issue and that Yahoo teams were free to choose. Of course he explained again that the use of Yahoo compression techniques would help alleviate these problems.

I enjoyed this presentation although it was a little rushed and I had seen a fair bit of this before.

Rating – 0 doodles – 8/10

Building on the shoulders of giants – Jonathan Snook

The basic premise of this was to use the libraries out there, since they’ve been through the rigour of testing, browser problems, etc. I liked Jonathan, but I thought the point was a bit too laboured and one hour was too much. I did like the time-line demo, and the idea of small amounts of code creating a good site is appealing, but it’s a fairly easy point to make.

Rating – 10 doodles – 6/10.

PS. I really liked Jonathan’s responses in the later panel discussion. They were mature and well-rounded and didn’t pander to the frankly crowd-pleasing answers from a couple of the other panellists! He is clearly a smart man and I’ll certainly be adding him to my RSS reader.

The why & which of JavaScript libraries – John Resig

John, the originator of the jQuery library, gave a great talk about the alternative libraries. I must confess that I’m almost 100% professionally in the Microsoft world, but I do have a little experience of playing with one of those libraries, Yahoo’s YUI. So I did find the various pros and cons of the libraries interesting. I also liked John’s honest look at why libraries are good, and also why they might be bad. The libraries John concentrated on were…

Rating – 0 doodles, 8/10

WAI-ARIA – It’s easy – Steve Faulkner

I admit I thought this was going to be about a tool to check your site, but I’ve obviously been out of the accessibility loop for too long. Steve explained that WAI-ARIA is a set of attributes that help describe a page’s controls to (usually) a screen reader. Of particular interest were the idea of a role and the pre-canned/pre-localised explanations of how to use controls. It looks good, but the one area I found difficult to understand was compliance. If you add new attributes but don’t change the underlying schema rules, the page will no longer validate. The reasoning was that you’d use JavaScript to inject the ARIA attributes. As someone pointed out, that relies on JavaScript, but what worried me is that it still produces invalid mark-up. Changing the DOM representation of the mark-up to be essentially invalid doesn’t sit well with me. It would surprise me if a compliant browser simply refused to allow non-standard attributes, and personally I’d hope it wouldn’t. Avoiding changing the underlying schemas is a convenient hack; I think it would have been better to create spin-off schemas, or to use the modular nature of schemas to add a new set.
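The injection approach itself is only a few lines of script. A minimal sketch (my own illustration, not from Steve’s slides) of adding ARIA attributes from JavaScript so they never appear in the static, validated mark-up; the stub object below simply stands in for a real DOM element when run outside a browser:

```javascript
// Add WAI-ARIA slider semantics to an element at run-time, so the
// attributes never appear in the static (validated) mark-up.
function makeSlider(el, min, max, now) {
  el.setAttribute('role', 'slider');
  el.setAttribute('aria-valuemin', String(min));
  el.setAttribute('aria-valuemax', String(max));
  el.setAttribute('aria-valuenow', String(now));
}

// In a browser this would be e.g. document.getElementById('volume');
// here a plain object stands in for the DOM node.
const el = {
  attrs: {},
  setAttribute(name, value) { this.attrs[name] = value; },
};
makeSlider(el, 0, 100, 50);
console.log(el.attrs); // role and aria-value* attributes are now set
```

Which is exactly the worry: the served page validates, but the live DOM no longer matches the schema.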

Back to the presentation: Steve did have more than his fair share of presentation gremlins, which derailed the flow.

Rating – 0 doodles – 7/10

Global Design: Characters, Language, and more – Richard Ishida

I really enjoyed Richard’s presentation last year, and this one was going well, for me, until I think Richard felt he was losing the audience. In this presentation Richard delved into the technical reasons for choosing Unicode and encoding with UTF-8. He gave some useful tips, such as turning off the dreaded BOM wherever possible, not bothering with the language meta-tag, using both the lang and xml:lang attributes on the html element, and not using the XML declaration when serving to IE6 (because it pushes it into quirks mode – one I did know). As a developer I was enjoying the technical stuff, but Richard seemed to realise that this wasn’t what all the audience wanted, so he quickly took the level back up again and then rushed through the remaining slides.
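Pulled together, the tips look something like this for an XHTML page (a sketch of my own, not one of Richard’s slides): no XML declaration (it drops IE6 into quirks mode), the charset declared as UTF-8, no language meta-tag, and the language declared with both lang and xml:lang on the html element.

```html
<!-- No <?xml version="1.0"?> line here: IE6 would fall into quirks mode -->
<html xmlns="http://www.w3.org/1999/xhtml" lang="en" xml:lang="en">
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
  <title>Example page</title>
</head>
<body><p>Saved as UTF-8, without a BOM.</p></body>
</html>
```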

Rating – 0 doodles – 7/10

The core of the talk can be found in the following W3C Unicode tutorial

PS. I was a little disappointed that Richard didn’t respond to one of the panel questions (he was in the audience) about why data is important to accessibility. As the panel struggled I kept thinking, "we just heard Richard tell us for an hour about the importance of encoding to localising/globalising a site" – a form of accessibility.

@Media 2008 (atMedia2008)

I’m currently attending @Media 2008 (atMedia2008), so I thought I’d add a very quick blog about the first day; I’ll expand on the topics later…

Designing our way through Data – Jeffrey Veen

Jeffrey talked about his work at Google and what inspired some of it. A very interesting talk about how to visualise data, although I’ve seen big chunks of it before. I won’t talk about the content since you can view it at the link above. A great start, and for a keynote it fulfilled the task of being inspirational.

Rating – 0 doodles – 9/10 (point deducted for reproducing some parts)

Mental Models, sparking creative thinking through empathy – Indi Young

A good technique for acquiring what the end-user really wants (in this case from a site). I thought this was especially useful for marketing/planning teams looking to create a roadmap. A lot of the ideas will be familiar to anyone who captures user stories, but it looks like a good tool to aid in that process. What I liked about the process was the focus on the end user and the ability to create a graphical representation of both the user ‘wants’ and the mechanisms that either support them or are tabled to support them. As with user stories, the idea is to capture the user needs at an (initially, with user stories) high level. So rather than, "I want to be able to drop down a set of schools in the area", you’d expect to capture, "when I’m looking at houses in an area I am concerned about the quality of schools". It prevents people leaping ahead and designing the page/application before really understanding the requirements – anything that promotes that thinking can’t be wrong! Indi talked about such a house search, which was a great example, and also about the ‘in the corridor’ technique, where you try to imagine what a person would be thinking as they walk down a corridor. E.g. they’re more likely to be thinking, "I need to get that report out by 2pm" rather than, "I need a web page that allows me to select the xyz report…". However, there were a number of other examples that were basically making the same point but were only interesting to people in that domain; I think they could have been left out, as they took some of the momentum away from the presentation.

I enjoyed the talk, but I do think it would be better to cover the ‘why’ before the ‘what’, as it was very difficult to appreciate the goals until you knew the why. I’ll certainly look out for her book.

Rating – 1 doodle – 7/10

Getting your hands dirty with HTML5 – James Graham & Lachlan Hunt

The 5 minutes spent talking about HTML5 examples were interesting – why bother with the other 55? James and Lachlan are obviously clever people and are capable presenters, although I found the style a little… academic. I’ve given this a poor rating because it dwelt on the why rather than the now. Although I think it’s right to give a little history, the majority of the presentation was history rather than the details of HTML5, which given the title is disappointing. On a personal note, I really hate mailing lists, as I find them antiquated and dripping in ivory-tower academia, so it did grate on me that they championed this method – just sets a bit of the scene really.

Rating – 10 doodles – 3 nodding offs = 3/10

[Edit] Oh, and it looks like they’re just as snobby as me ;)….

  1. # [15:11] <annevk> http://pdkm.spaces.live.com/blog/cns!D1DDEC9FF002FB8C!872.entry didn’t like the talk at least… complaining about mailing lists and academia
  2. # [15:11] <annevk> but spaces.live.com prolly says enough
  3. # [15:12] <hsivonen> Philip`: occasionally the audience pointedly disagrees with the presenter
  4. # [15:12] <jgraham_> Yeah, I think we spent too long on design principles
  5. # [15:12] <hsivonen> Philip`: happened at XTech with Steven Pemberton’s talk
  6. # [15:13] <Philip`> hsivonen: Disagreeing with presenter doesn’t make it a bad presentation – it’s good if things are thought-provoking and get people interested and involved 🙂
  7. # [15:13] <jgraham_> So even being on spaces.live.com it isn’t an entirely unfair criticism 🙂
  8. # [15:13] <hsivonen> a Flickr guy made notes in the audience during the presentation and then flushed his counter-arguments at the end
  9. # [15:14] <jgraham_> (I should note for posterity that it is in no way Lachy’s fault that we sepnt too long on that section)

Underpants over my trousers – Andy Clarke

Fun and interesting talk about getting inspiration from comic layouts – a different way to describe how to move the reader’s eye around the page. Andy also talked about how comics use the size of a frame (or even the lack of one) to indicate the amount of time the reader should spend on that section, and about creating several templates for the "same" dynamic page to accommodate data of different sizes – a nice idea. A typically entertaining talk from Andy, and I find myself very envious of his position. Andy has a reputation for producing designs that work for the target audience rather than sweating over all the backward-compatibility issues (e.g. the lovely use of transparent PNGs… I wonder what that looks like in IE6). I find myself sitting on the fence about that, but I can’t deny that I like his designs and the idea of being able to choose customers based on what I want to do… must be nice. You also have to love his anti-Americanisms <wink> – at least we have that in common!

Rating – 0 doodles – 9/10

Designing User Interfaces: Details make the difference – Dan Rubin

I would sum this talk up with, "the devil is in the detail". Dan showed the level of scrutiny needed to create good-looking sites. Some of the ideas I found particularly interesting were: getting dynamic data to avoid widows by adding a non-breaking space, using -1 letter-spacing on big headers and, for me as a non-designer, the proportional spacing rules… thank you Dan.
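The widow trick is easy to automate for dynamic data. A small sketch (my own helper, not from Dan’s talk) that swaps the final space in a string for a non-breaking space, so the last word can never wrap onto a line of its own:

```javascript
// Replace the last ordinary space with U+00A0 (non-breaking space) so the
// final two words always wrap together, avoiding a one-word "widow" line.
function preventWidow(text) {
  const i = text.lastIndexOf(' ');
  if (i === -1) return text; // single word: nothing to do
  return text.slice(0, i) + '\u00a0' + text.slice(i + 1);
}

console.log(preventWidow('details make the difference'));
```

Run over headlines and teaser copy as they come out of the CMS, the last two words stay glued together.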

Rating – 0 doodles – 8/10

More to follow….

Silverlight Deep Zoom

I’ve just had my first try of Deep Zoom, a collection of tools and user controls that lets the user zoom into what looks like a single image. The idea is that as a developer (or publisher) you can collate a number of images together using the Deep Zoom Composer to form one Gestalt-style super image. So what can you use it for? Well, IMO there are two main uses:
1. A scrollable collage of pictures, where you publish just the one Deep Zoom picture and scroll around, zooming in to each picture, with each picture retaining its original resolution. BBC Radio 1 and the Hard Rock Cafe both have examples of this. Is it useful? Hmm, well, it’s a way of showing pictures, and I’m sure there are benefits in reducing the bandwidth spent on high-res pictures that the user never looks at, but for me it’s not very exciting.
2. Continuous zooming. This is the one that interested me. The ability to zoom in on an area and keep zooming to the tiniest detail sounds very useful. Not unlike the way Google/Live maps work… or, for me, like Bladerunner!
 
To try Deep Zoom out I first downloaded the relevant tools from www.Silverlight.net and used the Deep Zoom Composer to create my image. I wanted the continuous-zoom idea, and to do that you need to take a series of pictures of something, each time zooming in on a specific area. Then, in the composer, you resize and position them on top of each other, carefully lining them up. This is a difficult process on two counts: first, taking the pictures is tricky – you need good lighting and very careful positioning of the camera. The second problem is that the composer doesn’t (seem to) allow you to set an opacity on the picture you’re trying to place, which makes it difficult to line the two pictures up correctly.

Once you’re satisfied with the picture you export it into a Silverlight project. In my case this project didn’t work – I just got the dreaded catastrophic exception, or to those who develop Silverlight, the standard XAML exception. I followed the instructions in http://www.codeproject.com/KB/silverlight/DeepZoom.aspx, created a separate Silverlight project and copied the assets over. That worked, so what is the result? Well, yes, it works – the user can zoom into the picture’s detail pretty easily – but is it any use outside of a bit of fun? The problems of creating/composing the image make it difficult to recommend for everyday use. I think for one-off imagery it is powerful – launching a new car maybe, or perhaps medical applications – but these are quite specialist. I think that, coupled with an excellent image detection/mapping/stitching library, it would become a powerful tool for applications that display images.
 
 

The trouble with RTSs

I do like to play Real Time Strategy (RTS) games, but I’m getting a little bored with the AI in them. One of my biggest bugbears is the use of mission triggers. For example, in Company Of Heroes you must take a town hall from the Nazis. As soon as you take the hall, the mission changes to holding the town hall. You then have a few minutes to set up a defence before being attacked by a large number of enemy tanks. So the trick is not to take the town hall. Destroy everything, but don’t take the town hall. You are then free to cover every approach with a carpet of land-mines, snipers, anti-tank guns, etc. Then, when you’re sure there isn’t enough room for a gnat to get to the town hall, you take it. The enemy now has no chance to get near the town hall and you win easily. The same strategy works on numerous missions, and on pretty much every other RTS. An example from Act of War is where you need to destroy a training camp, at which point all hell breaks loose and you’re attacked from the ground and from the air, missile strikes, the works. So you use the same tactic: don’t destroy the last building in the camp, but go after all the airfields, heli-pads, etc. The AI just sits there waiting for the trigger of losing the camp before it attacks. Again, once you’ve covered the area with turrets, wiped out every air threat and have snipers behind every tree, you destroy the last building. I know it’s not entering into the spirit of the games, but the AI is so stuck in its mission-trigger state that it never reacts to the threat you pose. Come on game developers, please try to build reactive AI into the games.
 

Another reason for a different query plan

While investigating an issue with a poorly performing query, a colleague realised that although it ran quickly in SQL Workbench it was causing timeouts from .net clients. He used a nice trick: running the same query through OSQL (an ADO client) and simply waiting until it completed. Why does this work? Well, I believe the problem is that the query is too complicated for the plan to be created before the command executing the query times out. But why doesn’t boot-strapping the plan in SQL Workbench help? I’ve struggled to answer this before, but I may (may) have found the answer in SET options. Query plans are cached per group of SET options, so there is a fair chance that the default ADO SET options differ from those used by SQL Workbench. Although that in itself is interesting, what it still doesn’t explain is why one set of options produces plans faster than another. It sounds a little odd to me, but it’s worth investigating further.