mo.notono.us

Wednesday, April 04, 2012

I <3 IE8

No, not really. 

On our recently completed Vogue Archive project, IE8 support was a requirement, because a large number of potential users were stuck at the office on Windows XP with no freedom to install a better browser.  (We had a similar requirement for Firefox 3.6 support, but had nowhere near the same kind of trouble with that browser, though it was definitely the second worst browser in our field.)

Background

The Vogue archive is an HTML5* + Silverlight application: we have two viewers, one built in HTML5 (for tablets and desktop browsers that support it), one built in Silverlight (for all desktop browsers). Both viewers are housed within the same HTML5 "chrome" - see yellow sections in the image below:

[Image: Html5Chrome - the shared HTML5 "chrome" housing the two viewers]

IE8, of course, was released while HTML5 was still early in its meandering way through the standardization process, so it can hardly be expected to support HTML5.

Mmm, Cookies!

It should be expected, however, that IE8 could support HTTP cookies properly.

Not so much.

We got an error report from the field: when IE8 users logged out of the archive and then logged back in, the logon process went through, and then promptly redirected them back to the unsecured welcome page at the start of the logon process. Hm.  Sure enough, it did.  The excellent error report also stated that for some reason there were two authentication cookies, one of which was empty.  Could that have something to do with it?  Huh?

[Quite some time later]

The problem was indeed related to the double-cookies, but it appears it was actually caused by how IE8 interprets cookie expiration dates:

The standard way to delete a cookie is to create a new cookie with the same name, in the same domain (and path), with an expiration date set to a date in the past.  A pretty standard date to use is the 'epoch' start date (JavaScript's beginning of time) - one second past midnight of 1/1/1970, GMT, represented as "Thu, 01-Jan-70 00:00:01 GMT".
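In JavaScript form - with a hypothetical cookie name and domain, just to illustrate - that conventional delete looks like this:

//hypothetical example of the standard delete-by-expiring approach
document.cookie = "authToken=; expires=Thu, 01-Jan-70 00:00:01 GMT; path=/; domain=.vogue.com";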

For whatever reason, IE8 sees this date and attempts to convert it to local time - in our case (EDT) 4 hrs earlier: 12/31/69 08:00:01 PM.  Slight problem - since '69 is before the start of the epoch, the two-digit year is then further interpreted as meaning 2069 (never mind the second bug: a winter date should be converted using EST, aka GMT-5).  So rather than creating a new cookie that immediately expired and thus was deleted, we ended up with a new, very long-lived cookie.

To complicate things further, as a brute-force way to make sure we delete both local and domain cookies (we don't know which the client has), our delete-cookie script actually creates two expired cookies, one for each domain (i.e. vogue.com and archive.vogue.com).  It appears the login/logout process got confused, and sometimes read one cookie (empty, expiring in 2069) and sometimes the other (the valid session cookie).

Solution

While the analysis was complex, the solution was simple - we now use an expiration date of 1/1/2000 rather than 1/1/1970.  Now IE can convert times all it wants; the date stays in the past, and the cookie is expired.
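A minimal sketch of the fix (the helper name and domain logic here are illustrative, not our production code):

function deleteCookie(name) {
  //1/1/2000 is safely in the past no matter how IE8 converts it to local time
  var expired = "=; expires=Sat, 01 Jan 2000 00:00:00 GMT; path=/";
  //expire both the host cookie and the parent-domain cookie,
  //since we don't know which one the client has (see above)
  document.cookie = name + expired;
  document.cookie = name + expired + "; domain=." +
    window.location.hostname.split(".").slice(-2).join(".");
}

//usage: deleteCookie("authToken");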

Tell Them Again

I <3 IE8.


Friday, March 30, 2012

idea: bookmarklet to persist personal form data in localStorage

As a developer, I frequently have to clear my browser's cache and cookies in order to test a site.  This is a PITA, as I'm then logged out from Google, PivotalTracker, etc, etc.

It also showcases how very few sites store login information in localStorage by default (note to devs: if you offer a "Remember Me" option, use localStorage, not a cookie).

So my idea is this: a set of two bookmarklets.  The first would capture any data entered in a form (prior to you submitting it) and store that data in localStorage; the second would fill out the form using the data stored in localStorage for that site.

What about security you might ask?  Well, clearly this should only be used on a personal computer - and maybe password fields should be excluded in any case.  But this is stored locally, it is not transmitted anywhere, and the data is not accessible to any other site, so the data should stay between you and your computer.  One exception would be any potentially malicious script hosted on the site, but that seems like a risk in itself - the same script could much more effectively simply grab the form data on entry.
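For what it's worth, here is a rough, un-minified sketch of what the pair could do - the localStorage key scheme is just one option, and password fields are skipped per the above:

//bookmarklet 1: save all non-password form fields on the page to localStorage
(function() {
  var data = {};
  var fields = document.querySelectorAll("input, textarea, select");
  for (var i = 0; i < fields.length; i++) {
    var f = fields[i];
    if (f.name && f.type !== "password") data[f.name] = f.value;
  }
  localStorage.setItem("formdata:" + location.pathname, JSON.stringify(data));
})();

//bookmarklet 2: restore any matching fields from localStorage
(function() {
  var data = JSON.parse(localStorage.getItem("formdata:" + location.pathname) || "{}");
  var fields = document.querySelectorAll("input, textarea, select");
  for (var i = 0; i < fields.length; i++) {
    var f = fields[i];
    if (f.name && data.hasOwnProperty(f.name)) f.value = data[f.name];
  }
})();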

So - good idea or bad?


Thursday, August 11, 2011

archive.rollingstone.com – another feather in our cap

With the successful launch of the new iPad-enabled Rolling Stone Archive, I figured I’d take the time out to congratulate our client, Bondi Digital, and my team at Applied Information Sciences (AIS): Jim Jackson, Robin Kaye, Ian Gilman and Siva Mallena  (with additional help from Leslee Sheu and Kevin Hanes).

Built on the same technology that we used to launch i.Playboy.com, the Rolling Stone archive combines our Silverlight viewer and the Html5, touch-optimized iPad viewer in a single site, sharing peripheral components such as menus and search features.  Per client requirements, all desktop users get the Silverlight-based viewer, with its keyboard and mouse integration and deep zoom of images, while iPad users are automatically switched to the Html5 viewer.

Building and optimizing a highly graphics-intensive app like this for the excellent, but admittedly limited, iPad browser has been a thoroughly enjoyable challenge. Showcasing our work to the public through another premier publication like Rolling Stone makes it all the more satisfying.

Our team is already onto the next publishing project – stay tuned…


Thursday, March 24, 2011

IE9 the new king of the Underscore performance tests

See http://documentcloud.github.com/underscore/test/test.html and past tests: http://mo.notono.us/search?q=underscore


Tuesday, March 01, 2011

Practical example of jQuery 1.5’s deferred.when() and .then()

"“Fun with jQuery Templating and AJAX” by Dan Wellman is a generally interesting article, but I found the code in the “Getting the Data” block especially interesting – see how each get function RETURNS the $.ajax function call, which can then be called inside a when() function, vastly simplifying the workflow (there’s an error in the listed code – getTweets() is supposed to return the ajax function, not simply execute it).

http://net.tutsplus.com/tutorials/javascript-ajax/fun-with-jquery-templating-and-ajax/
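The pattern in question, sketched against a hypothetical search call (not Dan's actual code):

function getTweets() {
  //returning the $.ajax() result - a promise as of jQuery 1.5 - lets callers
  //compose the request with $.when(), instead of just firing it and returning undefined
  return $.ajax({
    url: "http://search.twitter.com/search.json",
    data: { q: "jquery" },
    dataType: "jsonp"
  });
}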

Even more interesting is this pattern, suggested by commenter Eric Hynds (whose blog has now been added to my Google Reader list):

http://net.tutsplus.com/tutorials/javascript-ajax/fun-with-jquery-templating-and-ajax/comment-page-1/#comment-357637

$.when( $.get('./tmpl/person.tmpl'), $.getJSON('path/to/data') )
   .then(function( template, data ){
      $.tmpl( template, data ).appendTo( "#people" );
   });

The deferred.done() and .then() methods take as arguments the results of each function called in when() – in order – i.e. the output of the get maps to template, and the output of the getJSON maps to data.  This is pretty sweet!

Perhaps a simpler way to observe the behavior is this example: http://jsfiddle.net/austegard/ZaFVg/ - no prize for correct guesses as to the result…

/* Hello and World are both treated as resolved deferreds - they 
can be replaced with any function, like a $.get, etc */
$.when( "Hello", "World" ).then(
   function(x, y){ alert(x + " " + y); }
);


Tuesday, January 11, 2011

"Microsoft has completely lost the web development community."

Last year Mark Pilgrim released a free e-book/site called “Dive Into Html5” (http://diveintohtml5.org/).  The site/book has served as a valuable resource on a recent Html5 project we’re working on here at AIS, and I have frequently gone back for details on topics such as local storage and canvas.  It is an excellent book for any bleeding edge web developer.  It is so choice. If you have the means, I highly recommend picking one up.

This week, Mark posted his observations on how publishing a free e-book (which is also purchasable in print format) works well for him, and that it gives great insight into what parts of the book are being read, and by whom. He then makes the following observation:

6% of visitors used some version of Internet Explorer. That is not a typo. The site works fine in Internet Explorer — the site practices what it preaches, and the live examples use a variety of fallbacks for legacy browsers — so this is entirely due to the subject matter. Microsoft has completely lost the web development community. (emphasis mine)

I forwarded this internally within AIS, and a nice debate ensued.  One common complaint was the hyperbole of the statement, and I agree; a more accurate line would likely be "Microsoft as a browser vendor has lost significant mindshare in the bleeding edge web development community."

Personally, one of the things I love about Html5 (using the term the way the hypers would – to mean modern web development with client-driven UI interactions using JavaScript, CSS(3) and some HTML5 semantics) is that it has in some ways unified the web development community.  The debate a few years ago was about JSP vs .NET vs PHP vs Python vs Rails vs some other server technology, and folks from different camps seldom interacted and learned from each other.  With Html5, the backend is completely irrelevant as long as it doesn't muck with the Html (ASP.NET WebForms is still a major sinner here, unfortunately), and developers using all sorts of backend software and operating systems are now adding to the collective knowledge, mostly working toward the common goal of pushing as much functionality as possible to end users through mostly standards-compliant browsers.

For instance, our Html5 app is backed by ASP.NET MVC 2 and SQL Server.  We do all our development on Windows, in Visual Studio – we're looking to deploy to Azure.  Clearly we're MS developers.  But we could just as well have done the app in PHP against MySQL running on Linux and Apache, and we're taking cues from folks using Python, Java, Rails, Node.js, PHP and God knows what on the backend.

At the same time I haven’t used IE by choice for about 5 years, maybe more…

I was asked what I thought MS could do to gain back some developer mindshare – so here goes:

If Html5 and the set of bleeding-edge technologists that go with it are any kind of priority for MS, they need to do some or all of the following:

  • Find a way to upgrade the legions of IE 6, 7 and 8 users to IE9. This will obviously not be easy, but they could do something similar to what Google did with Chrome Frame (i.e. make IE9 a plugin for the older browsers), or they could do what the makers of the "IE Tab" Chrome and Firefox extensions do: allow IE to be hosted inside Chrome, and only activate it for certain sites. Or let users install IE9 side by side with the older versions. All of these share the same goal: encourage end users to use the latest possible browser for the task at hand, and have them install IE9 instead of Chrome or Firefox.
  • Make IE9 the paragon of standards compliance. (They are actually getting close to this...)
  • Bring IE9 to WP7 and whatever tablet software they're coming out with.
  • Reduce the focus on Silverlight as a browser plugin, and make it more about web-deployed desktop apps.
  • Drastically improve the support for CSS and JavaScript in Visual Studio, including debugging and unit testing. And give this toolset away in the form of VS Express.
  • Evolve the dev tools in IE9 to become better than Chrome's inspector and the Firebug plugin.
  • Separate IE development from Windows to allow quicker iterations.
  • Do more things like the jQuery deal. The world of CSS is a mess (we desperately need mixins and code forks like those provided by media queries); MS could take the lead here…

The point is, whether Mark's browser percentages are statistically valid as an indication of web developers' preferences, or to what degree Microsoft is lagging in or losing developer mindshare – these are not the pertinent questions.  The fact is that Microsoft is not currently a leader in emerging web development areas – maybe they never were – but if they want to be, they need to take action. IE9 is shaping up to be a great browser, and they need to push it aggressively.


Wednesday, September 15, 2010

IE9 Beta test scores against Underscore.js

Another new browser launch, and another obligatory proves-absolutely-nothing-definite/just-a-single-use-case performance test against the Underscore.js utility framework.

Previous tests showed that IE was gaining on the leader (Chrome), and that is still the case: as seen in the charts below, IE is sometimes faster, but still generally slower than Chrome (longer bars are better):

Again, this test proves very little, other than that IE9’s new Chakra JS engine is still slower than V8 for doing array iterations, and faster for mapping, getting property values, and creating list ranges.  IE9 has a number of features Chrome doesn’t have (yet) such as hardware acceleration (the IE Speed reader demo runs 790% faster in IE than in Chrome!) and ES property getter/setter standard compliance, just to mention two random ones…

IE9 beta is a HUGE step forward for IE.  Not sure if it will become my default browser, but this is a great day for the web.  Now if the EU and other governing bodies can just look the other way while MS quietly replaces all prior IE instances with IE9… ;-)


Monday, August 09, 2010

Dear Microsoft: Embrace JavaScript Already

It’s 2010: JavaScript is 14 years old, and you’ve officially supported “JScript” for the past 13 years.  Yet today, I have to open my JavaScript file in the FREE, OPEN-SOURCE Notepad++ to find a missing closing } deep in the file, because your latest premium Visual Studio IDE still can’t properly parse the language.

I appreciate the efforts you’ve gone to with improved IntelliSense in VS2010, but that is far from enough.  Why do we still need macros or plugins for elementary functionality such as function outlining, a document hierarchy tree, bracket matching and other validation?

As long as there is an internet driven by HTML, there will be JavaScript right beside it.  Embrace it already.


Thursday, August 05, 2010

Seadragon.com is now Zoom.it

Microsoft Live Labs recently rebranded their public SeaDragon Deep Zoom service as ‘Zoom.it’ and put it at http://zoom.it

They now have an API for Silverlight, .NET and JavaScript, allowing you to submit the url of your image and get back the url of the resulting Deep Zoom Image (DZI). Or, for the non-programmatic approach, you can simply submit your url through the browser at http://zoom.it (the same way you could previously through seadragon.com).

Completed DZIs are given a very short, incremental url, e.g. http://zoom.it/10ms, and you also get the embed code to put the image on your own site, like so:

[embedded Zoom.it viewer]

The embed code for the above is exceedingly simple:

<script src="http://zoom.it/l6BK.js?width=auto&height=400px"></script>


Saturday, June 26, 2010

Underscore.js Performance Tests Revisited (this time with pretty charts)

Out of sheer vanity, I added my own blog feed to my Google Reader, as I was curious if anyone ever Liked my posts.
Answer: Nope. :-(

Anyhow, I came across my post on Underscore.js, and since MS just dropped Platform Preview 3 of IE9, I thought I’d redo the comparison in Chrome 6, IE8 and IE9 (though I know this is hardly a complete benchmark, it’s still telling).  The results are below:

As I said in my last post, I can’t wait for IE9 to replace every previous IE version…  I haven’t been this excited about an IE product since IE4, which was more than 10 years ago.

IE 8 – still a dog.

[Chart: IE8 results]

IE 9 PP3 – Starting to look good!  Faster than Chrome in some tests!

[Chart: IE9 PP3 results]

Chrome 6 – still the winner in most categories, though the lead is shrinking

[Chart: Chrome 6 results]

Bottom line, though – if you’re doing a lot of looping/mapping, you should use Underscore rather than jQuery.
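For reference, the kind of calls being compared - note that the two libraries reverse the each() callback arguments:

//Underscore passes (element, index) to the callback; jQuery passes (index, element)
_.each([1, 2, 3], function(n, i) { console.log(n); });
$.each([1, 2, 3], function(i, n) { console.log(n); });

//mapping reads nearly identically in both
var doubled = _.map([1, 2, 3], function(n) { return n * 2; });
var doubled2 = $.map([1, 2, 3], function(n) { return n * 2; });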


Monday, May 24, 2010

Comparative Performance of Underscore.js in Chrome and IE

I came across the very handy-looking Underscore.js today, and clicked on the test & benchmark link. I first ran the tests in Chrome.  The results below show the number of operations per second.  Looks like each, map, keys, values, and range are pretty inexpensive operations, whereas uniq and intersect should be used sparingly.  All makes sense.

Then out of curiosity, I ran the same tests in IE and Firefox.  The exact numbers are not significant as the results vary by 10-20% between subsequent runs in the same browser, but the range is pretty illustrative.  And yes, I know IE9 is harder, better, faster, stronger, so this is not a fair fight.  I can’t wait for IE9 to replace every previous IE version…

Ops/sec (higher is better):

Test                 Chrome 6   IE 8   Firefox 3.6
_.each()                20213    510          3249
_(list).each()          13570    493          3161
jQuery.each()            3637    209           910
_.map()                 18581    303          5488
jQuery.map()             7084    686          8519
_.pluck()               10852    282          4785
_.uniq()                  127      1            33
_.uniq() (sorted)         308    210            84
_.sortBy()               1641     45           359
_.isEqual()              4962    869          1826
_.keys()                22675   1142          4295
_.values()              24551    321          5435
_.intersect()              83      1            20
_.range()               33345   1223          5262

Again, why I use Chrome as my default browser.


Friday, April 30, 2010

Fun with 1000 Rolling Stone covers, AndreaMosaic and the Deep Zoom Composer

UPDATE 08/11/2011: See post about the new archive, now also for the iPad here: archive.rollingstone.com – another feather in our cap

Having finished helping Rolling Stone magazine put their archive online - our company, AIS, together with Bondi Digital, did the Silverlight-based archive portion (more on the project later, potentially), while another company did the main site - I decided to get artsy and generate a mosaic of the latest issue's cover from thumbnails of some 1000 previous covers.

[Image: partial zoom of the mosaic]

AndreaMosaic worked beautifully in creating the mosaic, and even pre-generated the starting point for a Deep Zoom image.  I opened that in Deep Zoom Composer, and generated both a Silverlight and a Seadragon Ajax version of the composite.  I then uploaded the whole shebang to my public S3 bucket.

It all took a bit of fiddling to get right, but if I had to do it again I could probably do the whole thing in 5-10 minutes…  It’s that easy. (Seriously, this blog post is taking longer…)

Of course, now that I’m putting in all the links in this post, I realize I could have simply hosted this at seadragon.com…  Oops.

UPDATE: Since MS is so kind as to do the processing for me, I figured I might as well create a bigger version of this. The following is around 180 MP, made up of 3000 tiles, each 300px tall... Click the full screen button in the embedded viewer for the best effect. [embedded Zoom.it viewer]

Update, May 12th: ...and here's the latest 2-page cover: [embedded Zoom.it viewer]


Wednesday, June 17, 2009

More complicated JavaScript string tokenizer – and Twitter Conversations

(I'm not sure when I started using the term tokenizer - "formatter" may be more common...)

I’ve experimented in the past with a C# string.Format() alternative that would allow property names as tokens rather than the numeric index based tokens that string.Format() uses.  Hardly a unique approach, and many others did it better.

Here’s another first-draft ‘tokenizer’, this time in JavaScript:

String.prototype.format = function(tokens) {
///<summary>
///This is an extension method for strings, using string or numeric tokens (e.g. {foo} or {0}) to format the string.
///</summary>
///<param name="tokens">One or more replacement values:
///if a single object is passed, expects to match tokens with object property names, 
///if a single string, number or boolean, replaces any and all tokens with the string
///if multiple arguments are passed, replaces numeric tokens with the arguments, in the order passed
///</param>
///<returns>the string with matched tokens replaced</returns>
  var text = this;
  try {
    switch (arguments.length) {
      case 0:
        return this;
      case 1:
        switch (typeof tokens) {
          case "object":
            //loop through all the properties in the object and replace tokens matching the names
            for (var token in tokens) {
              //skip inherited properties and functions
              if (!tokens.hasOwnProperty(token) || typeof tokens[token] === 'function') {
                continue;
              }
              text = text.replace(new RegExp("\\{" + token + "\\}", "gi"), tokens[token]);
            }
            return text;
          case "string":
          case "number":
          case "boolean":
            return text.replace(/{[a-z0-9]*}/gi, tokens.toString());
          default:
            return text;
        }
      default:
        //if multiple parameters, assume numeric tokens, where each number matches the argument position
        for (var i = 0; i < arguments.length; i++) {
          text = text.replace(new RegExp("\\{" + i + "\\}", "gi"), arguments[i].toString());
        }
        return text;
    }
  } catch (e) {
    return text;
  }
};

The comment (in VS IntelliSense format) is pretty self-explanatory; note that the method doesn’t allow any special formatting the way String.Format does, nor does it even support escaping {}‘s – in general it is quite crude.

That said, when used within its limitations it works - below are a couple of actual usage scenarios, both from my ongoing Twitter Conversations experiment:

var url = "http://search.twitter.com/search.json?q=from:{p1}+to:{p2}+OR+from:{p2}+to:{p1}&since={d1}&until={d2}&rpp=50".format({ 
  p1: $("#p1").attr("value"), 
  p2: $("#p2").attr("value"),
  d1: $("#d1alt").attr("value"), 
  d2: $("#d2alt").attr("value")
});

and

$.getJSON(url + "&callback=?", function(data) {
  $.each(data.results, function(i, result) {
    content = ' \
<p> \
	<a href="http://twitter.com/{from_user}"><img src="{profile_image_url}" />{from_user}</a> \
	(<a href="http://twitter.com/{from_user}/statuses/{id}">{created_at}</a>): \
	{text} \
</p>'.format(result);
    //(the snippet as posted ends here - presumably content is then appended to the page)
  });
});

The working Twitter Conversations sample is here; as you can tell, it’s really just a wrapper around the Twitter Search API.


Tuesday, June 16, 2009

Simple JavaScript string tokenizer

A crude String.Format equivalent in JavaScript - blatantly copied from frogsbrain

//from http://frogsbrain.wordpress.com/2007/04/28/javascript-stringformat-method/ 
String.format = function(text) { 
    if (arguments.length > 1) { 
        for (var i = 0; i < arguments.length - 1; i++) { 
            text = text.replace(new RegExp("\\{" + i + "\\}", "gi"), arguments[i + 1]); 
        } 
    } 
    return text; 
}; 
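Usage mirrors C#'s String.Format:

String.format("Hello {0}, you have {1} new messages", "world", 5);
//returns "Hello world, you have 5 new messages"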

#twitcode version (130 characters!)

strf=function(t){a=arguments;if(a.length>1)for(i=0;i<a.length-1;i++)t=t.replace(new RegExp("\\{"+i+"\\}","gi"),a[i+1]);return t}; 


Thursday, May 21, 2009

Looking through the source of SharePoint on SharePoint

Microsoft launched their new SharePoint site a few days ago, and for the first time the SharePoint site is actually hosted on SharePoint (!).  It’s a nice looking site, with a dynamic user interface, courtesy of AJAX and Silverlight.  I decided to take a closer look at the visible source code – that is, the rendered HTML, JS, CSS, and Xap files.

Below are some observations:

  • They’re first loading the OOTB stylesheets, including HTML Editor and core.css (all 4K+ lines of it), completely unmodified, then they override the defaults with additional stylesheets (the MSCOMP_Core.css is another 4K+ lines of css) – seems inefficient?
  • They only load Core.js if authenticated, through a custom server control:
    <!-- RegisterCoreJSIfAuthenticated web server control -->
    <span id="ctl00_RegisterCoreJsIfAuthenticated1"></span>
  • Interestingly MS uses Webtrends
  • They use custom js to get around the name.dll prompt:
    <script type="text/javascript" src="/_catalogs/masterpage/remove_name_dll_prompt.js"></script>
  • There’s extensive Control look and feel customization through Control specific CSS
  • A lot of their stylesheets reference slwp_something – SilverLight WebPart perhaps?
  • The viewstate looks pretty nasty but in the end is only 61KB, which I guess is acceptable
  • The page includes the standard minified versions of MicrosoftAjax.js, MicrosoftAjaxWebForms.js, and SilverlightControl.js
  • The on-page Silverlight initialization code is NASTY, not sure if this is standard for Silverlight, or if this is an ugly exception.  Why not use JSON?  Why use encoded javascript?  Here’s a very short random sample – note that they didn’t bother getting rid of spaces (%20) before encoding:
    SiteNavigationDefinition%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3C%2FChildSites%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%3C%2FSiteNavigationDefinition%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%3CSiteNavigationDefinition%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3CSelected%3Efalse%3C%2FSelected%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3CSiteName%3EECM%3C%2FSiteName%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3CSiteUrl%3E%2Fproduct%2Fcapabilities%2Fecm%2FPages%2Fdefault.aspx%3C%2FSiteUrl%3E%0D%0A%20%20%20%20%20%20%20%20%20%20%20%20%20%20%3C
    Best of luck debugging that.
  • There’s a mix of absolute and relative references to the same image library (but that’s a very picky observation)
  • YSlow result: D, dinging it on the number of HTTP requests, lack of a CDN (why doesn’t MS have a CDN?), expiration headers, ETags, and unminified JS and CSS; but overall size is not bad for a MOSS page, especially not one this visually engaging – but then it turns out YSlow does not account for loading of Silverlight content – the Xap files for the top nav and main control are 240KB and 632KB, respectively:
    [screenshot: YSlow result and XAP file sizes]
  • The XAP files also contain some interesting content, like this test image for the header – but they’re not actually using FAST for the search…
    [screenshot: test image from the XAP file]
  • They use an Image Transitioner component from Advaiya (sidenote – pure Silverlight websites are just as annoying as pure Flash websites), who has supported MS on other Silverlight initiatives – wonder if the SL pieces were outsourced to them?

So – all in all a nice-looking site, but I have some questions as to the completeness of the project.  Maybe it’s just me, but if I were Tony Tai (MS SharePoint Product Manager), I would spend another week finishing things up a bit…


Friday, February 13, 2009

Why I use Chrome

While checking her email on my computer yesterday, my wife asked me why I use “that Google POS browser” (Chrome).  I think she was annoyed that it was unfamiliar to her, and that it’s just the latest example of me doing something geeky that gets a bit on her non-geeky nerves.  Oh well – marital bliss... 

For the record (not that she would read this…), this is why I use Chrome:

  1. It is clean.  No clutter.  This actually matters.
  2. It is fast: Below are the V8 Engine Benchmark scores.  Sure the benchmark is written by Google, sure it only measures JS performance, and sure it doesn’t mean that Chrome is 20x faster than IE8.  But it is faster, just as Firefox 3 is noticeably faster than IE8:

Benchmark test results from http://v8.googlecode.com/svn/data/benchmarks/v3/run.html run on my laptop

[Screenshots: V8 benchmark scores for IE 8.0.6001, Firefox 3.0.6, and Chrome 1.0.154]

Now if only Chrome were as extensible as Firefox, and if only people would start writing standards-compliant HTML and JavaScript (people = Microsoft, really – SharePoint and OWA are the two biggies), then I would ONLY use Chrome.  As is, I’m stuck with all three.  Sigh.


Tuesday, January 13, 2009

MOSS: Add Incoming Links to a Wiki Page with jQuery

SharePoint’s wiki implementation is rudimentary, but still useful.  One of the corners cut in the implementation is that incoming links live on a separate page – you have to click the Incoming Links link (and wait for the screen to refresh) to see them.  It’d be much more user-friendly to show these links on the same page as the content.

Turns out with jQuery this is a fairly trivial exercise, at least for a single wiki page*: simply add a Content Editor Web Part to the page and copy the following code into the Source Editor.

<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js" type="text/javascript"></script>
<script type="text/javascript">
$(function() {
  //get the url for the incoming links page
  var u = $("a[id$=WikiIncomingLinks_LinkText]")[0].href;

  //create a target container and load it with the incoming links
  //filtered to show the links list only
  var l = $("<div id='incomingLinks' style='border-top: solid 1px silver'>").load(u + " .ms-formareaframe");

  //append the new container to the wiki content
  $(".ms-wikicontent").append(l);
});

</script>

It may be noted that the above code could even be combined into one single chain – I prefer the above for readability and debugging purposes.  Also not sure if I need to dispose of the local variables – this is a POC more than anything else.

[Screenshot: adding the script through a Content Editor Web Part]

The incoming links are now on the page, right below the content:

[Screenshot: incoming links directly on the wiki page]

A more thorough implementation might position the links in a box in the upper left corner, and simultaneously remove the “Incoming Links” link.

*I haven’t quite thought out how to inject this throughout a wiki.  Any suggestions?


Wednesday, November 26, 2008

Analyzing the Amazon Universal WishList Bookmarklet

Amazon added a nifty Universal Wishlist function to their site back in August, but I just discovered it today (thanks Yuriy!).  Essentially it is a personalized bookmarklet/favelet that injects code into any page, letting you add anything and everything to your Amazon wishlist.

The code for the bookmarklet is this:

javascript:(function(){var%20w=window,l=w.location,d=w.document,s=d.createElement('script'),e=encodeURIComponent,x='undefined',u='http://www.amazon.com/gp/wishlist/add';if(typeof%20s!='object')l.href=u+'?u='+e(l)+'&t='+e(d.title);function%20g(){if(d.readyState&&d.readyState!='complete'){setTimeout(g,200);}else{if(typeof%20AUWLBook==x)s.setAttribute('src',u+'.js?loc='+e(l)),d.body.appendChild(s);function%20f(){(typeof%20AUWLBook==x)?setTimeout(f,200):AUWLBook.showPopover();}f();}}g();}())

Not very readable – so I decided to expand it.  I added line breaks, in some cases vars and {}s, and in one case changed a ternary conditional to a regular if..else statement:

//javascript:
(function() {

    //order of execution:
    //1. Initialization
    //2. Script check
    //3. Load remote script
    //4. Execute remote script


    //***Initialization ***
    //create short aliases for common objects
    var w = window, l = w.location, d = w.document;
    //create a script element and give it a short alias
    var s = d.createElement('script');
    //create some more short aliases...
    var e = encodeURIComponent, x = 'undefined';
    var u = 'http://www.amazon.com/gp/wishlist/add';

    //***Script check ***
    //if the script element was not properly created...
    if (typeof s != 'object') {
        //..navigate to the Wishlist add page, passing along the current page url and title
        l.href = u + '?u=' + e(l) + '&t=' + e(d.title);
        //effectively ends the script
    }

    //...else (script successfully created)

    //*** Load remote script ***
    //create a function that we can call recursively until the document is fully loaded
    function g() {
        //if the document is not fully loaded...
        if (d.readyState && d.readyState != 'complete') {
            //.. wait 200 milliseconds and then try again
            setTimeout(g, 200);
        } else { //... the document IS fully loaded
            //if the Amazon Universal Wishlist object is undefined...
            if (typeof AUWLBook == x) {
                //set the source of the script element to http://www.amazon.com/gp/wishlist/add.js 
                //This is a pretty complex and large JavaScript object, not discussed here
                //Essentially it displays a floating div in the top left corner of the page
                //in which the user can enter product details and select from the pictures on the
                //current page.
                //AUWLBook is personalized for each user, setting wishlist titles and registryIds
                //If the user is signed out, it will navigate to the Amazon sign in page, and then 
                //to the wishlist
                //If the user is signed in, it will display a floating confirmation div with options
                //of navigating to the list or "continue shopping".
                //I don't BELIEVE the loc query parameter actually alters the generated JavaScript, 
                //it must be used for other purposes
                s.setAttribute('src', u + '.js?loc=' + e(l));
                //append the script
                d.body.appendChild(s);
            }
            //***Execute remote script ***
            //create a function that we can call recursively until the AUWLBook object is fully loaded
            function f() {
                //if the AUWLBook object is not loaded yet, wait 200 ms and try again
                if (typeof AUWLBook == x) {
                    setTimeout(f, 200);
                } else { //when loaded display the pop-over div - see above
                    AUWLBook.showPopover();
                }
            }
            //call the f function
            f();
        }
    }
    //call the g function
    g();
} ())
See http://etherpad.com/10gdKmen5u for the full js with proper highlighting.


Tuesday, September 02, 2008

Pre-stolen Idea: The Google Chrome Omnibox

[Image: "Are you feeling lucky?" by dullhunk via Flickr]

Just the other day I was saying to one of my office mates that Google Toolbar ought to replace the URL address bar with a suggestion-style drop down list that would suggest URLs for you, similar to how the Windows Explorer intellisense works.

That is, if I started typing goo - it should suggest to complete that with google.com, or goofball.com, etc, etc.  The Firefox address bar does this now, but only for URLs that are in your history.  With all the PageRank data available at their fingertips, Google should be able to suggest the URL of a page that is NOT in your history.

And now, apparently, in Google Chrome, they will:

[Image: Google Chrome - the comic book, page 19]

Time to brush the dust off the ol' JavaScript books - or buy some new ones.  JavaScript is not going away any time soon.


Thursday, August 14, 2008

Re-Testing Zemanta Plugin for Windows Live Writer

[Image: "HOMESTEAD, FL - JUNE 23:  Jason Atkins, creato..." by Getty Images via Daylife]

Yesterday I tested the Zemanta plug-in, but the page it hit on start-up caused a JS error.  As they said in the comments, they have fixed it - the JS error is gone.  Quick work; it bodes well for the evolution of the component's feature set.

-- or maybe not.  A new quirk has appeared: after I inserted the image, a square followed by a closing tag appeared in my editing window; looking at the HTML view, I see it stems from the following code:

<div class="zemanta-pixie"><img class="zemanta-pixie-img" src="http://img.zemanta.com/pixy.gif?x-id=303e21bb-9ae8-46c6-aec3-beeddccc77ec" />&gt;</div>

When I put text after the glyph, and then tried adding an Article, my text disappeared.  And the Articles are REALLY hit and miss. (Maybe they're using the machine-gun pictured above...)

No bueno...

