Monday, October 30, 2006

Firefox 2.0: Client-side session and persistent storage

Firefox 2.0 contains support for new client-side storage features. The release notes read:

New support for storing structured data on the client side, to enable better handling of online transactions and improved performance when dealing with large amounts of data, such as documents and mailboxes. This is based on the WHATWG specification for client-side session and persistent storage.

The WHATWG Web Applications 1.0 working draft describes this mechanism as being similar to HTTP session cookies. Incidentally, it is also worth noting that there are numerous other interesting sections of this specification that have not yet been implemented! Looking at the spec, it would be tempting to think that this new mechanism is just a new way to manipulate cookies; however, this is *not* what Firefox 2.0 is doing.

The code following this entry gives simple examples of how to set and retrieve variables using these new cross-webpage storage mechanisms; page1.html and page2.html use session storage, while page1a.html and page2a.html use persistent storage.

After executing the persistent storage example I noticed that a file called webappsstore.sqlite had been created in my Application Data directory for Mozilla Firefox. I think the extension of this file pretty much gives the game away: this is an SQLite file. There are a couple more SQLite files in this directory; these play a part in the new Firefox 2.0 phishing protection (urlclassifier2.sqlite) and search engine (search.sqlite) functionality.

Firing up an evaluation version of Visual SQLite and loading webappsstore.sqlite, you will see that this file contains a table called mozwebappsstore with the columns domain, key, value and secure.

Having proven to myself that Firefox 2.0's persistent storage is provided by SQLite, I would make an educated guess that the session storage is provided by a memory-resident version of the database. Using this new storage mechanism is easier than using cookies, but the key advantage over cookies is likely to be performance. Changing and reading cookies usually means disk access; with a memory-resident database there is none. Accessing multiple values from persistent storage, even though it does involve disk access, is also likely to be faster because all the data is in one place. Visual SQLite did not indicate that webappsstore.sqlite contains any indexing; when dealing with large amounts of data, adding indexes could further improve performance.

I have yet to see a full disclosure of the details of the storage mechanism, and I do not know what the implications of embedding SQLite into Firefox 2.0 are likely to be. However, it would not be a major stretch of the imagination to expect JavaScript to gain a complete SQL access mechanism for client-side stored data (need I mention AJAX/LAJAX again?). SQLite supports SQL, and as of now SQLite is already pre-embedded in my favourite browser.
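The setItem()/getItem() API used in the listings below can be sketched as a plain in-memory shim. This is an illustration only, not Firefox's implementation: real sessionStorage is scoped per browsing context, and globalStorage persists to disk via the SQLite file discussed above. The function name createStorageShim is my own.

```javascript
// Minimal in-memory sketch of the setItem/getItem storage API.
function createStorageShim() {
  var data = {};
  return {
    setItem: function (key, value) {
      // values are stored as strings, whatever type goes in
      data[String(key)] = String(value);
    },
    getItem: function (key) {
      return Object.prototype.hasOwnProperty.call(data, String(key))
        ? data[String(key)]
        : null; // a missing key reads back as null
    },
    removeItem: function (key) {
      delete data[String(key)];
    }
  };
}
```

For example, after `shim.setItem("test", 123)` a call to `shim.getItem("test")` returns the string "123", which matches what the page2.html example prints.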

page1.html (session storage)

<script type="text/javascript">
sessionStorage.setItem("test", 123);
</script>
<a href="page2.html">Page 2</a>

page2.html (session storage)

<script type="text/javascript">
window.onload = function() {
  document.getElementById("result").innerHTML = "<b>" + sessionStorage.getItem("test") + "<\/b>";
};
</script>
<div id="result"></div><a href="page1.html">Page 1</a>

page1a.html (persistent storage)

<script type="text/javascript">
var storage = globalStorage[''];
storage.setItem("test", 123);
</script>
<a href="page2a.html">Page 2a</a>

page2a.html (persistent storage)

<script type="text/javascript">
window.onload = function() {
  var storage = globalStorage[''];
  document.getElementById("result").innerHTML = "<b>" + storage.getItem("test") + "<\/b>";
};
</script>
<div id="result"></div><a href="page1a.html">Page 1a</a>

Friday, October 06, 2006

Multiple DHTML trees on a page with dynamic ids

I have written before about XBEL and DHTML, Unobtrusive DHTML, and the power of unordered lists and D.D. de Kerf's Easy DHTML TreeView. My previous DHTML tree implementation (derived from D.D. de Kerf's DHTML tree) made use of JavaScript's this keyword to toggle folders. I have extended this method to include "Expand/Collapse" support. We have seen expand and collapse before in Matt Kruse's DHTML Tree. However, I have chosen a slightly different approach inspired by Random Content Order script.

The id of the tree is still used for expand/collapse (via getElementById()). The difference with this technique is that JavaScript now generates the tree id. This means we can have multiple DHTML trees on a page with expand/collapse buttons that do not interfere with each other without any special preliminary HTML preparation being necessary.

An HTML element id is supposed to be unique on a page. The key benefit of this method is that we no longer need to ensure a unique id is assigned to each unordered tree list in the HTML. My new technique is particularly useful where pages may contain several instances of a DHTML tree. These could be rendered as a result of aggregating content from multiple sources (like a portal does). Uniqueness of id is now a random affair taken care of by the JavaScript.

Additionally, the expand/collapse buttons themselves are attached using JavaScript, so if JavaScript is not enabled they will not appear at all. Note also that all the trees share the same JavaScript and CSS file.
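The id-generation idea can be sketched as follows (the names makeTreeId and usedIds are mine, not taken from the actual script): keep trying random ids until one is unused on the page, then record it so the next tree cannot collide with it.

```javascript
// Sketch of generating collision-free ids for several trees on one page.
function makeTreeId(usedIds) {
  var id;
  do {
    id = "tree" + Math.floor(Math.random() * 1000000);
  } while (usedIds[id]); // retry until the id is unused on this page
  usedIds[id] = true;    // remember it so later trees avoid it
  return id;
}
```

Each unordered list would then be assigned `makeTreeId(used)` before its expand/collapse buttons are wired up, so the getElementById() calls for different trees never interfere.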

My new DHTML tree implementation.

Look at the static source code and you will see that each tree starts out with identical HTML markup. A useful tip is to use JSLint to help debug JavaScript: my DHTML tree was initially behaving a little strangely on IE, and I have found that Microsoft's script debugger is not very good. Incidentally, a designer has helped me with the graphics (thanks Ben).

Friday, September 29, 2006

Injecting XML input into XQuery using Spring

I recently went to an XML Access Languages event co-hosted by the W3C. Presentations centered on XQuery, XSLT 2.0, XPath 2.0 and SPARQL. The whole event was tremendously interesting and I will probably blog further about it at a later date.

Liam Quin gave a very enjoyable talk on XQuery which specifically caught my eye. Michael Kay (of SAXONICA) also spoke about the relationship of XQuery to XSLT 2.0, XPath 2.0 and XML Schema so I feel particularly well informed on the subject now (there is nothing like hearing it from the horse's mouth!). XQuery looks a bit like a hybrid of SQL and XPath (with FLWOR [For-Let-Where-Order-Return] syntax thrown in) and is particularly useful for accessing XML data across disparate sources.

Following the conference I have been doing some experiments using SAXON, starting with trying out the examples in Bob DuCharme's article Getting Started with XQuery.

XQuery 1.0 and XSLT 2.0 both support XPath 2.0's document() and collection() functions for accessing external input XML documents. These are potentially extremely powerful facilities. Given my recent experience with the Spring Framework, this way of doing things was a concern to me. XQuery et al appeared to be, at first glance anyway, advocating closely coupling documents with processing. This flies in the face of the inversion of control (dependency injection) design pattern, which I have learnt to love.

To further illustrate my point, let's say I have some XML source documents that I want to perform some XQuery on:

  • they might be on the filesystem
  • they might be in an XML database
  • they might be in a CLOB on a relational database
  • they might be on the web accessed via a URI
  • they might have been returned via a web service
  • they might be DSML format returned from an LDAP server
  • they might be from some combination of the above, I could go on...

The document could be coming from almost anywhere and would therefore need to be accessed using very different mechanisms depending on the situation. Does that mean we need as many XQuery implementations as there are access mechanisms? I would hope not. That said, this seems to be the current situation, with multiple XML database vendors supplying their own implementations of XQuery for their particular databases. Surely this is wrong? I would argue that where the source XML originates is none of the XQuery processor's business, and arguably the precise source should not even be detectable from the URI!

SAXON (the free version) includes support for XQuery. The SAXON XQuery processor has native support for accessing XML documents from the filesystem, but as an experiment I thought it would be good to see if I could make use of the Spring Framework to feed the SAXON XQuery processor.

There is already a Spring XML Database Framework that enables Spring to access eXist and Apache Xindice XML database datasources and interact with XQuery, but it looked a little complicated for my needs.

I discovered that the URI in the document() and collection() functions is merely a reference to an external document; it need not imply a specific access mechanism. To fool SAXON into accepting my Spring-accessed input data, all I needed to do was implement a Spring-aware URIResolver and CollectionURIResolver. I could then configure SAXON to use those resolvers to access the documents and collections referenced in the XQueries.
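The resolver idea can be sketched language-neutrally (here in JavaScript rather than Java; createResolver is my own name and not part of SAXON or Spring): the URI is treated purely as a lookup key, and an injected map decides where the document really comes from.

```javascript
// Sketch of a dependency-injected document resolver: the caller never
// learns whether the bytes came from disk, a database, or a web service.
function createResolver(documents) {
  return {
    resolve: function (uri) {
      if (!(uri in documents)) {
        throw new Error("unresolvable document URI: " + uri);
      }
      // the injected map could equally be backed by a DB, LDAP, etc.
      return documents[uri];
    }
  };
}
```

For example, `createResolver({ "books.xml": "<books/>" }).resolve("books.xml")` returns the injected document, while any unknown URI fails loudly instead of silently hitting the filesystem.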

What follows is by no means full-featured (it is hard-wired to read from a Spring resource) but it could be extended to support multiple data access mechanisms. I achieved my ends via two fairly simple Java beans and a test program.

SpringXQuery performs the XQuery itself.
It is responsible for loading the XQuery query file (using the Spring resource loader).
It is used to configure the collection and document URI resolvers.

SpringURIResolver provides the URI resolution.
You can configure a map of collections to use, using a map of maps.
You can configure a map of documents.

Other files are:

Spring's application context configuration file
A simple test program

and also the XQuery files and example XML files used in Bob DuCharme's XQuery article.

Incidentally I made use of Spring's MapFactoryBean in order to make my Spring configuration a little bit cleaner. I also made use of a tip I found, Spring: Locating Application Relative Resources, to ensure the Spring resource loader works.

Surprisingly enough it works; this is despite the fact that I do not fully understand all the intricacies of what I am implementing!

It looks to me as though, when using XQuery across multiple datasources, performance is always likely to be an issue, and this partly explains why all these database vendors have vendor-specific implementations. I would argue that performance issues should be addressed at a separate level and, IMHO, are not a sufficient argument for re-implementing an entire language! Roll on javax.xml.xquery...

Saturday, September 02, 2006

Using XSLT 2.0 to emulate IE7 feed reader appearance (including filter by category, date and title sorting)

I have managed to create an XSLT 2.0 stylesheet that emulates the appearance of IE7's feed reader including sorting functionality.

To see the stylesheet in action click here

Download it here

As with my XSLT 2.0 tagcloud experiment, I have again made use of the W3C's Online XSLT 2.0 Service, and I have again used Microsoft's RSS 2.0 feed of Recently Added and Updated Feeds (oh the irony!).

The hard bit was sorting by RFC 822 dates. I had to use <xsl:analyze-string> to convert the RFC 822 date string into a sortable format. I found lots of information about how to do this on Dave Pawson's XSLT 2.0 site (date processing, regular expressions) plus finding something similar to the regular expression that matches RFC 822 dates helped a lot (Jorgen Thelin's blog entry containing the helpful regexp).
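The same trick can be sketched in JavaScript (function and variable names are mine): pull the RFC 822 date apart with a regular expression, much as <xsl:analyze-string> does, and rebuild it as a big-endian key that sorts correctly as a plain string.

```javascript
// Map RFC 822 month abbreviations onto sortable two-digit numbers.
var MONTHS = { Jan: "01", Feb: "02", Mar: "03", Apr: "04", May: "05", Jun: "06",
               Jul: "07", Aug: "08", Sep: "09", Oct: "10", Nov: "11", Dec: "12" };

// Convert e.g. "Tue, 10 Oct 2006 09:41:01 GMT" into "20061010094101".
// The timezone is ignored in this sketch.
function rfc822SortKey(dateString) {
  var m = /(\d{1,2}) (Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec) (\d{4}) (\d{2}):(\d{2}):(\d{2})/
    .exec(dateString);
  if (!m) return null;
  var day = m[1].length === 1 ? "0" + m[1] : m[1]; // zero-pad single-digit days
  return m[3] + MONTHS[m[2]] + day + m[4] + m[5] + m[6]; // YYYYMMDDhhmmss
}
```

Sorting feed items then reduces to comparing these keys lexically, which is exactly what makes the <xsl:sort> step straightforward once the conversion is done.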

The XSL stylesheet and CSS are by no means perfect: they currently only work with RSS 2.0 feeds that contain RFC 822 format dates, and even then they would benefit from some serious refactoring. BUT I'm really pleased with the result. I really should get out more...

Friday, September 01, 2006

Web 2.0 needs online XSLT transformation engines and XSLT 2.0 generated tagclouds

XSLT 2.0 stylesheet that produces a tag cloud

A few weeks ago I produced an XSL stylesheet that could produce a tag cloud from an RSS 2.0 or Atom feed. This made use of a technique called the Muenchian Method of grouping (named after Oracle man Steve Muench). Having read that XSLT 2.0 contains native grouping functionality (which should be easier to understand), I thought I'd investigate producing a tag cloud with an XSLT 2.0 stylesheet. For some reason Xalan, my favourite XSLT processor, does not yet properly support XSLT 2.0, so I had to use Saxon to do the XSLT 2.0 processing. I discovered a servlet-based demonstration Online XSLT 2.0 Service hosted by the W3C (which also uses Saxon).

Click here for the XSLT 2.0 stylesheet I have written that produces a tagcloud. It makes use of XSLT 2.0's <xsl:for-each-group> element instead of the Muenchian Method. The iframe below should show a tag cloud that is the result of an XSLT transformation of the Recently Added and Updated Feeds RSS 2.0 feed from Microsoft, making use of the W3C's online XSLT 2.0 service. [Incidentally, the online service also supports passing parameters into the XSL transformation.]

Why did I do this?

I thought that using an online XSLT transformation engine would be a neat way to produce tag clouds and such for people using free hosting services like Google's; I was thinking that I could host the XSL on Google Pages. The GData-powered Blogger Data API is reported to support entry categories; unfortunately I have not got this to work properly yet. In fact, worse than that, it killed the test blog that I was experimenting with: I now get "We're sorry, but we were unable to complete your request."

Why Web 2.0 needs free online XSLT transformation engine services

You get the idea by now: if we make use of online XSLT transformation services and free hosting services which produce XML, we can really start to use the web as a platform. It is nice to have your own server to tinker with, but I would argue that it should not be necessary in the age of Web 2.0.

What is great about all this "Web 2.0" stuff is that we already have all we need to accomplish it. We do not need to wait for any new technologies; everything is already here, and we just need reliable services to create new ways to make use of the web. I think that it would be great if Google or Yahoo or somebody hosted a free, high-performance online XSLT transformation engine. Blimey, they could even advertise on the front page and I wouldn't care!

Granted, my XSLT tagcloud example might not have brought you around to my way of thinking yet so here is another powerful example where an online XSLT transformation engine would be superb.

Everybody loves AJAX at the moment, but there are those painful same-domain XMLHttpRequest problems that can require the use of an application proxy and have made on-demand JavaScript and JSON so popular (as used in Yahoo's JSON callback technique). [Incidentally, Google's AJAX Web Search also uses this technique; I will speak no more of this in case I get in trouble ;)]

So you want to write a super-duper AJAX application and host it on a free service. HTML, CSS and JavaScript can be hosted anywhere, but how do we get around those pesky XMLHttpRequest problems if we are relying on free hosted services? This is where an online XSLT transformation engine would come in very handy. You want to process some external XML but it isn't available in JSON format? The answer: transform the XML into JSON!
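As a deliberately naive sketch of the principle (nothing like the full eBay stylesheet mentioned below; it only handles flat, text-only elements, and the function name is mine):

```javascript
// Convert flat, text-only XML elements into a plain JSON-friendly object.
// Nested elements and attributes are deliberately out of scope here.
function flatXmlToJson(xml) {
  var result = {};
  var re = /<(\w+)>([^<]*)<\/\1>/g; // <name>text</name> pairs only
  var m;
  while ((m = re.exec(xml)) !== null) {
    result[m[1]] = m[2]; // element name becomes the property name
  }
  return result;
}
```

So `flatXmlToJson("<title>Hello</title><link>http://example.com</link>")` yields an object with `title` and `link` properties, ready to be serialized as JSON for the client.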

I found an XSL stylesheet that can convert XML into JSON on the eBay developer site. eBay even hosts an online XSLT service, but it is too restrictive to use freely.

Hosting an online transformation engine would be a very good way for a company to showcase their XSL processing hardware (hint, hint, IBM please take note).

Now that I know what we need, it is quite frustrating that it isn't already available. If you know different and can tell me where I can access a free, high-performance, unrestricted, reliable, online XSLT processor engine, please let me know!

Thursday, August 17, 2006

A nasty css hack solution to the IE z-index problem

IE has notorious problems with the z-index positioning property. Z-index is supposed to allow the web developer to control the order in which page elements stack up.

I needed to overlap two elements and, after messing around with z-index, could not get the results I wanted in IE. I did not want to significantly modify the way the page was created; the page only had problems rendering in IE.

I turned to a very dirty solution to this, namely, JavaScript embedded in CSS.

Based upon this integrating javascript into stylesheets blog entry, I cobbled together some JavaScript embedded in a CSS file and my z-index woes disappeared (but it did make me feel slightly dirty!).

body {
  background: url("javascript:
    document.body.onload = function() {
      var xbutton = document.getElementById('xbutton');
      if (xbutton) { xbutton.style.zIndex = 9999; }
    }
  ");
}

Tuesday, August 15, 2006

Google AJAX Search API: A simple web search example

I just discovered the Google AJAX Search API. It looks to have been around since the beginning of June 2006. Very nice it is too, although some may say it would have been easier if we could get results from Google's search engines (local, web, video and blog) in the OpenSearch Response XML format.

The Google AJAX Search API returns results in JSON format. There are some very nice and some quite advanced usage samples to peruse on Google's site. For some reason the simplest example of how to use the API, i.e. to obtain a simple list of results, seems to be missing from the usage samples. So here is an example of how to do this (note: the API limits you to getting either 4 or 8 results).

You'll also need to obtain a Google AJAX Search API key. You can see the example below working here (requires JavaScript enabled, obviously!); narcissistically, the example searches for my name on Google web search. I'm sure that somebody out there will find this very simple example useful!

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<meta http-equiv="content-type" content="text/html; charset=utf-8"/>
<title>Very Simple Web Search - Google AJAX Search API demo</title>
<style type="text/css">
body {
background-color: white;
color: black;
font-family: Arial, sans-serif;
font-size: small;
}
.url {color: green;}
.cached {color: #77C;}
</style>
<script src=";v=0.1&amp;key=<INSERT API KEY>" type="text/javascript"></script>
<script type="text/javascript">

var gWebSearch;

function OnLoad() {
  // Initialize the web searcher
  gWebSearch = new GwebSearch();
  gWebSearch.setSearchCompleteCallback(null, OnWebSearch);
  gWebSearch.execute("mark mclaren");
}

function OnWebSearch() {
  if (!gWebSearch.results) return;
  var searchresults = document.getElementById("searchresults");
  searchresults.innerHTML = "";

  var results = "";
  for (var i = 0; i < gWebSearch.results.length; i++) {
    var thisResult = gWebSearch.results[i];
    results += "<p>";
    results += "<a href=\"" + thisResult.url + "\">" + thisResult.title + "<\/a><br \/>";
    results += thisResult.content + "<br \/>";
    results += "<span class=\"url\">" + thisResult.url + "<\/span>";
    if (thisResult.cacheUrl) {
      results += " - <a class=\"cached\" href=\"" + thisResult.cacheUrl + "\">Cached<\/a>";
    }
    results += "<\/p>";
  }
  searchresults.innerHTML = results;
}
</script>
<body onload="OnLoad()">
<div id="searchresults"></div>
</body>
</html>

Tuesday, August 08, 2006

XSLT generated Tag clouds (inspired by IE7Beta3)

I recently installed IE7Beta3 (I have only recently upgraded my home PC to XP). Straight away I was drawn to the feed aggregator, which is now integral to the browser. I was very impressed; I like the sleek styling. The aggregator part is not perfect by any means, managing feeds looks like a bit of a pain and it wouldn't install a valid OPML feed list that I had.

See Internet Explorer 7's superior feed handling for an overview, with pictures, of IE7Beta3's feed aggregator.

I am slightly concerned that the feed reader looks so much like a web page; this hides the complexity of the RSS platform from the user, but at the same time it could make you think that RSS should always behave like this. What IE7Beta3 is doing is a little more complicated than your average XSL transformation of an RSS feed.

The Filter by category feature looks very nice. Then it struck me: the Filter by category part of the page is a variant of a tag cloud. Only feeds that support the <category> element can be rendered like this; as far as I am aware, that limits this behaviour to RSS 2.0 and Atom.

As with any fancy interface, I started to think about how it works, and I thought I'd have a stab at producing a tag cloud using XSLT alone (I had a quick look and couldn't find anybody else doing exactly this on the web).

My XSLT makes use of something called the Muenchian Method (me neither!). I found this described in the Grouping and Counting sections of Dave Pawson's XSLT Questions and Answers. It turns out that the Muenchian Method isn't actually that complicated once you get started (you have to be a little careful with sorting). So a little time later I have my XSL that can transform an Atom or RSS 2.0 feed (containing category elements) into a tag cloud.
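The grouping-and-counting step behind a tag cloud can be sketched in JavaScript (the function name and the linear scaling choice are my own, not taken from the stylesheet): count how often each category occurs, then map the counts onto a font size range.

```javascript
// Count category occurrences and scale them to font sizes for a tag cloud.
function tagCloudWeights(categories, minSize, maxSize) {
  var counts = {}, max = 0, name;
  for (var i = 0; i < categories.length; i++) {
    counts[categories[i]] = (counts[categories[i]] || 0) + 1;
    if (counts[categories[i]] > max) max = counts[categories[i]];
  }
  var sizes = {};
  for (name in counts) {
    // linear scale: the most frequent tag gets maxSize
    sizes[name] = minSize + Math.round((counts[name] / max) * (maxSize - minSize));
  }
  return sizes;
}
```

The Muenchian Method (and XSLT 2.0's <xsl:for-each-group>) performs the same grouping-and-counting step; the sizes then just become font-size values in the rendered cloud.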

Here is the XSLT that creates tag clouds from RSS 2.0 and Atom feeds

Here is a tag cloud that I created from the complete Atom feed generated by my blog
Here is a tag cloud that I created from an RSS 2.0 feed called Recently Added and Updated Feeds from Microsoft (I think this comes pre-installed with IE7Beta3)

The CSS for the tag cloud was stolen from How to Make a Tag Cloud for Movable Type Blogs.

The next steps in implementing the IE7Beta3-style interface would be the sort by title and sort by date functions. Sorting by date would probably be easier with Atom feeds, as the Atom date format is very simple. With RSS 2.0 feeds you can't guarantee the date format pattern you will get, which makes sorting more of a challenge. Filtering and sorting simple stuff is relatively easy to do with XSLT.

Two things stop this XSLT from being run inside the client browser:

  • Obtaining remote feeds

    You probably need to be able to run the XSLT against XML feeds obtained from a remote source. Therefore we need a mechanism to obtain or proxy the feed. Also, if you are obtaining remote feeds it would be polite to use the conditional get mechanism if possible. This suggests a server side implementation, maybe using the ROME fetcher.
  • Passing parameters back into the XSLT

    In order to initiate the Filter by category, Sort by date and Sort by title behaviour we need to be able to pass parameters back into the XSLT. Passing parameters into an XSL stylesheet in the client browser is a fairly nightmarish prospect. Again, this suggests a server-side application would be best.

Having worked out how to create the Filter by category "tag cloud" I don't think it would be too hard to create a facsimile of the IE7Beta3 feeds interface using JSTL or a servlet.

Friday, August 04, 2006

SiteMesh I Likes

SiteMesh is one of those tools that has made me stop and re-evaluate how I write web applications (a paradigm shift, if you like). It is basically a servlet filter that allows you to decorate web pages with header, footer and navigational adornments after your application produces its output. I saw it described somewhere as AOP for the web. Something about the way that SiteMesh works feels very right to me at the moment. SiteMesh focuses the mind on creating applications in such a way that they become a very good fit with the direction in which web application development is currently heading (even if you're not actually planning to use SiteMesh in those applications).

SiteMesh is not Tiles

I have, on some limited occasions, used Tiles with Struts applications. There is something about Tiles that seems over elaborate. I'm sure there are numerous cases where Tiles does things that you can't do with SiteMesh. However, I like SiteMesh precisely because it is such a simple idea.

Separating presentation from content

What is really nice about SiteMesh is that it really focuses the mind on completely separating presentation from content (even the navigational aspects). This seems all the more relevant in the current climate of Web 2.0 and portlet environments. Web 2.0/AJAX-style applications often devolve user interface elements to the client side. Portlets likewise delegate final responsibility for the user interface to the portal.

Context Specific Rendering

I like the idea that based on the contents of a cookie the same application could be rendered entirely differently. I could have university department specific interfaces without the need to actually change the underlying application once it has been written to accommodate SiteMesh decoration.

To use the description often quoted to me about CMS systems, with SiteMesh the content becomes the sandwich filling with the bread of the sandwich (headers, footers, navigation) added via SiteMesh.

Using SiteMesh with Struts Bridge Portlets

What I also like is the idea that, since it works via a filter, I can take advantage of this in the portlet environment. Struts Bridge applications are Struts applications that can run as standalone applications and simultaneously as portlets. Essentially, if I wrote a Struts Bridge based application then, using SiteMesh, it could have header, footer and navigational elements in the "standalone" view, and without the need to change anything these would disappear in the portlet rendering (this is because portlets are not affected by servlet filters).

Passing "Where Am I?" content through to SiteMesh

In order for SiteMesh to add context-specific navigation it needs to know something about where it has been invoked. Since SiteMesh works as a servlet filter, it can only extract this information from some aspect of the page. SiteMesh has access to a page's URL, URL parameters and cookies, so it can use these as sources of information. SiteMesh decorators can also access information from the contents of an HTML page, although the specific elements that you can access appear deliberately quite limited: SiteMesh can access the <head>, any <meta> elements specified in the <head>, and the <body> content.

I think it would pollute and overcomplicate SiteMesh if it had full HTML DOM browsing facilities; restricting it to a limited set of information makes it simpler and, in my view, better. Therefore, if you add a metadata element to the header of the content page, this can be used to render context-aware navigation (all this machine-readable metadata sounds a bit semantic web-ish, doesn't it?). So with a bit of effort you can create context-aware navigation menus. This got me thinking about how best to separate breadcrumb (homeward path) style navigation; I will talk more about this in my next entry.

See also:
  1. Dynamic Navigation using SiteMesh
  2. Dependency Injection with SiteMesh

Separating breadcrumb (homeward path) navigation from content using XML/XSL

My experiments with SiteMesh got me thinking about how best to separate out breadcrumb like navigation from the core application content. Strictly speaking I mean Homeward Path navigation rather than the form of breadcrumb navigation that essentially gives you an onscreen potted browser history.

I have seen numerous approaches to this kind of navigation. The custom JSP tag libraries for this kind of thing that I have seen have never felt quite right. In my travels I recently discovered something called JATO (also known as Sun Java System Application Framework). JATO seems to have been around for a long while but I hadn't seen it before because it appears that it was only distributed as part of the Sun ONE Application Server (formerly iPlanet Application Server). JATO seems to be a MVC framework that existed before the JSTL, Struts, JSF and Spring era.

One of the examples in the JATO sample application is of a Static Breadcrumb method.

I liked the example but not the implementation, so I thought to myself: I can do better than that! What I have ended up with is a very simple and consequently quite satisfying solution. Using the navigationSample.xml from the JATO sample application, I have achieved the same result through a fairly simple XSL transformation. This is achieved with bog-standard JSTL, without the need to write any new tag libraries. You could easily extend the navigation XML sitemap to include more link information, and if you were using SiteMesh you could also take advantage of any "Where Am I?" information passed from your content pages.
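The ancestor-walking idea behind the breadcrumb can be sketched in JavaScript (the data shape mirrors navigationSample.xml below; the function name breadcrumbTrail is mine): find the category with the requested id and return the chain of names leading down to it, just as the XSL does with ancestor::category.

```javascript
// Walk a nested category tree and return the trail of names
// from the root down to the node with the given id (or null).
function breadcrumbTrail(node, id, trail) {
  trail = trail || [];
  if (node.id === id) return trail.concat(node.name);
  var children = node.children || [];
  for (var i = 0; i < children.length; i++) {
    var found = breadcrumbTrail(children[i], id, trail.concat(node.name));
    if (found) return found;
  }
  return null; // id not present in this subtree
}
```

On the sample data, looking up id "200" would give the trail Performing Arts > Acting > Actors > Buster Keaton, which is exactly what the XSL renders as links.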

Here is JATO's navigationSample.xml as used in the JATO sample application.

<?xml version="1.0" encoding="UTF-8"?>
<category name="Performing Arts" id="0">
  <category name="Acting" id="2">
    <category name="Actors" id="20">
      <category name="Buster Keaton" id="200"/>
      <category name="Charlie Chaplin" id="201"/>
      <category name="WC Fields" id="202"/>
    </category>
    <category name="Actresses" id="21">
      <category name="Mae West" id="210"/>
      <category name="Bette Davis" id="211"/>
      <category name="Marlene Dietrich" id="212"/>
    </category>
    <category name="Companies" id="22"/>
  </category>
  <category name="Circus Arts" id="3">
    <category name="Acrobatics" id="30">
      <category name="Anti-Gravity" id="300"/>
      <category name="Human Design" id="301"/>
      <category name="Trixy's Arco Page" id="302"/>
    </category>
    <category name="Clowning" id="31">
      <category name="Rodeo Clowns" id="310">
        <category name="One Eyed Jack" id="3100"/>
        <category name="Cool Nite Rodeo" id="3101"/>
      </category>
      <category name="Creepy Clowns" id="311">
        <category name="Creepy Clown Gallery" id="3110"/>
        <category name="Ghost Town Clowns" id="3111"/>
      </category>
      <category name="Anti-Clowns" id="312">
        <category name="The No Clown Zone" id="3120"/>
        <category name="The Anti-Clown Page" id="3121"/>
      </category>
    </category>
  </category>
</category>

Here is my simple JSP, which currently does the XML/XSL transformation and passes any appropriate parameters into the XSL. It could of course be optimized for performance, e.g. by caching the XSL or XML in memory.

<%@taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c"%><%--
--%><%@taglib uri="http://java.sun.com/jsp/jstl/xml" prefix="x"%><%--
--%><c:import url="navigationSample.xml" var="xml"/><%--
--%><c:import url="navigation.xsl" var="xsl"/><%--
--%><!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<style type="text/css">
body {
font-family: helvetica;
font-size: 0.8em;
}
</style>
</head>
<body>
<c:choose>
<c:when test="${empty param.id}">
<x:transform xml="${xml}" xslt="${xsl}"/>
</c:when>
<c:otherwise>
<x:transform xml="${xml}" xslt="${xsl}">
<x:param name="id" value="${param.id}"/>
</x:transform>
</c:otherwise>
</c:choose>
</body>
</html>

Here is my navigation.xsl

<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="xml" version="1.0" omit-xml-declaration="yes" />
<xsl:param name="id" select="'0'"/>
<xsl:strip-space elements="category"/>

<xsl:template match="/">
<xsl:apply-templates select="descendant-or-self::category[@id = $id]"/>
</xsl:template>

<xsl:template match="category[@id = $id]">
<xsl:for-each select="ancestor::category">
<a>
<xsl:attribute name="href">?id=<xsl:value-of select="@id"/></xsl:attribute>
<xsl:value-of select="@name"/>
</a> &gt;
</xsl:for-each>
<a>
<xsl:attribute name="href">?id=<xsl:value-of select="@id"/></xsl:attribute>
<xsl:value-of select="@name"/>
</a>
<xsl:if test="count(category) &gt; 0">
<xsl:for-each select="category">
<a>
<xsl:attribute name="href">?id=<xsl:value-of select="@id"/></xsl:attribute>
<xsl:value-of select="@name"/>
</a>
</xsl:for-each>
</xsl:if>
</xsl:template>
</xsl:stylesheet>


And that is all you need, horribly simple eh? I hope so.

Technically there is nothing stopping us doing XSL transformations for navigation in the client; this emergent navigation seems to be the direction in which Web 2.0 applications are going. What stops me doing it now, though, is all the cross-browser XSL transformation issues and the accessibility issue of needing to support non-JavaScript driven solutions on the client.

Tuesday, July 11, 2006

I won Java Posse Applet of the week

I'm so proud! It seemed to amuse and bemuse the Posse in equal measure! See: Java Posse #067 - Newscast for July 11th 2006. The applet in question can be found in Embedding Databases, Web Servers, App Servers in the Browser; it was inspired by Francois Orsini's embedded Java DB in an applet, but instead of a database I embedded the Jetty web application server.

From the podcast @ 20minutes 58seconds. Transcribed by me - (apologies if I didn't identify the speaker correctly, emphasis is mine):

Dick: Next up, Applet of the week this week.

Dick: This one is really off the wall. And I saw it and I knew it had to be an "Applet of the week". It has no graphical presence whatsoever. You fire this thing up and basically a bit of text comes up to say its running. But it's a full Jetty app server embedded in an applet. So you run this thing, you grant it permission and then basically you can hit localhost:9000 on your machine and there is Jetty running with a full app server that is ready to deploy war files to and stuff like that. I just thought it was a really, really, neat thing to do in an applet!

Joe: Wait a minute, from an applet? What a strange thing to do!

Tor: If they obscure the page then the applet could stop right? So you have to have this browser page visible or the server stops responding?

Dick: I think that would be true, yeah.

Carl: Yeah, this really should be a web start.

Joe: That is very weird.

Dick: I kind of thought it was just neat that somebody thought of doing it. It is one of those really off the wall things and I was like this is kind of weirdly cool, so I thought that's definitely worth an applet of the week. Just if nothing else because somebody is really thinking outside of the box on that one.

Tor: Yeah, if you want applet of the week then just go to your favourite app, go to your main class and add extends applet and you're in business.

Carl: Yeah, perfect.

Dick: There you go.

Joe: That's hilarious.

Carl: I don't know if that always works Tor.

Tor: Yeah, you have to implement like four methods. Write start and stop.

Carl: Get the windows hosted right....

Thanks guys!

Wednesday, May 31, 2006

Spring starts you programming in pure XML!

I'm now very enthusiastic, some may say obsessed, about using the Spring Framework. Spring is certainly making me more productive. The following example made me feel as if I'd switched from Java to pure XML as my core programming language. I'm embedding some closed-source Java classes into one of my web applications. (Actually, I'm embedding the Dwarf IMAP and SMTP servers into a web application. This is insane, huh? Anyway, that bit is not important.)

My closed source bean has a setter something like this:

setFile(File file)

Since this is to run embedded inside my web application I want to set my file with a path relative to where the web application is installed. Therefore, I think I need to locate the file resource using ServletContextResource and access the file by using MethodInvokingFactoryBean. In my Spring ApplicationContext I now have the following XML:

<bean id="someBean" class="closed.source.bean">
  <property name="file">
    <bean class="org.springframework.beans.factory.config.MethodInvokingFactoryBean">
      <property name="targetObject">
        <bean class="org.springframework.web.context.support.ServletContextResource">
          <!-- constructor args (ServletContext reference, path) elided in the original -->
          <constructor-arg index="0"><bean class=""/></constructor-arg>
          <constructor-arg index="1"><value></value></constructor-arg>
        </bean>
      </property>
      <property name="targetMethod"><value>getFile</value></property>
    </bean>
  </property>
</bean>

It works but it just feels wrong.

Saturday, May 27, 2006

Rendering roads on Google Maps using Java and PostGIS

A short while ago, Amit posted a question on my blog asking how he could render road data from a Google Earth KML file onto the web-based Google Maps. I wasn't really sure what to suggest at the time and the best I could come up with was to create custom tiles. After a little experimentation I've now come up with a different solution.

There have been many mashups using the Google Maps API, as requirements become more sophisticated and the dataset increases in size you start to find that a smattering of trigonometry is no longer sufficient or efficient.

I'm not an expert on Geographical Information Systems (GIS) but I do find the subject very interesting. A few weeks ago, I discovered that some databases have geographically aware ("spatial") extensions. The idea is that a database can be extended to support native spatial data types (co-ordinates, points, linestrings etc.) and also support common GIS functions. These are some "GIS aware database" implementations that I found:

Commercial GIS aware database offerings

Open source GIS aware database offerings

I also have to mention JTS which is not a database but is a handy Java class library of GIS functions. Now from my perspective it would be ideal if there were a mature 100% Java database with a geographical extension but at the moment my preference from the free offerings that I found is PostGIS.
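To give a flavour of what "GIS aware" means in practice, here is the sort of query PostGIS makes possible (pre-ST_-prefix PostGIS 1.x function names; the roads table and its columns are invented for illustration):

```sql
-- Find named roads within roughly 0.01 degrees of a point
SELECT name, AsText(wkb_geometry)
FROM roads
WHERE Distance(wkb_geometry,
               GeomFromText('POINT(78.5 30.2)', 4326)) < 0.01;
```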

I will now describe the process I took from KML file to a fully working (if not quite production ready) Google Maps powered road renderer. It is not a particularly difficult process but there are quite a few steps involved. Whilst conducting my investigations I found that somebody had achieved something similar to this using PHP and output to SVG format (see: Dynamic Loading of Vector Geodata for SVG Mapping Applications Using Postgis, PHP and getURL()/XMLHttpRequest()).

I made use of the following tools


Amit's KML file contained around 3MB of road data. The first stage in getting this data into the PostGIS database was to convert it from KML into GML. Since KML and GML are both XML formats, there are a couple of XSL stylesheets floating around on the net that I could have used (e.g. Styling KML to GML). What I actually ended up doing was loading the KML into TextPad and, with the help of a couple of example GML files that I found on the web, performing some "Search and Replace" surgery until I got the KML formatted to look like the GML that I wanted. At this stage it is important to test the resulting GML file to make sure the rendered GML resembles the output of the original KML. GML is a little bit of a pig to validate, as it seems that GML is mostly intended to be embedded into other documents. This means that you may have to write your own DTD or XML Schema to get your GML to validate (yuck!). Once you have something that validates, you can load it up into a GML aware renderer and see what you get. Quantum GIS (QGIS) is free and is able to render GML files.

Google Earth showing the road network as rendered via Amit's original KML file

Quantum GIS showing the same road network but this time rendered from the newly created GML file

GML to PostGIS

We have our GML file and have checked that the rendered version resembles the rendered version of the original KML file. The next step is to get the GML into the PostGIS database. There is a GIS toolkit called FWTools which includes a utility called ogr2ogr that can be used to convert between different GIS formats (much like GPSBabel does for GPS systems). One really nice feature of ogr2ogr is that it can directly import data from GML files into PostGIS databases. I used this tool to import my GML data, invoking it something like this:

ogr2ogr -f "PostgreSQL" "PG:dbname=postgis user=postgres password=postgres host=localhost port=5432" roads.gml

I could check on the data import and also tweak the table and column names with PostgreSQL's pgAdmin III utility.

PostGIS to dynamically served GML and then to Google Maps API

Right, the road data is now in my PostGIS database. The next step is to create a servlet that will serve up only the relevant portions of the road data for Google Maps to render. This is a very common requirement of systems like PostGIS, and since I am relatively unfamiliar with GIS jargon it took me a little while to pinpoint exactly how to do it. Apparently what I wanted to do was perform a "frame-based" query; chapter 4 of the PostGIS documentation helpfully provides an example of how to do this (if I'd only known it was called this sooner!). Generally there is a lot of GIS jargon that is quite academic, mathematical and disconcerting for the uninitiated (e.g. convex hulls) but if you keep looking long enough you'll eventually find what you need!
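A frame-based query boils down to asking the database for just the geometries whose bounding boxes intersect the current map view. Here is a sketch (not the actual servlet) of how such a query can be built; the roads table, wkb_geometry column and SRID 4326 are assumptions, and the function names are the pre-ST_ PostGIS 1.x style:

```java
// Sketch of building a PostGIS "frame-based" query for the current map view.
public class FrameQuery {

    // && is PostGIS's bounding-box intersection operator, so only geometries
    // overlapping the given view are fetched and converted to GML.
    static String frameQuery(double minLon, double minLat,
                             double maxLon, double maxLat) {
        String box = "BOX3D(" + minLon + " " + minLat + ","
                   + maxLon + " " + maxLat + ")";
        return "SELECT AsGML(wkb_geometry) FROM roads"
             + " WHERE wkb_geometry && SetSRID('" + box + "'::box3d, 4326)";
    }

    public static void main(String[] args) {
        // Bounding box co-ordinates passed up from the Google Maps JavaScript
        System.out.println(frameQuery(78.0, 30.0, 79.0, 31.0));
    }
}
```

Because the && operator works purely on bounding boxes it is cheap for the database to evaluate, and for rendering purposes that is usually all you need.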

As you'd expect, the construction of my servlet was actually conducted in parallel with the creation of my Google Maps API HTML and JavaScript. I based my servlet on the examples of using Java clients with PostGIS. For the Google Maps part I modified the Event Listeners example from the Google Maps API documentation to pass the bounding box co-ordinates of the current browser view.

After a little experimentation with other approaches (including using JSON) I chose to output GML format XML from my servlet. It just seems more straightforward to me to do it this way. I also noticed that IE seems to be particularly choosy about the name of the servlet (it seems to need to have the .xml extension and output the text/xml content type).
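The servlet's output is GML much like the file imported earlier. A single feature might look something like this (element names are illustrative, GML 2 style, with coordinates as comma-separated lon,lat pairs):

```xml
<gml:featureMember xmlns:gml="http://www.opengis.net/gml">
  <road>
    <gml:lineStringProperty>
      <gml:LineString srsName="EPSG:4326">
        <gml:coordinates>78.01,30.52 78.03,30.55 78.06,30.57</gml:coordinates>
      </gml:LineString>
    </gml:lineStringProperty>
  </road>
</gml:featureMember>
```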

Google Maps API rendering part of the road network in Firefox, the JavaScript processes the servlet generated GML

Quantum GIS rendering of the same part of the road network as shown above, again using the same servlet generated GML

Static Demo

Here is a static version of my Google Maps powered road renderer. I don't want to serve up the complete servlet backed application from my blog server without doing some further optimisation.


GML generating servlet source code:
Google Maps HTML/JavaScript file: RoadsRender.html [Note: this is not a live example as the background servlet is not running].

A copy of Amit's original KML file: Amit-Uttaranchal.kml [zipped 858 KB, unzipped 3.62 MB]
GML road network file derived from the above KML file: roads.gml [zipped 983 KB, unzipped 4.32 MB]
XML Schema for the above GML file: roads.xsd

Friday, May 26, 2006

Developing with Dwarf, the 100% Java IMAP Server

I've recently been working on integrating a webmail client with our uPortal installation. The consensus is that the best IMAP webmail clients seem to be written in PHP (SquirrelMail and Horde IMP).

The idea is that we achieve single sign-on between our portal and webmail client via the JASIG CAS authentication broker. I'm planning to use ESUP's phpCAS.

My development environment is currently Win2K. I discovered a really nice set of 100% Java e-mail servers (part of the Dwarf Server Framework); these are ideal for IMAP/SMTP testing purposes.

Usually to CASify a proper Unix IMAP server you need to install a PAM CAS module. Although there are some free windows IMAP/SMTP servers around, from what I have seen, most of them don't support pluggable authentication mechanisms. The idea of porting a Unix IMAP server with PAM authentication to windows via Cygwin is enough to give me nightmares!

I had a play with the Dwarf IMAP Server and found that it more than suits my purposes for development. Obviously I won't be recommending that we move to using the Dwarf IMAP server in production; I'd no more suggest this than I'd install my washing machine onto the top shelf of my bookcase. Platform portability has its place, but for mission critical systems like e-mail, native C IMAP servers are still essential! The Dwarf Server Framework is not open source but it is free to use. The nice thing about the Dwarf IMAP server is that since it is written in Java I can fairly easily add my own authentication mechanisms; I found Dwarf very easy to CASify.

Until the Apache James project comes up with a stable and easy to install open source IMAP server, Dwarf will do very nicely.

I feel duped by the Visual Systems Journal

A little while ago I followed a Google Ads link and signed up for a free subscription to Visual Systems Journal (a UK based publication). The reason I did this was because the text of the Google Ad proudly proclaimed something along the lines of "Now with HANDS-ON JAVA section". Cool, I thought, a free UK magazine with some Java coverage.

I've now received a couple of issues and I can't help feeling a little duped. In the HANDS-ON JAVA section over the last few issues I've seen articles on Python, Ruby on Rails and an article from Bruce Tate entitled "Pushing for pensioning Java" which seemed to be calling for the demise of Java. Don't get me wrong, there is some really good stuff about Java on the VSJ website, and the Bruce Tate article made some very good points (it was just that there was little in the way of counterargument in the magazine).

Call me an old cynic if you like; I think the reason that the Hands-On Java section in the printed VSJ magazine gives such a negative view of Java is because all their advertisers are promoting Microsoft related technologies! I should probably stick to buying JDJ from Borders at highly inflated prices.

Thursday, May 04, 2006

Any colour as long as it's #000000

I've been upgrading various pieces of server software. I've also upgraded my blog software to the latest Pebble 2.0.0-M1 incarnation. Gone are the categories; they are replaced with a shiny new Web 2.0 style tag cloud. I also added a table of contents page. So now there are numerous ways to navigate my blog: search, calendar, tag and content based.

I also took advantage of the new Pebble skin to overhaul the design. I'm not really much of a designer so I based my new design on the Deliciously Blue design from the Open Source Web Design collection.

Incidentally, the fabric style tile shown below was used as the background to my previous blog design. It is from SquidFingers which has many other excellent free background tiles. I always meant to reference it but I had lost the link until recently.


Monday, April 24, 2006

Google SSO, GData and X-GOOGLE-TOKEN

The details of Google's Account Authentication Proxy for Web Applications are not yet available but it looks likely that this is the basis of a Google Single Sign-On (SSO) service. Applications using this service will be able to make use of Google's GData enabled applications (calendar, blogger, mail etc). You would expect that in due course there would be third party server applications interested in supporting the GData protocol.

Google has whetted our appetites with the Google Data API with emphasis on the calendar functionality. I hope they remember the open source mantra, release early, release often and don't make us wait too long before the full disclosure of Google's proxy authentication mechanism (expected release date is the end of this month [April 2006]) and the details of the other GData enabled services. Google have been criticised in some quarters for not contributing enough back to the open source community and this represents an excellent opportunity to make amends.

Having had experience with JASIG CAS I know a little about how single sign-on for web applications works. I discovered a very interesting article detailing The Mysteries of X-GOOGLE-TOKEN and why it matters which describes a curious proprietary token-based authentication mechanism that Google Talk uses. It appears to behave exactly as I would expect Google's SSO proxy mechanism to work. It seems like Google SSO may have been in the wild all along and we just haven't realised it!

I think it is a fairly safe bet that X-GOOGLE-TOKEN will emerge in the proxy authentication part of the GData protocol.

There are some suggestive indications of how the proxy mechanism will work in the GData API, if for example you look at one of the constructors for GoogleService:

GoogleService(java.lang.String serviceName, java.lang.String applicationName, java.lang.String protocol, java.lang.String domainName)
Constructs a GoogleService instance connecting to the service with name serviceName for an application with the name applicationName.

I would expect that I could create a GoogleService instance with my local applicationName and domainName as arguments. This could authenticate against Google's SSO and call my local application back following authentication. Google SSO would send back the details necessary to validate authentication and conduct further proxied authentication making use of the X-GOOGLE-TOKEN, SID and LSID tokens mentioned in the Google Talk article.

Note also how the GData API constant GOOGLE_LOGIN_PATH of /accounts/ClientLogin is exactly the same URL as Google Talk uses for authentication.

Although I feel that there are now lots of clues, my feeling is there are still some missing pieces of the jigsaw. An anonymous source suggested that they thought that the X-GOOGLE-TOKEN mechanism might be enabled via "a piece of copypaste JS code". I would agree with this idea to some extent, however, we now know that the favoured GData client mechanisms are Java and C#. Therefore I would speculate that much like the Google Maps API requires a key, the Google Proxy Authentication Service would make similar demands on would-be SSO application clients.

Friday, April 21, 2006

GData is about more than Google Calendar integration

Hi, I'm Mark and I'm a Google powered shiny baubles addict. Last Thursday (or thereabouts) Google launched Google Calendar. It works in a fiendishly clever and impressive way; Google has raised the bar so high that we have come to expect nothing less of their web application offerings (I won't mention the A word but expect it is in there).

A couple of days later and Google have launched the Google Calendar data API. The Google Calendar data API is based upon a new common API model called GData. Impressive as the calendar application is, reading between the lines it is actually GData that looks set to have longer lasting significance (see ZDNet article on GData, Google: Master of Space and (Now) Time [found this via What's Google Calendar Really About ]).

GData is still slightly shrouded in mystery at the moment, the full details are yet to be fully disclosed but there are some really tantalizing glimpses around.

Quoting the Google Code announcement "GData model uses REST principles and Atom or RSS 2.0 syndicated feeds as the base resource model to expose data held by Google services (like Google Calendar)".

The GData protocol is also set to provide an Authentication service, this looks like it might provide a single sign-on solution for web applications.

There are client libraries for GData in Java and C#(.NET) flavours as well as detailed descriptions of the bare GData web service style XML protocol requests (so that scripting languages need not miss out).

This all sounds a bit like the beginnings of a Google powered enterprise portal to me. Integration of Google's own applications is already starting to happen, little chunks of Google Calendar are starting to surface inside Gmail. Fellow portal developers will appreciate that single sign-on is usually the key sticking point for portal integration. Every system I try and integrate with wants to be the "single" gateway, it wants to be the portal. Recent examples I've worked with include Blackboard (don't ask!) and Oracle applications (just try and get Oracle Collaboration Suite applications to integrate without using Oracle Single Sign-On). I'm slightly concerned that there might now be a Google single sign-on service (just how many *single* sign-on services should I be expected to integrate with!)

Controlling the single sign-on gateway is about maintaining power, the ability to access Google's fantastic calendar, mail, blogs, feed readers, web storage and other future services might prove a very seductive draw even if it doesn't play nice with external systems.

Google's decision to place its efforts behind the Atom format is also starting to make sense to me, although to be fair the GData protocol supports both Atom and RSS 2.0. I must admit I don't know much about the Atom API, as used by Google, but it seems to be about reading and writing information on the web (e.g. content management for blogs). GData's introduction of REST seems to be an attempt to make the whole Atom API approach easier. GData is also extending the Atom API approach to include other common elements of information (what Google refers to as "Kinds") such as capturing calendar information.

Great, that's all we need: a whole new set of standard formats to integrate with, and there I was complaining about Microsoft's proprietary RSS extensions! Yahoo! have already integrated with the new Google Calendar format. I really don't mean to moan; I'm very excited about Google's GData offering and very much look forward to seeing what is coming next.

Thursday, April 06, 2006

Bookmarks Portlet version 0.2 released

I have just released a new version of my bookmarks portlet application.

More details here

Downloads here

Functional Features

  • Add, Edit, Delete bookmarks and folders
  • Simple bookmarks management (moving of bookmarks and folders)
  • Import, Append and Export of the standard bookmarks format (de facto Netscape DTD standard which is used by all leading browsers)
  • Alphabetical bookmarks sort

Technical Details

  • 100% scriptlet free
  • JSR168 compatible portlet
  • Runs as a portlet and as a standalone web application simultaneously (achieved with Struts Bridge)
  • Written in an accessible spirit using unobtrusive DHTML rendering
  • XBEL is used throughout internally to store and manipulate bookmarks
  • XBEL object representation built with Castor used to manipulate bookmarks
  • XSL transformations used in rendering and sorting bookmark trees
  • Tidy utility used to XMLize and clean up bookmark imports
  • MVC architecture implemented with Apache Struts and JSTL
  • Database and resource access enabled with the Spring Framework
  • Includes an example embedded database, simple authentication and optionally a simple portal
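As a flavour of the XBEL format used internally to store and manipulate the bookmarks, here is a minimal example (the folder title and URL are invented):

```xml
<?xml version="1.0"?>
<!DOCTYPE xbel PUBLIC "+//IDN python.org//DTD XML Bookmark Exchange Language 1.0//EN//XML"
  "http://pyxml.sourceforge.net/topics/dtds/xbel-1.0.dtd">
<xbel version="1.0">
  <folder>
    <title>Development</title>
    <bookmark href="http://www.springframework.org/">
      <title>Spring Framework</title>
    </bookmark>
  </folder>
</xbel>
```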

Wednesday, April 05, 2006

Embedding Databases, Web Servers, App Servers in the Browser

At ApacheCon 2005 an exciting demonstration was given by Francois Orsini (see also Francois' blog) showing how the Apache Derby database could be embedded into the Firefox browser. After reading David Van Couvering's blog entry on the subject I have previously postulated on what this could mean, without actually knowing any of the specific details. Sun has its own variant of Apache Derby it calls Java DB. As a demonstration of Java DB the code of the ApacheCon 2005 demonstration can now be publicly accessed here. Full install instructions are available to make sure you have the right Java plug-ins installed for the demo to work. Plugins permitting you can also access the demo directly here.

The demo initially presents a login screen; once you login (with any username/password combination) you can then edit some data in a simple form and save that data. Exit the browser and you can access the same data again!

When the demo begins the browser presents you with a "Warning - Security" pop-up window at which point you must agree to trust the demo for the demo to proceed.

What is really great about this demo is that you can download it and dissect it to see how it works. It turns out to be really quite simple. All the pages of the demo application are actually a single web page, whose various sections are exposed and hidden using the usual DHTML and AJAX techniques. So why is it important that this is a single page web application, you may ask? Well, the whole thing is powered by an applet. It has been a long while since I had anything to do with applets (I've never really been that impressed beyond minesweeper and space invaders). This applet is responsible for starting and shutting down the backend database and responding with appropriate XML to all the user requests; it provides services to the JavaScript rather than rendering any visible GUI. The process of communication with the locally installed Derby database via the applet is what Francois Orsini refers to as LAJAX (Local AJAX)! Cool! Does anybody notice the resemblance to how DWR works?

There is nothing particularly Java DB/Apache Derby specific about the LAJAX applet technique; it could just as easily have been an installation of some other embedded 100% Java database like HSQLDB or H2 at the backend. In the source of the demo's index.html it mentions you could even embed a Jetty web server using this technique, something that I couldn't resist trying out for myself!

Things to note: I haven't written an applet for about 6 years and even then I didn't do anything clever with it, I'm new to Jetty, and I know very little about Java Web Start and other applet related technologies. If you can see how the following techniques can be further improved, add a comment (I'd be very interested)!

Embedded HTTP Server in a Browser

Jetty is relatively simple to get started with. The key to embedding it in the browser is applet signing. When you accept the "Security" pop-up warning, as you did in the Derby demo, you are trusting the applet and allowing it read and write access to your computer's filesystem. This is how the Derby demo was able to recall data from a previous database session: it had actually written it to your local hard disk! Most of the jar files that come with Jetty need to be signed in order to function correctly when they are invoked by the applet.

For this example I created a self-signed certificate to use for signing the applets (the keystore name, mystore, is illustrative; the original commands omitted it):

keytool -genkey -keystore mystore -alias mycert
keytool -export -keystore mystore -alias mycert -file my.cer
jarsigner -keystore mystore bristol.jar mycert

The hypertext documents, images etc. that are going to be accessed by the embedded web server also need to be stored inside a JAR archive in order for the applet to be able to access them. In my first experiment I concentrated on embedding a web server containing static content. In the next experiment I embedded an application server, this can contain dynamic resources (such as JSPs and Servlets).

You can access the applet source code for the embedded web server here. It runs a web server on your machine on port 9090. Try this (at your own risk!) here; it might be a good idea to keep an eye on the Java console as the Jetty server will publish its messages there. This is the HTML code that invokes the applet.

<applet code="bristol.MyApplet.class"
archive="bristol.jar, files.jar, org.mortbay.jetty.jar, commons-logging.jar, javax.servlet.jar">
</applet>

This applet is responsible for starting and stopping the web server. Once started the content on the embedded web server can be accessed from another browser window (or tab) as long as the original page containing the applet stays open (and therefore the web server continues running).

I had to do a little experimentation in order to establish how to access the static resources from the applet (remember my applet experience is limited) but once I'd got that working it was simply a process of jarring it up and signing the necessary jar files. I didn't write any AJAX style access to the embedded web server content, although you can see how this could easily be done.
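For anyone attempting the same thing: once the static content is jarred up, it sits on the applet's classpath, so it can be read back with getResourceAsStream. Here is a self-contained sketch of that access pattern (the resource path in main is just to show the call works against any classpath resource; an applet would pass the path of a page packed into files.jar):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch: reading static content that has been packed into one of the
// applet's (signed) jars, e.g. files.jar.
public class JarResource {

    // Load a resource from the classpath, i.e. from one of the jars.
    static byte[] load(String path) throws IOException {
        InputStream in = JarResource.class.getResourceAsStream(path);
        if (in == null) throw new IOException("not found: " + path);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        for (int n; (n = in.read(buf)) != -1; ) out.write(buf, 0, n);
        in.close();
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Any classpath resource works for demonstration purposes
        byte[] data = load("/java/lang/Object.class");
        System.out.println(data.length > 0);
    }
}
```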

With a very simple applet you can start and shut down an embedded web server, but the real potential comes when you're able to access more dynamic content.

Embedding Web Application server in a Browser

Again, Jetty provides a relatively simple way to do this. Having already solved the applet resource access problems I was pretty much home free...or so I thought! I have to admit that I'm a big fan of writing JSPs rather than servlets. Jetty supports JSPs, but in order for a JSP to run it must first be compiled. The applet cannot expect to know anything about the system onto which it is being installed; it is highly likely that the target system will only contain a Java Runtime Environment, not a Java compiler.

So I needed to pre-compile my JSP pages before jarring them up and signing them. Fortunately, I found that Ben E. Cline (a.k.a. Benjy) had used Jetty and precompiled JSP pages for a CD installer. This is incidentally a really cool idea! How easy it would be to run a Jetty powered application server with a search engine like Lucene to provide search facilities for a CD.

So I precompiled the JSPs. I created a directory and copied all my JSP files to it. I then ran JspC in the directory:

java org.apache.jasper.JspC -d . -l -s -uriroot . -compile -webxml webfrag.xml *.jsp

Once JspC had completed successfully, I copied the "org" subdirectory to my WEB-INF/classes directory in my web app and inserted the lines from the webfrag.xml file into an appropriate web.xml file. I jarred the web application files in my resource file (files.jar). So files.jar now contained:


I then used the jarsigner to sign this archive and the additional jar files that the web application server needs.

You can access the applet source code for the embedded application server here. Again, this runs on port 9090. Try it (at your own risk!) here. My applet HTML now looks like this:

<applet code="bristol.MyApplet_1.class"
archive="bristol.jar, files.jar, org.mortbay.jetty.jar, commons-logging.jar, javax.servlet.jar, jasper-runtime.jar, jasper-compiler.jar, commons-el.jar">
</applet>

I have not done anything fancy with the content but you can see how you could install useful servlets and JSP web applications on a local server. Although if you're anything like me you'd soon end up with quite a large collection of related jar files to sign and for the applet to download (but we all have broadband nowadays!)

I've also noticed that if you try one embedded demo followed by another it doesn't always work; you see the console complaining about ThreadDeath. If you want to see both examples working, it is probably best to close the browser in between.