Thursday, August 17, 2006

A nasty css hack solution to the IE z-index problem

IE has notorious problems with the z-index positioning property. Z-index is supposed to allow the web developer to control the order in which page elements stack up.

I needed to overlap two elements, and no amount of messing around with z-index gave me the results I wanted in IE. I did not want to significantly modify the way the page was created, since it only had problems rendering in IE.

I turned to a very dirty solution to this, namely, JavaScript embedded in CSS.

Based upon this integrating javascript into stylesheets blog entry, I cobbled together some JavaScript embedded in a CSS file and my z-index woes disappeared (but it did make me feel slightly dirty!).


body {
  background: url("
    javascript:
      document.body.onload = function() {
        var xbutton = document.getElementById('xbutton');
        if (xbutton) {
          xbutton.style.zIndex = 9999;
        }
      }
  ");
}

Tuesday, August 15, 2006

Google AJAX Search API: A simple web search example

I just discovered the Google AJAX Search API. It looks to have been around since the beginning of June 2006. Very nice it is too, although some may say it would have been easier if we could get results from Google's search engines (local, web, video and blog) in the OpenSearch Response XML format.

The Google AJAX Search API returns results in JSON format. There are some very nice and some quite advanced usage samples to peruse on Google's site. For some reason the simplest example of how to use the API, i.e. to obtain a simple list of results, seems to be missing from the usage samples. So here is an example of how to do this (note: the API limits you to getting either 4 or 8 results).

You'll also need to obtain a Google AJAX Search API key. You can see the example below working here [requires JavaScript enabled, obviously!]; narcissistically, the example searches for my name on Google web search. I'm sure that somebody out there will find this very simple example useful!


<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8"/>
<title>Very Simple Web Search - Google AJAX Search API demo</title>
<style type="text/css">
body {
background-color: white;
color: black;
font-family: Arial, sans-serif;
font-size: small;
}

.url {color: green;}
.cached {color: #77C;}
</style>
<script src="http://www.google.com/uds/api?file=uds.js&amp;v=0.1&amp;key=<INSERT API KEY>"
type="text/javascript"></script>
<script type="text/javascript">
//<![CDATA[

var gWebSearch;

function OnLoad() {
  // Initialize the web searcher
  gWebSearch = new GwebSearch();
  gWebSearch.setResultSetSize(GSearch.LARGE_RESULTSET);
  gWebSearch.setSearchCompleteCallback(null, OnWebSearch);
  gWebSearch.execute("mark mclaren");
}

function OnWebSearch() {
  if (!gWebSearch.results) return;
  var searchresults = document.getElementById("searchresults");
  searchresults.innerHTML = "";

  var results = "";
  for (var i = 0; i < gWebSearch.results.length; i++) {
    var thisResult = gWebSearch.results[i];
    results += "<p>";
    results += "<a href=\"" + thisResult.url + "\">" + thisResult.title + "<\/a><br \/>";
    results += thisResult.content + "<br \/>";
    results += "<span class=\"url\">" + thisResult.url + "<\/span>";
    if (thisResult.cacheUrl) {
      results += " - <a class=\"cached\" href=\"" + thisResult.cacheUrl + "\">Cached<\/a>";
    }
    results += "<\/p>";
  }
  searchresults.innerHTML = results;
}

//]]>
</script>
</head>
<body onload="OnLoad()">
<div id="searchresults"></div>
</body>
</html>

Tuesday, August 08, 2006

XSLT generated Tag clouds (inspired by IE7Beta3)

I recently installed IE7Beta3 (I have only recently upgraded my home PC to XP). Straight away I was drawn to the feed aggregator, which is now integral to the browser. I was very impressed; I like the sleek styling. The aggregator part is not perfect by any means: managing feeds looks like a bit of a pain, and it wouldn't import a valid OPML feed list that I had.

See Internet Explorer 7's superior feed handling for an overview, with pictures, of IE7Beta3's feed aggregator.

I am slightly concerned that the feed reader looks so much like a web page; this hides the complexity of the RSS platform from the user, but at the same time it could make you think that RSS should always behave like this. What IE7Beta3 is doing is a little bit more complicated than your average XSL transformation of an RSS feed.

The Filter by category feature looks very nice. Then it struck me: the Filter by category part of the page is a variant of a tag cloud. Only feeds that support the <category> element can be rendered like this, which as far as I am aware limits this behaviour to RSS 2.0 and Atom.

As with any fancy interface, I started to think about how it works, and I thought I'd have a stab at producing a tag cloud using XSLT alone (I had a quick look and couldn't find anybody else doing exactly this on the web).

My XSLT makes use of something called the Muenchian Method (me neither!). I found this described in the Grouping and Counting sections of Dave Pawson's XSLT Questions and Answers. It turns out that the Muenchian Method isn't actually that complicated once you get started (you have to be a little careful with sorting). So a little time later I have my XSL that can transform an Atom or RSS 2.0 feed (containing category elements) into a tag cloud.
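To give a flavour of the technique, here is a stripped-down sketch of the Muenchian grouping applied to an RSS 2.0 feed (this is not the stylesheet linked below, just the core idea, and the font scaling is deliberately crude): an xsl:key indexes every <category> by its text value, generate-id() picks one representative per distinct value, and count() over the key group drives the font size.

<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="html"/>

  <!-- Index every <category> element by its text value -->
  <xsl:key name="tags" match="category" use="."/>

  <xsl:template match="/rss/channel">
    <div class="tagcloud">
      <!-- The Muenchian trick: visit only the first <category> in each group -->
      <xsl:for-each select="item/category[generate-id() = generate-id(key('tags', .)[1])]">
        <xsl:sort select="."/>
        <xsl:variable name="count" select="count(key('tags', .))"/>
        <!-- Crude scaling: 100% plus 20% per additional occurrence -->
        <span style="font-size:{100 + ($count - 1) * 20}%">
          <xsl:value-of select="."/>
        </span>
        <xsl:text> </xsl:text>
      </xsl:for-each>
    </div>
  </xsl:template>
</xsl:stylesheet>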

Here is the XSLT that creates tag clouds from RSS 2.0 and Atom feeds

Here is a tag cloud that I created from the complete Atom feed generated by my blog
Here is a tag cloud that I created from an RSS 2.0 feed called Recently Added and Updated Feeds from Microsoft (I think this comes pre-installed with IE7Beta3)

The CSS for the tag cloud was stolen from How to Make a Tag Cloud for Movable Type Blogs.

The next steps in implementing the IE7Beta3 style interface would be the sort by title and sort by date functions. Sorting by date would probably be easier with Atom feeds, because the Atom date format is very simple. With RSS 2.0 feeds you can't guarantee the date format pattern you will get, which makes sorting more of a challenge. Filtering and sorting simple stuff is relatively easy to do with XSLT.
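For example, because Atom's timestamps (e.g. 2006-08-08T10:30:00Z) collate correctly as plain strings, a newest-first sort is little more than an xsl:sort. A minimal sketch, assuming an Atom 1.0 feed:

<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:atom="http://www.w3.org/2005/Atom">
  <xsl:output method="html"/>

  <xsl:template match="/atom:feed">
    <ol>
      <!-- RFC 3339 dates sort correctly as plain text, newest first -->
      <xsl:for-each select="atom:entry">
        <xsl:sort select="atom:updated" data-type="text" order="descending"/>
        <li><xsl:value-of select="atom:title"/></li>
      </xsl:for-each>
    </ol>
  </xsl:template>
</xsl:stylesheet>

An RSS 2.0 pubDate ("Tue, 08 Aug 2006 ...") would need picking apart with substring() and some month-name juggling before it could be sorted, which is exactly the extra challenge.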

What stops this XSLT from being run inside the client browser is two things.

  • Obtaining remote feeds

    You probably need to be able to run the XSLT against XML feeds obtained from a remote source, so we need a mechanism to obtain or proxy the feed. Also, if you are obtaining remote feeds it would be polite to use the conditional GET mechanism where possible. This suggests a server-side implementation, maybe using the ROME fetcher.
  • Passing parameters back into the XSLT

    In order to initiate the Filter by category, Sort by date and Sort by title behaviour we need to be able to pass parameters back into the XSLT. Passing parameters into an XSL stylesheet in the client browser is a fairly nightmarish prospect. Again, this suggests a server-side application would be best.

Having worked out how to create the Filter by category "tag cloud" I don't think it would be too hard to create a facsimile of the IE7Beta3 feeds interface using JSTL or a servlet.

Friday, August 04, 2006

SiteMesh I Likes

SiteMesh is one of those tools that has made me stop and re-evaluate how I write web applications (a paradigm shift, if you like). It is basically a servlet filter that allows you to decorate web pages with header, footer and navigational adornments after your application produces its output. I saw it described somewhere as AOP for the web. Something about the way that SiteMesh works feels very right to me at the moment. SiteMesh focuses the mind on creating applications in such a way that they become a very good fit with the direction in which web application development is currently heading (even if you're not actually planning to use SiteMesh in those applications).
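For anyone who hasn't come across it, a decorator is just a JSP that wraps whatever page the filter intercepted. A minimal sketch (the header and footer markup are obviously made up; a decorators.xml file maps URL patterns onto decorators like this one):

<%@ taglib uri="http://www.opensymphony.com/sitemesh/decorator" prefix="decorator" %>
<html>
<head>
  <title><decorator:title default="My Application"/></title>
  <decorator:head/> <%-- whatever the original page put in its <head> --%>
</head>
<body>
  <div id="header">site-wide header and navigation</div>
  <decorator:body/> <%-- the undecorated page's <body> content --%>
  <div id="footer">site-wide footer</div>
</body>
</html>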

SiteMesh is not Tiles

I have, on some limited occasions, used Tiles with Struts applications. There is something about Tiles that seems over-elaborate. I'm sure there are numerous cases where Tiles does things that you can't do with SiteMesh. However, I like SiteMesh precisely because it is such a simple idea.

Separating presentation from content

What is really nice about SiteMesh is that it focuses the mind on completely separating presentation from content (even the navigational aspects). This seems all the more relevant in the current climate of Web 2.0 and portlet environments. Web 2.0/AJAX style applications often devolve user interface elements to the client side. Portlets likewise delegate final responsibility for the user interface to the portal.

Context Specific Rendering

I like the idea that, based on the contents of a cookie, the same application could be rendered entirely differently. I could have university department specific interfaces without needing to change the underlying application once it has been written to accommodate SiteMesh decoration.

To use the description often quoted to me about CMS systems: with SiteMesh the content becomes the sandwich filling, with the bread of the sandwich (headers, footers, navigation) added around it.

Using SiteMesh with Struts Bridge Portlets

What I also like is the idea that, since it works via a filter, I can take advantage of this in the portlet environment. Struts Bridge applications are Struts applications that can run as standalone applications and simultaneously as portlets. Essentially, if I wrote a Struts Bridge based application, then using SiteMesh it could have header, footer and navigational elements in the "standalone" view, and without the need to change anything these would disappear in the portlet rendering (because portlets are not affected by servlet filters).

Passing "Where Am I?" content through to SiteMesh

In order for SiteMesh to add context specific navigation it needs to know something about where it has been invoked. Since SiteMesh works as a servlet filter, it can only extract this information from some aspect of the page. SiteMesh has access to a page's URL, URL parameters and cookies, so it can use these as sources of information. SiteMesh decorators can also access information from the contents of an HTML page, although the specific elements you can access appear deliberately quite limited: the <head>, any <meta> elements specified in the <head>, and the <body> content. I think it would pollute and overcomplicate SiteMesh if it had full HTML DOM browsing facilities; restricting it to a limited set of information makes it simpler and, in my view, better.

Therefore, if you add a <meta> element to the <head> of the content page, this can be used to render context aware navigation (all this machine readable meta data sounds a bit semantic web-ish, doesn't it?). So with a bit of effort you can create context aware navigation menus. This got me thinking about how best to separate breadcrumb (homeward path) style navigation, and I will talk more about this in my next entry.
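As a quick illustration of the <meta> part (the "section" name is just something I've made up for this example): the content page declares where it lives, and the decorator reads that declaration back as one of SiteMesh's meta.* page properties, which can then drive the navigation markup.

In the content page:

  <meta name="section" content="library"/>

In the decorator:

  <%@ taglib uri="http://www.opensymphony.com/sitemesh/decorator" prefix="decorator" %>
  <%-- the content page's meta tag, exposed by SiteMesh as a page property --%>
  <decorator:getProperty property="meta.section"/>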

See also:
  1. Dynamic Navigation using SiteMesh
  2. Dependency Injection with SiteMesh

Separating breadcrumb (homeward path) navigation from content using XML/XSL

My experiments with SiteMesh got me thinking about how best to separate breadcrumb-like navigation from the core application content. Strictly speaking I mean Homeward Path navigation, rather than the form of breadcrumb navigation that essentially gives you an onscreen potted browser history.

I have seen numerous approaches to this kind of navigation. The custom JSP tag libraries I have seen for this kind of thing have never felt quite right. In my travels I recently discovered something called JATO (also known as the Sun Java System Application Framework). JATO seems to have been around for a long while, but I hadn't seen it before because it appears it was only distributed as part of the Sun ONE Application Server (formerly iPlanet Application Server). JATO seems to be an MVC framework that existed before the JSTL, Struts, JSF and Spring era.

One of the examples in the JATO sample application is of a Static Breadcrumb method.

I liked the example but not the implementation. So I thought to myself, I can do better than that! What I have ended up with is a very simple and consequently quite satisfying solution. Using the navigationSample.xml from the JATO sample application, I have achieved the same result through a fairly simple XSL transformation. This is achieved with bog-standard JSTL, without the need to write any new tag libraries. You could easily extend the navigation XML sitemap to include more link information, and if you were using SiteMesh you could also take advantage of any "Where Am I?" information passed from your content pages.

Here is JATO's navigationSample.xml as used in the JATO sample application.


<?xml version="1.0" encoding="UTF-8"?>
<category name="Performing Arts" id="0">
<category name="Acting" id="2">
<category name="Actors" id="20">
<category name="Buster Keaton" id="200"/>
<category name="Charlie Chaplin" id="201"/>
<category name="WC Fields" id="202"/>
</category>
<category name="Actresses" id="21">
<category name="Mae West" id="210"/>
<category name="Bette Davis" id="211"/>
<category name="Marlene Dietrich" id="212"/>
</category>
<category name="Companies" id="22"/>
</category>
<category name="Circus Arts" id="3">
<category name="Acrobatics" id="30">
<category name="Anti-Gravity" id="300"/>
<category name="Human Design" id="301"/>
<category name="Trixy's Arco Page" id="302"/>
</category>
<category name="Clowning" id="31">
<category name="Rodeo Clowns" id="310">
<category name="One Eyed Jack" id="3100"/>
<category name="Cool Nite Rodeo" id="3101"/>
</category>
<category name="Creepy Clowns" id="311">
<category name="Creepy Clown Gallery" id="3110"/>
<category name="Ghost Town Clowns" id="3111"/>
</category>
<category name="Anti-Clowns" id="312">
<category name="The No Clown Zone" id="3120"/>
<category name="The Anti-Clown Page" id="3121"/>
</category>
</category>
</category>
</category>

Here is my simple JSP, which currently does the XML/XSL transformation and passes any appropriate parameters into the XSL. It could of course be optimized for performance, e.g. by caching the XSL or XML in memory. (The <%-- --%> comments at the end of each line are just there to stop stray whitespace from the tags appearing in the output.)


<%@taglib uri="http://java.sun.com/jsp/jstl/core" prefix="c"%><%--
--%>
<%@taglib uri="http://java.sun.com/jsp/jstl/xml" prefix="x"%><%--
--%>
<c:import url="navigationSample.xml" var="xml"/><%--
--%>
<c:import url="navigation.xsl" var="xsl"/><%--
--%>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
"http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"
>
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<title></title>
<style type="text/css">
body {
font-family: helvetica;
font-size: 0.8em;
}
</style>
</head>
<body>
<c:choose><%--
--%>
<c:when test="${empty param.id}"><%--
--%>
<x:transform xml="${xml}" xslt="${xsl}"/><%--
--%>
</c:when><%--
--%>
<c:otherwise><%--
--%>
<x:transform xml="${xml}" xslt="${xsl}"><%--
--%>
<x:param name="id" value="${param.id}"/><%--
--%>
</x:transform><%--
--%>
</c:otherwise><%--
--%>
</c:choose>
</body>
</html>

Here is my navigation.xsl


<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
<xsl:output method="xml" version="1.0" omit-xml-declaration="yes" />
<xsl:param name="id" select="'0'"/>
<xsl:strip-space elements="category"/>

<xsl:template match="/">
<xsl:apply-templates/>
</xsl:template>

<xsl:template match="category[@id = $id]">
<xsl:for-each select="ancestor::category">
<a>
<xsl:attribute name="href">?id=<xsl:value-of select="@id"/></xsl:attribute>
<xsl:value-of select="@name"/>
</a> &gt;
</xsl:for-each>
<a>
<xsl:attribute name="href">?id=<xsl:value-of select="@id"/></xsl:attribute>
<xsl:value-of select="@name"/>
</a>
<xsl:if test="count(category) &gt; 0">
<ol>
<xsl:for-each select="category">
<li>
<a>
<xsl:attribute name="href">?id=<xsl:value-of select="@id"/></xsl:attribute>
<xsl:value-of select="@name"/>
</a>
</li>
</xsl:for-each>
</ol>
</xsl:if>
</xsl:template>

</xsl:stylesheet>

And that is all you need, horribly simple eh? I hope so.

Technically, there is nothing stopping us doing XSL transformations for navigation in the client. This emergent navigation seems to be the direction that Web 2.0 applications are going in. What stops me doing this now, though, is all the cross-browser XSL transformation issues, plus the accessibility issue of needing to support non-JavaScript driven solutions on the client.
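To illustrate the cross-browser part, here is roughly the shape the client-side version would take (a sketch only: it assumes the sitemap XML and the XSL have already been loaded as DOM documents, e.g. via XMLHttpRequest, and the function name is my own invention):

// xmlDoc and xslDoc are already-loaded XML DOM documents
function renderNavigation(xmlDoc, xslDoc, id) {
  if (window.XSLTProcessor) {
    // Mozilla/Firefox and Opera 9: parameters are easy to pass
    var processor = new XSLTProcessor();
    processor.importStylesheet(xslDoc);
    processor.setParameter(null, "id", id);
    return processor.transformToFragment(xmlDoc, document);
  } else if (window.ActiveXObject) {
    // IE/MSXML: transformNode returns a string and offers no simple way to
    // pass the id parameter (you need XSLTemplate/createProcessor/addParameter)
    var holder = document.createElement("div");
    holder.innerHTML = xmlDoc.transformNode(xslDoc);
    return holder;
  }
  return null; // no client-side XSLT support
}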