Distigme

A little learning is a dangerous thing.



MySQL and PostgreSQL

I started a project not long ago using MySQL. It seems like the obvious choice for a free database, right? Then, still early in the project, I ran into one barrier after another where MySQL just wouldn’t do the right thing, because it sucks. It’s not that you absolutely can’t get things done, but rather that you are continually forced into cumbersome and inefficient workarounds.

Some others have recommended PostgreSQL as an alternative. While I haven’t taken the plunge, it sure looks impressive, and I’m strongly considering it.

Here’s a quick sampling of the ways MySQL (using InnoDB) has already disappointed me:

  • Can’t rename a database.
  • Can’t alter a table in-place (it actually copies the entire table to replace it).
  • Table names are case-sensitive on Linux, contrary to the SQL standard, but not on Windows.
  • Transactions don’t support DDL. If a schema migration fails partway through, you’re screwed.
  • DDL is missing basic constructs like ADD COLUMN IF NOT EXISTS, requiring complex workarounds. The trouble is exacerbated by…
  • Direct queries are limited to a single statement. Even using the IF keyword requires wrapping everything in a stored procedure.
  • No support for arrays. For example, how do you pass a list of items to retrieve into a stored procedure? A comma-delimited string?
  • No reasonable support for time. What exists handles time zones stupidly, and TIMESTAMP, which is only precise to the second, drags in its own baggage.
  • No reasonable way to deal with UUIDs. Would you rather use a cumbersome BINARY(16) or inefficient CHAR(36)? And what do you think happens with indexes and primary keys?
  • Inferior support for complex data structures (PostgreSQL, for example, can natively store and query JSON).
  • No support for CTEs, which are essential for recursion (e.g., querying for all the ancestors of a sub-organization).
  • In general, MySQL adherence to ANSI SQL standards is quite poor. Sometimes MySQL requires its own alternative syntax, and sometimes it doesn’t even offer an alternative.
  • The query optimizer tends to be quite poor. The longstanding cause is MySQL’s assumption that each table can have a different storage engine (even though everyone just uses InnoDB or some variant nowadays). The optimizer is said to be the big improvement targeted for 5.6, but I don’t have high expectations.
  • Simple COUNT(*) queries require an index scan, which is so bad that common practice is to maintain denormalized counts updated by a trigger.
  • Tools (e.g., MySQL Workbench) are notoriously buggy. I fully expect to crash and lose data on a daily basis.
  • Materialized views are simply not supported (same for PostgreSQL, but at least it’s under active development).

And I haven’t even started doing anything really complicated yet. Pretty disappointing for 2013. Yes, the fail is strong with this one.

These were pretty much all non-issues when I used SQL Server ten years ago, and reportedly are well-supported in PostgreSQL (except materialized views, as noted).

Really the main thing MySQL has going for it is that it’s really really popular. Big sites like Facebook use it (though regretfully). But it seems that the tide is turning, and mass abandonment is now underway.

A lot of the movement away from MySQL has been into NoSQL engines like MongoDB. At first I had a hard time understanding the whole impetus behind the NoSQL movement. But if your view of the world of relational databases is through the lens of MySQL, it’s easy to see why you would demand something radically different.

So, I look at PostgreSQL, and it seems to do just about everything I could want, but two things hold me back. One is the worry that “the grass is always greener” and I end up trading known problems for new unknown problems. The other, more practically, is that Amazon RDS, which we plan to use, supports MySQL but not PostgreSQL. We’d probably have to go to Heroku (which, interestingly, is actually built on top of Amazon’s cloud).

Any advice from those with more experience?



SMD (Service Mapping Description)

A decade ago, XML web services would describe themselves using the WSDL standard, which tools could use to generate a strongly typed and convenient wrapper for calling the service, such as a Java class. Fast forward halfway to today, and the challenge has moved on from accessing XML services from the server side to accessing JSON services from the browser.

Not that it’s ever been too difficult. JSON first became popular because it was already valid JavaScript (though that fact was only relevant to those willing to call the evil eval()). And JavaScript is dynamically typed. So calling a simple ajax method in modern Dojo might look like this:

require(["dojo/request"], function(request) {
	request.get("/ajax/comments/", {
		handleAs: "json",
		query: { id: 123 }
	}).then(function(result) {
		console.log(result.length + " comments.");
	});
});

jQuery code tends to have these direct ajax calls scattered everywhere. They look similar:

$.ajax({
	url: "/ajax/comments/",
	dataType: "json",
	data: { id: 123 }
}).then(function(result) {
	console.log(result.length + " comments.");
});

But the burden can be still lighter. What exactly was the URL? Does it go like comments/123 or comments?id=123? Was it a POST or a PUT? What if I forget a parameter that the server requires?

So, the natural evolution is to start writing wrapper functions in a new module:

define(["dojo/request"], function(request) {
	return {
		getComments: function(id) {
			return request.get("/ajax/comments/", {
				handleAs: "json",
				query: { id: id }
			});
		}
	};
});

Now the caller can be simplified:

require(["my/service"], function(service) {
	service.getComments(123).then(function(result) {
		console.log(result.length + " comments.");
	});
});

That’s progress, but it’s somewhat expensive progress, and you may question whether it’s really worthwhile for a method you only call once.

Enter SMD. Just write a JSON object (the SMD) that describes the API of your service, and Dojo can generate the methods.

define(["dojox/rpc/Service"], function(RpcService) {
	var smd = {
		"SMDVersion": "2.0",
		"id": "http://www.example.org/ajax/",
		"description": "Example Service",
		"target": "/ajax/", 
		"envelope": "URL",
		"services": {
			"getComments": {
				"target": "comments",
				"transport": "GET",
				"parameters": [
					{ "name": "id", "type": "integer" }
				]
			}
		}
	};
	return new RpcService(smd);
});

This example shows the SMD embedded in a Dojo module, but you can also put it in a separate file.
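
If I read the dojox/rpc/Service docs correctly, the constructor also accepts the URL of an SMD in place of the object itself, so a minimal sketch of the separate-file approach (the path /ajax/service.smd is hypothetical) is:

define(["dojox/rpc/Service"], function(RpcService) {
	// Fetch the SMD from its own file rather than embedding it.
	// The path here is made up; point it at wherever you serve your SMD.
	return new RpcService("/ajax/service.smd");
});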

You can imagine that if a team has one Java or PHP expert writing the API and one JavaScript expert consuming the API, a single file to define the interface is a big help. If the back-end guy needs to change a URL, instead of sifting through the mountain of unfamiliar JavaScript and changing every ajax call, he just changes it in one place in one file. The front-end guy, meanwhile, can look at the SMD and see at a glance what services are available.

Better yet, it should be possible to generate the SMD from, say, a Spring application. I’m not aware of that having been done yet, though. But Struts seems to have something.

In fact, SMD seems not to have caught on in the five years since Dojo began to support it. There is an old post on SitePen introducing it, and then… not a peep. The Dojo Toolkit now has some basic documentation on dojox/rpc (note that dojo/rpc is legacy), along with the SMD 2.0 proposal, but that’s about it. In contrast, JSON Schema, on which SMD is built, seems quite active. The lack of resources on modern SMD is what motivated me to write this post.

SMDs for a few popular sites (Google, Wikipedia, Twitter) are included in dojox/rpc/SMDLibrary. Naturally, since you would use these in a cross-domain scenario, they all go over JSONP.
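
For example, standing up one of those bundled services might look roughly like this (the exact .smd file names may vary by Dojo version, so check the SMDLibrary directory and treat this as a sketch):

require(["dojox/rpc/Service"], function(RpcService) {
	// require.toUrl resolves the module path of the bundled SMD file.
	var wikipedia = new RpcService(require.toUrl("dojox/rpc/SMDLibrary/wikipedia.smd"));
	// The generated methods go over JSONP, so they work cross-domain.
});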

Here is a longer example:

define(["dojox/rpc/Service", "dojox/rpc/Rest"], function(RpcService, RpcRest) {
	var smd = {
		"SMDVersion": "2.0",
		"id": "http://www.example.org/ajax/",
		"description": "Example Service",
		"target": "/ajax/", 
		"envelope": "URL",
		"services": {
			"setPassword": {
				"target": "my-password/",
				"transport": "POST",
				"parameters": [
					{ "name": "passwordOld", "type": "string" },
					{ "name": "password", "type": "string" }
				],
				"returns": { 
					"type": "object",
					"properties": {
						"isValidPasswordOld": { "type": "boolean" }
					}
				}
			},
			"status": {
				"target": "status",
				"envelope": "PATH",
				"transport": "GET",
				"parameters": [
					{ "name": "id", "type": "integer" }
				],
				"returns": "string"
			},
			"comment": {
				"target": "comment/",
				"transport": "REST",
				"parameters": [
					{ "name": "id", "type": "integer" }
				]
			}
		}
	};
	return new RpcService(smd);
});

Here we see the setPassword method as a POST, returning a complex object. The status method builds a URL from its parameter like /ajax/status/123 and returns some text. comment is actually a REST endpoint supporting multiple methods (we have to explicitly require dojox/rpc/Rest for this to work). So now we can make calls like these:

require(["my/service"], function(service) {
	service.setPassword("old", "new");
	service.setPassword({ passwordOld: "old", password: "new" });
	service.comment.delete(86);
	service.status(42).then(function(status) {
		alert(status);
	});
});

You can call service methods in two different ways, as illustrated here by setPassword: pass the parameters in order, the same order specified in the SMD, or pass an object that maps parameter names (if you named your parameters in the SMD) to their values, which is handy when you can load such an object straight from a form, for example.

Every service method returns a promise, so you can and should handle the states of pending, succeeded, and failed, and you need then() or when() to access return values once they arrive.
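
For instance, an error handler can go in the second argument to then() (the shape of the error object depends on the transport, so this is just a sketch):

service.getComments(123).then(function(result) {
	console.log(result.length + " comments.");
}, function(error) {
	// Runs if the request fails or the server rejects it.
	console.error("getComments failed", error);
});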

SMD is pretty straightforward to use and has clear benefits for structuring your code on a large project. Better still, you can adopt it gradually, using SMD-generated services in some places and direct ajax calls in others. There is also potential for tools to make use of SMD in other ways (e.g., a handy test bench for manually invoking methods), but I don’t expect much of these until SMD sees wider adoption.



Ajax and Spring Security form-based login

If your web application uses a form-based login through Spring Security, and the same application uses ajax, you likely have a problem. Say your user opens two tabs for your application in the browser and logs out from one of them, while the other is periodically updating via ajax. Or, say your user comes back after lunch, to a page whose session has long since timed out, and does something that attempts ajax. Or, say the user just pushes the Back button after logging out or timing out, and that page uses some ajax. The default behavior is to treat the ajax request like any other web page and redirect to the login form. Not good.

Aside from the obvious absurdity, there are some real problems. First, it is not straightforward for your JavaScript code to determine what went wrong; it just sees a redirect followed by a successful GET of the login page, which looks fine and dandy apart from being HTML instead of JSON. Second, if the user, in another tab, tries to access a secured page, and an ajax call hits while they are typing in their password, the user will end up looking at a page full of JSON instead of their intended destination. That’s because Spring Security, by default, determines the destination after login from whatever was most recently stored in the session. (Really? The session??? I guess that’s because if you submit a form on an expired session, you don’t want to lose all your typing, but there’s no better place to keep the form data.)

Umar offers a solution based upon writing a custom ExceptionTranslationFilter, which ends up being really cumbersome because the <http> security namespace does not play nicely with it and has to be expanded into all its gory innards.

My solution is similar but overcomes this difficulty. The two steps are to configure the request cache to reject ajax requests and to replace the authentication entry point with one that rejects ajax requests.

<bean id="nonAjaxRequestMatcher" class="org.example.NonAjaxRequestMatcher" />

<bean id="loginUrlAuthenticationEntryPoint" 
	class="org.springframework.security.web.authentication.LoginUrlAuthenticationEntryPoint">
	<constructor-arg value="/login" />
</bean>

<bean id="ajaxAuthenticationEntryPoint" 
	class="org.springframework.security.web.authentication.Http403ForbiddenEntryPoint" />

<bean id="authenticationRequestCache" 
	class="org.springframework.security.web.savedrequest.HttpSessionRequestCache">
	<property name="requestMatcher" ref="nonAjaxRequestMatcher" />
</bean>

<bean id="authenticationEntryPoint" 
	class="org.springframework.security.web.authentication.DelegatingAuthenticationEntryPoint">
	<constructor-arg>
		<map>
			<entry key-ref="nonAjaxRequestMatcher" value-ref="loginUrlAuthenticationEntryPoint" />
		</map>
	</constructor-arg>
	<property name="defaultEntryPoint" ref="ajaxAuthenticationEntryPoint" />
</bean>

<security:http entry-point-ref="authenticationEntryPoint" ...>
	<security:request-cache ref="authenticationRequestCache" />
	...
</security:http>

This works by defining a matcher, nonAjaxRequestMatcher, that identifies only the non-ajax requests that should be redirected to the login page as needed. This can be plugged directly into the authentication request cache. The authentication entry point is a bit more complicated, but it can all be set up in Spring as shown. A DelegatingAuthenticationEntryPoint asks which type of request we’re dealing with, then sends interactive requests to a standard loginUrlAuthenticationEntryPoint and ajax requests to ajaxAuthenticationEntryPoint, which returns 403 Forbidden.

The NonAjaxRequestMatcher can be implemented simply:

import javax.servlet.http.HttpServletRequest;

import org.springframework.security.web.util.RequestMatcher;

public class NonAjaxRequestMatcher implements RequestMatcher {
	@Override
	public boolean matches(HttpServletRequest request) {
		// Ajax calls carry X-Requested-With: XMLHttpRequest; anything else is interactive.
		return !"XMLHttpRequest".equalsIgnoreCase(request.getHeader("X-Requested-With"));
	}
}

This bit of magic relies on a convention built into most modern JavaScript frameworks (Dojo, jQuery, etc.) of identifying ajax requests with a special header.

Lastly, you can go one step further and write a custom replacement for Http403ForbiddenEntryPoint that returns JSON instead of HTML, since many JavaScript frameworks get confused when they are told to expect JSON but the error page arrives as HTML (!).
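
A minimal sketch of such an entry point (the class name and JSON body are my own invention; only the AuthenticationEntryPoint interface is standard):

import java.io.IOException;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.security.core.AuthenticationException;
import org.springframework.security.web.AuthenticationEntryPoint;

public class AjaxJsonAuthenticationEntryPoint implements AuthenticationEntryPoint {
	@Override
	public void commence(HttpServletRequest request, HttpServletResponse response,
			AuthenticationException authException) throws IOException {
		// Send 403 with a small JSON body so ajax callers expecting JSON stay happy.
		response.setStatus(HttpServletResponse.SC_FORBIDDEN);
		response.setContentType("application/json");
		response.getWriter().write("{\"error\":\"notAuthenticated\"}");
	}
}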



Serving Dojo from Spring

If you have a project using Dojo (or any comparable framework), you have a few options for how to serve Dojo itself, including:

  1. Use the Google CDN. This is the most straightforward and a great place to start. You might even hope for caching efficiencies by using such a common CDN, but in practice this does not work out so well. And if you do development from somewhere with a flaky WiFi, every page reload is a spin of the revolver.
  2. Use Dojo’s sweet build system to package up all the JavaScript+CSS that your site needs. This approach can be a big win, but you have to figure out how to set everything up in your automated build system.
  3. Use Maven to pull in the Dojo WAR. You can then deploy this separately or integrate it via an overlay. Then you discover the joy of waiting for m2e to unpack 8,000 Dojo files over and over again.

There is another very convenient method that I have never seen suggested but have found to work quite well: a middle ground of serving the pre-packaged Dojo Toolkit yourself, without going through a WAR, and without ever unpacking everything on disk. First, download the ZIP, rename the extension to .jar, and put it in your WEB-INF/lib. Then, in Spring, you can map to this JAR via the classpath:

<mvc:resources 
    mapping="/dojo-1.8.0/**" 
    location="classpath:dojo-release-1.8.0/" 
    cache-period="31556926" />

When someone requests from your site, say, /dojo-1.8.0/dojo/dojo.js, Spring will use the first part of the path to map into your zip-renamed-as-jar, then find the folder and file within the ZIP itself, and serve it directly.
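
A page can then pull in Dojo with an ordinary script tag (the version segment matching your mapping):

<script src="/dojo-1.8.0/dojo/dojo.js" data-dojo-config="async: true"></script>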



Hacking Thymeleaf to minimize white space

Thymeleaf has an open feature request to suppress superfluous white space (whitespace?) in its output.

Getting this right is good for efficiency but rife with pitfalls, which is why the safest thing is to just leave all the white space alone. For sure, you don’t want to mess with the white space inside a <pre> or <code>, or any element that might potentially be styled white-space: pre, which is nearly all of them. Even if you remove only text nodes that are only white space, you can run into things like this:

<p>I am <u>very</u> <a href="#2">interested</a>.</p>

Here, the space between </u> and <a> is itself a whitespace-only text node; strip it and “very” runs straight into “interested”. Caveats aside, let’s see how to configure Thymeleaf to strip whitespace-only text nodes anyway. This approach uses a custom template writer, as suggested here.

In Spring, start with the templateEngine:

<bean id="templateEngine" class="org.thymeleaf.spring3.SpringTemplateEngine">
	<property name="templateResolver" ref="templateResolver" />
	<property name="templateModeHandlers">
		<bean class="org.thymeleaf.templatemode.TemplateModeHandler">
			<constructor-arg index="0" value="HTML5" />
			<constructor-arg index="1" 
				value="#{T(org.thymeleaf.templatemode.StandardTemplateModeHandlers).HTML5.templateParser}" />
			<constructor-arg index="2">
				<bean class="org.example.WhiteSpaceNormalizedTemplateWriter" />
			</constructor-arg>
		</bean>
	</property>
</bean>

This replaces the default list of template mode handlers (HTML5, XHTML, etc.) with a single custom one, which we are calling “HTML5” (or you could pick a distinctive name). A template mode handler really just ties together a parser and a writer, so there is no need to write a new class; instead, we construct an instance right here, passing in the name, the original parser for HTML5, and our custom template writer.

So, that’s how to wire the custom template writer into Thymeleaf via Spring. The class itself can look like this:

import java.io.IOException;
import java.io.Writer;
import java.nio.CharBuffer;

import org.springframework.util.StringUtils;
import org.thymeleaf.Arguments;
import org.thymeleaf.dom.Text;
import org.thymeleaf.templatewriter.AbstractGeneralTemplateWriter;

public final class WhiteSpaceNormalizedTemplateWriter extends AbstractGeneralTemplateWriter {

	@Override protected boolean shouldWriteXmlDeclaration() { return false; }
	@Override protected boolean useXhtmlTagMinimizationRules() { return true; }

	@Override
	protected void writeText(final Arguments arguments, final Writer writer, final Text text)
			throws IOException {
		// Write a text node only if it contains something besides white space.
		final char[] textChars = text.unsafeGetContentCharArray();
		if ( StringUtils.hasText(CharBuffer.wrap(textChars)) ) {
			writer.write(textChars);
		}
	}

}



Returning a value from a function with AMD

These days I’m still getting used to AMD in Dojo. It’s a shift in thinking: whenever you want to use a module, all you can really do is schedule something to be executed once the module finally gets loaded, quite possibly millions of nanoseconds in the future.

This pattern fundamentally breaks the request-response paradigm inherent in calling a function and getting its return value. Let’s say you have some old code like this (pretend it’s not trivial):

function getLogoTitle() {
    var logoNode = dojo.byId("logo");
    return logoNode.title;
}
function displayWelcome() {
    alert(getLogoTitle());
}

Now, you go to convert this to modern AMD style, and a first pass looks like:

function getLogoTitle() {
    require(["dojo/dom"], function(dom) {
        var logoNode = dom.byId("logo");
        return logoNode.title;
    });
}
function displayWelcome() {
    alert(getLogoTitle());
}

This will not work, and it may not be obvious why. The return is now returning from the inner anonymous function rather than getLogoTitle(), and the latter now returns nothing. If you move the return outside the require, it doesn’t have access to logoNode, and if you move that declaration outside the require, like this:

function getLogoTitle() {
    var logoNode;
    require(["dojo/dom"], function(dom) {
        logoNode = dom.byId("logo");
    });
    return logoNode.title;
}
function displayWelcome() {
    alert(getLogoTitle());
}

You can still get nothing, since the assignment happens in the future. This whole approach is fundamentally flawed, since you are trying to use at present a value that will not exist until the distant future.

The way to fix that is to schedule what you want to do with the value once it is obtained. But that stuff you want to do is actually back in the calling function displayWelcome, so you need to coordinate with the caller via a callback.

The simplest solution is to restructure like this:

function getLogoTitle(callback) {
    require(["dojo/dom"], function(dom) {
        var logoNode = dom.byId("logo");
        callback(logoNode.title);
    });
}
function displayWelcome() {
    getLogoTitle(function(title) {
        alert(title);
    });
}

As your needs grow more complex, you will appreciate the more full-featured promise-based solution offered by Deferred. The tricky part can be accessing the Deferred module itself. One crude way is to take it as a parameter:

function getLogoTitle(Deferred) {
    var result = new Deferred();
    require(["dojo/dom"], function(dom) {
        var logoNode = dom.byId("logo");
        result.resolve(logoNode.title);
    });
    return result;
}
function displayWelcome() {
    require(["dojo/Deferred", "dojo/when"], function(Deferred, when) {
        when(getLogoTitle(Deferred), function(title) {
            alert(title);
        });
    });
}

This when style accommodates both functions that return values directly and functions that return promises. So, rather than rely on knowledge of which type of function you are calling, a good habit is to rely on when for handling return values.
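
To make this concrete, here is a tiny self-contained sketch: when fires the callback immediately for a plain value and upon resolution for a promise, so the calling code is identical either way.

require(["dojo/when"], function(when) {
	function syncAnswer() { return 42; }   // returns a plain value
	function asyncAnswer() {               // returns a minimal promise
		return { then: function(callback) { callback(42); } };
	}
	when(syncAnswer(), function(v) { console.log(v); });  // logs 42
	when(asyncAnswer(), function(v) { console.log(v); }); // logs 42
});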

Now, passing in the Deferred module is ugly. There are ways to load it synchronously in the global scope, but the preferred solution is to wrap a function like getLogoTitle in a module. If both the function and its caller are already in a module together, you’re all set. If the caller is somewhere else, though, the idea is to expand its require to include the module, so that by the time the callback executes, Deferred and any other modules needed immediately by getLogoTitle have already been loaded and hooked up and can be used directly. But writing a module involves breaking code out into another file, so the time to do that is when your code is complex enough to benefit from separate modules.
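
For reference, the module version might look something like this (the module name my/logo is made up):

// my/logo.js
define(["dojo/Deferred", "dojo/dom"], function(Deferred, dom) {
	return {
		getLogoTitle: function() {
			// By now Deferred and dom are loaded, so no inner require is needed.
			var result = new Deferred();
			result.resolve(dom.byId("logo").title);
			return result.promise;
		}
	};
});

// The caller:
require(["my/logo", "dojo/when"], function(logo, when) {
	when(logo.getLogoTitle(), function(title) {
		alert(title);
	});
});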

There is a shortcut, though, which is to make a simple promise without using Deferred. Basically, you return an object with a then() function:

function getLogoTitle() {
    return {
        then: function(callback) {
            require(["dojo/dom"], function(dom) {
                var logoNode = dom.byId("logo");
                callback(logoNode.title);
            });
        }
    };
}
function displayWelcome() {
    require(["dojo/when"], function(when) {
        when(getLogoTitle(), function(title) {
            alert(title);
        });
    });
}

The flow goes like this:

  1. Let’s start with displayWelcome() having loaded the when module.
  2. getLogoTitle() is called.
  3. getLogoTitle() immediately returns a promise object containing an embedded then function.
  4. when is called with this promise and with another anonymous callback function.
  5. when sees that its first parameter has a then function and calls it, passing the callback as a parameter.
  6. The then function begins executing.
  7. The require starts off an asynchronous module load. But is the module already loaded?
    • If already loaded, proceed directly to the callback in the next step.
    • If not, register the callback and return immediately, unwinding the stack. If, later on, the module loading ever finishes, then the loader will call the callback in the next step.
  8. The require callback, which is the anonymous function(dom), executes, using the now-loaded dom to do its business.
  9. Having done its business, it passes its result to yet another callback, the anonymous function(title).
  10. At last, this callback has the return value from the original code, and it proceeds to display the alert.

It seems a little convoluted, with functions flying everywhere, but if you can get your head around it, you will have a solid grasp of asynchronous JavaScript.



Why Thymeleaf?

Not long ago I started a new web project in Java (not my choice of language) and rediscovered what a pain that old dinosaur JSP is. Looking for more modern replacements, I saw some suggesting JSP’s logical successor JSF (ugh, ASP.NET-style postbacks!). Many considered FreeMarker the best available. And then there’s the new kid on the block, Thymeleaf, which I fell in love with.

Coming from JSP—or more precisely JSPX, formally JSP Document, which is the only sane way to do JSP—I tend to focus on its shortcomings, and how Thymeleaf improves upon them.

So, what’s so great about Thymeleaf?

First, what I’m trying to create is HTML5. What JSPX makes for me is XML. The creators of JSPX probably thought we’d all be serving XHTML by now, but instead we got HTML5. Most unfortunately, XML is not HTML. If you need reminding of that, here are a few classic gotchas.

  1. Never self-close a <script> or <div> tag. This won’t work:

    <script src="foo.js" />
    <div style="color:red" />
    <h1>Hello, world!</h1>
    

    Even if you remember and try to do the right thing, JSPX will strip any white space and collapse the element back into self-closing form; workarounds (my favorite is to insert ${null}) are container-specific. Without the closing tag, this <script> will just do nothing and this <div> will make the rest of the document red.

  2. Double-escape everything except <script> and <style>. This JSPX won’t work:

    <div>Chapter title: &lt;script&gt;alert("Augh! I've been hacked!")&lt;/script&gt;</div>
    

    You end up actually injecting a script. Imagine if this chapter title is filled in dynamically from user-entered data, and boom, XSS attack. So, you learn to use <c:out> everywhere, but then you get to a script and try this, which won’t work:

    <script>
    var myTitle = "<c:out value="${myTitle}" />";
    </script>
    

    It turns out that <script> elements have a CDATA content model, which means that everything except </ is treated literally. Your chapter title will get mangled by HTML-escaping. You have to remember that you instead need JavaScript string escaping here (thank you, Spring).

    The trickiest scenario is when you try to inject JSON:

    <script>
    var myJson = ${myJson};
    </script>
    

    Now, JSON is almost, but not quite, suitable for injecting right in like this. After all, it is valid JavaScript. But JSON will let you have a string like "</script>", whereas HTML will see this as the abrupt end of your script block (BTW, if you use CDATA sections in scripts in XHTML, you have the same problem with ]]>). You can escape </ into <\/, but you have to remember to do it, and it will never be obvious when you forget.

  3. Boolean attributes are a bit odd. HTML, in contrast to XML, doesn’t actually require an attribute value at all (old browsers even required the value to be absent!). HTML4 requires any value, if present, to be the same as the attribute name, while HTML5 allows any value. So what does this do?

    <input type="checkbox" checked="${2 + 2 == 5}" />
    

    The expression evaluates to false, so you get the attribute checked="false", which in modern browsers causes the box to be checked. Wait, that isn’t what you meant?

The upshot of all this is that any usable template system for serving HTML5 needs to actually know the idiosyncrasies of HTML5 and not just XML. In fact, the ideal target is polyglot XHTML5. For me, this is indispensable, and this is where Thymeleaf shines.

The feature that the creators of Thymeleaf seem to emphasize, though, is natural templating. If you open a .jspx file directly in the browser, you get a hot mess, but if you open a Thymeleaf .xhtml file instead, you get something very close to the live page. Then your designer can go to work.

When you have shared content such as a menu bar that you don’t want to copy-and-paste onto each page, you can use Thymeleaf’s th:include to pull in that fragment. That breaks your static-file preview, of course, but Thymol comes to the rescue and does the right thing with some JavaScript goodness.
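
For illustration, a hypothetical shared menu could be defined once and pulled into each page (the file and fragment names here are mine):

<!-- fragments/menu.html defines the fragment: -->
<div th:fragment="menubar">...</div>

<!-- each page includes it; the placeholder shows in static preview: -->
<div th:include="fragments/menu :: menubar">Menu goes here</div>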

Thymeleaf uses attributes almost exclusively, in sharp contrast to JSP relying on elements. And it consistently does the right thing for escaping text. The natural templating works primarily by supplying an optional placeholder and using Thymeleaf attributes, conventionally namespace-prefixed with th:, to replace that with a dynamic value at runtime. Other measures are in place for repetitive data such as tables. For example,

<h1 th:text="${myTitle}">Title goes here</h1>

I have reservations about relying on this approach too heavily. For things like localized text, the placeholders end up being a lot of duplication. You can easily end up changing the placeholders rather than the real data by mistake, or letting the two get out of sync. For something like an image or stylesheet URL, your placeholder needs to be a relative path in your project, which is annoying and again can get out of sync. Furthermore, you can edit a page and reload it without restarting the entire servlet, so working on the live page really isn’t that big a pain. So, basically, I’m not sure the extra work to make natural templating usable for offline design is worth it.

Another really handy feature of Thymeleaf is handling of URLs and localized text. In JSPX, even with Spring already helping out a lot, you would have to do something like:

<spring:url var="urlSmiley" value="/img/smiley.png?rating=${rating}" />
<spring:message var="msgHappiness" code="Menu.Happiness" />
<img src="${urlSmiley}" title="${msgHappiness}" />

Thymeleaf’s expression dialect has a single-character shorthand for each:

<img src="@{/img/smiley.png(rating=${rating})}" title="#{Menu.Happiness}" />

It’s still a young project, and the developers are active and responsive. The future for Thymeleaf looks bright.