Dynamic CDI producers

1 August 2014

CDI has the well-known concept of producers. Simply put, a producer is a kind of general factory method for some type. It’s defined by annotating a method with @Produces. An alternative “factory” for a type is a class itself; a class is a factory of objects of its own type.

In CDI both these factories are represented by the Bean type. The name may be somewhat confusing, but a Bean in CDI is thus not directly a bean itself but a type used to create instances (aka a factory). An interesting aspect of CDI is that those Bean instances are not just internally created by CDI after encountering class definitions and producer methods, but can be added manually by user code as well.

Via this mechanism we can thus dynamically register factories, or in CDI terms producers. This can be handy in a variety of cases, for instance when a lot of similar producer methods would have to be defined statically, or when generic producers are needed. Unfortunately, generics are not particularly well supported in CDI. Instead of trying to create a somewhat generic producer an alternative strategy could be to actually scan which types an application is using and then dynamically create a producer for each such type.

The following code gives a very bare bones example using the plain CDI API:

import static java.util.Arrays.asList;
import static java.util.Collections.emptySet;
import static java.util.Collections.singleton;

import java.lang.annotation.Annotation;
import java.lang.reflect.Type;
import java.util.HashSet;
import java.util.Set;

import javax.enterprise.context.Dependent;
import javax.enterprise.context.spi.CreationalContext;
import javax.enterprise.inject.Default;
import javax.enterprise.inject.spi.Bean;
import javax.enterprise.inject.spi.InjectionPoint;
import javax.enterprise.util.AnnotationLiteral;

public class DynamicIntegerProducer implements Bean<Integer> {
 
    @SuppressWarnings("all")
    public static class DefaultAnnotationLiteral extends AnnotationLiteral<Default> implements Default {
        private static final long serialVersionUID = 1L;
    }
 
    @Override
    public Class<?> getBeanClass() {
        return Integer.class;
    }
 
    @Override
    public Set<Type> getTypes() {
        return new HashSet<Type>(asList(Integer.class, Object.class));
    }
 
    @Override
    public Integer create(CreationalContext<Integer> creationalContext) {
        return new Integer(5);
    }
 
    @Override
    public Set<Annotation> getQualifiers() {
        return singleton((Annotation) new DefaultAnnotationLiteral());
    }
 
    @Override
    public Class<? extends Annotation> getScope() {
        return Dependent.class;
    }
 
    @Override
    public Set<Class<? extends Annotation>> getStereotypes() {
        return emptySet();
    }
 
    @Override
    public Set<InjectionPoint> getInjectionPoints() {
        return emptySet();
    }
 
    @Override
    public boolean isAlternative() {
        return false;
    }
 
    @Override
    public boolean isNullable() {
        return false;
    }
 
    @Override
    public String getName() {
        return null;
    }
 
    @Override
    public void destroy(Integer instance, CreationalContext<Integer> creationalContext) {
 
    }
}

There are a few things to remark here. First of all, the actual producer method is create. This one does nothing fancy and just returns a new Integer instance (normally not a good idea to do it this way, but it’s just an example). The getTypes method indicates the range of types that this dynamic producer produces. In this example it could have been deduced from the generic class parameter as well, but CDI still wants it to be defined explicitly.

The getQualifiers method is somewhat nasty. Normally, if no explicit qualifiers are used in CDI, the Default one applies. This default is however seemingly not implemented in the core CDI system, but by virtue of what this method returns. In our case it means we have to explicitly return the default qualifier here via an AnnotationLiteral instance. These are a tad nasty to create, as they require a new class definition that extends AnnotationLiteral, and the actual annotation needs to be present as both a (super) interface AND as a generic parameter. To add insult to injury, Eclipse in particular doesn't like us doing this (even though it's the documented approach in the CDI documentation) and complains loudly about it. We silenced Eclipse here with the @SuppressWarnings("all") annotation. To make the code even nastier, due to the way generics and type inference work in Java we have to add an explicit cast here (alternatively we could have used Collections.<Annotation>singleton).
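
With that type witness instead of the cast, getQualifiers would look like this:

@Override
public Set<Annotation> getQualifiers() {
    return Collections.<Annotation>singleton(new DefaultAnnotationLiteral());
}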

For the scope we can’t return a null either, but have to return the CDI default (Dependent) if we want that default. This time it’s an easy return. For the stereotypes and injection points we can’t return a null if we don’t use them, but have to return an empty set. The isNullable method (deprecated since CDI 1.1) can return false. Finally, getName is the only method that can return a null.

Dynamic producers like this have to be added via a CDI extension observing the AfterBeanDiscovery event:

public class DynamicProducerExtension implements Extension {
    public void afterBean(final @Observes AfterBeanDiscovery afterBeanDiscovery) {
        afterBeanDiscovery.addBean(new DynamicIntegerProducer());
    }
}

As with all CDI extensions the extension class has to be registered by putting its FQN in META-INF/services/javax.enterprise.inject.spi.Extension.
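
For this example that file would consist of a single line (the package name is assumed for illustration):

com.example.DynamicProducerExtension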

After doing this, injection can be done as usual, e.g.:

@Singleton
@Startup
public class TestBean {
 
    @Inject
    private Integer integer;
 
    @PostConstruct
    public void init() {
        out.println(integer);
    }
}

Deploying an application with only the code shown in this post will print 5 in the logs ;)

Arjan Tijms

Eclipse Luna and JDK8

25 June 2014

Today marked the launch of Eclipse Luna, the first version of Eclipse that ships with Java 8 support built in. As we like to stay on the cutting edge here at ZEEF, I decided to give this new version of Eclipse a try straight away.

Unfortunately, I ran into a bug in the Java 8 support fairly quickly. Type inference in Luna seems to fail when the return type of a method needs to be inferred from a lambda within a lambda that has been used as a method parameter. For example, the following code snippet compiles normally using javac, but fails in Eclipse Luna:

    Stream.of("test")
        .flatMap(s -> s.chars().mapToObj(i -> Character.valueOf((char)i)))
        .filter(Character::isLowerCase)
        .toArray();

Eclipse seems to infer that the return type of the “i -> Character.valueOf((char)i)” lambda is Object instead of Character, which leads it to infer that the return type of flatMap() should be Stream<Object> instead of Stream<Character>. As the method reference used as the filter parameter is not applicable to an argument of type Object, the compiler fails on this line.

Fortunately, it's easy to work around this bug in Eclipse's compiler. Any change that makes the return type of either the outer or the inner lambda explicit allows Eclipse to compile the code correctly: casting either lambda to the correct type, assigning either to a variable, or replacing either with a method reference. For example, if you were to assign the inner lambda to a variable, you would end up with the following code:

    IntFunction<Character> characterFromInt = i -> Character.valueOf((char)i);
    Stream.of("test")
        .flatMap(s -> s.chars().mapToObj(characterFromInt))
        .filter(Character::isLowerCase)
        .toArray();

The need for such a workaround is unfortunate, as improved type inference is one of the major enhancements in Java 8. Hopefully a fix for this issue will be available soon.

Experiences with migrating from JBoss AS 7 to WildFly 8.1

24 June 2014

This month the first major update of WildFly 8 (the base for a future JBoss EAP 7) was released: WildFly 8.1.0.Final.

I tried to get zeef.com, which currently runs on JBoss AS 7, running on it. Zeef.com is a relatively new Java EE 6 based web application that was started about a year ago. As such there is not yet that much legacy code present in it.

This article is about the issues I encountered during this initial migration.

In broad lines the issues fell into the following categories:

  • Datasource
  • JASPIC
  • Tomcat/Undertow differences
  • Valves

Datasource

The first issue I ran into was a failed deployment. WildFly spat out page upon page of unintelligible mumbo jumbo. Upon a closer look there was something about our datasource in the middle of all of this. Apparently WildFly was trying to tell me that the datasource couldn’t be found. For zeef.com we define our datasource in application.xml (the Java EE standard way) and switch between stages using a delegating switchable datasource.

Moving the datasource definition to ejb-jar.xml solved the problem. WildFly 8.1 does support the definition of datasources in application.xml, as demonstrated by this test (although a small workaround is needed). It might be an issue with loading the SwitchableXADataSource from the EAR level, but I didn’t investigate this further.
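
For reference, such a standard datasource definition looks roughly like the sketch below (the names and the datasource class are made up here; in ejb-jar.xml the data-source element is nested inside a session bean element, while in application.xml it appears at the top level):

<data-source>
    <name>java:app/env/zeefDS</name>
    <class-name>com.example.SwitchableXADataSource</class-name>
    <server-name>localhost</server-name>
    <database-name>zeef</database-name>
</data-source>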

JASPIC

The next problem concerned a number of issues related to JASPIC, the Java EE standard authentication API. JASPIC is an important but troublesome spec. It’s suspected that its TCK is very light, as (preview) implementations that don’t actually work (yet) have been certified.

The WildFly team however has done a fair amount of work to make sure its JASPIC implementation is reasonably correct, among other things by using an external set of tests designed to verify that JASPIC does the most basic things right (like actually authenticating). Unfortunately a few cases slipped through, mainly concerning the behavior of HttpServletRequest#authenticate.

Specifically the following issues occurred:

  • authenticate() does nothing and closes response (UNDERTOW-263)
  • authenticate() closes response when no authentication happened (UNDERTOW-259)
  • NullPointerExceptions right after request processing (WFLY-3514)
  • NullPointerExceptions for requests after a session is created (WFLY-3518)

The first two issues could be worked around by installing an HttpServletRequestWrapper, e.g. via an Undertow Handler as follows:

public class JaspicHandler implements HttpHandler {
 
	private Field authenticationStateField; 
	private HttpHandler next;
 
	public JaspicHandler(HttpHandler next) {
		this.next = next;
		try {
			authenticationStateField = SecurityContextImpl.class.getDeclaredField("authenticationState");
			authenticationStateField.setAccessible(true);
		} catch (NoSuchFieldException | SecurityException e) {
			throw new RuntimeException(e);
		}
	}
 
	@Override
	public void handleRequest(final HttpServerExchange exchange) throws Exception {
 
		ServletRequestContext context =	exchange.getAttachment(ATTACHMENT_KEY);
		if (context != null) {
			ServletRequest request = context.getServletRequest();
 
			HttpServletRequestWrapper wrapper = new HttpServletRequestWrapper((HttpServletRequest) request) {
				@Override
				public boolean authenticate(HttpServletResponse response) throws IOException, ServletException {
					if (response.isCommitted()) {
						throw MESSAGES.responseAlreadyCommited();
					}
 
					SecurityContext securityContext = exchange.getSecurityContext();
					securityContext.setAuthenticationRequired();
 
					if (securityContext instanceof SecurityContextImpl) {
						SecurityContextImpl securityContextImpl = (SecurityContextImpl) securityContext;
						try {
							// Perform the actual reset of the authentication state
							authenticationStateField.set(securityContextImpl, authenticationStateField.get(new SecurityContextImpl(null, null)));
						} catch (IllegalArgumentException | IllegalAccessException | SecurityException e) {
							throw new RuntimeException(e);
						}
					}
 
					if (securityContext.authenticate()) {
						if (securityContext.isAuthenticated()) {
							return true;
						} else {
							throw MESSAGES.authenticationFailed();
						}
					} else {
						// Just return false. The original method for some reason closes the stream here.
						 // see https://issues.jboss.org/browse/UNDERTOW-259
						return false;
					}
				}
			};
 
			context.setServletRequest(wrapper);
		}
 
		next.handleRequest(exchange);
	}
 
}

And then register this as an innerHandler:

public class UndertowHandlerExtension implements ServletExtension {
    @Override
    public void handleDeployment(final DeploymentInfo deploymentInfo, final ServletContext servletContext) {
        deploymentInfo.addInnerHandlerChainWrapper(handler -> new JaspicHandler(handler)); 
    }
}

The handler extension itself has to be registered by putting its fully qualified class name in /META-INF/services/io.undertow.servlet.ServletExtension.
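
Assuming a com.example package (made up for illustration), that file would contain:

com.example.UndertowHandlerExtension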

For the other two issues JASPIAuthenticationMechanism had to be patched by inserting a simple guard around obtaining the so-called “cached account”:

cachedAccount = authSession == null ? null : authSession.getAccount();

and by inserting another guard in secureResponse to check if a wrapper hadn’t been installed before:

ServletRequest request = exchange.getAttachment(ATTACHMENT_KEY).getServletRequest();
 
if (!TRUE.equals(request.getAttribute("JASPIAuthenticationMechanism.secureResponse.installed"))) {
    request.setAttribute("JASPIAuthenticationMechanism.secureResponse.installed", TRUE);
    // original code
}

(the fix for this last issue was committed rather quickly by the WildFly developers and fixes it in a better way)

Tomcat/Undertow differences

The next problems were about (small) differences between Tomcat/JBossWeb, which JBoss previously used, and the new Undertow that’s used in WildFly 8.

The first of those issues is about what HttpServletRequest#getRequestURI and HttpServletRequest#getServletPath return when a welcome file is requested. Tomcat will return the requested location for getRequestURI and the welcome file resource for getServletPath, while Undertow will return the welcome file resource for both calls.

E.g. with a welcome file declaration in web.xml as follows:

<welcome-file-list>
    <welcome-file>index</welcome-file>
</welcome-file-list>

and when requesting the context root of an application (e.g. http://localhost:8080 for a root deployment), the results are as follows:

                  getRequestURI   getServletPath
Tomcat/JBossWeb   /               /welcome
Undertow          /welcome        /welcome

The information about the requested resource can be used to redirect the user to the root (“/”) when for some reason the welcome file resource is directly requested. With such a redirect the website will always display a ‘clean’ URL in the address bar. With the way Undertow does things there’s no way to distinguish a request to “/” from a request to “/welcome” and thus no opportunity to redirect. If a redirect was already in place based on the Tomcat/JBossWeb behavior an endless redirect loop will be the result.
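
Conceptually such a redirect, as it would have been written against the Tomcat/JBossWeb behavior, amounts to the following sketch inside e.g. a Filter:

// Redirect direct requests for the welcome file resource back to the root.
// Under Tomcat/JBossWeb a request to "/" keeps getRequestURI() at "/", so no redirect happens.
// Under Undertow getRequestURI() returns "/welcome" in both cases, causing an endless redirect loop.
if ("/welcome".equals(request.getRequestURI())) {
    response.sendRedirect("/");
    return;
}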

A workaround here is to create another Undertow handler as follows:

public class RequestURIHandler implements HttpHandler {
 
    private HttpHandler next;
 
    public RequestURIHandler(HttpHandler next) {
        this.next = next;
    }
 
    @Override
    public void handleRequest(final HttpServerExchange exchange) throws Exception {
 
        String requestURI = exchange.getRequestURI();
 
        next.handleRequest(exchange);
 
        exchange.setRequestURI(requestURI);
    }
}

This handler too has to be registered like we did for the JaspicHandler, but this time as an initialHandler.
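
A sketch of such a registration, assuming Undertow's addInitialHandlerChainWrapper is the counterpart of the inner handler registration shown earlier (the extension class name is made up):

public class UndertowInitialHandlerExtension implements ServletExtension {
    @Override
    public void handleDeployment(final DeploymentInfo deploymentInfo, final ServletContext servletContext) {
        deploymentInfo.addInitialHandlerChainWrapper(handler -> new RequestURIHandler(handler));
    }
}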

Another difference (likely a bug) between JBossWeb and Undertow is that for a root deployment the former will write the JSESSIONID cookie with the path set to “/”. Undertow however leaves the path empty.

An empty path is however interpreted by the browser as the path of the requested URI, meaning that the JSESSIONID cookie is set for each path where the user happens to do something that creates a session. It doesn’t require much imagination to understand that this causes chaos in an application.

As it appears, the culprit here is the fact that APIs for getting the context root often return the empty string for a root deployment (instead of “/”), but the path name for other deployments (e.g. “/foo”). Undertow is not the first to fall for this; Mojarra once had an identical bug with respect to setting a cookie for the JSF Flash.

We can work around this issue by setting the path explicitly in web.xml as follows:

<session-config>
    <cookie-config>
        <path>/</path>
        <http-only>true</http-only>
        <secure/>
    </cookie-config>
    <tracking-mode>COOKIE</tracking-mode>
</session-config>

Valves

Tomcat (and thus JBossWeb) has a low-level mechanism called a Valve, which is a kind of Filter-like element, but at a much lower level and with access to some of the internal server APIs.

Ideally an application wouldn’t have to resort to using these, but sometimes there’s no choice. For instance, a Filter can not change the outcome of the Servlet pipeline; it’s already fully established when the Filter is called. A Filter can redirect or forward, but these mechanisms have various side-effects. There’s also no mechanism in Servlet to intercept ALL cookies being written. By wrapping the HttpServletResponse you can catch a lot, but not those emitted by lower-level components, like the server-generated JSESSIONID and cookies set by e.g. a SAM.
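
Such a wrapper, which sees application-level cookies but misses the server-emitted ones, would look roughly like this:

public class CookieCatchingResponse extends HttpServletResponseWrapper {

    public CookieCatchingResponse(HttpServletResponse response) {
        super(response);
    }

    @Override
    public void addCookie(Cookie cookie) {
        // Cookies added by application code pass through here, but e.g. the
        // server-generated JSESSIONID cookie never reaches this method.
        super.addCookie(cookie);
    }
}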

For zeef.com we had to resort to using a few of these Valves. As Valves are highly specific to Tomcat it’s only logical that Undertow doesn’t support them. It does however have another construct called HttpHandler, which we already used above for some workarounds.

Those handlers are quite powerful. There are ones that are called before the request pipeline is set up (initial handlers) and ones that execute during this pipeline (inner and outer handlers).

One of the things we used a Valve for in Tomcat/JBossWeb is universal cookie rewriting. Unfortunately Undertow doesn’t have a way to rewrite a cookie right away. There is an opportunity to have a kind of listener called right before the response is written, but it’s a bit non-obvious.

Via a handler and some reflective code we can do this more directly, as shown by the following code:

public class CookieRewriteHandler implements HttpHandler {
 
    private Field responseCookiesField;
    private final HttpHandler next;
 
    public CookieRewriteHandler(HttpHandler next) {
 
        this.next = next;
 
        try {
            responseCookiesField = HttpServerExchange.class.getDeclaredField("responseCookies");
        } catch (NoSuchFieldException | SecurityException e) {
            throw new RuntimeException(e);
        }
        responseCookiesField.setAccessible(true);
    }
 
    @Override
    public void handleRequest(final HttpServerExchange exchange) throws Exception {
        // Overwrite the map used to store cookies internally so we can intercept
        // the cookies being written.
 
        // Alternatively: there's a wrapper handler called just before the response
        // is being written. We could use that to iterate over the cookie map.
        responseCookiesField.set(exchange, new HashMap<String, Cookie>() {
 
            private static final long serialVersionUID = 1L;
 
            @Override
            public Cookie put(String key, Cookie value) {
                // *****************************
                // rewrite cookie here as needed
                // *****************************
                return super.put(key, value);
            }
 
        });
 
        next.handleRequest(exchange);
    }
}

Of course using reflection to hack into the internals of a server is rarely a good idea and this handler is at risk of breaking with every minor update to Undertow.

Conclusion

During the initial attempt to get our application running on WildFly I certainly encountered a fair number of issues. It wasn’t a case of just deploying the app and having everything work.

Of course we do have to realize that the Java EE standard datasource and authentication mechanisms are unfortunately still not used that much, as most vendors keep documenting their proprietary mechanisms first and foremost. This likely causes users to mostly use those proprietary versions, which in turn may cause the standard ways to get fewer testing hours.

The Tomcat/Undertow differences initially looked major, but after all turned out to be just small bugs.

The Valves issue is debatable. It’s a major change in JBoss as a product, but most Java EE applications should maybe not have used them in the first place. Purely from the point of view of the Java EE spec it doesn’t matter that Red Hat changed this, but from the point of view of JBoss itself it does matter, and people have little choice but to rewrite their code if they want to upgrade.

Finally we have to realize that WildFly from a certain point of view is like an open beta. It has freely downloadable binaries, but the product is early in its lifecycle and there’s no commercial support available for it yet. When the WildFly branch transitions to JBoss EAP 7 many bugs that the community discovers now will undoubtedly have been fixed and commercial support will be available then. In a way this is the price we pay for a free product such as this. As David Blevins wrote “Open Source Isn’t Free”. Users “pay” by testing the software and producing bug reports and perhaps patches, which IMHO is a pretty good deal.

At any rate, there’s the eternal tradeoff to be made: adopt the new Java EE 7 spec early with WildFly 8 but be prepared to run into some issues, or wait a good deal longer for JBoss EAP 7 but then have a much more stable product to begin with. This is of course a choice everyone has to make for themselves.



Arjan Tijms

OmniFaces showcase and OpenShift app management

31 March 2014

At ZEEF.com we’re using OmniFaces pretty intensively (eating our own dog food).

We’re hosting an application that showcases a lot of OmniFaces’ features at OpenShift.

Mostly we’re very happy with it. Among its many environments OpenShift offers a JBoss EAP 6.x server that’s updated very regularly. JBoss EAP 6.x is Red Hat’s Java EE 6 implementation that has received many bug fixes over the years, so it’s rather stable at the moment. And even though Red Hat has a Java EE 7 implementation out (WildFly 8), the Java EE 6 server keeps getting bug fixes to make it even more stable.

Yesterday however both nodes on which we have our showcase app deployed suddenly appeared to be down. An attempt to restart the app via the web console didn’t do anything. It just sat there for a long time and then eventually said there was a technical problem, without providing any further details. This is unfortunately one of the downsides of OpenShift: it’s a great platform, but the web console clearly lags behind.

We then tried to log into our primary gear using ssh [number]@[our app name].rhcloud.com. This worked, however the JBoss instances are not running on this primary gear but on two other gears. We tried the “ctl_all stop” and “ctl_all start” commands, but these only seemed to restart the cartridges (ha-proxy and a by default disabled JBoss) on the gear where we were logged in, not on the other ones.

Next step was trying to login into those other gears. There is unfortunately little information available on what the exact address of those gears is. There used to be a document up at https://www.openshift.com/faq/can-i-access-my-applications-gear, but for some reason it has been taken down. Vaguely remembering that the URL address of the other gears is based on what [app url]/haproxy-status lists, we tried to ssh to that from the primary gear but “nothing happened”. It looked like the ssh command was broken. ssh’ing into foo (ssh foo) also resulted in nothing happening.

With the help of the kind people from OpenShift at the IRC channel it was discovered that ssh on the OpenShift gear is just silent by default. With the -v option you do get the normal response. Furthermore, when you install the rhc client tools locally you can use the following command to list the URL addresses of all your gears:

rhc app show [app] --gears

This returns the following:

ID         State    Cartridges              Size   SSH URL
[number1]  started  jbosseap-6 haproxy-1.4  small  [number1]@[app]-[domain].rhcloud.com
[number2]  started  jbosseap-6 haproxy-1.4  small  [number2]@[number2]-[app]-[domain].rhcloud.com
[number3]  started  jbosseap-6 haproxy-1.4  small  [number3]@[number3]-[app]-[domain].rhcloud.com

We can now ssh into the other gears using the [numberX]@[numberX]-[app]-[domain].rhcloud.com pattern, e.g.

ssh 12ab34cd....xy@12ab34cd....xy-myapp-mydomain.rhcloud.com

In our particular case, on the gear identified by [number2] the file system was completely full. Simply deleting the log files from /jbosseap/logs fixed the problem. After that we could use the gear command to stop and start the JBoss instance (ctl_all and ctl_app seem to be deprecated):

gear stop
gear start

And lo and behold, the gear came back to life. After doing the same for the [number3] gear, both nodes were up and running again and requests to our app were serviced as normal.

One thing that we also discovered was that by default OpenShift installs and starts a JBoss instance on the gear that hosts the proxy, but, for some reason that probably only that one proverbial engineer who left long ago knows, no traffic is routed to that JBoss instance.

In the ./haproxy/conf directory there’s a configuration file with among others the following content:

server gear-[number2]-[app] ex-std-node[node1].prod.rhcloud.com:[port1] check fall 2 rise 3 inter 2000 cookie [number2]-[app]
server gear-[number3]-[app] ex-std-node[node2].prod.rhcloud.com:[port2] check fall 2 rise 3 inter 2000 cookie [number3]-[app]
server local-gear [localip]:8080 check fall 2 rise 3 inter 2000 cookie local-[number1] disabled

As can be seen, there’s a disabled marker after the local-gear entry. Simply removing it and stopping/starting or restarting the gear will start routing requests to this gear as well.

Furthermore we see that the gear’s SSH URL can indeed be derived from the number that we see in the configuration and output of haproxy. The above [number2] is exactly the same number as in the output from rhc app show showcase --gears.

This all took quite some time to figure out. How could OpenShift have done better here?

  • Not take down crucial documentation such as https://www.openshift.com/faq/can-i-access-my-applications-gear.
  • List all gear URLs in the web console when the application is scaled, not just the primary one.
  • Implement a restart in the web console that actually works, and when a failure occurs gives back a clear error message.
  • Have a restart per gear in the web console.
  • List critical error conditions per gear in the web console. In this case “disk full” or “quota exceeded” seems like a common enough condition that the UI could have picked this up.
  • Have a delete logs (or tidy) command in the web console that can be executed for all gears or for a single gear.
  • Don’t have ssh on the gear in super silent mode.
  • Have the RHC tools installed on the server. It’s weird that you can see and do more from the client than when logged-in to the server itself.

All in all OpenShift is still a very impressive system that lets you deploy completely standard Java EE 6 archives to a very stable (EAP) version of JBoss, but when something goes wrong it can be frustrating to deal with the issue. The client tools are pretty advanced, but the tools that are installed on the gear itself and the web console are not there yet.

Arjan Tijms

How to build and run the Mojarra automated tests (2014 update)

30 March 2014

At zeef.com we depend a lot on JSF (see here for details) and occasionally have the need to patch Mojarra.

Mojarra comes with over 8000 tests, but as we explained in a previous article, building it and running those tests is not entirely trivial. It’s not that hard once you know the steps, but the many outdated readmes and the many folders in the project can make it difficult to find those steps.

Since the previous article some things have changed, so we’ll provide an update here.

Currently the Mojarra project is in a migration status. Manfred Riem is working on moving the complicated and ancient ANT based tests to a more modern Maven based setup. For the moment this is a bit of an extra burden, as there are now two distinct test folders; eventually there should be only one. Since the migration is in full swing, things can still change often. The instructions below are valid for at least JSF 2.2.5 till 2.2.7-SNAPSHOT.

We’ll first create a separate directory for our build and download a fresh version of GlassFish 4 that we’ll use for running the tests. From e.g. your home directory execute the following:

mkdir mtest
cd mtest
wget http://download.java.net/glassfish/4.0/release/glassfish-4.0.zip
unzip glassfish-4.0.zip

Note that unlike the 2012 instructions it’s no longer needed to set an explicit password. The default “empty” password now works correctly. The readme in the project still says you need to install with a password, but this is thus no longer needed.

Next we’ll check out the Mojarra 2.2 “trunk”. Note here that the real trunk is dormant and all the action happens in a branch called “MOJARRA_2_2X_ROLLING”. Unfortunately Mojarra still uses SVN, but it is what it is. We’ll use the following commands:

svn co https://svn.java.net/svn/mojarra~svn/branches/MOJARRA_2_2X_ROLLING/
cd MOJARRA_2_2X_ROLLING
cp build.properties.glassfish build.properties

We now need to edit build.properties and set the following values:

jsf.build.home=[source home]
container.name=glassfishV3.1_no_cluster
container.home=[glassfish home]/glassfish
halt.on.failure=no

[source home] is the current directory where we just cd’ed into (e.g. /home/your_user/mtest/MOJARRA_2_2X_ROLLING), while [glassfish home] is the directory that was extracted from the archive (e.g. /home/your_user/mtest/glassfish4/).

If your OS supports all the following commands (e.g. Ubuntu does) you can also execute:

sed -i "s:<SET CURRENT DIRECTORY>:$(pwd):g" build.properties
sed -i "s:container.name=glassfish:container.name=glassfishV3.1_no_cluster:g" build.properties
sed -i "s:container.home=:container.home=$(readlink -f ../glassfish4/):g" build.properties

We’re now going to invoke the actual build. Unfortunately there still is a weird dependency between the main build task and the clean task, so the first time we can only execute “main” here. If the build needs to be done a subsequent time we can do “clean main”. For now execute the following:

ant main

We can then run the ANT tests as follows:

export ANT_OPTS='-Xms512m -Xmx786m -XX:MaxPermSize=786m'
ant test.with.container.refresh

Just like the previous time, there are always a number of ANT tests that are already failing. Whether the “trunk” of Mojarra simply has failing tests all the time, or whether it’s system dependent, is something we didn’t investigate. Fact is however that after some three years of periodically building Mojarra and running its tests, on various different systems (Ubuntu, Debian, OS X), we’ve never seen all tests pass out of the box. In the current (March 28, 2014) 2.2.7-SNAPSHOT branch the following tests failed out of the box:

  1. jsf-ri/systest/src/com/sun/faces/composite/CompositeComponentsTestCase.java#testCompositeComponentResolutionWithinRelocatableResources
  2. jsf-ri/systest/src/com/sun/faces/facelets/FaceletsTestCase.java#FaceletsTestCase#testForEach
  3. jsf-ri/systest/src/com/sun/faces/facelets/ImplicitFacetTestCase.java#testConditionalImplicitFacetChild1727
  4. jsf-ri/systest/src/com/sun/faces/systest/DataTableTestCase.java#testTablesWithEmptyBody
  5. jsf-ri/systest/src/com/sun/faces/jsptest/ConverterTestCase.java#testConverterMessages
  6. jsf-test/JAVASERVERFACES-2113/i_mojarra_2113_htmlunit/src/main/java/com/sun/faces/regression/i_mojarra_2113/Issue2113TestCase.java#testBasicAppFunctionality

So if you want to test the impact of your own changes, be sure to run the tests before making those changes to get an idea of which tests are already failing on your system and then simply comment them out.

The ANT tests execute rather slowly. On the 3.2GHz/16GB/SSD machine we used they took some 20 minutes.

The Maven tests are in a separate directory and contain only tests. To give those tests access to the Mojarra artifact we just built, we need to install it in our local .m2 repo:

ant mvn.deploy.snapshot.local

(if we use this method on a build server we may want to use separate users for each test. Otherwise parallel builds may conflict since the .m2 repo is global to the user running the tests)

We now cd into the test directory and start by executing the “clean” and “install” goals:

cd test
mvn clean install

After the clean install we have to tell Maven about the location of our GlassFish server. This can be done via a settings.xml file or by replacing every occurrence of “C:/Glassfish3.1.2.2” in the pom.xml that’s in the root of the folder we just cd’ed into. A command to do the latter is:

sed -i "s#C:/Glassfish3.1.2.2#$(readlink -f ../../glassfish4/)#g" pom.xml

The test directory contains several folders with tests for different situations. Since JSF 2.2 can run on both Servlet 3.0 and Servlet 3.1 containers there’s a separate Servlet 3.1 folder with tests specific to that. It’s however not clear why there still is a Servlet 3.0 folder (probably a left-over from JSF 2.0/2.1). The most important test folder is the “agnostic” one. This runs on any server and should even run with every JSF implementation (e.g. it should run on MyFaces 2.2 as well).

The following commands are used to execute them:

cd agnostic/
../bin/test-glassfish-default.sh

The Maven tests run rather fast and should be finished in some 3 to 4 minutes. Instead of modifying the pom and invoking the .sh script we can also run Maven directly via a command like the following:

mvn -Dintegration.container.home=/home/your_user/mtest/glassfish4/ -Pintegration-failsafe,integration-glassfish-cargo clean verify

(replace “/home/your_user/mtest/glassfish4/” with the actual location of glassfish on your system)

The difference is that the script is a great deal faster. It achieves this by calling Maven 6 times with different goals, which causes work to be done in advance for all tests instead of over and over again for each test. The fact that this needs to be done via a script instead of directly via Maven is maybe indicative of a weakness in Maven. Although understanding the script is not needed for building and running the tests, I found it interesting enough to take a deeper look at it.

The pom uses a maven plug-in for the little known cargo project. Cargo is a kind of competitor for the much wider known Arquillian. Just as its more popular peer it can start and stop a large variety of Java EE containers and deploy application archives to those. Cargo has existed for much longer than Arquillian and is still actively developed. It supports ancient servers such as Tomcat 4 and JBoss 3, as well as the very latest crop like Tomcat 8 and WildFly 8.

The 6 separate invocations are the following:

  1. Copy a Mojarra artifact (javax.faces.jar) from the Maven repo to the GlassFish internal modules directory (profile integration-glassfish-prepare)
  2. Clean the project, compile and then install all tests (as war archives) in the local Maven repo (no explicit profile)
  3. Start GlassFish (profile integration-glassfish-cargo, goal cargo:start)
  4. Deploy all previously built war archives in one go to GlassFish (profile integration-glassfish-cargo, goal cargo:redeploy)
  5. Run the actual tests. These will do HTTP requests via HTML Unit to the GlassFish instance that was prepared in the previous steps (profile integration-failsafe, goal verify)
  6. Finally stop the container again (profile integration-glassfish-cargo, goal cargo:stop)

As said previously, the project is in migration status and things still change frequently. In the 2.2.7 “trunk” an additional glassfish-cargo profile appeared that’s basically a copy of the existing integration-glassfish-cargo, but without the embedded and unused copy goal (which we’d seen above was part of the integration-glassfish-prepare profile). There’s also a new glassfish-copy-mojarra-1jar goal that’s a copy of the integration-glassfish-prepare profile with some parametrized configuration items replaced by constants, etc.

With the constant change going on, documenting the build and test procedure is difficult, but hopefully the instructions presented in this article are up to date enough for the moment.

Arjan Tijms

Java 7 one-liner to read file into string

24 March 2014

Reading in a file in Java used to require a lot of code. Various things had to be wrapped, loops with weird terminating conditions had to be specified, and so forth.

In Java 7 we can do a lot better. The actual code to do the reading is just:

String content = new String(readAllBytes(get("test.txt")));

As a full program that echos back the file’s content it looks like this:

import static java.lang.System.out;
import static java.nio.file.Files.readAllBytes;
import static java.nio.file.Paths.get;
 
public class Test {
    public static void main(String[] args) throws Exception {
        out.println(new String(readAllBytes(get("test.txt"))));
    }
}

Of course if we want to be careful not to load a few gigabytes into memory, and if we want to pay attention to the character set (it’s the platform default now), we need a little more code, but for quick and dirty file reading this should do the trick.
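
If we do want an explicit character set, Java 7's StandardCharsets constants (from java.nio.charset) keep it a one-liner:

String content = new String(readAllBytes(get("test.txt")), StandardCharsets.UTF_8);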

As a bonus, a version in Scala using the same JDK 7 APIs contributed by my fellow office worker Mark van der Tol:

import java.nio.file.Files.readAllBytes
import java.nio.file.Paths.get
 
object Main extends App {
    println(new String(readAllBytes(get("test.txt"))))
}

Arjan Tijms

WildFly 8 benchmarked

14 February 2014

The final version of WildFly 8 was released this week. WildFly is the new Java EE 7 compliant application server from Red Hat and the successor to JBoss AS 7. One of the major new features in WildFly is a new high-performance web server called Undertow, which replaces the Tomcat server used in previous versions of JBoss. As we’ve recently been benchmarking a new application, I was curious as to how WildFly 8 would perform. To find out, I decided to benchmark WildFly using this application and compare it against the latest version of JBoss EAP, version 6.2.

The application used for the benchmark was a simple JSF-based app. For each request a JSF Facelets template, which pulls some data from a backing bean, is rendered in real time. The backing bean in turn retrieves the data from a local cache, which is backed by a RESTful API and periodically refreshed. The refresh happens asynchronously, so as to not block any user’s requests. To achieve better performance, HTTP sessions were explicitly disabled for this application.

JSF’s stateless mode was activated as well. Although the JSF page that was rendered did not have any forms on it (and thus should not have any state to begin with), this did in fact seem to give a small performance boost. However, the boost was so small that it fell within the fluctuation range that we saw between runs, so it’s hard to say whether this really mattered.
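
For reference, JSF 2.2's stateless mode is activated per view by marking the view as transient in the Facelets page:

<f:view transient="true">
    <!-- page content -->
</f:view>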

JMeter was used for the benchmark itself. The application and JMeter were both run on the same computer, a 3.4 GHz quad-core Intel Xeon with 16GB of RAM running Linux Mint 16. As the first release candidate of JDK8 was released last week, I decided to use both JDK7u45 and JDK8b128 in the benchmarks. Both JBoss EAP 6.2 and WildFly 8 were used out of the box; nothing was changed in standalone.xml or any other internal configuration file.

The benchmark itself was performed with 100 concurrent threads, each performing 2000 requests. For each application server and JDK version, four tests were performed directly after each other. The results from the first test were discarded, as the JVM was still warming up, and the throughput in requests per second was averaged over the remaining three tests. You can see the average throughput below.

WildFly benchmark average throughput

These averages, however, do not paint the full picture. When taking a closer look at the individual benchmark runs, the JBoss EAP results fluctuate a lot more than the WildFly results.

Throughput

JBoss EAP seems to perform best on the second test run in both cases, but this could be a coincidence. What is clear is that the WildFly team has done a great job in creating an application server that, while it might not be outright faster, does achieve a similar level of performance with a greater level of consistency. For both JBoss EAP and WildFly, the JDK8 benchmarks still fall within the standard deviation of the JDK7 benchmarks, so JDK8 also seems to perform on a similar level compared to JDK7. It would be interesting to see how other application servers, like GlassFish, hold up against JBoss EAP and WildFly, so I may revisit this topic sometime soon.

Disabling all EJB timers in Java EE 6

29 October 2013

Java EE 7 has finally added a method to obtain all timers in the system. With the help of this method you can fairly conveniently cancel all timers, or only specific ones.

But Java EE 7 is still fairly new and not many vendors have released a Java EE 7 compatible server yet. So is there any way at all to, say, disable all scheduled timers in Java EE 6?

As it appears this is possible, with a little help from CDI and the Interceptor spec. The idea is that we install a CDI extension that dynamically adds an interceptor to all @Schedule annotated methods. This interceptor then cancels the timer for which it intercepted the method that handles it. It would be great if the CDI extension were just able to remove the @Schedule annotation and we’d be done with it. Unfortunately this is yet another example of why it’s not so great that EJB is not fully aligned with CDI; even if the @Schedule annotation is removed from the so-called AnnotatedType, the EJB container will still start the timer, being oblivious to the CDI representation of the bean.

The first step is to make an annotation that represents the interceptor we need:

@Inherited
@InterceptorBinding
@Target({ TYPE, METHOD })
@Retention(RUNTIME)
public @interface DisableTimers {}

We then proceed to the actual interceptor:

@Interceptor
@DisableTimers
public class DisableTimersInterceptor {
 
    @Inject
    private Logger logger;
 
    @AroundTimeout
    public Object disableTimers(InvocationContext context) throws Exception {
 
        try {
            Object timerObject = context.getTimer();
            if (timerObject instanceof Timer) {
                Timer timer = ((Timer) timerObject);
                logger.info("Canceling timer in bean " + context.getClass().getName() + " for timer " + timer.toString());
                timer.cancel();
            }
        } catch (Exception e) {
            logger.log(SEVERE, "Exception while canceling timer:", e);
        }
 
        return null;
    }
}

Note that while there’s the general concept of an @AroundTimeout and the context has a getTimer() method, the actual type of the timer has not been globally standardized in Java EE. This means we have to resort to instance testing. It would be great if some future version of Java EE could define a standard interface that all eligible timers have to implement.

Also note that there isn’t a clean universal way to print the timer details, so I’ve used toString() here on the Timer instance. What this actually returns is vendor specific.

An alternative would have been to inject the timer service here and use it to cancel all timers for the bean right away, though this is perhaps a bit less intuitive. Also note that at least on JBoss you can not inject the timer service directly but have to specify a JNDI lookup name, e.g.:

@Resource(lookup="java:comp/TimerService")
public TimerService timerService;
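
With the timer service injected, canceling all timers for the bean at once would come down to something like this sketch (getTimers() only returns the timers of the bean it's called from):

for (Timer timer : timerService.getTimers()) {
    timer.cancel();
}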

Unfortunately in Java EE 6 we have to register the interceptor in beans.xml:

<beans>
    <interceptors>
        <class>com.example.DisableTimersInterceptor</class>
    </interceptors>
</beans>

Next is the actual extension:

public class EjbTimerDisableExtension implements Extension {
 
    private static final Logger logger = Logger.getLogger(EjbTimerDisableExtension.class.getName());
 
    public <T> void processAnnotatedType(@Observes ProcessAnnotatedType<T> processAnnotatedType, BeanManager beanManager) {
        if (hasScheduleMethods(processAnnotatedType.getAnnotatedType())) {
 
            logger.log(INFO, "Disabling timer in " + processAnnotatedType.getAnnotatedType().getJavaClass().getName());
 
            AnnotatedTypeWrapper<T> annotatedTypeWrapper = new AnnotatedTypeWrapper<>(processAnnotatedType.getAnnotatedType());
 
            for (AnnotatedMethod<? super T> annotatedMethod : processAnnotatedType.getAnnotatedType().getMethods()) {
                if (annotatedMethod.isAnnotationPresent(Schedule.class)) {
 
                    AnnotatedMethodWrapper<? super T> annotatedMethodWrapper = new AnnotatedMethodWrapper<>(annotatedMethod);
                    annotatedMethodWrapper.addAnnotation(createAnnotationInstance(DisableTimers.class));
 
                    annotatedTypeWrapper.getMethods().remove(annotatedMethod);
                    annotatedTypeWrapper.getMethods().add(annotatedMethodWrapper);
                }
            }
 
            processAnnotatedType.setAnnotatedType(annotatedTypeWrapper);
        }
    }
 
    private <T> boolean hasScheduleMethods(AnnotatedType<T> annotatedType) {
        for (AnnotatedMethod<?> annotatedMethod : annotatedType.getMethods()) {
            if (annotatedMethod.isAnnotationPresent(Schedule.class)) {
                return true;
            }
        }
 
        return false;
    }
}

In this extension we check if a bean has methods with an @Schedule annotation, and if it indeed has one we wrap the passed-in annotated type, as well as any method representation that has this annotation. Via these wrappers we can remove the existing method and then add our own version, to which we dynamically add the interceptor annotation.

We need to register this extension in /META-INF/services/javax.enterprise.inject.spi.Extension by putting its FQN there:

com.example.EjbTimerDisableExtension

It’s perhaps unfortunate that CDI 1.0 doesn’t offer many convenience methods for wrapping its most important types (which e.g. JSF does do) and doesn’t provide an easy way to create an annotation instance.

Luckily my co-worker Jan Beernink had already created some convenience types for those, which I could use:

The CDI type wrappers:

public class AnnotatedMethodWrapper<X> implements AnnotatedMethod<X> {
 
    private AnnotatedMethod<X> wrappedAnnotatedMethod;
 
    private Set<Annotation> annotations;
 
    public AnnotatedMethodWrapper(AnnotatedMethod<X> wrappedAnnotatedMethod) {
        this.wrappedAnnotatedMethod = wrappedAnnotatedMethod;
 
        annotations = new HashSet<>(wrappedAnnotatedMethod.getAnnotations());
    }
 
    @Override
    public List<AnnotatedParameter<X>> getParameters() {
        return wrappedAnnotatedMethod.getParameters();
    }
 
    @Override
    public AnnotatedType<X> getDeclaringType() {
        return wrappedAnnotatedMethod.getDeclaringType();
    }
 
    @Override
    public boolean isStatic() {
        return wrappedAnnotatedMethod.isStatic();
    }
 
    @Override
    public <T extends Annotation> T getAnnotation(Class<T> annotationType) {
        for (Annotation annotation : annotations) {
            if (annotationType.isInstance(annotation)) {
                return annotationType.cast(annotation);
            }
        }
 
        return null;
    }
 
    @Override
    public Set<Annotation> getAnnotations() {
        return Collections.unmodifiableSet(annotations);
    }
 
    @Override
    public Type getBaseType() {
        return wrappedAnnotatedMethod.getBaseType();
    }
 
    @Override
    public Set<Type> getTypeClosure() {
        return wrappedAnnotatedMethod.getTypeClosure();
    }
 
    @Override
    public boolean isAnnotationPresent(Class<? extends Annotation> annotationType) {
        for (Annotation annotation : annotations) {
            if (annotationType.isInstance(annotation)) {
                return true;
            }
        }
 
        return false;
    }
 
    @Override
    public Method getJavaMember() {
        return wrappedAnnotatedMethod.getJavaMember();
    }
 
    public void addAnnotation(Annotation annotation) {
        annotations.add(annotation);
    }
 
    public void removeAnnotation(Annotation annotation) {
        annotations.remove(annotation);
    }
 
    public void removeAnnotation(Class<? extends Annotation> annotationType) {
        Annotation annotation = getAnnotation(annotationType);
        if (annotation != null ) {
            removeAnnotation(annotation);
        }
    }
 
}

public class AnnotatedTypeWrapper<T> implements AnnotatedType<T> {
 
    private AnnotatedType<T> wrappedAnnotatedType;
 
    private Set<Annotation> annotations = new HashSet<>();
    private Set<AnnotatedMethod<? super T>> annotatedMethods = new HashSet<>();
    private Set<AnnotatedField<? super T>> annotatedFields = new HashSet<>();
 
    public AnnotatedTypeWrapper(AnnotatedType<T> wrappedAnnotatedType) {
        this.wrappedAnnotatedType = wrappedAnnotatedType;
 
        annotations.addAll(wrappedAnnotatedType.getAnnotations());
        annotatedMethods.addAll(wrappedAnnotatedType.getMethods());
        annotatedFields.addAll(wrappedAnnotatedType.getFields());
    }
 
    @Override
    public <A extends Annotation> A getAnnotation(Class<A> annotationType) {
        return wrappedAnnotatedType.getAnnotation(annotationType);
    }
 
    @Override
    public Set<Annotation> getAnnotations() {
        return annotations;
    }
 
    @Override
    public Type getBaseType() {
        return wrappedAnnotatedType.getBaseType();
    }
 
    @Override
    public Set<AnnotatedConstructor<T>> getConstructors() {
        return wrappedAnnotatedType.getConstructors();
    }
 
    @Override
    public Set<AnnotatedField<? super T>> getFields() {
        return annotatedFields;
    }
 
    @Override
    public Class<T> getJavaClass() {
        return wrappedAnnotatedType.getJavaClass();
    }
 
    @Override
    public Set<AnnotatedMethod<? super T>> getMethods() {
        return annotatedMethods;
    }
 
    @Override
    public Set<Type> getTypeClosure() {
        return wrappedAnnotatedType.getTypeClosure();
    }
 
    @Override
    public boolean isAnnotationPresent(Class<? extends Annotation> annotationType) {
        for (Annotation annotation : annotations) {
            if (annotationType.isInstance(annotation)) {
                return true;
            }
        }
 
        return false;
    }
 
}

And the utility code for instantiating an annotation type:

public class AnnotationUtils {
 
    private AnnotationUtils() {
    }
 
    /**
     * Create an instance of the specified annotation type. This method is only suited for annotations without any properties, for annotations with
     * properties, please see {@link #createAnnotationInstance(Class, InvocationHandler)}.
     *
     * @param annotationType
     *            the type of annotation
     * @return an instance of the specified type of annotation
     */
    public static <T extends Annotation> T createAnnotationInstance(Class<T> annotationType) {
        return createAnnotationInstance(annotationType, new AnnotationInvocationHandler<>(annotationType));
    }
 
    public static <T extends Annotation> T createAnnotationInstance(Class<T> annotationType, InvocationHandler invocationHandler) {
        return annotationType.cast(Proxy.newProxyInstance(AnnotationUtils.class.getClassLoader(), new Class<?>[] { annotationType },
                invocationHandler));
    }
}

/**
 * {@link InvocationHandler} implementation that implements the base methods required for a parameterless annotation. This handler only implements the
 * following methods: {@link Annotation#equals(Object)}, {@link Annotation#hashCode()}, {@link Annotation#annotationType()} and
 * {@link Annotation#toString()}.
 *
 * @param <T>
 *            the type of the annotation
 */
class AnnotationInvocationHandler<T extends Annotation> implements InvocationHandler {
 
    private Class<T> annotationType;
 
    /**
     * Create a new {@link AnnotationInvocationHandler} instance for the given annotation type.
     *
     * @param annotationType
     *            the annotation type this handler is for
     */
    public AnnotationInvocationHandler(Class<T> annotationType) {
        this.annotationType = annotationType;
    }
 
    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        switch (method.getName()) {
        case "toString":
            return "@" + annotationType.getName() + "()";
        case "annotationType":
            return annotationType;
        case "equals":
            return annotationType.isInstance(args[0]);
        case "hashCode":
            return 0;
        }
 
        return null;
    }
 
}

Conclusion

This approach is by far not as elegant as just injecting an @Startup @Singleton with the timer service and canceling all timers in a simple loop, as we can do in Java EE 7, but it does work on Java EE 6. Timers are canceled one by one as they fire, and their handler methods are never invoked.
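
For comparison, a minimal sketch of that Java EE 7 version, using the new TimerService#getAllTimers() method (the bean name is made up):

@Singleton
@Startup
public class TimerDisablerBean {

    @Resource
    private TimerService timerService;

    @PostConstruct
    public void cancelAllTimers() {
        // getAllTimers() is new in Java EE 7 and returns the timers of all beans in the module
        for (Timer timer : timerService.getAllTimers()) {
            timer.cancel();
        }
    }
}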

The approach of dynamically adding interceptors to specific methods can however be used for other things as well (e.g. logging exceptions from @Asynchronous methods that return void, just to name something) so it’s a generally useful technique.



Arjan Tijms

Eclipse 4.3 SR1 again silently released!

28 September 2013

Again rather silently, the Eclipse organization yesterday released the first maintenance release of Eclipse 4.3: Eclipse 4.3.1, aka Eclipse Kepler SR1.

Surprisingly (or maybe not), this event again isn’t noted on the main homepage at eclipse.org or at the recent activity tracker. There also don’t seem to be any release notes, like the ones we had for the 4.3 release.

It seems these days the Eclipse home page is about anything and nothing except the thing we most closely associate with the term “Eclipse”; the Eclipse IDE. Seemingly the IDE itself is by far not as important as “Concierge Creation Review Scheduled” and “Web-based BPM with Stardust”.

Once again, fiddling with Bugzilla gave me a list of 112 bugs that are fixed in core packages.

Hopefully this release will remedy the random crashes I’ve experienced on Ubuntu 13.04, but I’m not holding my breath.

The good people at the WTP project did feel like posting about this event on their homepage, with a link to and short description of the 3.5.1 release of WTP. Again, the new and noteworthy page keeps pointing to the previous release, but there’s a list of 51 fixed bugs available.

Community reporting seems to have reached a historic low. There’s one enthusiastic user who created a rather minimalistic forum post about it, and that’s pretty much it. Maybe a few lone tweets, but nothing major.

Are the community and the Eclipse organization losing interest in Eclipse, or is it just that SR releases aren’t that exciting?

Serving multiple images from database as a CSS sprite

31 July 2013

Introduction

In the first public beta version of ZEEF, which was somewhat thrown together (first get the minimum working using standard techniques, then review, refactor and improve it), all favicons were served individually. Although they were set to be aggressively cached (1 year, whereby a reload is forced when necessary by the timestamp-in-query-string trick with the last-modified timestamp of the link), this resulted, in the case of an empty cache, in a ridiculous number of HTTP requests on a subject page with relatively many links, such as Curaçao by Bauke Scholtz:

Yes, 209 image requests, of which 10 are not for favicons, which nets 199 favicon requests. Yes, that many links are currently on the Curaçao subject. The average modern web browser has only 6~8 simultaneous connections available for a specific domain, so that’s a huge queue. You can see it in the screenshot: with an empty cache it took nearly 5 seconds to get them all (on a primed cache, it’s less than 1 second).

If you look closer, you’ll see that there’s another problem with this approach: links which don’t have a favicon re-request the very same default favicon again and again with a different last-modified timestamp of the link itself, ending up as copies of exactly the same image in the browser cache. Also, links from the same domain which share the same favicon have their favicons duplicated this way. In spite of the aggressive cache, this was simply too inefficient.

Converting images to a common format and size

The most straightforward solution would be to serve all those favicons as a single CSS sprite and make use of CSS background-position to reference the right favicon in the sprite. This however requires that all favicons are first parsed and converted to a common format and size which allows easy manipulation by the standard Java 2D API (ImageIO and friends) and easy generation of the CSS sprite image. PNG was chosen as the format, as it’s the most efficient lossless format. 16×16 was chosen as the default size.

As a first step, a favicon parser was created which verifies and parses the scraped favicon file and saves every image found as PNG (the ICO format can store multiple images, usually each with a different dimension, e.g. 16×16, 32×32, 64×64, etc). For this, Image4J (a mavenized fork with a bugfix) has been of great help. The original Image4J had only a minor bug: it ran in an infinite loop on favicons with broken metadata, such as this one. This was fixed by vijedi/image4j. However, when an ICO file contained multiple images, this fix discarded all of them instead of only the broken one. So another bugfix was done on top of that (which, by the way, just leniently returns the “broken” image; in fact, only the metadata was broken, not the image content itself). Every single favicon is now parsed by ICODecoder and BMPDecoder of Image4J and then ImageIO#read() of the standard Java SE API, in this sequence. Whichever returns the first non-null BufferedImage(s) without exceptions wins. This step also enabled us to completely bypass the content-type check which we initially had, because we discovered that a lot of websites were doing a bad job there; some favicons were even served as text/html, which caused false negatives.
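
As an illustration, a minimal sketch of that decode chain could look as follows; the ICODecoder.read() and BMPDecoder.read() entry points come from Image4J, while the surrounding method is illustrative rather than the actual ZEEF code:

import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.util.Collections;
import java.util.List;
import javax.imageio.ImageIO;
import net.sf.image4j.codec.bmp.BMPDecoder;
import net.sf.image4j.codec.ico.ICODecoder;

public class FaviconParser {

    // Try ICODecoder, then BMPDecoder, then ImageIO, in this sequence; the
    // first decoder returning non-null image(s) without exceptions wins.
    public static List<BufferedImage> parse(byte[] content) {
        try {
            List<BufferedImage> images = ICODecoder.read(new ByteArrayInputStream(content));

            if (images != null && !images.isEmpty()) {
                return images;
            }
        }
        catch (Exception ignore) {
            // Not a valid ICO file; fall through to the next decoder.
        }

        try {
            BufferedImage image = BMPDecoder.read(new ByteArrayInputStream(content));

            if (image != null) {
                return Collections.singletonList(image);
            }
        }
        catch (Exception ignore) {
            // Not a valid BMP file either; fall through.
        }

        try {
            BufferedImage image = ImageIO.read(new ByteArrayInputStream(content));

            if (image != null) {
                return Collections.singletonList(image);
            }
        }
        catch (Exception ignore) {
            // Unparseable favicon; give up below.
        }

        return Collections.emptyList();
    }
}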

As a second step, if parsing a favicon resulted in at least one BufferedImage, but none of them was in the 16×16 dimension, then the 16×16 version is created based on the first next larger dimension, which is resized back to 16×16 with the help of thebuzzmedia/imgscalr, which yielded high quality resizings.
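
A minimal sketch of that selection-and-resize step, assuming images is the list returned by the parser above and using imgscalr’s org.imgscalr.Scalr (the exact scaling method used is an assumption):

BufferedImage best = null;

// Pick the smallest image that is at least 16x16 (the "first next" dimension).
for (BufferedImage image : images) {
    if (image.getWidth() >= Favicon.DEFAULT_SIZE
            && (best == null || image.getWidth() < best.getWidth())) {
        best = image;
    }
}

if (best == null) {
    best = images.get(images.size() - 1); // Nothing bigger available; upscale instead.
}

if (best.getWidth() != Favicon.DEFAULT_SIZE || best.getHeight() != Favicon.DEFAULT_SIZE) {
    best = Scalr.resize(best, Scalr.Method.ULTRA_QUALITY,
        Favicon.DEFAULT_SIZE, Favicon.DEFAULT_SIZE);
}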

Finally, all formats are converted to PNG and saved in the DB (and cached in the local disk file system).
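
The PNG conversion itself is plain ImageIO; a minimal sketch, with illustrative variable names:

ByteArrayOutputStream output = new ByteArrayOutputStream();
ImageIO.write(best, "png", output);
byte[] pngContent = output.toByteArray(); // This is what ends up in the DB and the disk cache.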

Serving images as CSS sprite

For this, a simple servlet is used which basically does the following in doGet() (error/cache checking omitted for simplicity):

Long pageId = Long.valueOf(request.getPathInfo().substring(1));
Page page = pageService.getById(pageId);
long lastModified = page.getLastModified().getTime();
byte[] content = faviconService.getSpriteById(pageId, lastModified);
 
if (content != null) { // Found same version in disk file system cache.
    response.getOutputStream().write(content);
    return;
}
 
Set<Long> faviconIds = new TreeSet<>();
faviconIds.add(0L); // Default favicon, appears as 1st image of sprite.
faviconIds.addAll(page.getFaviconIds());
 
int width = Favicon.DEFAULT_SIZE; // 16px.
int height = width * faviconIds.size();
 
BufferedImage sprite = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
Graphics2D graphics = sprite.createGraphics();
graphics.setBackground(new Color(0xff, 0xff, 0xff, 0)); // Transparent.
graphics.fillRect(0, 0, width, height);
 
int i = 0;
 
for (Long faviconId : faviconIds) {
    Favicon favicon = faviconService.getById(faviconId); // Loads from disk file system cache.
    byte[] content = favicon.getContent();
    BufferedImage image = ImageIO.read(new ByteArrayInputStream(content));
    graphics.drawImage(image, 0, width * i++, null);
}
 
graphics.dispose(); // Done drawing; release the native resources held by the Graphics2D.
 
ByteArrayOutputStream output = new ByteArrayOutputStream();
ImageIO.write(sprite, "png", output);
content = output.toByteArray();
faviconService.saveSprite(pageId, lastModified, content); // Store in disk file system cache.
response.getOutputStream().write(content);

To see it in action, you can get all favicons of the page Curaçao by Bauke Scholtz (which has page ID 18) as CSS sprite on the following URL: https://zeef.com/favicons/page/18.

Serving the CSS file containing sprite-image-specific selectors

In order to present the CSS sprite images in the right places, we should also have a simple servlet which generates the desired CSS stylesheet file containing sprite-image-specific selectors with the right background-position. This servlet basically does the following in doGet() (error/cache checking omitted to keep it simple):

Long pageId = Long.valueOf(request.getPathInfo().substring(1));
Page page = pageService.getById(pageId);
 
Set<Long> faviconIds = new TreeSet<>();
faviconIds.add(0L); // Default favicon, appears as 1st image of sprite.
faviconIds.addAll(page.getFaviconIds());
 
long lastModified = page.getLastModified().getTime();
int height = Favicon.DEFAULT_SIZE; // 16px.
 
PrintWriter writer = response.getWriter();
writer.printf("[class^='favicon-']{background-image:url('../page/%d?%d')!important}", 
    pageId, lastModified);
int i = 0;
 
for (Long faviconId : faviconIds) {
    writer.printf(".favicon-%s{background-position:0 -%spx}", faviconId, height * i++);
}

To see it in action, you can get the CSS file of the page Curaçao by Bauke Scholtz (which has page ID 18) on the following URL: https://zeef.com/favicons/css/18.

Note that the background-image URL has the page’s last-modified timestamp in the query string, which should force a browser reload of the sprite whenever a link has been added/removed on the page. The CSS file itself also has such a query string, as you can see in the HTML source code of the ZEEF page, which is basically generated as follows:

<link id="favicons" rel="stylesheet" 
    href="//zeef.com/favicons/css/#{zeef.page.id}?#{zeef.page.lastModified.time}" />

Also note that the !important is there to overrule the default favicon in case serving the CSS sprite fails somehow. The default favicon is specified in the general layout CSS file layout.css as follows:

#blocks .link.block li .favicon,
#blocks .link.block li [class^='favicon-'] {
    position: absolute;
    left: -7px;
    top: 4px;
    width: 16px;
    height: 16px;
}
 
#blocks .link.block li [class^='favicon-'] {
    background-image: url("#{resource['zeef:images/default_favicon.png']}");
}

Referencing images in HTML

It’s rather simple: the links are just generated in a loop, whereby the favicon image is represented via a plain HTML <span> element, basically as follows:

<a id="link_#{linkPosition.id}" href="#{link.targetURL}" title="#{link.defaultTitle}">
    <span class="favicon-#{link.faviconId}" />
    <span class="text">#{linkPosition.displayTitle}</span>
</a>

The HTTP requests on image files have been reduced from 209 to 12 (note that the 10 non-favicon requests have increased to 11 due to changes in social buttons, but that’s not related to the matter at hand).

On an empty cache it took on average only half a second to download the CSS file and another half a second to download the CSS sprite. On balance, that’s thus 5 times faster with 197 fewer connections! On a primed cache it’s not even requested at all. It should be noted that I’m behind a relatively slow network here and that the current ZEEF production server at a 3rd party host isn’t using “state of the art” hardware yet. The hardware will be handpicked later on once we grow.

Reloading CSS sprite by JavaScript whenever necessary

When you’re logged in as page owner, you can edit the page by adding/removing/drag’n’dropping links and blocks. This all takes place via ajax, without a full page reload. Whenever necessary, the CSS sprite can be forced to reload during the ajax oncomplete by the following script, which references the <link id="favicons">:

function reloadFavicons() {
    var $favicons = $("#favicons");
    $favicons.attr("href", $favicons.attr("href").replace(/\?.*/, "?" + new Date().getTime()));
}

Basically, it just updates the timestamp in the query string of the <link href>, which in turn forces the web browser to request it straight from the server instead of from the cache.

Note that in case of newly added links which do not exist in the system yet, favicons are resolved asynchronously in the background and pushed back via Server-Sent Events. In this case, the new favicon is still downloaded individually and explicitly set as CSS background image. You can find it in the global-push.js file:

function updateLink(data) {
    var $link = $("#link_" + data.id);
    $link.attr("title", data.title);
    $link.find(".text").text(data.text);
    $link.find("[class^='favicon-']").attr("class", "favicon")
        .css("background-image", "url(/favicons/link/" + data.icon + "?" + new Date().getTime() + ")");
    highlight($link);
}

But once the HTML DOM representation of the link or block is later ajax-updated after an edit or drag’n’drop, it will re-reference the CSS sprite again.

The individual favicon request is also done in the “Edit link” dialog. The servlet code for that is not exciting, but in case you’re interested, the URL is like https://zeef.com/favicons/link/354 and all the servlet basically does is (error/cache checking omitted for brevity):

Long linkId = Long.valueOf(request.getPathInfo().substring(1));
Link link = linkService.getById(linkId);
Favicon favicon = faviconService.getById(link.getFaviconId());
byte[] content = favicon.getContent();
response.getOutputStream().write(content); // Binary content, so the output stream, not the writer.

Note that individual favicons are not downloaded by their own ID, but instead by the link ID, because a link doesn’t necessarily have any favicon. This way the default favicon can easily be returned.
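
A minimal sketch of that lookup-with-fallback, assuming Link#getFaviconId() returns null when no favicon was scraped for the link (the 0L default matches the first image of the sprite earlier):

Long faviconId = link.getFaviconId();

if (faviconId == null) {
    faviconId = 0L; // No favicon known for this link; serve the default one.
}

Favicon favicon = faviconService.getById(faviconId);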


This article is also posted on balusc.blogspot.com.
