OmniFaces showcase and OpenShift app management

31 March 2014

At zeef.com we’re using OmniFaces pretty intensively (eating our own dog food).

We’re hosting an application that showcases a lot of OmniFaces’ features at OpenShift.

Mostly we’re very happy with it. Among the many environments it offers, OpenShift provides a JBoss EAP 6.x server that’s updated very regularly. JBoss EAP 6.x is Red Hat’s Java EE 6 implementation, which has received many bug fixes over the years and is rather stable at the moment. And even though Red Hat has a Java EE 7 implementation out (WildFly 8), the Java EE 6 server keeps getting bug fixes that make it even more stable.

Yesterday, however, both nodes on which we have our showcase app deployed suddenly appeared to be down. An attempt to restart the app via the web console didn’t do anything: it just sat there for a long time and eventually reported a technical problem without providing any further details. This is unfortunately one of the downsides of OpenShift. It’s a great platform, but the web console clearly lags behind.

We then tried to log into our primary gear using ssh [number]@[our app name].rhcloud.com. This worked; however, the JBoss instances are not running on this primary gear but on two other gears. We tried the “ctl_all stop” and “ctl_all start” commands, but these only seemed to restart the cartridges (haproxy and a JBoss that is disabled by default) on the gear where we were logged in, not on the other ones.

The next step was trying to log into those other gears. There is unfortunately little information available on what the exact addresses of those gears are. There used to be a document up at https://www.openshift.com/faq/can-i-access-my-applications-gear, but for some reason it has been taken down. Vaguely remembering that the addresses of the other gears are based on what [app url]/haproxy-status lists, we tried to ssh to one of them from the primary gear, but “nothing happened”. It looked like the ssh command was broken; ssh’ing into foo (ssh foo) also resulted in nothing happening.

With the help of the kind people from OpenShift on the IRC channel, we discovered that ssh on an OpenShift gear is simply silent by default. With the -v option you do get the normal response. Furthermore, when you install the rhc client tools locally, you can use the following command to list the SSH URLs of all your gears:

rhc app show [app] --gears

This returns the following:

ID State Cartridges Size SSH URL
[number1] started jbosseap-6 haproxy-1.4 small [number1]@[app]-[domain].rhcloud.com
[number2] started jbosseap-6 haproxy-1.4 small [number2]@[number2]-[app]-[domain].rhcloud.com
[number3] started jbosseap-6 haproxy-1.4 small [number3]@[number3]-[app]-[domain].rhcloud.com

We can now ssh into the other gears using the [numberX]@[numberX]-[app]-[domain].rhcloud.com pattern, e.g.

ssh 12ab34cd....xy@12ab34cd....xy-myapp-mydomain.rhcloud.com

In our particular case, on the gear identified by [number2] the file system was completely full. Simply deleting the log files from /jbosseap/logs fixed the problem. After that we could use the gear command to stop and start the JBoss instance (ctl_all and ctl_app appear to be deprecated):

gear stop
gear start

And lo and behold, the gear came back to life. After doing the same for the [number3] gear, both nodes were up and running again and requests to our app were serviced as normal.

One thing we also discovered is that by default OpenShift installs and starts a JBoss instance on the gear that hosts the proxy, but, for some reason that probably only the proverbial engineer who left long ago knows, no traffic is routed to that JBoss instance.

In the ./haproxy/conf directory there’s a configuration file with, among others, the following content:

server gear-[number2]-[app] ex-std-node[node1].prod.rhcloud.com:[port1] check fall 2 rise 3 inter 2000 cookie [number2]-[app]
server gear-[number3]-[app] ex-std-node[node2].prod.rhcloud.com:[port2] check fall 2 rise 3 inter 2000 cookie [number3]-[app]
server local-gear [localip]:8080 check fall 2 rise 3 inter 2000 cookie local-[number1] disabled

As can be seen, there’s a disabled marker after the local-gear entry. Simply removing it and stopping/starting or restarting the gear will start routing requests to this gear as well.
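As a sketch, the marker could also be stripped with sed. The file name below is just a stand-in for illustration; on a real gear you’d edit the existing config under ./haproxy/conf instead of creating one:

```shell
# Create a hypothetical sample of the relevant line for demonstration purposes
printf 'server local-gear 127.0.0.1:8080 check fall 2 rise 3 inter 2000 cookie local-abc disabled\n' > haproxy.cfg

# Remove the trailing "disabled" marker from the local-gear entry
sed -i 's/^\(server local-gear .*\) disabled$/\1/' haproxy.cfg

cat haproxy.cfg
```

After this, a stop/start (or restart) of the gear picks up the changed configuration.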

Furthermore, we see that a gear’s SSH URL can indeed be derived from the number that appears in the configuration and output of haproxy. The above [number2] is exactly the same number as in the output of rhc app show [app] --gears.

This all took quite some time to figure out. How could OpenShift have done better here?

  • Not take down crucial documentation such as https://www.openshift.com/faq/can-i-access-my-applications-gear.
  • List all gear URLs in the web console when the application is scaled, not just the primary one.
  • Implement a restart in the web console that actually works, and when a failure occurs gives back a clear error message.
  • Have a restart per gear in the web console.
  • List critical error conditions per gear in the web console. In this case “disk full” or “quota exceeded” seems like a common enough condition that the UI could have picked this up.
  • Have a delete logs (or tidy) command in the web console that can be executed for all gears or for a single gear.
  • Don’t have ssh on the gear in super silent mode.
  • Have the RHC tools installed on the server. It’s weird that you can see and do more from the client than when logged-in to the server itself.

All in all OpenShift is still a very impressive system that lets you deploy completely standard Java EE 6 archives to a very stable (EAP) version of JBoss, but when something goes wrong it can be frustrating to deal with the issue. The client tools are pretty advanced, but the tools that are installed on the gear itself and the web console are not there yet.

Arjan Tijms

How to build and run the Mojarra automated tests (2014 update)

30 March 2014

At zeef.com we depend a lot on JSF (see here for details) and occasionally have the need to patch Mojarra.

Mojarra comes with over 8000 tests, but as we explained in a previous article, building it and running those tests is not entirely trivial. It’s not that difficult if you know the steps, but the many outdated readmes and numerous folders in the project can make it difficult to find them.

Since the previous article some things have changed, so we’ll provide an update here.

Currently the Mojarra project is in the middle of a migration. Manfred Riem is working on moving the complicated and ancient ANT based tests to a more modern Maven based setup. For the moment this is a bit of an extra burden, as there are now two distinct test folders; eventually there should be only one. Since the migration is in full swing, things can still change often. The instructions below are valid for at least JSF 2.2.5 through 2.2.7-SNAPSHOT.

We’ll first create a separate directory for our build and download a fresh version of GlassFish 4 that we’ll use for running the tests. From e.g. your home directory execute the following:

mkdir mtest
cd mtest
wget http://download.java.net/glassfish/4.0/release/glassfish-4.0.zip
unzip glassfish-4.0.zip

Note that unlike the 2012 instructions, it’s no longer needed to set an explicit password. The default “empty” password now works correctly. The readme in the project still says you need to install with a password, but this is no longer the case.

Next we’ll check out the Mojarra 2.2 “trunk”. Note that the real trunk is dormant and all the action happens in a branch called “MOJARRA_2_2X_ROLLING”. Unfortunately Mojarra still uses SVN, but it is what it is. We’ll use the following commands:

svn co https://svn.java.net/svn/mojarra~svn/branches/MOJARRA_2_2X_ROLLING/
cd MOJARRA_2_2X_ROLLING
cp build.properties.glassfish build.properties

We now need to edit build.properties and set the following values:

jsf.build.home=[source home]
container.name=glassfishV3.1_no_cluster
container.home=[glassfish home]/glassfish
halt.on.failure=no

[source home] is the current directory where we just cd’ed into (e.g. /home/your_user/mtest/MOJARRA_2_2X_ROLLING), while [glassfish home] is the directory that was extracted from the archive (e.g. /home/your_user/mtest/glassfish4/).

If your OS supports all the following commands (e.g. Ubuntu does) you can also execute:

sed -i "s:<SET CURRENT DIRECTORY>:$(pwd):g" build.properties
sed -i "s:container.name=glassfish:container.name=glassfishV3.1_no_cluster:g" build.properties
sed -i "s:container.home=:container.home=$(readlink -f ../glassfish4/):g" build.properties

We’re now going to invoke the actual build. Unfortunately there still is a weird dependency between the main build task and the clean task, so the first time we can only execute “main”. If the build needs to be done a subsequent time, we can do “clean main”. For now execute the following:

ant main

We can then run the ANT tests as follows:

export ANT_OPTS='-Xms512m -Xmx786m -XX:MaxPermSize=786m'
ant test.with.container.refresh

Just like the previous time, a number of ANT tests fail out of the box. Whether the “trunk” of Mojarra simply has failing tests all the time, or whether it’s system dependent, is something we didn’t investigate. Fact is, however, that after some three years of periodically building Mojarra and running its tests on various systems (Ubuntu, Debian, OS X), we’ve never seen all tests pass out of the box. In the current (March 28, 2014) 2.2.7-SNAPSHOT branch the following tests failed out of the box:

  1. jsf-ri/systest/src/com/sun/faces/composite/CompositeComponentsTestCase.java#testCompositeComponentResolutionWithinRelocatableResources
  2. jsf-ri/systest/src/com/sun/faces/facelets/FaceletsTestCase.java#FaceletsTestCase#testForEach
  3. jsf-ri/systest/src/com/sun/faces/facelets/ImplicitFacetTestCase.java#testConditionalImplicitFacetChild1727
  4. jsf-ri/systest/src/com/sun/faces/systest/DataTableTestCase.java#testTablesWithEmptyBody
  5. jsf-ri/systest/src/com/sun/faces/jsptest/ConverterTestCase.java#testConverterMessages
  6. jsf-test/JAVASERVERFACES-2113/i_mojarra_2113_htmlunit/src/main/java/com/sun/faces/regression/i_mojarra_2113/Issue2113TestCase.java#testBasicAppFunctionality

So if you want to test the impact of your own changes, be sure to run the tests before making those changes to get an idea of which tests are already failing on your system and then simply comment them out.

The ANT tests execute rather slowly. On the 3.2GHz/16GB/SSD machine we used, they took some 20 minutes.

The Maven tests are in a separate directory and contain only tests. To give those tests access to the Mojarra artifact we just built, we need to install it in our local .m2 repo:

ant mvn.deploy.snapshot.local

(If we use this method on a build server we may want to use separate users for each test; otherwise parallel builds may conflict, since the .m2 repo is global to the user running the tests.)
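Alternatively, instead of separate users, each build could be pointed at its own isolated repository via Maven’s standard maven.repo.local property (the path shown is just an example):

```shell
# Give this build its own local repository so parallel builds don't share ~/.m2
mvn -Dmaven.repo.local=/tmp/mojarra-build-1/.m2 clean install
```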

We now cd into the test directory and start by executing the “clean” and “install” goals:

cd test
mvn clean install

After the clean install we have to tell Maven about the location of our GlassFish server. This can be done via a settings.xml file or by replacing every occurrence of “C:/Glassfish3.1.2.2” in the pom.xml that’s in the root of the folder we just cd’ed into. A command to do the latter is:

sed -i "s#C:/Glassfish3.1.2.2#$(readlink -f ../../glassfish4/)#g" pom.xml

The test directory contains several folders with tests for different situations. Since JSF 2.2 can run on both Servlet 3.0 and Servlet 3.1 containers there’s a separate Servlet 3.1 folder with tests specific to that. It’s however not clear why there still is a Servlet 3.0 folder (probably a left-over from JSF 2.0/2.1). The most important test folder is the “agnostic” one. This runs on any server and should even run with every JSF implementation (e.g. it should run on MyFaces 2.2 as well).

The following commands are used to execute them:

cd agnostic/
../bin/test-glassfish-default.sh

The Maven tests run rather fast and should finish in some 3 to 4 minutes. Instead of modifying the pom and invoking the .sh script, we can also run Maven directly via a command like the following:

mvn -Dintegration.container.home=/home/your_user/mtest/glassfish4/ -Pintegration-failsafe,integration-glassfish-cargo clean verify

(replace “/home/your_user/mtest/glassfish4/” with the actual location of glassfish on your system)

The difference is that the script is a great deal faster. It achieves this by calling Maven 6 times with different goals, which causes work to be done in advance for all tests instead of over and over again for each test. The fact that this needs to be done via a script instead of directly via Maven is perhaps indicative of a weakness in Maven. Although understanding the script is not needed for building and running the tests, I found it interesting enough to take a deeper look at it.

The pom uses a maven plug-in for the little known cargo project. Cargo is a kind of competitor for the much wider known Arquillian. Just as its more popular peer it can start and stop a large variety of Java EE containers and deploy application archives to those. Cargo has existed for much longer than Arquillian and is still actively developed. It supports ancient servers such as Tomcat 4 and JBoss 3, as well as the very latest crop like Tomcat 8 and WildFly 8.

The 6 separate invocations are the following:

  1. Copy a Mojarra artifact (javax.faces.jar) from the Maven repo to the GlassFish internal modules directory (profile integration-glassfish-prepare)
  2. Clean the project, compile and then install all tests (as war archives) in the local Maven repo (no explicit profile)
  3. Start GlassFish (profile integration-glassfish-cargo, goal cargo:start)
  4. Deploy all previously built war archives in one go to GlassFish (profile integration-glassfish-cargo, goal cargo:redeploy)
  5. Run the actual tests. These will do HTTP requests via HTML Unit to the GlassFish instance that was prepared in the previous steps (profile integration-failsafe, goal verify)
  6. Finally stop the container again (profile integration-glassfish-cargo, goal cargo:stop)
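Put together, the script’s behavior can be approximated by the following sequence. The profile and goal names are taken from the list above; the exact lifecycle phases used are assumptions:

```shell
mvn -Pintegration-glassfish-prepare validate        # 1. copy javax.faces.jar into GlassFish
mvn clean install                                   # 2. build and install the test wars
mvn -Pintegration-glassfish-cargo cargo:start       # 3. start GlassFish
mvn -Pintegration-glassfish-cargo cargo:redeploy    # 4. deploy all wars in one go
mvn -Pintegration-failsafe verify                   # 5. run the HtmlUnit based tests
mvn -Pintegration-glassfish-cargo cargo:stop        # 6. stop GlassFish
```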

As said previously, the project is in the middle of a migration and things still change frequently. In the 2.2.7 “trunk” an additional glassfish-cargo profile appeared that’s basically a copy of the existing integration-glassfish-cargo, but without the embedded and unused copy goal (which, as we saw above, was part of the integration-glassfish-prepare profile). There’s also a new glassfish-copy-mojarra-1jar goal that’s a copy of the integration-glassfish-prepare profile with some parametrized configuration items replaced by constants, etc.

With the constant change going on, documenting the build and test procedure is difficult, but hopefully the instructions presented in this article are up to date enough for the moment.

Arjan Tijms

Java 7 one-liner to read file into string

24 March 2014

Reading in a file in Java used to require a lot of code. Various things had to be wrapped, loops with weird terminating conditions had to be specified, and so forth.

In Java 7 we can do a lot better. The actual code to do the reading is just:

String content = new String(readAllBytes(get("test.txt")));

As a full program that echos back the file’s content it looks like this:

import static java.lang.System.out;
import static java.nio.file.Files.readAllBytes;
import static java.nio.file.Paths.get;
 
public class Test {
    public static void main(String[] args) throws Exception {
        out.println(new String(readAllBytes(get("test.txt"))));
    }
}

Of course, if we want to be careful not to load a few gigabytes into memory, and if we want to pay attention to the character set (it’s the platform default now), we need a little more code, but for quick and dirty file reading this should do the trick.
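For example, a charset-explicit variant only needs one extra argument to the String constructor (UTF-8 is chosen here as an example; the write step merely creates a sample file for the demo):

```java
import static java.nio.file.Files.readAllBytes;
import static java.nio.file.Files.write;
import static java.nio.file.Paths.get;

import java.nio.charset.StandardCharsets;

public class TestUtf8 {
    public static void main(String[] args) throws Exception {
        // Create a small sample file containing non-ASCII text ("héllo wörld")
        write(get("test.txt"), "h\u00e9llo w\u00f6rld".getBytes(StandardCharsets.UTF_8));

        // Read it back with an explicit charset instead of the platform default
        System.out.println(new String(readAllBytes(get("test.txt")), StandardCharsets.UTF_8));
    }
}
```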

As a bonus, a version in Scala using the same JDK 7 APIs contributed by my fellow office worker Mark van der Tol:

import java.nio.file.Files.readAllBytes
import java.nio.file.Paths.get
 
object Main extends App {
    println(new String(readAllBytes(get("test.txt"))))
}

Arjan Tijms

WildFly 8 benchmarked

14 February 2014

The final version of WildFly 8 was released this week. WildFly is the new Java EE 7 compliant application server from Red Hat and the successor to JBoss AS 7. One of the major new features in WildFly is a new high-performance web server called Undertow, which replaces the Tomcat-based web container in previous versions of JBoss. As we’ve recently been benchmarking a new application, I was curious how WildFly 8 would perform. To find out, I decided to benchmark WildFly using this application and compare it against the latest version of JBoss EAP, version 6.2.

The application used for the benchmark was a simple JSF-based app. For each request a JSF Facelets template, which pulls some data from a backing bean, is rendered in real time. The backing bean in turn retrieves the data from a local cache, which is backed by a RESTful API and periodically refreshed. The refresh happens asynchronously, so as not to block any user requests. To achieve better performance, HTTP sessions were explicitly disabled for this application.

JSF’s stateless mode was activated as well. Although the JSF page that was rendered did not have any forms on it (and thus should not have had any state to begin with), this did in fact seem to give a small performance boost. However, the boost was so small that it fell within the fluctuation range we saw between runs, and it’s therefore hard to say whether this really mattered.
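For reference, JSF 2.2’s stateless mode is activated per view by marking the view root as transient:

```xml
<f:view transient="true">
    <!-- page content; no view state is saved for this view -->
</f:view>
```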

JMeter was used for the benchmark itself. The application and JMeter were both run on the same computer, a 3.4 GHz quad-core Intel Xeon with 16GB of RAM running Linux Mint 16. As the first release candidate of JDK8 was released last week, I decided to use both JDK7u45 and JDK8b128 in the benchmarks. Both JBoss EAP 6.2 and WildFly 8 were used out of the box; nothing was changed in standalone.xml or any other internal configuration file.

The benchmark itself was performed with 100 concurrent threads, each performing 2000 requests. For each application server and JDK version, four tests were performed directly after each other. The results from the first test were discarded, as the JVM was still warming up, and the throughput in requests per second was averaged over the remaining three tests. You can see the average throughput below.

[Chart: WildFly benchmark average throughput]

These averages, however, do not paint the full picture. Taking a closer look, the results of the individual JBoss EAP benchmark runs fluctuate a lot more than those of the WildFly runs.

[Chart: throughput of the individual test runs]

JBoss EAP seems to perform best on the second test run in both cases, but this could be a coincidence. What is clear is that the WildFly team has done a great job in creating an application server that, while it might not be outright faster, does achieve a similar level of performance with a greater level of consistency. For both JBoss EAP and WildFly, the JDK8 results still fall within the standard deviation of the JDK7 results, so JDK8 also seems to perform on a similar level to JDK7. It would be interesting to see how other application servers, like GlassFish, hold up against JBoss EAP and WildFly, so I may revisit this topic sometime soon.

Disabling all EJB timers in Java EE 6

29 October 2013

Java EE 7 has finally added a method to obtain all timers in the system. With the help of this method you can fairly conveniently cancel all timers, or only specific ones.

But Java EE 7 is still fairly new and not many vendors have released a Java EE 7 compatible server yet. So is there any way at all to, say, disable all scheduled timers in Java EE 6?

As it appears, this is possible with a little help from CDI and the Interceptor spec. The idea is to install a CDI extension that dynamically adds an interceptor to all @Schedule annotated methods. This interceptor then cancels the timer for which it intercepted the handler method. It would be great if the CDI extension were just able to remove the @Schedule annotation and we’d be done with it. Unfortunately this is yet another example of why it’s not so great that EJB is not fully aligned with CDI; even if the @Schedule annotation is removed from the so-called AnnotatedType, the EJB container will still start the timer, oblivious to the CDI representation of the bean.

The first step is to make an annotation that represents the interceptor we need:

@Inherited
@InterceptorBinding
@Target({ TYPE, METHOD })
@Retention(RUNTIME)
public @interface DisableTimers {}

We then proceed to the actual interceptor:

@Interceptor
@DisableTimers
public class DisableTimersInterceptor {
 
    @Inject
    private Logger logger;
 
    @AroundTimeout
    public Object disableTimers(InvocationContext context) throws Exception {
 
        try {
            Object timerObject = context.getTimer();
            if (timerObject instanceof Timer) {
                Timer timer = ((Timer) timerObject);
                logger.info("Canceling timer in bean " + context.getTarget().getClass().getName() + " for timer " + timer.toString());
                timer.cancel();
            }
        } catch (Exception e) {
            logger.log(SEVERE, "Exception while canceling timer:", e);
        }
 
        return null;
    }
}

Note that while there’s the general concept of an @AroundTimeout and the context has a getTimer() method, the actual type of the timer has not been globally standardized for Java EE. This means we have to resort to instance testing. It would be great if some future version of Java EE could define a standard interface that all eligible timers have to implement.

Also note that there isn’t a clean universal way to print the timer details, so I’ve used toString() here on the Timer instance. What this actually returns is vendor specific.

An alternative would have been to inject the timer service and use it to cancel all timers for the bean right away. This is perhaps a bit less intuitive though. Also note that at least on JBoss you cannot inject the timer service directly but have to specify a JNDI lookup name, e.g.:

@Resource(lookup="java:comp/TimerService")
public TimerService timerService;
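With that injected TimerService, the alternative mentioned above would, as a sketch, cancel all of the bean’s timers the first time any of them fires (method name is hypothetical):

```java
@AroundTimeout
public Object cancelAllBeanTimers(InvocationContext context) throws Exception {
    // getTimers() returns all active timers for this bean (available in Java EE 6)
    for (Timer timer : timerService.getTimers()) {
        timer.cancel();
    }
    return null;
}
```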

Unfortunately in Java EE 6 we have to register the interceptor in beans.xml:

<beans>
    <interceptors>
        <class>com.example.DisableTimersInterceptor</class>
    </interceptors>
</beans>

Next is the actual extension:

public class EjbTimerDisableExtension implements Extension {
 
    private static final Logger logger = Logger.getLogger(EjbTimerDisableExtension.class.getName());
 
    public <T> void processAnnotatedType(@Observes ProcessAnnotatedType<T> processAnnotatedType, BeanManager beanManager) {
        if (hasScheduleMethods(processAnnotatedType.getAnnotatedType())) {
 
            logger.log(INFO, "Disabling timer in " + processAnnotatedType.getAnnotatedType().getJavaClass().getName());
 
            AnnotatedTypeWrapper<T> annotatedTypeWrapper = new AnnotatedTypeWrapper<>(processAnnotatedType.getAnnotatedType());
 
            for (AnnotatedMethod<? super T> annotatedMethod : processAnnotatedType.getAnnotatedType().getMethods()) {
                if (annotatedMethod.isAnnotationPresent(Schedule.class)) {
 
                    AnnotatedMethodWrapper<? super T> annotatedMethodWrapper = new AnnotatedMethodWrapper<>(annotatedMethod);
                    annotatedMethodWrapper.addAnnotation(createAnnotationInstance(DisableTimers.class));
 
                    annotatedTypeWrapper.getMethods().remove(annotatedMethod);
                    annotatedTypeWrapper.getMethods().add(annotatedMethodWrapper);
                }
            }
 
            processAnnotatedType.setAnnotatedType(annotatedTypeWrapper);
        }
    }
 
    private <T> boolean hasScheduleMethods(AnnotatedType<T> annotatedType) {
        for (AnnotatedMethod<?> annotatedMethod : annotatedType.getMethods()) {
            if (annotatedMethod.isAnnotationPresent(Schedule.class)) {
                return true;
            }
        }
 
        return false;
    }
}

In this extension we check if a bean has methods with an @Schedule annotation, and if it indeed has one we wrap the passed-in annotated type and wrap any method representation that has this annotation. Via these wrappers we can remove the existing method and then add our own method where we dynamically add the interceptor annotation.

We need to register this extension in /META-INF/services/javax.enterprise.inject.spi.Extension by putting its FQN there:

com.example.EjbTimerDisableExtension

It’s perhaps unfortunate that CDI 1.0 doesn’t offer many convenience methods for wrapping its most important types (which e.g. JSF does) and doesn’t provide an easy way to create an annotation instance.

Luckily my co-worker Jan Beernink had already created some convenience types for those, which I could use:

The CDI type wrappers:

public class AnnotatedMethodWrapper<X> implements AnnotatedMethod<X> {
 
    private AnnotatedMethod<X> wrappedAnnotatedMethod;
 
    private Set<Annotation> annotations;
 
    public AnnotatedMethodWrapper(AnnotatedMethod<X> wrappedAnnotatedMethod) {
        this.wrappedAnnotatedMethod = wrappedAnnotatedMethod;
 
        annotations = new HashSet<>(wrappedAnnotatedMethod.getAnnotations());
    }
 
    @Override
    public List<AnnotatedParameter<X>> getParameters() {
        return wrappedAnnotatedMethod.getParameters();
    }
 
    @Override
    public AnnotatedType<X> getDeclaringType() {
        return wrappedAnnotatedMethod.getDeclaringType();
    }
 
    @Override
    public boolean isStatic() {
        return wrappedAnnotatedMethod.isStatic();
    }
 
    @Override
    public <T extends Annotation> T getAnnotation(Class<T> annotationType) {
        for (Annotation annotation : annotations) {
            if (annotationType.isInstance(annotation)) {
                return annotationType.cast(annotation);
            }
        }
 
        return null;
    }
 
    @Override
    public Set<Annotation> getAnnotations() {
        return Collections.unmodifiableSet(annotations);
    }
 
    @Override
    public Type getBaseType() {
        return wrappedAnnotatedMethod.getBaseType();
    }
 
    @Override
    public Set<Type> getTypeClosure() {
        return wrappedAnnotatedMethod.getTypeClosure();
    }
 
    @Override
    public boolean isAnnotationPresent(Class<? extends Annotation> annotationType) {
        for (Annotation annotation : annotations) {
            if (annotationType.isInstance(annotation)) {
                return true;
            }
        }
 
        return false;
    }
 
    @Override
    public Method getJavaMember() {
        return wrappedAnnotatedMethod.getJavaMember();
    }
 
    public void addAnnotation(Annotation annotation) {
        annotations.add(annotation);
    }
 
    public void removeAnnotation(Annotation annotation) {
        annotations.remove(annotation);
    }
 
    public void removeAnnotation(Class<? extends Annotation> annotationType) {
        Annotation annotation = getAnnotation(annotationType);
        if (annotation != null ) {
            removeAnnotation(annotation);
        }
    }
 
}

public class AnnotatedTypeWrapper<T> implements AnnotatedType<T> {
 
    private AnnotatedType<T> wrappedAnnotatedType;
 
    private Set<Annotation> annotations = new HashSet<>();
    private Set<AnnotatedMethod<? super T>> annotatedMethods = new HashSet<>();
    private Set<AnnotatedField<? super T>> annotatedFields = new HashSet<>();
 
    public AnnotatedTypeWrapper(AnnotatedType<T> wrappedAnnotatedType) {
        this.wrappedAnnotatedType = wrappedAnnotatedType;
 
        annotations.addAll(wrappedAnnotatedType.getAnnotations());
        annotatedMethods.addAll(wrappedAnnotatedType.getMethods());
        annotatedFields.addAll(wrappedAnnotatedType.getFields());
    }
 
    @Override
    public <A extends Annotation> A getAnnotation(Class<A> annotationType) {
        return wrappedAnnotatedType.getAnnotation(annotationType);
    }
 
    @Override
    public Set<Annotation> getAnnotations() {
        return annotations;
    }
 
    @Override
    public Type getBaseType() {
        return wrappedAnnotatedType.getBaseType();
    }
 
    @Override
    public Set<AnnotatedConstructor<T>> getConstructors() {
        return wrappedAnnotatedType.getConstructors();
    }
 
    @Override
    public Set<AnnotatedField<? super T>> getFields() {
        return annotatedFields;
    }
 
    @Override
    public Class<T> getJavaClass() {
        return wrappedAnnotatedType.getJavaClass();
    }
 
    @Override
    public Set<AnnotatedMethod<? super T>> getMethods() {
        return annotatedMethods;
    }
 
    @Override
    public Set<Type> getTypeClosure() {
        return wrappedAnnotatedType.getTypeClosure();
    }
 
    @Override
    public boolean isAnnotationPresent(Class<? extends Annotation> annotationType) {
        for (Annotation annotation : annotations) {
            if (annotationType.isInstance(annotation)) {
                return true;
            }
        }
 
        return false;
    }
 
}

And the utility code for instantiating an annotation type:

public class AnnotationUtils {
 
    private AnnotationUtils() {
    }
 
    /**
     * Create an instance of the specified annotation type. This method is only suited for annotations without any properties, for annotations with
     * properties, please see {@link #createAnnotationInstance(Class, InvocationHandler)}.
     *
     * @param annotationType
     *            the type of annotation
     * @return an instance of the specified type of annotation
     */
    public static <T extends Annotation> T createAnnotationInstance(Class<T> annotationType) {
        return createAnnotationInstance(annotationType, new AnnotationInvocationHandler<>(annotationType));
    }
 
    public static <T extends Annotation> T createAnnotationInstance(Class<T> annotationType, InvocationHandler invocationHandler) {
        return annotationType.cast(Proxy.newProxyInstance(AnnotationUtils.class.getClassLoader(), new Class<?>[] { annotationType },
                invocationHandler));
    }
}

/**
 * {@link InvocationHandler} implementation that implements the base methods required for a parameterless annotation. This handler only implements the
 * following methods: {@link Annotation#equals(Object)}, {@link Annotation#hashCode()}, {@link Annotation#annotationType()} and
 * {@link Annotation#toString()}.
 *
 * @param <T>
 *            the type of the annotation
 */
class AnnotationInvocationHandler<T extends Annotation> implements InvocationHandler {
 
    private Class<T> annotationType;
 
    /**
     * Create a new {@link AnnotationInvocationHandler} instance for the given annotation type.
     *
     * @param annotationType
     *            the annotation type this handler is for
     */
    public AnnotationInvocationHandler(Class<T> annotationType) {
        this.annotationType = annotationType;
    }
 
    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        switch (method.getName()) {
        case "toString":
            return "@" + annotationType.getName() + "()";
        case "annotationType":
            return annotationType;
        case "equals":
            return annotationType.isInstance(args[0]);
        case "hashCode":
            return 0;
        }
 
        return null;
    }
 
}

Conclusion

This approach is by far not as elegant as injecting an @Startup @Singleton with the timer service and canceling all timers in a simple loop, as we can do in Java EE 7, but it does work on Java EE 6. Timers are canceled one by one as they fire, and their handler methods are never invoked.
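In Java EE 7, that simpler approach would look roughly like this, using the new getAllTimers() method (class name is hypothetical):

```java
import javax.annotation.PostConstruct;
import javax.annotation.Resource;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.ejb.Timer;
import javax.ejb.TimerService;

@Startup
@Singleton
public class TimerDisabler {

    @Resource
    private TimerService timerService;

    @PostConstruct
    public void cancelAllTimers() {
        // getAllTimers() (new in Java EE 7) returns the timers of all beans in the module
        for (Timer timer : timerService.getAllTimers()) {
            timer.cancel();
        }
    }
}
```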

The approach of dynamically adding interceptors to specific methods can be used for other things as well (e.g. logging exceptions from @Asynchronous methods that return void, just to name something), so it's a generally useful technique.



Arjan Tijms

Eclipse 4.3 SR1 again silently released!

28 September 2013

Again rather silently, the Eclipse organization yesterday released the first maintenance release of Eclipse 4.3: Eclipse 4.3.1, aka Eclipse Kepler SR1.

Surprisingly (or maybe not), this event again isn't noted on the main homepage at eclipse.org or in the recent activity tracker. There also don't seem to be any release notes like the ones we had for the 4.3 release.

It seems these days the Eclipse home page is about anything and nothing except the thing we most closely associate with the term "Eclipse": the Eclipse IDE. Seemingly the IDE itself is far less important than "Concierge Creation Review Scheduled" and "Web-based BPM with Stardust".

Once again, fiddling with Bugzilla gave me a list of 112 bugs that are fixed in core packages.

Hopefully these fixes will remedy the random crashes I've experienced on Ubuntu 13.04, but I'm not holding my breath.

The good people at the WTP project did feel like posting about this event on their homepage, with a link to and short description of the 3.5.1 release of WTP. Again, the new and noteworthy page keeps pointing to the previous release, but there's a list of 51 fixed bugs available.

Community reporting seems to have reached a historic low. There's one enthusiastic user who created a rather minimalistic forum post about it, and that's pretty much it. Maybe a few lone tweets, but nothing major.

Are the community and the Eclipse organization losing interest in Eclipse, or is it just that SR releases aren't that exciting?

Serving multiple images from database as a CSS sprite

31 July 2013

Introduction

In the first public beta version of ZEEF, which was somewhat thrown together (first get the minimum working using standard techniques, then review, refactor and improve it), all favicons were served individually. Although they were set to be aggressively cached (1 year, whereby a reload is forced when necessary by the timestamp-in-query-string trick using the last-modified timestamp of the link), this resulted, in the case of an empty cache, in a ridiculous number of HTTP requests on a subject page with relatively many links, such as Curaçao by Bauke Scholtz:

Yes, 209 image requests, of which 10 are not for favicons, which nets 199 favicon requests. Yes, that many links are currently on the Curaçao subject. The average modern web browser has only 6~8 simultaneous connections available for a specific domain. That's thus a huge queue. You can see it in the screenshot: on an empty cache it took nearly 5 seconds to get them all (on a primed cache, it's less than 1 second).

If you look closer, you'll see that there's another problem with this approach: links which don't have a favicon re-request the very same default favicon again and again with a different last-modified timestamp of the link itself, ending up as copies of exactly the same image in the browser cache. Also, links from the same domain which share the same favicon have their favicons duplicated this way. In spite of the aggressive caching, this was simply too inefficient.

Converting images to common format and size

The most straightforward solution would be to serve all those favicons as a single CSS sprite and use CSS background-position to reference the right favicon in the sprite. This however requires that all favicons are first parsed and converted to a common format and size, which allows easy manipulation by the standard Java 2D API (ImageIO and friends) and easy generation of the CSS sprite image. PNG was chosen as the format as it's the most efficient lossless format. 16×16 was chosen as the default size.

As a first step, a favicon parser was created which verifies and parses the scraped favicon file and saves every found image as PNG (the ICO format can store multiple images, usually each with a different dimension, e.g. 16×16, 32×32, 64×64, etc). For this, Image4J (a mavenized fork with a bugfix) has been of great help. The original Image4J had only a minor bug: it ran in an infinite loop on favicons with broken metadata, such as this one. This was fixed by vijedi/image4j. However, when an ICO file contained multiple images, this fix discarded all images instead of only the broken one. So another bugfix was done on top of that (which, by the way, just leniently returns the "broken" image; in fact, only the metadata was broken, not the image content itself).

Every single favicon is now parsed by the ICODecoder and BMPDecoder of Image4J and then ImageIO#read() of the standard Java SE API, in this sequence. Whichever returns the first non-null BufferedImage(s) without exceptions wins. This step also allowed us to completely bypass the content-type check which we initially had, because we discovered that a lot of websites were doing a bad job here; some favicons were even served as text/html, which caused false negatives.

As a second step, if the parsing of a favicon resulted in at least one BufferedImage, but none of them had the 16×16 dimension, then it is created based on the first next larger dimension, which is resized back to 16×16 with help of thebuzzmedia/imgscalr, which yields high-quality resizing.

Finally, all formats are converted to PNG and saved in the DB (and cached on the local disk file system).
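
The resize-and-convert step can be sketched standalone. Note that this is only a minimal illustration using plain Java 2D scaling, not the actual ZEEF code, which uses imgscalr for higher-quality results; the class and method names are made up for this demo:

```java
import javax.imageio.ImageIO;
import java.awt.Graphics2D;
import java.awt.RenderingHints;
import java.awt.image.BufferedImage;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class FaviconNormalizer {

    static final int SIZE = 16;

    // Resize any decoded favicon image to 16x16 and re-encode it as PNG.
    static byte[] toPng16(BufferedImage source) throws IOException {
        BufferedImage target = new BufferedImage(SIZE, SIZE, BufferedImage.TYPE_INT_ARGB);
        Graphics2D g = target.createGraphics();
        g.setRenderingHint(RenderingHints.KEY_INTERPOLATION,
                RenderingHints.VALUE_INTERPOLATION_BILINEAR);
        g.drawImage(source, 0, 0, SIZE, SIZE, null); // scale down to 16x16
        g.dispose();

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        ImageIO.write(target, "png", out); // common lossless format
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // A stand-in for a decoded 64x64 ICO sub-image.
        BufferedImage big = new BufferedImage(64, 64, BufferedImage.TYPE_INT_ARGB);
        BufferedImage reread = ImageIO.read(new ByteArrayInputStream(toPng16(big)));
        System.out.println(reread.getWidth() + "x" + reread.getHeight()); // 16x16
    }
}
```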

Serving images as CSS sprite

For this a simple servlet is used which basically does the following in doGet() (error/cache checking omitted for simplicity):

Long pageId = Long.valueOf(request.getPathInfo().substring(1));
Page page = pageService.getById(pageId);
long lastModified = page.getLastModified();
byte[] content = faviconService.getSpriteById(pageId, lastModified);
 
if (content != null) { // Found same version in disk file system cache.
    response.getOutputStream().write(content);
    return;
}
 
Set<Long> faviconIds = new TreeSet<>();
faviconIds.add(0L); // Default favicon, appears as 1st image of sprite.
faviconIds.addAll(page.getFaviconIds());
 
int width = Favicon.DEFAULT_SIZE; // 16px.
int height = width * faviconIds.size();
 
BufferedImage sprite = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
Graphics2D graphics = sprite.createGraphics();
graphics.setBackground(new Color(0xff, 0xff, 0xff, 0)); // Transparent.
graphics.fillRect(0, 0, width, height);
 
int i = 0;
 
for (Long faviconId : faviconIds) {
    Favicon favicon = faviconService.getById(faviconId); // Loads from disk file system cache.
    byte[] content = favicon.getContent();
    BufferedImage image = ImageIO.read(new ByteArrayInputStream(content));
    graphics.drawImage(image, 0, width * i++, null);
}
 
ByteArrayOutputStream output = new ByteArrayOutputStream();
ImageIO.write(sprite, "png", output);
content = output.toByteArray();
faviconService.saveSprite(pageId, lastModified, content); // Store in disk file system cache.
response.getOutputStream().write(content);
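
Stripped of the servlet and persistence plumbing, the sprite-stacking itself can be tried in isolation. This is just a sketch of the image manipulation part of the servlet above; the class name is made up for this demo:

```java
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.util.ArrayList;
import java.util.List;

public class SpriteDemo {

    static final int SIZE = 16;

    // Stack 16x16 favicon images vertically into one transparent sprite.
    static BufferedImage buildSprite(List<BufferedImage> favicons) {
        BufferedImage sprite = new BufferedImage(SIZE, SIZE * favicons.size(), BufferedImage.TYPE_INT_ARGB);
        Graphics2D graphics = sprite.createGraphics();
        graphics.setBackground(new Color(0xff, 0xff, 0xff, 0)); // transparent background
        graphics.fillRect(0, 0, sprite.getWidth(), sprite.getHeight());

        for (int i = 0; i < favicons.size(); i++) {
            graphics.drawImage(favicons.get(i), 0, SIZE * i, null); // i-th icon at y = 16 * i
        }

        graphics.dispose();
        return sprite;
    }

    public static void main(String[] args) {
        List<BufferedImage> icons = new ArrayList<>();
        icons.add(new BufferedImage(SIZE, SIZE, BufferedImage.TYPE_INT_ARGB));
        icons.add(new BufferedImage(SIZE, SIZE, BufferedImage.TYPE_INT_ARGB));
        BufferedImage sprite = buildSprite(icons);
        System.out.println(sprite.getWidth() + "x" + sprite.getHeight()); // 16x32
    }
}
```

Each icon ends up at vertical offset 16 * i, which is exactly the offset the generated CSS selectors compensate for with background-position.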

To see it in action, you can get all favicons of the page Curaçao by Bauke Scholtz (which has page ID 18) as CSS sprite on the following URL: https://zeef.com/favicons/page/18.

Serving the CSS file containing sprite-image-specific selectors

In order to present the CSS sprite images in the right places, we also need a simple servlet which generates the desired CSS stylesheet containing sprite-image-specific selectors with the right background-position. The servlet basically does the following in doGet() (error/cache checking omitted to keep it simple):

Long pageId = Long.valueOf(request.getPathInfo().substring(1));
Page page = pageService.getById(pageId);
 
Set<Long> faviconIds = new TreeSet<>();
faviconIds.add(0L); // Default favicon, appears as 1st image of sprite.
faviconIds.addAll(page.getFaviconIds());
 
long lastModified = page.getLastModified().getTime();
int height = Favicon.DEFAULT_SIZE; // 16px.
 
PrintWriter writer = response.getWriter();
writer.printf("[class^='favicon-']{background-image:url('../page/%d?%d')!important}", 
    pageId, lastModified);
int i = 0;
 
for (Long faviconId : faviconIds) {
    writer.printf(".favicon-%s{background-position:0 -%spx}", faviconId, height * i++);
}

To see it in action, you can get the CSS file of the page Curaçao by Bauke Scholtz (which has page ID 18) on the following URL: https://zeef.com/favicons/css/18.

Note that the background-image URL has the page's last-modified timestamp in the query string, which should force a browser reload of the sprite whenever a link has been added/removed on the page. The CSS file itself also has such a query string, as you can see in the HTML source code of the ZEEF page, which is basically generated as follows:

<link id="favicons" rel="stylesheet" 
    href="//zeef.com/favicons/css/#{zeef.page.id}?#{zeef.page.lastModified.time}" />

Also note that the !important is there to overrule the default favicon in case serving the CSS sprite fails somehow. The default favicon is specified in the general layout CSS file layout.css as follows:

#blocks .link.block li .favicon,
#blocks .link.block li [class^='favicon-'] {
    position: absolute;
    left: -7px;
    top: 4px;
    width: 16px;
    height: 16px;
}
 
#blocks .link.block li [class^='favicon-'] {
    background-image: url("#{resource['zeef:images/default_favicon.png']}");
}

Referencing images in HTML

It's rather simple: the links are just generated in a loop whereby the favicon image is represented by a plain HTML <span> element, basically as follows:

<a id="link_#{linkPosition.id}" href="#{link.targetURL}" title="#{link.defaultTitle}">
    <span class="favicon-#{link.faviconId}" />
    <span class="text">#{linkPosition.displayTitle}</span>
</a>

The HTTP requests on image files have been reduced from 209 to 12 (note that the 10 non-favicon requests have increased to 11 due to changes in social buttons, but that's not related to the matter at hand):

On an empty cache it took on average only half a second to download the CSS file and another half a second to download the CSS sprite. On balance, that's thus 5 times faster with 197 fewer connections! On a primed cache it's not even requested at all. It should be noted that I'm behind a relatively slow network here and that the current ZEEF production server at a 3rd party host isn't using "state of the art" hardware yet. The hardware will be handpicked later on once we grow.

Reloading CSS sprite by JavaScript whenever necessary

When you're logged in as page owner, you can edit the page by adding/removing/drag'n'dropping links and blocks. This all takes place via ajax without a full page reload. Whenever necessary, the CSS sprite can be forced to reload during ajax oncomplete by the following script, which references the <link id="favicons">:

function reloadFavicons() {
    var $favicons = $("#favicons");
    $favicons.attr("href", $favicons.attr("href").replace(/\?.*/, "?" + new Date().getTime()));
}

Basically, it just updates the timestamp in the query string of the <link href>, which in turn forces the web browser to request it straight from the server instead of from the cache.

Note that in the case of newly added links which do not exist in the system yet, favicons are resolved asynchronously in the background and pushed back via Server-Sent Events. In this case, the new favicon is still downloaded individually and explicitly set as the CSS background image. You can find it in the global-push.js file:

function updateLink(data) {
    var $link = $("#link_" + data.id);
    $link.attr("title", data.title);
    $link.find(".text").text(data.text);
    $link.find("[class^='favicon-']").attr("class", "favicon")
        .css("background-image", "url(/favicons/link/" + data.icon + "?" + new Date().getTime() + ")");
    highlight($link);
}

But once the HTML DOM representation of the link or block is later ajax-updated after an edit or drag'n'drop, it will reference the CSS sprite again.

An individual favicon request is also done in the "Edit link" dialog. The servlet code for that is not exciting, but in case you're interested: the URL is like https://zeef.com/favicons/link/354 and all the servlet basically does is (error/cache checking omitted for brevity):

Long linkId = Long.valueOf(request.getPathInfo().substring(1));
Link link = linkService.getById(linkId);
Favicon favicon = faviconService.getById(link.getFaviconId());
byte[] content = favicon.getContent();
response.getOutputStream().write(content); // binary content, so the output stream, not the writer

Note that individual favicons are not downloaded by their own ID, but by the link ID, because a link doesn't necessarily have a favicon. This way the default favicon can easily be returned.


This article is also posted on balusc.blogspot.com.

Switching between data sources when using @DataSourceDefinition

21 May 2013

Prior to Java EE 6, setting up and configuring data sources had to be done in a proprietary (vendor-specific) way. In many cases this meant a data source had to be created inside the application server.

This may make sense when such an application server runs a multitude of applications and/or those applications are externally obtained. In such a case it's great that all those applications can share the same data source and the applications themselves don't dictate which database is being used (via hard-coded references to a specific driver).

However, when you run one application per server, and especially when that one application is your primary in-house developed code, it's not always that convenient; you will have to store your data source definitions somewhere away from your code (e.g. in a CFEngine-managed repository), and changes to the data source won't be pulled in together with new code when you pull from your SCM.

For those situations, especially when working in agile and devops-centered teams, Java EE 6 introduced the @DataSourceDefinition annotation and the data-source element for use in deployment descriptors such as web.xml.

Although some vendors were supposedly slightly reluctant to support this, it now works reasonably well.

A typical example:

web.xml

<data-source>
    <name>java:app/KickoffApp/kickoffDS</name>
    <class-name>org.h2.jdbcx.JdbcDataSource</class-name>
    <url>jdbc:h2:mem:test</url>
    <user>sa</user>
    <password>sa</password>
    <transactional>true</transactional>
    <isolation-level>TRANSACTION_READ_COMMITTED</isolation-level>
    <initial-pool-size>2</initial-pool-size>
    <max-pool-size>10</max-pool-size>
    <min-pool-size>5</min-pool-size>
    <max-statements>0</max-statements>
</data-source>

Source

One small issue remained though: how do you easily change the settings of such a data source for different stages (e.g. DEV, QA, Production)?

The official way in Java EE is by providing different versions of deployment descriptors like web.xml, but this has two problems:

  • The entire file needs to be swapped, even when only a few changes are needed
  • The file is embedded in the .war/.ear archive. Prying it open, changing the file and closing it again is tedious

One solution is to use build tools to swap in different versions of the deployment descriptor and/or to use placeholders that are replaced at build time. Although this is certainly an option, it doesn't always play nice with incremental builds in IDEs such as Eclipse and can be tricky (but not impossible) to fit into CI pipelines, where the build is tested on some local test server and then deployed as-is automatically to another server.

Another solution, which I would like to present here, is making use of a data source wrapper that loads its settings from a user-defined location (which can be parametrized) and passes them on to the real data source.

Design

Creating a wrapper is by itself simple enough, but one challenge lies in how properties are set on a DataSource. Hardly any properties are defined via an interface, and there's no universal setter or map available. Instead, the server inspects the DataSource for JavaBeans properties via reflection and calls those via reflection as well. Obviously a wrapper cannot dynamically add properties to itself at run time.

Fortunately, there are some standard properties defined in the JDBC spec that we can statically implement. It's perhaps a question whether we really need them, since we'll be setting most properties ourselves on the wrapped real data source, but it might be convenient to have them anyway.

We'll start off with a wrapper for CommonDataSource, which is the base interface for the most important data source types, such as the plain DataSource and XADataSource. The most important methods of this wrapper are initDataSource, get and set, and setWithConversion, which are discussed below. The full code is given at the end of this article.

In initDataSource we set the wrapped data source and collect its properties. There are many reflection libraries that make it easier to work with properties reflectively, but using the venerable java.beans.Introspector proved to be good enough here. The only extra thing needed was storing the obtained properties in a map (JDK 8 lambdas would surely make this particular task even more straightforward).

public void initDataSource(CommonDataSource dataSource) {
    this.commonDataSource = dataSource;
 
    try {
        Map<String, PropertyDescriptor> mutableProperties = new HashMap<>();
        for (PropertyDescriptor propertyDescriptor : getBeanInfo(dataSource.getClass()).getPropertyDescriptors()) {
            mutableProperties.put(propertyDescriptor.getName(), propertyDescriptor);
        }
 
        dataSourceProperties = unmodifiableMap(mutableProperties);
 
    } catch (IntrospectionException e) {
        throw new IllegalStateException(e);
    }
}

The next thing we do is create get and set methods for the obtained properties. Calling getReadMethod().invoke(…) on a given property isn't actually that bad, but the multitude of checked exceptions spoils the party a little. It would be really cool if there were just a Property type in the JDK with simple unchecked get and set methods, but as shown, it's nothing a little helper code can't fix:

@SuppressWarnings("unchecked")
public <T> T get(String name) {
    try {
        return (T) dataSourceProperties.get(name).getReadMethod().invoke(commonDataSource);
    } catch (IllegalAccessException | IllegalArgumentException | InvocationTargetException e) {
        throw new IllegalStateException(e);
    }
}
 
public void set(String name, Object value) {
    try {
        dataSourceProperties.get(name).getWriteMethod().invoke(commonDataSource, value);
    } catch (IllegalAccessException | IllegalArgumentException | InvocationTargetException e) {
        throw new IllegalStateException(e);
    }
}
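
Put together, the Introspector-backed property map and the get/set helpers can be exercised standalone. This is only a sketch against a hypothetical bean, not the actual wrapper:

```java
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.HashMap;
import java.util.Map;

public class IntrospectorDemo {

    // Hypothetical bean standing in for a concrete data source implementation.
    public static class Bean {
        private String serverName;
        public String getServerName() { return serverName; }
        public void setServerName(String serverName) { this.serverName = serverName; }
    }

    // Collect the JavaBeans properties of a class into a name -> descriptor map,
    // just like initDataSource does above.
    static Map<String, PropertyDescriptor> mapProperties(Class<?> beanClass) throws IntrospectionException {
        Map<String, PropertyDescriptor> properties = new HashMap<>();
        for (PropertyDescriptor descriptor : Introspector.getBeanInfo(beanClass).getPropertyDescriptors()) {
            properties.put(descriptor.getName(), descriptor);
        }
        return properties;
    }

    public static void main(String[] args) throws Exception {
        Map<String, PropertyDescriptor> properties = mapProperties(Bean.class);

        Bean bean = new Bean();
        // Set and get reflectively by property name, as the wrapper's set/get do.
        properties.get("serverName").getWriteMethod().invoke(bean, "database.example.com");
        System.out.println(properties.get("serverName").getReadMethod().invoke(bean));
    }
}
```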

Next we come to setWithConversion. This allows us to set a property of the correct type using a string representation of the value (which is what you typically have when reading such values from a property file). It appears this can be done with just a few lines of code using the java.beans.PropertyEditorManager class. From this you can obtain a kind of converter called a PropertyEditor based on a class type, which you can feed a String and which will return a value converted to the target type.

It stands to reason PropertyEditor was originally designed for a rather different environment, seeing that it contains methods for use with AWT, such as paintValue(Graphics gfx, Rectangle box), and a really obscure method called getJavaInitializationString() that generates a Java code fragment. It's a tad scary to realize these methods are present in code that runs deep inside a Java EE server with not a graphics card in sight, but alas, it's part of the JDK and, as it appears, used quite a lot on the server side by code that works with beans (like e.g. expression language).

Anyway, here’s the implementation:

public void setWithConversion(String name, String value) {
 
    PropertyDescriptor property = dataSourceProperties.get(name);
 
    PropertyEditor editor = findEditor(property.getPropertyType());
    editor.setAsText(value);
 
    try {
        property.getWriteMethod().invoke(commonDataSource, editor.getValue());
    } catch (IllegalAccessException | IllegalArgumentException | InvocationTargetException e) {
        throw new IllegalStateException(e);
    }
}
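
The PropertyEditor machinery can also be seen in isolation. This small sketch converts strings the same way setWithConversion does, relying on the default editors the JDK registers for primitive types (the class and method names are made up for this demo):

```java
import java.beans.PropertyEditor;
import java.beans.PropertyEditorManager;

public class PropertyEditorDemo {

    // Convert a string representation to the given target type, as setWithConversion does.
    static Object convert(Class<?> targetType, String text) {
        PropertyEditor editor = PropertyEditorManager.findEditor(targetType);
        editor.setAsText(text);
        return editor.getValue();
    }

    public static void main(String[] args) {
        // The JDK ships default editors for primitives and a few common types.
        Object port = convert(int.class, "5432");
        System.out.println(port.getClass().getSimpleName()); // Integer
        System.out.println(port);                            // 5432
    }
}
```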

Next we'll create a subclass that does the actual switching for a CommonDataSource. Unfortunately there is not really any notion of a lifecycle for a DataSource. The application server just creates an instance using a no-arguments constructor, calls setters on it, and then eventually retrieves a connection from it. If we want to be capable of accepting properties from the @DataSourceDefinition annotation or data-source element in addition to the ones we read from our own file, we have to use a little trick.

Initially we collect properties in a temporary map:

public void set(String name, Object value) {
    if (init) {
        super.set(name, value);
    } else {
        tempValues.put(name, value);
    }
}

When we receive the special property "configFile", we start the initialization:

public void setConfigFile(String configFile) {
    this.configFile = configFile;
    doInit();
}

In this initialization method we load our own properties, and from those fetch another special property called "className", which is the fully qualified class name of the actual data source. Using the setter methods shown above, we set the properties that we collected earlier as well as the properties we read ourselves:

public void doInit() {
 
    // Get the properties that were defined separately from the @DataSourceDefinition/data-source element
    Map<String, String> properties = PropertiesUtils.getFromBase(configFile);
 
    // Get & check the most important property; the class name of the data source that we wrap.
    String className = properties.get("className");
    if (className == null) {
        throw new IllegalStateException("Required parameter 'className' missing.");
    }
 
    initDataSource(newInstance(className));
 
    // Set the properties on the wrapped data source that were already set on this class before doInit()
    // was possible.
    for (Entry<String, Object> property : tempValues.entrySet()) {
        super.set(property.getKey(), property.getValue());
    }
 
    // Set the properties on the wrapped data source that were loaded from the external file.
    for (Entry<String, String> property : properties.entrySet()) {
        if (!property.getKey().equals("className")) {
            setWithConversion(property.getKey(), property.getValue());
        }
    }
 
    // After this properties will be set directly on the wrapped data source instance.
    init = true;
}
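
The two-phase buffering trick can be boiled down to a tiny self-contained sketch. Here a plain map stands in for the wrapped data source, so this only illustrates the pattern, not the actual wrapper:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of the two-phase property buffering: values set before init
// are parked in a temporary map and replayed once the real target exists.
public class TwoPhaseDemo {

    private boolean init;
    private final Map<String, Object> tempValues = new HashMap<>();
    private final Map<String, Object> target = new HashMap<>(); // stands in for the wrapped data source

    void set(String name, Object value) {
        if (init) {
            target.put(name, value); // after init, go straight to the target
        } else {
            tempValues.put(name, value); // before init, buffer
        }
    }

    Object get(String name) {
        return init ? target.get(name) : tempValues.get(name);
    }

    void doInit() {
        tempValues.forEach(target::put); // replay buffered values onto the target
        init = true;
    }

    public static void main(String[] args) {
        TwoPhaseDemo demo = new TwoPhaseDemo();
        demo.set("user", "sa");         // buffered: the wrapped instance doesn't exist yet
        demo.doInit();                  // "configFile" arrived; buffer replayed
        demo.set("password", "secret"); // now passes straight through
        System.out.println(demo.get("user"));     // sa
        System.out.println(demo.get("password")); // secret
    }
}
```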

Because of the JDBC distinction between different data source types, the one thing left to do is to create the subclass that contains the methods specific to that data source type. This is a small nuisance, but otherwise rather straightforward. For an XA data source it contains the two getXAConnection methods, e.g.:

public XADataSource getWrapped() {
    return (XADataSource) super.getWrapped();
}
 
public XAConnection getXAConnection() throws SQLException {
    return getWrapped().getXAConnection();
}

Finally, via PropertiesUtils.getFromBase(configFile), a config file is loaded from some location based on a system property (-D command line option). At the end of the article a somewhat hacky example is shown for loading this from the root of an EAR archive. Unfortunately the root of an EAR is not on the classpath and neither is its META-INF, therefore the chosen solution is rather hacky. It works on JBoss AS 7.x though.

For a WAR archive there's no super-convenient location. /conf in the root would be ideal, but unfortunately in a WAR the root is also not on the classpath and instead directly contains the resources that are made available to web clients. WEB-INF/classes/conf or WEB-INF/classes/META-INF/conf would be the most practical locations.

Usage

Having created all the classes, the wrapper class can be specified in e.g. application.xml as follows:

<data-source>
    <name>java:app/myDS</name>
    <class-name>com.example.SwitchableXADataSource</class-name>
 
    <property>
        <name>configFile</name>
        <value>datasource-settings.xml</value>
    </property>
 
</data-source>

Specific settings can be put in e.g. an XML properties file as follows:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
 
    <entry key="className">org.postgresql.xa.PGXADataSource</entry>
 
    <entry key="user">user</entry>
    <entry key="password">password</entry>
 
    <entry key="serverName">database.example.com</entry>
    <entry key="databaseName">example_db</entry>
    <entry key="portNumber">5432</entry>
 
</properties>

Source

CommonDataSourceWrapper

package com.example;
 
import static java.beans.Introspector.getBeanInfo;
import static java.beans.PropertyEditorManager.findEditor;
import static java.util.Collections.unmodifiableMap;
 
import java.beans.IntrospectionException;
import java.beans.PropertyDescriptor;
import java.beans.PropertyEditor;
import java.lang.reflect.InvocationTargetException;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;
import java.util.HashMap;
import java.util.Map;
import java.util.logging.Logger;
 
import javax.sql.CommonDataSource;
 
public class CommonDataSourceWrapper implements CommonDataSource {
 
    private CommonDataSource commonDataSource;
    private Map<String, PropertyDescriptor> dataSourceProperties; 
 
    public void initDataSource(CommonDataSource dataSource) {
        this.commonDataSource = dataSource;
 
        try {
            Map<String, PropertyDescriptor> mutableProperties = new HashMap<>();
            for (PropertyDescriptor propertyDescriptor : getBeanInfo(dataSource.getClass()).getPropertyDescriptors()) {
                mutableProperties.put(propertyDescriptor.getName(), propertyDescriptor);
            }
 
            dataSourceProperties = unmodifiableMap(mutableProperties);
 
        } catch (IntrospectionException e) {
            throw new IllegalStateException(e);
        }
    }
 
    @SuppressWarnings("unchecked")
    public <T> T get(String name) {
        try {
            return (T) dataSourceProperties.get(name).getReadMethod().invoke(commonDataSource);
        } catch (IllegalAccessException | IllegalArgumentException | InvocationTargetException e) {
            throw new IllegalStateException(e);
        }
    }
 
    public void set(String name, Object value) {
        try {
            dataSourceProperties.get(name).getWriteMethod().invoke(commonDataSource, value);
        } catch (IllegalAccessException | IllegalArgumentException | InvocationTargetException e) {
            throw new IllegalStateException(e);
        }
    }
 
 
    public void setWithConversion(String name, String value) {
 
        PropertyDescriptor property = dataSourceProperties.get(name);
 
        PropertyEditor editor = findEditor(property.getPropertyType());
        editor.setAsText(value);
 
        try {
            property.getWriteMethod().invoke(commonDataSource, editor.getValue());
        } catch (IllegalAccessException | IllegalArgumentException | InvocationTargetException e) {
            throw new IllegalStateException(e);
        }
    }
 
    public CommonDataSource getWrapped() {
        return commonDataSource;
    }
 
 
    // ------------------------- CommonDataSource-----------------------------------
 
    @Override
    public java.io.PrintWriter getLogWriter() throws SQLException {
        return commonDataSource.getLogWriter();
    }
 
    @Override
    public void setLogWriter(java.io.PrintWriter out) throws SQLException {
        commonDataSource.setLogWriter(out);
    }
 
    @Override
    public void setLoginTimeout(int seconds) throws SQLException {
        commonDataSource.setLoginTimeout(seconds);
    }
 
    @Override
    public int getLoginTimeout() throws SQLException {
        return commonDataSource.getLoginTimeout();
    }
 
    // ------------------------- CommonDataSource JDBC 4.1 -----------------------------------
 
    @Override
    public Logger getParentLogger() throws SQLFeatureNotSupportedException {
        return commonDataSource.getParentLogger();
    }
 
 
    // ------------------------- Common properties -----------------------------------
 
    public String getServerName() {
        return get("serverName");
    }
 
    public void setServerName(String serverName) {
        set("serverName", serverName);
    }
 
    public String getDatabaseName() {
        return get("databaseName");
    }
 
    public void setDatabaseName(String databaseName) {
        set("databaseName", databaseName);
    }
 
    public int getPortNumber() {
        return get("portNumber");
    }
 
    public void setPortNumber(int portNumber) {
        set("portNumber", portNumber);
    }
 
    public void setPortNumber(Integer portNumber) {
        set("portNumber", portNumber);
    }
 
    public String getUser() {
        return get("user");
    }
 
    public void setUser(String user) {
        set("user", user);
    }
 
    public String getPassword() {
        return get("password");
    }
 
    public void setPassword(String password) {
        set("password", password);
    }
 
    public String getCompatible() {
        return get("compatible");
    }
 
    public void setCompatible(String compatible) {
        set("compatible", compatible);
    }
 
    public int getLogLevel() {
        return get("logLevel");
    }
 
    public void setLogLevel(int logLevel) {
        set("logLevel", logLevel);
    }
 
    public int getProtocolVersion() {
        return get("protocolVersion");
    }
 
    public void setProtocolVersion(int protocolVersion) {
        set("protocolVersion", protocolVersion);
    }
 
    public void setPrepareThreshold(int prepareThreshold) {
        set("prepareThreshold", prepareThreshold);
    }
 
    public void setReceiveBufferSize(int receiveBufferSize) {
        set("receiveBufferSize", receiveBufferSize);
    }
 
    public void setSendBufferSize(int sendBufferSize) {
        set("sendBufferSize", sendBufferSize);
    }
 
    public int getPrepareThreshold() {
        return get("prepareThreshold");
    }
 
    public void setUnknownLength(int unknownLength) {
        set("unknownLength", unknownLength);
    }
 
    public int getUnknownLength() {
        return get("unknownLength");
    }
 
    public void setSocketTimeout(int socketTimeout) {
        set("socketTimeout", socketTimeout);
    }
 
    public int getSocketTimeout() {
        return get("socketTimeout");
    }
 
    public void setSsl(boolean ssl) {
        set("ssl", ssl);
    }
 
    public boolean getSsl() {
        return get("ssl");
    }
 
    public void setSslfactory(String sslfactory) {
        set("sslfactory", sslfactory);
    }
 
    public String getSslfactory() {
        return get("sslfactory");
    }
 
    public void setApplicationName(String applicationName) {
        set("applicationName", applicationName);
    }
 
    public String getApplicationName() {
        return get("applicationName");
    }
 
    public void setTcpKeepAlive(boolean tcpKeepAlive) {
        set("tcpKeepAlive", tcpKeepAlive);
    }
 
    public boolean getTcpKeepAlive() {
        return get("tcpKeepAlive");
    }
 
    public void setBinaryTransfer(boolean binaryTransfer) {
        set("binaryTransfer", binaryTransfer);
    }
 
    public boolean getBinaryTransfer() {
        return get("binaryTransfer");
    }
 
    public void setBinaryTransferEnable(String binaryTransferEnable) {
        set("binaryTransferEnable", binaryTransferEnable);
    }
 
    public String getBinaryTransferEnable() {
        return get("binaryTransferEnable");
    }
 
    public void setBinaryTransferDisable(String binaryTransferDisable) {
        set("binaryTransferDisable", binaryTransferDisable);
    }
 
    public String getBinaryTransferDisable() {
        return get("binaryTransferDisable");
    }
 
}

SwitchableCommonDataSource

package com.example;
 
import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;
 
import javax.sql.CommonDataSource;
 
public class SwitchableCommonDataSource extends CommonDataSourceWrapper {
 
    private boolean init;
    private String configFile;
    private Map<String, Object> tempValues = new HashMap<>();
 
    @Override
    public void set(String name, Object value) {
        if (init) {
            super.set(name, value);
        } else {
            tempValues.put(name, value);
        }
    }
 
    @SuppressWarnings("unchecked")
    @Override
    public <T> T get(String name) {
        if (init) {
            return super.get(name);
        } else {
            return (T) tempValues.get(name);
        }
    }
 
    public String getConfigFile() {
        return configFile;
    }
 
    public void setConfigFile(String configFile) {
        this.configFile = configFile;
 
        // Nasty, but there's not an @PostConstruct equivalent on a DataSource that's called
        // when all properties have been set.
        doInit();
    }
 
    public void doInit() {
 
        // Get the properties that were defined separately from the @DataSourceDefinition/data-source element
        Map<String, String> properties = PropertiesUtils.getFromBase(configFile);
 
        // Get & check the most important property; the class name of the data source that we wrap.
        String className = properties.get("className");
        if (className == null) {
            throw new IllegalStateException("Required parameter 'className' missing.");
        }
 
        initDataSource(newInstance(className));
 
        // Set the properties on the wrapped data source that were already set on this class before doInit()
        // was possible.
        for (Entry<String, Object> property : tempValues.entrySet()) {
            super.set(property.getKey(), property.getValue());
        }
 
        // Set the properties on the wrapped data source that were loaded from the external file.
        for (Entry<String, String> property : properties.entrySet()) {
            if (!property.getKey().equals("className")) {
                setWithConversion(property.getKey(), property.getValue());
            }
        }
 
        // After this, properties will be set directly on the wrapped data source instance.
        init = true;
    }
 
    private CommonDataSource newInstance(String className) {
        try {
            return (CommonDataSource) Class.forName(className).newInstance();
        } catch (InstantiationException | IllegalAccessException | ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }
 
}
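
The buffering trick above (collect property values until the wrapped instance exists, then replay them onto it) can be illustrated in isolation. The following is a minimal self-contained sketch of that pattern; the names are illustrative and not the article's actual classes:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal illustration of the buffer-then-replay pattern used by
// SwitchableCommonDataSource: values set before init are parked in a map
// and replayed onto the real target once it exists.
public class BufferingDemo {

    private final Map<String, Object> tempValues = new HashMap<>();
    private Map<String, Object> target; // stands in for the wrapped data source
    private boolean init;

    public void set(String name, Object value) {
        if (init) {
            target.put(name, value);
        } else {
            tempValues.put(name, value);
        }
    }

    public Object get(String name) {
        return init ? target.get(name) : tempValues.get(name);
    }

    public void doInit() {
        target = new HashMap<>();
        tempValues.forEach(target::put); // replay the buffered values
        init = true;
    }

    public static void main(String[] args) {
        BufferingDemo demo = new BufferingDemo();
        demo.set("user", "jdoe");       // buffered; no target yet
        demo.doInit();                  // target created, buffer replayed
        demo.set("password", "s3cret"); // goes directly to the target now
        System.out.println(demo.get("user") + " " + demo.get("password")); // jdoe s3cret
    }
}
```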

SwitchableXADataSource

package com.example;
 
import java.sql.SQLException;
 
import javax.sql.XAConnection;
import javax.sql.XADataSource;
 
public class SwitchableXADataSource extends SwitchableCommonDataSource implements XADataSource {
 
    @Override
    public XADataSource getWrapped() {
        return (XADataSource) super.getWrapped();
    }
 
 
    // ------------------------- XADataSource-----------------------------------
 
    @Override
    public XAConnection getXAConnection() throws SQLException {
        return getWrapped().getXAConnection();
    }
 
    @Override
    public XAConnection getXAConnection(String user, String password) throws SQLException {
        return getWrapped().getXAConnection(user, password);
    }
 
}

PropertiesUtils

package com.example;
 
import static java.lang.System.getProperty;
import static java.util.Collections.unmodifiableMap;
import static java.util.logging.Level.SEVERE;
 
import java.io.IOException;
import java.net.URL;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import java.util.logging.Logger;
 
public class PropertiesUtils {
 
    private static final Logger logger = Logger.getLogger(PropertiesUtils.class.getName());
 
    public static Map<String, String> getFromBase(String base) {
        String earBaseUrl = getEarBaseUrl();
        String stage = getProperty("example.staging");
        if (stage == null) {
            throw new IllegalStateException("example.staging property not found. Please add it, e.g. -Dexample.staging=dev");
        }
 
        Map<String, String> settings = new HashMap<>();
 
        loadXMLFromUrl(earBaseUrl + "/conf/" + base, settings);
        loadXMLFromUrl(earBaseUrl + "/conf/" + stage + "/" + base, settings);
 
        return unmodifiableMap(settings);
    }
 
    public static String getEarBaseUrl() {

        // Locate a known marker resource inside one of the EAR's jar files and
        // derive the EAR root from the marker's URL.
        URL dummyUrl = Thread.currentThread().getContextClassLoader().getResource("META-INF/dummy.txt");
        String dummyExternalForm = dummyUrl.toExternalForm();
 
        int ejbJarPos = dummyExternalForm.lastIndexOf(".jar");
        if (ejbJarPos != -1) {
 
            String withoutJar = dummyExternalForm.substring(0, ejbJarPos);
            int lastSlash = withoutJar.lastIndexOf('/');
 
            return withoutJar.substring(0, lastSlash);
        }
 
        throw new IllegalStateException("Can't derive EAR root from: " + dummyExternalForm);
    }
 
    public static void loadXMLFromUrl(String url, Map<String, String> settings) {
 
        try {
            Properties properties = new Properties();
            properties.loadFromXML(new URL(url).openStream());
 
            logger.info(String.format("Loaded %d settings from %s.", properties.size(), url));
 
            settings.putAll(asMap(properties));
 
        } catch (IOException e) {
            logger.log(SEVERE, "Error while loading settings.", e);
        }
    }
 
    @SuppressWarnings({ "rawtypes", "unchecked" })
    private static Map<String, String> asMap(Properties properties) {
        return (Map<String, String>) (Map) properties;
    }
 
}
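
To make the trimming in getEarBaseUrl() concrete, here's a standalone sketch of the same logic applied to a hypothetical exploded-EAR resource URL (the paths are illustrative only):

```java
public class EarBaseUrlDemo {

    // Same trimming logic as getEarBaseUrl(): strip everything from the
    // last ".jar" onwards, then strip the jar's own name.
    static String earBase(String dummyExternalForm) {
        int ejbJarPos = dummyExternalForm.lastIndexOf(".jar");
        if (ejbJarPos == -1) {
            throw new IllegalStateException("Can't derive EAR root from: " + dummyExternalForm);
        }

        String withoutJar = dummyExternalForm.substring(0, ejbJarPos);

        return withoutJar.substring(0, withoutJar.lastIndexOf('/'));
    }

    public static void main(String[] args) {
        // Hypothetical URL of META-INF/dummy.txt inside an EJB jar in an exploded EAR
        String url = "file:/opt/server/deploy/app.ear/my-ejb.jar/META-INF/dummy.txt";
        System.out.println(earBase(url)); // file:/opt/server/deploy/app.ear
    }
}
```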

Future

It would be great if Java EE had some more support for easily switching between different configurations, without necessarily having to install such configuration on a dedicated server. A while back I created JAVAEE_SPEC-19 for this.

A somewhat cleaner Property in Java SE and a simple Converter that functions like the existing PropertyEditor but doesn’t have the AWT paint related baggage would be a small but still welcome improvement as well.

For programmatic access to configuration files it would be really helpful if the root of an EAR archive, or at least its META-INF folder, could be put on the classpath. For a WAR, an archive type where the web resources were in a sub folder (e.g. web-resources) and the root or WEB-INF was on the classpath would be great as well, although such a big change is rather unlikely to happen anytime soon.

Specifically for data sources it would also be a definite improvement if the Java EE spec mandated that data sources referenced by @DataSourceDefinition can be loaded from the WAR/EAR archive. Currently it doesn't. Most vendors support it anyway, but GlassFish doesn't.

As mentioned above, it would be great if Java EE had support for switching configuration, but for now the switchable data source as presented in this article can be used as an alternative.
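
As an illustration, such a switchable data source might be declared roughly as follows. The JNDI name, holder class, and config file name below are hypothetical; configFile corresponds to the property defined on SwitchableCommonDataSource in this article, and the stage is selected via e.g. -Dexample.staging=dev:

```java
@DataSourceDefinition(
    name = "java:app/jdbc/exampleDS",
    className = "com.example.SwitchableXADataSource",
    properties = { "configFile=datasource.xml" }
)
public class DataSourceConfig {
    // Annotation holder only; the actual configuration lives in
    // conf/datasource.xml and conf/[stage]/datasource.xml within the archive.
}
```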


Arjan Tijms

Simple Java based JSF 2.2 custom component

11 May 2013

Components play a central role in JSF; it is, after all, a component-based framework.

As mentioned in a previous blog posting, creating custom components was a lot of effort in JSF 1.x, but became significantly easier in JSF 2.0.

Nevertheless, a few tedious steps remained when the component was to be used on a Facelet (the overwhelmingly common case): a -taglib.xml file declaring a tag for the component, and, when the component's Java code resides directly in a web project (as opposed to a jar), an entry in web.xml pointing to that -taglib.xml file.
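
For comparison, the pre-JSF 2.2 approach required a Facelets tag library file along these lines (the namespace here is purely illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<facelet-taglib xmlns="http://java.sun.com/xml/ns/javaee"
    version="2.0"
>
    <namespace>http://example.com/test</namespace>
    <tag>
        <tag-name>customComponent</tag-name>
        <component>
            <component-type>components.CustomComponent</component-type>
        </component>
    </tag>
</facelet-taglib>
```

plus, when the component lived directly in a web project, a javax.faces.FACELETS_LIBRARIES context parameter in web.xml pointing to this file.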

In JSF 2.2 these two tedious steps are no longer needed, as the Facelets component tag can be declared using the existing @FacesComponent annotation.

As an update to the original blog posting, a simple Java based custom component can be created as follows:

components/CustomComponent.java

package components;

import java.io.IOException;

import javax.faces.component.FacesComponent;
import javax.faces.component.UIComponentBase;
import javax.faces.context.FacesContext;
import javax.faces.context.ResponseWriter;

@FacesComponent(value = "components.CustomComponent", createTag = true)
public class CustomComponent extends UIComponentBase {
 
    @Override
    public String getFamily() {        
        return "my.custom.component";
    }
 
    @Override
    public void encodeBegin(FacesContext context) throws IOException {
 
        String value = (String) getAttributes().get("value");
 
        if (value != null) {        
            ResponseWriter writer = context.getResponseWriter();
            writer.write(value.toUpperCase());
        }
    }
}

This is all there is to it. The above fully defines a Java based JSF custom component. There is not a single extra registration and not a single XML file needed to use this on a Facelet, e.g.

page.xhtml

<html xmlns="http://www.w3.org/1999/xhtml"
    xmlns:h="http://java.sun.com/jsf/html"
    xmlns:test="http://xmlns.jcp.org/jsf/component"
>
    <h:body>
        <test:customComponent value="test"/>        
    </h:body>
</html>

Just these two files (and only these two files) fully constitute a Java EE/JSF application. The .java file does need to be compiled to a .class of course, but then just these two can be deployed to a Java EE 7 server. There’s not a single extra (XML) file, manifest, lib, or whatever else needed as shown in the image below:

[Image: custom_component_deploy]

Using GlassFish 4.0 b88, requesting http://localhost:8080/customcomponent/page.jsf will simply result in a page displaying:

TEST

So can this be made any simpler? Well, maybe there's still some room for improvement. What about the getFamily method that still needs to be implemented? It would be great if that too could be defaulted. Likewise, the component name could be defaulted as well, and while we're at it, let's give createTag a default value of true when the component name is defaulted (only in that case, so as not to cause backwards compatibility issues).

Another improvement would be if component attributes could be declared JPA-style via annotations. That way tools can learn about their existence and the code could become a tiny bit simpler. E.g.

@FacesComponent
public class CustomComponent extends UIComponentBase {
 
    @Attribute
    String value;
 
    @Override
    public void encodeBegin(FacesContext context) throws IOException { 
        if (value != null) {        
            ResponseWriter writer = context.getResponseWriter();
            writer.write(value.toUpperCase());
        }
    }
}

The above version is not reality yet, but the version at the beginning of this article is and that one is really pretty simple already.


Arjan Tijms

Eclipse 4.2 SR2 released!

2 March 2013

Today the Eclipse organization released the second maintenance release of Eclipse 4.2; Eclipse 4.2.2 aka Eclipse Juno SR2.

This time around the event is actually noted on the main homepage at eclipse.org, where it briefly says:

The packages for the Juno SR2 release are now available for download.

Lists of bugs that were resolved can be found when clicking on the details link for a specific Eclipse variant on the above mentioned download page.

I didn’t see a complete list anywhere, but some fiddling with Bugzilla again revealed a list of 162 bugs that are fixed in core packages.

For the second time in a row the WTP project also posted about this event on their homepage. Following the Eclipse 4.2.2 release train, WTP was upgraded from 3.4.1 to 3.4.2. There’s again no release specific “new and noteworthy”, but there are release notes, which point to a matrix that shows no less than 77 bugs were fixed.

Community reporting about 4.2 SR2 is a little underwhelming again. DZone has a highly voted Eclipse 4.2 sr2 news blurb, and there’s the lone tweet on twitter, but that seems to be about it.

Of course the highlight of this release is the much awaited answer to the abysmal performance that has plagued the 4.2 series since it was released last year. A separate patch has been available for some time, but this has now been integrated into the ready-to-download official release.

Reports about the patch have been largely positive, so it’s to be expected that Eclipse 4.2 SR2 will indeed perform better. Further performance improvements are promised for Eclipse 4.3 (Kepler).
