Providing alternatives for JSF 2.3’s injected artifacts

6 November 2012

At the JSF 2.3 EG we’re currently busy with introducing the ability to inject several of JSF’s own artifacts into your own beans.

On the implementation side this is done via a dynamic CDI producer. There’s for instance a producer for the FacesContext, which is then registered via a CDI extension.

This can be tested via a simple test application. See these instructions for how to obtain a JSF 2.3 snapshot build and update GlassFish with it.

The test application will consist of the following code:


<?xml version="1.0" encoding="UTF-8"?>
<faces-config xmlns="http://xmlns.jcp.org/xml/ns/javaee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-facesconfig_2_3.xsd"
    version="2.3">
</faces-config>

This file is needed to activate injection of JSF artifacts. For backwards compatibility reasons this feature is only activated when running with a JSF 2.3 deployment descriptor. The second purpose of a (near) empty faces-config.xml is to signal JSF to automatically map the FacesServlet, so we don’t have to create a more verbose web.xml with an explicit mapping. (The default mappings are not ideal, however, as the most obvious one, *.xhtml, is missing. This is something we hope to rectify in JSF 2.3 as well.)


An empty beans.xml is still needed in GlassFish 4.1 to actually enable CDI in a web archive.
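For reference, such a beans.xml could look like the following minimal sketch (the CDI 1.1 descriptor; a completely empty file works as well):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://xmlns.jcp.org/xml/ns/javaee"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/beans_1_1.xsd"
       bean-discovery-mode="all" version="1.1">
</beans>
```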


<!DOCTYPE html>
<html lang="en" xmlns:jsf="http://xmlns.jcp.org/jsf">
    <head jsf:id="head">
        <title>FacesContext inject test app</title>
    </head>
    <body jsf:id="body">
        #{testBean.test}
    </body>
</html>

[java src]/test/

package test;

import javax.enterprise.context.RequestScoped;
import javax.faces.context.FacesContext;
import javax.inject.Inject;
import javax.inject.Named;

@Named
@RequestScoped
public class TestBean {

    @Inject
    private FacesContext context;

    public String getTest() {
        return context.toString();
    }
}

Deploying this to our updated GlassFish and requesting http://localhost:8080/itest/index.jsf will result in something like the following:


So injection works! Now what if we want to “override” the default producer provided by JSF, e.g. what if we want to provide our own alternative implementation?

The answer is to provide your own producer, but mark it as @Dependent, @Alternative and @Priority. E.g. add the following class to the files shown above:

[java src]/test/

package test;

import static javax.interceptor.Interceptor.Priority.APPLICATION;

import javax.annotation.Priority;
import javax.enterprise.context.Dependent;
import javax.enterprise.inject.Alternative;
import javax.enterprise.inject.Produces;
import javax.faces.context.FacesContext;
import javax.faces.context.FacesContextWrapper;

public class ContextProducer {

    @Produces
    @Dependent
    @Alternative
    @Priority(APPLICATION)
    public FacesContext producer() {
        return new FacesContextWrapper() {

            @Override
            public String toString() {
                return "Still ours";
            }

            @Override
            public FacesContext getWrapped() {
                return FacesContext.getCurrentInstance();
            }
        };
    }
}

Then deploying this again and requesting http://localhost:8080/itest/index.jsf once more will now result in the following:

Still ours 

As we see, the JSF provided producer can be overridden by standard CDI means.

The feature is not finalized yet so things may still change, but hopefully this gives some idea of what direction JSF 2.3 is moving in.

Arjan Tijms

Eclipse 4.2 SR1 silently released!

30 September 2012

Rather silently, the Eclipse organization 2 days ago released the first maintenance release of Eclipse 4.2; Eclipse 4.2.1 aka Eclipse Juno SR1.

Surprisingly, this event isn’t noted on the main homepage or on the recent activity tracker. There also don’t seem to be any release notes, like the ones we had for the 4.2 release.

Fiddling with Bugzilla gave me a list of 80 bugs that are fixed in core packages.

This time around, the WTP project did feel obliged to post about this event on their homepage. Following the Eclipse 4.2.1 release train, WTP was upgraded from 3.4.0 to 3.4.1. The famous “new and noteworthy” of WTP 3.4.1 unfortunately still points to the previous 3.4.0 release, but there is a list of fixed bugs available that luckily does point to the right version.

Community reporting about 4.2 SR1 has been equally underwhelming, although just today Steffen Schäfer posted about this release, focusing on the new JGit/Git 2.1 versions. Besides him there’s a Chinese post about 4.2.1, which just says the following:

To enhance performance, fixed many bug and the export of war with java source code bug fixes!

Is it perhaps the case that, with the increasing size, complexity and sheer number of plug-ins and projects hosted there, the one thing that we all simply call “Eclipse” becomes more and more difficult to identify as a specific product, and as such harder to report about?

And what about the adoption of the Eclipse 4.2 platform? There has been much debate recently about the abysmal performance of the new platform. Is SR1 a step in the right direction, or do we have to wait for 4.2 SR2, or possibly even 4.3?

What’s new in JSF 2.2?

1 September 2012


Sample CRUD application with JSF and RichFaces

30 March 2012

During my thesis project I will be using JavaServer Faces, so it is important that I get familiar with the framework. To that end I made a small CRUD (Create, Read, Update, Delete) application: a simple application that makes it possible to keep track of users. It consists of a user list, a page to add/edit users and a page to delete a user.

The code

The research project will focus on when it is beneficial to use client-side scripting instead of, or complementary to, server-side programming. To get up to speed, a small CRUD application was made without the use of client-side scripting, and the same CRUD application was then adapted to use client-side scripting by using RichFaces. The client-side scripting was added to the editing form to allow validating the form without needing to make requests to the server.

The structure of the source project is as follows:

  • backing
    • – Backing for index.xhtml
    • – Backing for UserDelete.xhtml
    • – Backing for UserEdit.xhtml
  • constraints
    • –  Validation annotation for fields. Fields with this annotation are validated to be a proper email address.
    • – Performs the validation for email addresses.
  • ejb
    • – Data Access Object for users.
  • entities
    • – Bean object for a user. The fields of this object are annotated with validators.
    • – Converts a userId to a user.
  • util
    • – Utility object that helps with sending messages between pages.

The following JSF pages are in the project:

  • index.xhtml – Page with a list with all users
  • user_delete.xhtml – Page used to confirm whether a user should be deleted
  • user_edit.xhtml – Page used for adding and editing users

While creating this application, I tried as much as possible to adhere to best practices. For example, to go from the master (list) view to the detail (edit) view, a GET request is used with the user id as a parameter. The user is modified via POST, and there’s a redirect and GET back to the master view (the PRG pattern).
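The PRG step described above can be sketched as follows (the class name and the omitted persistence call are illustrative assumptions, not the actual project code): returning a JSF outcome that ends in faces-redirect=true makes JSF answer the POST with a redirect, after which the browser GETs the master view.

```java
public class UserEditBacking {

    // Action method invoked by the POST from the edit form; the actual
    // persistence call is omitted in this sketch.
    public String save() {
        // ... persist the user here ...

        // Redirect-and-GET back to the master view (PRG pattern)
        return "index?faces-redirect=true";
    }

    public static void main(String[] args) {
        System.out.println(new UserEditBacking().save());
    }
}
```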

Both applications make use of Enterprise JavaBeans, Bean Validation and the Java Persistence API. EJB is used to inject persistence into the managed beans. Bean Validation is used to ensure the data is consistent with the business rules.

In the RichFaces version user_edit.xhtml is updated to have client-side validations. Only the email address cannot be validated on the client; for that field, Ajax is used.

The code of the project has been uploaded to Google Code so it can be viewed by everyone. The code without the use of client-side scripting is in the default branch and the code with client-side scripting is in the RichFaces branch.


The compiled applications have been uploaded to OpenShift, a free cloud platform which runs a JBoss server; it makes showing your work to the public very easy. The projects can be directly uploaded from Eclipse to the OpenShift server. The live demo can be viewed here.

To upload your project yourself to OpenShift, it is first required to make an account at OpenShift and then register your public key there. When you have the OpenShift plug-in installed in Eclipse, you can add the OpenShift server to the Servers view in Eclipse. From there you can get the default project from OpenShift. This project is only used to send the war files to the server, not to hold the code. By adding the JSF project to the OpenShift server, the war file is automatically placed in the OpenShift project during a publish. By pushing the OpenShift project to the server using Git, the application is put in the cloud and is ready to use. The detailed process to upload to OpenShift from Eclipse is available here.

So wrapping up: I’ve made a small project to get familiar with the framework, I have uploaded the code to Google Code, and I’ve put the application on OpenShift.

Mark van der Tol

Funny little errors

27 February 2012

Sometimes the error messages a computer gives you are too good to simply click away. These days I invariably press PrintScreen when that happens. Below are a few examples:

What happens when your Instant Messenger hangs?

Arjan is not responding

A confused content assistant

The example below is a bit of a strange mix of scriptlets in the code. But the multiple annotations that are found here are rather strange too…


How to run the Mojarra automated tests

26 February 2012

Mojarra, the JSF reference implementation, comes with a very extensive test suite. There are some instructions available on how to actually run those tests, such as the entry “How do I run the automated tests?” in the official FAQ and the document “TESTING_A_BUILD.txt” at the root of the Mojarra repository.

Both sets of instructions are not overly detailed and they don’t seem to be entirely up to date either. For instance, the entry in the FAQ states the following:

This target will cause the JSF API and implementation to be deployed to the GlassFish server
that is downloaded as part of the build process

This, however, no longer happens when testing JSF 2.x (there was code that downloaded GlassFish V2 for JSF 1.x, but this can’t be used for 2.x).

The TESTING_A_BUILD.txt document mentions:

Be sure to install glassfish with a password on admin
make sure that password.txt is found in container.home

But how do we install GlassFish with a password? If GlassFish is distributed as a .zip there is no install script, and the version with an installer doesn’t offer any option to specify a password either. I guess that there once was an option in the installer for this, but it seems to have been removed since. Additionally, the instruction about password.txt in container.home appears to be not entirely correct.

After some trial & error I got the tests to run (more or less, read below). For my own reference and in the hope it’ll be useful to someone, I’m jotting the instructions down here.

Download GlassFish 3.1.1 (I chose the zip archive).

Unzip GlassFish somewhere; we’ll call the resulting directory [glassfish home] below. This will look something like:

glassfish3 <--- [glassfish home]

The tests assume a password is being used, so we're now going to set one:

cd into [glassfish home]/bin and execute the following command:

./asadmin start-domain domain1

On Mac OS X this may take a minute due to a reverse DNS lookup being attempted (there's a fix).

After GlassFish has started, execute the following command:

./asadmin change-admin-password

Press enter twice and enter adminadmin for the password as below:

Enter admin user name [default: admin]> (press enter)
Enter admin password> (press enter)
Enter new admin password> (enter adminadmin)
Enter new admin password again> (enter adminadmin)
Command change-admin-password executed successfully.

Most commands in the test suite that interact with GlassFish use a password file (--passwordfile option), but for some reason not all. For those an additional .asadminpass file is required to be in the home directory of the user as whom the tests are run. This can be created via the following command:

./asadmin login

Press enter once and again enter adminadmin for the password as below:

Enter admin user name [default: admin]> (press enter)
Enter admin password> (enter adminadmin)
Login information relevant to admin user name [admin]
for host [localhost] and admin port [4848] stored at
[/home/youruser/.asadminpass] successfully.
Make sure that this file remains protected.
Information stored in this file will be used by
asadmin commands to manage the associated domain.
Command login executed successfully.

You can shutdown the container now using the following command:

./asadmin stop-domain domain1

Mojarra has recently gone from being distributed as 2 jars to 1 jar. The GlassFish 3.1.1 that you downloaded still uses 2 jars. In the past the test code patched GlassFish, but probably in anticipation of GlassFish shipping with 1 jar this no longer happens. In the meantime, it's necessary to patch GlassFish yourself.

In [glassfish home]/glassfish/domains/domain1/config/default-web.xml and [glassfish home]/glassfish/lib/templates/default-web.xml remove the entries jsf-api.jar and jsf-impl.jar and replace them by a single javax.faces.jar. I.e.:





Make a backup of the resulting [glassfish home] directory now. The test suite occasionally corrupts GlassFish and you often need to restore a fresh version. In fact, to have reliable test results you might opt to use a fresh GlassFish for every test run.

The Mojarra tests depend a lot on Maven, so if you haven't installed Maven yet, install it now. E.g. on Ubuntu:

sudo apt-get install maven2

(This is not needed on OS X, since it already comes with Maven)

Now we're ready to check out the Mojarra source code. I'll assume SVN knowledge here.

Check out:

We'll refer to the directory where you checked out those sources as [source home] below. E.g.

Mojarra <--- [source home]

Prior to starting the tests, a few files need to be customized. First, put the following in [source home]/password.txt (this is the password we set earlier, in the format asadmin expects):

AS_ADMIN_PASSWORD=adminadmin
Now copy [source home]/ to [source home]/

Set the following entry in the newly created file:

container.home=[glassfish home]/glassfish

For example:

container.home=/home/youruser/glassfish3/glassfish

If you haven't already set this up, define JAVA_HOME. This is typically not needed on OS X, but is needed on e.g. a fresh Ubuntu installation. If you don't do this, Maven will lead you to think it can't find an artifact from a remote repository, while in fact it couldn't compile some source files.

Additionally set ANT_OPTS to allow more memory to be used. E.g. in Ubuntu (bash):

export JAVA_HOME=/opt/jdk1.6.0_31/
export PATH=/opt/jdk1.6.0_31/bin:$PATH
export ANT_OPTS='-Xms512m -Xmx786m -XX:MaxPermSize=786m'

Now we're *almost* ready to run the build and the test. Unfortunately the Mojarra trunk seems to have a circular dependency between the clean and main targets at the moment. To break this dependency, comment out two ant tasks in [source home]/jsf-test/build.xml (as of writing on line 119):

<!-- ensure the api jar is deployed to the local maven repo -->
<!--
<ant dir="${api.dir}" target="main">
    <property name="skip.javadoc.jar"  value="true" />
</ant>
<ant dir="${api.dir}" target="mvn.deploy.snapshot.local">
    <property name="skip.javadoc.jar"  value="true" />
</ant>
-->

Now cd to [source home] and execute the following command:

ant clean main

If you're lucky, the build will succeed without failures. Now undo the commenting in [source home]/jsf-test/build.xml, so the file will look like below again:

<!-- ensure the api jar is deployed to the local maven repo -->
<ant dir="${api.dir}" target="main">
    <property name="skip.javadoc.jar"  value="true" />
</ant>
<ant dir="${api.dir}" target="mvn.deploy.snapshot.local">
    <property name="skip.javadoc.jar"  value="true" />
</ant>

cd to [source home] and execute the following command again:

ant clean main

If everything still went correctly, we can now run the tests. There are quite a lot of tests, and running the suite till completion took 31 minutes on my 2.93GHz i7 iMac and 48 minutes on an Ubuntu 11.10 installation running inside VirtualBox 4.1.8 on the same machine. The tests generate a whopping 1.6MB of logging, so if you want to scan it later you might want to divert it to a file:

ant test.with.container.refresh > ~/test.txt

If you want to follow the proceedings of the test run, you can use e.g. tail in a second terminal:

tail -f ~/test.txt

In my case, whatever I tried and however clean my system was, some tests were always failing. E.g. testFormOmittedTrinidad, TestLifecycleImpl, ScrumToysTestCase, AdminGuiTestCase.

Occasionally some tests don't honor the halt.on.failure=no setting that we made in the properties files. If this happens, comment out the offending test. If you want to test your own changes to Mojarra, make sure to run the test suite a couple of times first to get an idea of which tests are already failing on your system. I personally tried a couple of different machines and environments, but none of them was able to pass all tests.

Finally, some extra information for Eclipse users. The tests can be run via Eclipse as well. If you initially check out the Mojarra project in Eclipse, you'll see a lot of errors. There are Eclipse project files, but they are obviously outdated (this is fixable, but I'll leave that for a next article). When working with Eclipse, you'll use Eclipse for editing the source code and for navigation (call references, navigate into and such). Even if the Eclipse project files are corrected, the actual compiler output will be thrown away.

To start the ant build via Eclipse, do the following:

Right click on build.xml, click Run As -> Ant Build… Go to the JRE tab and under VM arguments add the following:

-Xms512m -Xmx786m -XX:MaxPermSize=786m

On the same dialog go to the Environment tab and set JAVA_HOME to your installed JDK, e.g.
variable JAVA_HOME, Value /opt/jdk1.6.0_31 (this is not specifically needed on OS X or if you've already set JAVA_HOME globally for your system).

Go to the Targets tab and deselect everything. Then first select clean and then main. The target execution order in the bottom left corner of the dialog should list them in the right order. Having followed the instructions outlined above for command line ant (commenting out entries in [source home]/jsf-test/build.xml) now click Run. Follow the same instructions for restoring the file and again click Run. Finally, deselect both targets and select the test.with.container.refresh target.

Arjan Tijms

Eclipse 3.7 SR2 released!

24 February 2012

With once again amazing release accuracy, today Eclipse Indigo Service Release 2, aka Eclipse 3.7.2, has been released.

In the core packages, some 89 bugs have been fixed.

Important projects that are part of the release train were updated as well, for instance WTP was updated from 3.3.1 to 3.3.2 (a fact not mentioned yet on the WTP homepage, but it can be found here). WTP 3.3.2 fixes no less than 112 bugs.

My personal favorite bug that has been fixed is 100% CPU for long time in ASTUtils.createCompilationUnit. Just having a fairly innocent page in your workspace would completely hang the CPU for minutes or more.

For the next version of Eclipse we’re supposedly going to get to see the 4.x line by default on the downloads page, so this might well be the last 3.x version that’s prominently featured on said page. Time will tell of course.

For now, Eclipse 3.7.2 can thus be downloaded from the usual place, and the general release notes are available as well.

Arjan Tijms

Try-with-resources in JDK7 without scoped declarations

26 September 2011

A handy new feature in JDK7 is the try-with-resources statement. This statement is meant to eliminate a lot of the boilerplate code required for managing InputStreams and OutputStreams. Say, for example, that I want to copy the contents of an InputStream to an OutputStream. This would require the following code just to manage the in- and output streams:

InputStream in = createInputStream();
try {
   OutputStream out = createOutputStream();
   try {
      /* Copy data here */
   } finally {
      try {
         out.close();
      } catch (IOException e) {
         // Prevent this exception from suppressing the actual exception
      }
   }
} finally {
   try {
      in.close();
   } catch (IOException e) {
      // Prevent this exception from suppressing the actual exception
   }
}

Using the new try-with-resources statement, the above code can be rewritten as the following:

try (InputStream in = createInputStream(); OutputStream out = 
      createOutputStream()) {
   /* Copy data here */
}

The InputStream and OutputStream are automatically closed at the end of the try-with-resources block. If an exception is thrown during the main block and then again during the closing of one (or both) of the streams, the exception on the close operation is added to the original exception as a suppressed exception, so no exceptions are swallowed silently anymore. The try-with-resources blocks are also not limited to be used for in- and output streams, but can be used on any object that implements the AutoCloseable interface.
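As a self-contained illustration of the copy scenario above (using in-memory streams so the example runs without files; the 8KB buffer size is an arbitrary choice):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class CopyExample {

    // Copies all bytes from in to out; both streams are closed automatically
    // by the try-with-resources statement, even when an exception occurs.
    static void copy(InputStream in, OutputStream out) throws IOException {
        try (InputStream autoIn = in; OutputStream autoOut = out) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = autoIn.read(buffer)) >= 0) {
                autoOut.write(buffer, 0, read);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        copy(new ByteArrayInputStream("hello".getBytes()), out);
        System.out.println(out.toString());
    }
}
```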

There is one minor disadvantage to the try-with-resources statement: it is required to declare the variable that refers to the object to be closed within the parentheses after the try. For example, if you have a method that receives an InputStream as a parameter, the Java compiler will not allow you to do this:

public void readData(InputStream in) throws IOException {
   try (in) {
      int input;
      while ((input = in.read()) >= 0) {
         // Use input here
      }
   }
}

The above code produces a compiler error, as no variable has been declared between the parentheses of the try-with-resources statement. I would propose the following workaround for this situation:

public void readData(InputStream in) throws IOException {
   try (InputStream autoCloseableInputStream = in) {
      int input;
      while ((input = in.read()) >= 0) {
         // Use input here
      }
   }
}


This code does compile and the stream is automatically closed at the end of the try block. Since the autoCloseableInputStream and in variables refer to the exact same object, it is not necessary to actually use the autoCloseableInputStream variable in the code. Using a name like autoCloseableInputStream makes it clear that this variable is only defined in order to be able to use the try-with-resources statement.
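As an aside, the suppression behaviour described earlier can be observed via Throwable.getSuppressed(). A small self-contained sketch (the deliberately failing stream is purely illustrative):

```java
import java.io.IOException;
import java.io.InputStream;

public class SuppressedDemo {

    // A stream that fails both on read and on close, to show how
    // try-with-resources attaches the close() exception as a suppressed one.
    static class FailingStream extends InputStream {
        @Override
        public int read() throws IOException {
            throw new IOException("read failed");
        }

        @Override
        public void close() throws IOException {
            throw new IOException("close failed");
        }
    }

    public static void main(String[] args) {
        try {
            try (InputStream in = new FailingStream()) {
                in.read();
            }
        } catch (IOException e) {
            System.out.println(e.getMessage());                    // the original exception
            System.out.println(e.getSuppressed()[0].getMessage()); // the close() exception
        }
    }
}
```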

Integration testing using Arquillian without Maven

25 January 2011

Over the past year, we have increasingly resorted to the use of EJBs to implement our business logic. In addition to all the benefits of using EJBs, we also ran into one of the issues. Due to things like dependency injection, automatic transaction management and JPA-based persistence, performing unit tests on EJBs is no easy task.

In order to still be able to test our EJBs, we initially decided to create a number of mock objects, including a mock version of the EntityManager, and to perform the dependency injection manually. While this did allow us to unit test our EJBs, it also made our unit tests dependent on the internal implementation of the EJBs. Adding a newly injected dependency to an EJB meant that the unit test needed to be expanded to also inject this dependency. Another issue was that since we were using a mock EntityManager, our tests were not covering our native SQL or JPQL queries, nor the JPA entity mappings. We also had no way of testing whether transactions were correctly being rolled back or not.

We needed a way to perform our unit tests within an instance of the application server we use. This would solve many of the drawbacks of using mock objects and performing manual dependency injection in the unit tests. Thankfully, an open source project named Arquillian provides a solution to these problems. Arquillian is an integration testing framework that can be used to perform testing on EJBs or JPA code. Arquillian can be run in conjunction with either JUnit or TestNG.

One of the disadvantages of using Arquillian, at least in our case and for the current Alpha 4 release, is that Maven or another dependency management tool is needed in order to run it. Our main project does not use Maven, so we needed to find a way of running Arquillian without using Maven.

The initial problem was to figure out the bare minimum set of libraries required to run Arquillian. This was achieved by manually going through the pom.xml files in the JBoss repository. This led us to the following minimum set of libraries required to run Arquillian:




Arquillian has very little in the way of configuration to determine what type of container the test cases need to be deployed to. Instead of using configuration, Arquillian uses a plug-in system to determine the type of container to deploy to. We needed to test our code in a JBoss 5.1 container, so we needed the plug-in for deploying to a managed JBoss 5.1 instance. This plug-in has a number of other dependencies. In order to actually be able to launch JBoss, the plug-in requires the JBoss Server Manager library. Arquillian also requires a protocol implementation in order to execute the test cases once they have been deployed to the server. For JBoss 5.1, Arquillian requires the servlet 2.5 implementation. Lastly, in order to test EJBs, we also needed the EJB and resource test enricher libraries. This led us to the following list of additional libraries required to run Arquillian using a managed JBoss 5.1 instance:


The managed JBoss 5.1 plugin also depends on the JBOSS_HOME environment variable being set to the directory where the JBoss instance that is to be used for testing is located. After we added all the above libraries to our testing classpath, we were finally able to run our Arquillian tests from our Ant scripts and from within Eclipse.

Simulating logins

After we got our first unit tests running under Arquillian, we ran into a problem. Certain parts of our code check if the user is allowed to perform certain actions by checking if the user has a certain role or by checking the caller principal’s name. The user needs to have been authenticated before these parts of the code can be called. For testing, this means we needed to have a way of simulating a login by the user.

Arquillian calls the unit tests in a managed or remotely running container through a servlet in the web module of the test application. In order to do this, Arquillian adds its own web module to the EAR file it deploys to the container. This web module needs to be configured for the correct security domain, but Arquillian doesn’t provide a way to add any files to the web module.

This web module is generated automatically by the protocol module being used, which in our case was the servlet 2.5 protocol. More specifically, the web module is created by the createAuxiliaryArchive() method in the class org.jboss.arquillian.protocol.servlet_2_5.ProtocolDeploymentAppender. We fixed the problem by creating a custom version of the servlet 2.5 protocol module in which we added our own custom jboss-web.xml file containing the correct security domain configuration. We then added a line to the createAuxiliaryArchive() method so that our configuration file would be included in the generated web module. Hopefully, future versions of Arquillian will add the functionality to customize the web module before deployment.

Scrum, why we use this agile methodology

13 January 2011

M4N develops its software in an Agile way. We develop in relatively short cycles, we create as little documentation as possible, we anticipate change, and we work via personal interaction instead of via formal processes. A few months ago we decided to take this to a higher level and we “officially” embraced the Scrum method.

Now that we have finished a handful of Scrum sprints we can share our first lessons.

What we like about Scrum:

  • As the timeframe is short, developers can plan their work precisely.
  • The team delivers finished software that is ready to test.
  • The whole company is involved. By announcing Sprints internally and making progress visible, the entire organization knows what the development team is currently working on.
  • The team likes the diversity in tasks
  • It is fun to work in a committed team that is self-organized.

What we learned:

  • Keep the team small. Five developers (plus or minus two) is the maximum.
  • Make sure the team is not exposed to external interferences.
  • Don’t make the Sprint too long. Two or three weeks is best.
  • Make sure the Scrumboard is always accessible to the team.
  • Make the team homogeneous. Each member should be able to work on each ticket, so members don’t have to wait until someone has finished his task.
  • Automated testing. The organization of the deployment process must be ready for releasing frequently. This means the organization requires a whole lot of automated testing facilities.
  • It is a challenge to involve remote employees, but it is possible.

We are pretty enthusiastic about Scrum though we realize that Scrum is hard to implement seamlessly.
