Providing alternatives for JSF 2.3's injected artifacts

6 November 2014

At the JSF 2.3 EG we’re currently busy introducing the ability to inject several of JSF’s own artifacts into your own beans.

On the implementation side this is done via a dynamic CDI producer. There’s for instance a producer for the FacesContext, which is then registered via a CDI extension.
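
To give an idea, a statically written producer for the FacesContext could look something like the sketch below. The class and method names are made up for this illustration; the actual implementation registers an equivalent producer dynamically via an extension, as mentioned above.

import javax.enterprise.context.RequestScoped;
import javax.enterprise.inject.Produces;
import javax.faces.context.FacesContext;

public class FacesContextProducer {

    // Makes the current FacesContext injectable into beans
    @Produces
    @RequestScoped
    public FacesContext produceFacesContext() {
        return FacesContext.getCurrentInstance();
    }
}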

This can be tested via a simple test application. See these instructions for how to obtain a JSF 2.3 snapshot build and update GlassFish with it.

The test application will consist of the following code:

WEB-INF/faces-config.xml

<?xml version="1.0" encoding="UTF-8"?>
<faces-config 
	xmlns="http://xmlns.jcp.org/xml/ns/javaee"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-facesconfig_2_3.xsd"
	version="2.3"
>
</faces-config>

This file is needed to activate injection of JSF artifacts. For backwards compatibility reasons this feature is only activated when running with a JSF 2.3 deployment descriptor. The second purpose of a (near) empty faces-config.xml is to signal JSF to automatically map the FacesServlet, so we don’t have to create a more verbose web.xml with an explicit mapping like the one shown below. (The default mappings are not the best ones, however, as the most obvious one, *.xhtml, is missing. This is something we hope to rectify in JSF 2.3 as well.)
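
For comparison, the explicit mapping that we’d otherwise have to put in web.xml looks like the following (shown here with the *.xhtml pattern mentioned above):

<servlet>
    <servlet-name>Faces Servlet</servlet-name>
    <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>Faces Servlet</servlet-name>
    <url-pattern>*.xhtml</url-pattern>
</servlet-mapping>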

WEB-INF/beans.xml
(empty)

An empty beans.xml is still needed in GlassFish 4.1 to actually enable CDI in a web archive.

index.xhtml

<!DOCTYPE html>
<html lang="en"
    xmlns="http://www.w3.org/1999/xhtml"
    xmlns:jsf="http://xmlns.jcp.org/jsf"
>
    <head jsf:id="head">
        <title>FacesContext inject test app</title>
    </head>
 
    <body jsf:id="body">
	   #{testBean.test}
    </body>
</html>

[java src]/test/TestBean.java

package test;
 
import javax.enterprise.context.RequestScoped;
import javax.faces.context.FacesContext;
import javax.inject.Inject;
import javax.inject.Named;
 
@Named
@RequestScoped
public class TestBean {
 
    @Inject
    private FacesContext context;
 
    public String getTest() {
        return context.toString();
    }
}

Deploying this to our updated GlassFish and requesting http://localhost:8080/itest/index.jsf will result in something like the following:

com.sun.faces.context.FacesContextImpl@7c46fc07 

So injection works! Now what if we want to “override” the default producer provided by JSF, e.g. what if we want to provide our own alternative implementation?

The answer is to provide your own producer, but mark it as @Dependent, @Alternative and @Priority. E.g. add the following class to the files shown above:

[java src]/test/ContextProducer.java

package test;
 
import static javax.interceptor.Interceptor.Priority.APPLICATION;
 
import javax.annotation.Priority;
import javax.enterprise.context.Dependent;
import javax.enterprise.inject.Alternative;
import javax.enterprise.inject.Produces;
import javax.faces.context.FacesContext;
import javax.faces.context.FacesContextWrapper;
 
@Dependent
@Alternative
@Priority(APPLICATION)
public class ContextProducer {
 
    @Produces
    public FacesContext producer() {
        return new FacesContextWrapper() {
 
            @Override
            public String toString() {
                return "Still ours";
            }
 
            @Override
            public FacesContext getWrapped() {
                return FacesContext.getCurrentInstance();
            }
        };
    }
}

Then deploying this again and requesting http://localhost:8080/itest/index.jsf once more will result in the following:

Still ours 

As we see, the JSF provided producer can be overridden by standard CDI means.

The feature is not finalized yet so things may still change, but hopefully this gives some idea of what direction JSF 2.3 is moving in.

Arjan Tijms

Mysterious new Java EE 6 server shows up at Oracle certification pages

31 October 2014

Oracle publishes a page listing all officially certified Java EE servers. The page has been known to list a couple of fairly obscure servers; servers that indeed exist but seemingly nobody has ever heard of (as partly proven by a diverse range of surveys).

Recently a mysterious new server showed up on this page: InforSuite Standard Edition V9.1

It’s not entirely clear exactly when this was added, but as I visited the page a few weeks before, I guess it must have been certified at most a couple of weeks ago. At the very least it was added after last May, as I copied the entire list of servers there for an article I was writing and it sure wasn’t there then.

As for the components listed, it’s unfortunately the usual inconsistent table (every entry has a different table with different terms). It mentions using EclipseLink (JPA), Weld (CDI) and Hibernate Validator (Bean Validation), as well as “Oracle” and “Oracle”, which probably means the Oracle reference implementations of JAX-WS (Metro core) and JAXB.

It’s interesting that this late in the Java EE 6 cycle, with Java EE 7 having been released a while ago and Java EE 8 preparations being in full swing, there are still brand new Java EE 6 servers coming out.

Arjan Tijms

Eclipse 4.4 SR1 once again completely silently released

29 September 2014

Another year, another SR1 release, another deafening amount of silence; the Eclipse organization released maintenance release one of Eclipse 4.4 three days ago.

As usual, there’s no mention of this event anywhere on the eclipse.org homepage. What counts this time is the Eclipse Newsletter about Project Quality and the fact that LocationTech has announced the 2014 Tour. The actual IDE that is Eclipse doesn’t seem to be that important.

As it appears, there were 131 bugs fixed for this release in the core of Eclipse.

Among others, a high-profile bug was fixed where Eclipse generated a bad class file, and another where JSR 45 support (JSP debugging among others) was completely broken.

Furthermore several bugs related to Java 8 were fixed. As Eclipse uses its own compiler (JDT), supporting new language features is always extra difficult for Eclipse as compared to other IDEs that just use javac.

Eclipse 4.4 SR1 fixed some nasty issues with lambda type inference (and another one), deserialization, bridge methods, and explicit null-annotations.

Unfortunately, even with the focus on Java 8 fixes, a very basic and known type inference bug is still in Eclipse 4.4.1, as found out by my co-worker Jan Beernink:

(screenshot: the type inference bug still failing in Eclipse Luna 4.4.1)

This time around even the good people from the WTP project did not feel like posting about the SR1 event on their homepage. Fiddling with the friendly bugzilla URLs revealed a list of 32 bugs that are likely to be fixed in WTP 3.6.1, the version that should be the one that’s bundled with Eclipse 4.4.1.

Among the highlights of bugs that WTP 3.6.1 fixed are a fix for the fact that JSF EL validation is too strict and a fix for a wrong URL mapping being used with the famous “Run on Server” feature.

Following the trend, community reporting is even lower than last year. This year there’s virtually no reporting at all. There’s a lone post from an Eclipse vendor, and that seems to be it.

Unfortunately Eclipse 4.4.1 also introduced a major new bug, one that appears right away when you start up a completely clean, freshly downloaded instance:

java.lang.ClassCastException: org.eclipse.osgi.internal.framework.EquinoxConfiguration$1 cannot be cast to java.lang.String
	at org.eclipse.m2e.logback.configuration.LogHelper.logJavaProperties(LogHelper.java:26)
	at org.eclipse.m2e.logback.configuration.LogPlugin.loadConfiguration(LogPlugin.java:189)
	at org.eclipse.m2e.logback.configuration.LogPlugin.configureLogback(LogPlugin.java:144)
	at org.eclipse.m2e.logback.configuration.LogPlugin.access$2(LogPlugin.java:107)
	at org.eclipse.m2e.logback.configuration.LogPlugin$1.run(LogPlugin.java:62)
	at java.util.TimerThread.mainLoop(Timer.java:555)
	at java.util.TimerThread.run(Timer.java:505)

It seems clear that SR releases aren’t that exciting, but the complete lack of attention to them and the completely silent releases of what should be the most important product that the Eclipse organization delivers remain a curious thing.

Update

Maybe after reading this article (who knows ;)), the Eclipse organisation did finally post about the release event a few days later, and the following appeared on the eclipse.org homepage:

2014/09/29
Eclipse Luna SR1 Now Available
The SR1 release of the Eclipse Luna release train is now available for download.

BalusC joins JSF 2.3 EG

12 September 2014

A few weeks ago zeef.com joined the JCP.

Via zeef.com, BalusC (Bauke Scholtz) applied this morning for membership of the JSF EG and was accepted later in the day by spec lead Ed Burns.

Bauke is well known for his many answers on StackOverflow, his blog and of course his work on the JSF utility library OmniFaces.

As members of the JSF 2.3 EG, Bauke and myself (I joined the JSF EG as well) will, among other things, be looking at the possibility of bringing over things from OmniFaces into the JSF core spec, and will overall help out with taking JSF to the next level. What will eventually end up in JSF is of course subject to community feedback and EG negotiation and approval.

At any rate, both Bauke and I are looking forward to working together with the other EG members and contributing to the JSF core spec :)

Arjan Tijms

Dynamic CDI producers

1 August 2014

CDI has the well known concept of producers. Simply put, a producer is a kind of general factory method for some type. It’s defined by annotating a method with @Produces. An alternative “factory” for a type is simply a class itself; a class is a factory of objects of its own type.

In CDI both these factories are represented by the Bean type. The name may be somewhat confusing, but a Bean in CDI is thus not directly a bean itself but a type used to create instances (aka a factory). An interesting aspect of CDI is that those Bean instances are not just internally created by CDI after encountering class definitions and producer methods, but can be added manually by user code as well.

Via this mechanism we can thus dynamically register factories, or in CDI terms producers. This can be handy in a variety of cases, for instance when a lot of similar producer methods would have to be defined statically, or when generic producers are needed. Unfortunately, generics are not particularly well supported in CDI. Instead of trying to create a somewhat generic producer an alternative strategy could be to actually scan which types an application is using and then dynamically create a producer for each such type.

The following code gives a very bare bones example using the plain CDI API:

import static java.util.Arrays.asList;
import static java.util.Collections.emptySet;
import static java.util.Collections.singleton;

import java.lang.annotation.Annotation;
import java.lang.reflect.Type;
import java.util.HashSet;
import java.util.Set;

import javax.enterprise.context.Dependent;
import javax.enterprise.context.spi.CreationalContext;
import javax.enterprise.inject.Default;
import javax.enterprise.inject.spi.Bean;
import javax.enterprise.inject.spi.InjectionPoint;
import javax.enterprise.util.AnnotationLiteral;

public class DynamicIntegerProducer implements Bean<Integer> {
 
    @SuppressWarnings("all")
    public static class DefaultAnnotationLiteral extends AnnotationLiteral<Default> implements Default {
        private static final long serialVersionUID = 1L;
    }
 
    @Override
    public Class<?> getBeanClass() {
        return Integer.class;
    }
 
    @Override
    public Set<Type> getTypes() {
        return new HashSet<Type>(asList(Integer.class, Object.class));
    }
 
    @Override
    public Integer create(CreationalContext<Integer> creationalContext) {
        return new Integer(5);
    }
 
    @Override
    public Set<Annotation> getQualifiers() {
        return singleton((Annotation) new DefaultAnnotationLiteral());
    }
 
    @Override
    public Class<? extends Annotation> getScope() {
        return Dependent.class;
    }
 
    @Override
    public Set<Class<? extends Annotation>> getStereotypes() {
        return emptySet();
    }
 
    @Override
    public Set<InjectionPoint> getInjectionPoints() {
        return emptySet();
    }
 
    @Override
    public boolean isAlternative() {
        return false;
    }
 
    @Override
    public boolean isNullable() {
        return false;
    }
 
    @Override
    public String getName() {
        return null;
    }
 
    @Override
    public void destroy(Integer instance, CreationalContext<Integer> creationalContext) {
 
    }
}

There are a few things to remark here. First of all the actual producer method is create. This one does nothing fancy and just returns a new Integer instance (normally not a good idea to do it this way, but it’s just an example). The getTypes method is used to indicate the range of types for which this dynamic producer produces instances. In this example it could have been deduced from the generic class parameter as well, but CDI still wants it to be defined explicitly.

The getQualifiers method is somewhat nasty. Normally if no explicit qualifiers are used in CDI then the Default one applies. This default is however seemingly not implemented in the core CDI system, but is done by virtue of what this method returns. In our case it means we have to explicitly return the default qualifier here via an AnnotationLiteral instance. These are a tad nasty to create, as they require a new class definition that extends AnnotationLiteral, and the actual annotation needs to be present as both a (super) interface AND as a generic parameter. To add insult to injury, Eclipse in particular doesn’t like us doing this (even though it’s the documented approach in the CDI documentation) and cries hard about it. We silenced Eclipse here by using the @SuppressWarnings("all") annotation. To make the code even more nasty, due to the way generics and type inference work in Java we have to add an explicit cast here (alternatively we could have used Collections.<Annotation>singleton, as shown below).
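
As an aside, this is what the getQualifiers method would look like with that explicit type witness on java.util.Collections; it’s purely a stylistic alternative to the cast used in the example above:

@Override
public Set<Annotation> getQualifiers() {
    // The <Annotation> type witness makes the explicit cast unnecessary
    return Collections.<Annotation>singleton(new DefaultAnnotationLiteral());
}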

For the scope we can’t return a null either, but have to explicitly return the CDI default (Dependent) if we want that default. This time it’s an easy return. For the stereotypes and injection points we can’t return a null if we don’t use them, but have to return an empty set. The isNullable method (deprecated since CDI 1.1) can return false. Finally, getName is the only method that can return a null.

Dynamic producers like this have to be added via a CDI extension observing the AfterBeanDiscovery event:

public class DynamicProducerExtension implements Extension {
    public void afterBean(final @Observes AfterBeanDiscovery afterBeanDiscovery) {
        afterBeanDiscovery.addBean(new DynamicIntegerProducer());
    }
}

As with all CDI extensions the extension class has to be registered by putting its FQN in META-INF/services/javax.enterprise.inject.spi.Extension.
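
For the example above, that file would thus contain a single line with the extension’s fully qualified class name (assuming the test package used in this post):

test.DynamicProducerExtension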

After doing this, injection can be done as usual, e.g.:

import static java.lang.System.out;

import javax.annotation.PostConstruct;
import javax.ejb.Singleton;
import javax.ejb.Startup;
import javax.inject.Inject;

@Singleton
@Startup
public class TestBean {
 
    @Inject
    private Integer integer;
 
    @PostConstruct
    public void init() {
        out.println(integer);
    }
}

Deploying an application with only the code shown in this post will print 5 in the logs ;)

Arjan Tijms

Eclipse Luna and JDK8

25 June 2014

Today marked the launch of Eclipse Luna, which is the first version of Eclipse which ships with Java 8 support built in. As we like to stay on the cutting edge here at ZEEF, I decided to give this new version of Eclipse a try straight away.

Unfortunately, I ran into a bug in the Java 8 support fairly quickly. Type inference in Luna seems to fail when the return type of a method needs to be inferred from a lambda within a lambda that has been used as a method parameter. For example, the following code snippet compiles normally using javac, but fails in Eclipse Luna:

    Stream.of("test")
        .flatMap(s -> s.chars().mapToObj(i -> Character.valueOf((char)i)))
        .filter(Character::isLowerCase)
        .toArray();

Eclipse seems to infer that the return type of the “i -> Character.valueOf((char)i)” lambda is Object instead of Character, which leads it to infer that the return type of flatMap() should be Stream<Object> instead of Stream<Character>. As a method reference is used as the filter parameter, the compiler fails on this line because this method reference is not applicable for an argument of type Object.

Fortunately, it’s easy to work around this bug in Eclipse’s compiler: any change to either the outer or the inner lambda which makes the return type explicit allows Eclipse to correctly compile the code. Casting either the inner or the outer lambda to the correct type (a cast variant is shown after the next snippet), assigning either to a variable or replacing either with a method reference all do the trick. For example, if you were to assign the inner lambda to a variable, you would end up with the following code:

    IntFunction<Character> characterFromInt = i -> Character.valueOf((char)i);
    Stream.of("test")
        .flatMap(s -> s.chars().mapToObj(characterFromInt))
        .filter(Character::isLowerCase)
        .toArray();
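
Alternatively, the cast based workaround looks like this; note that it needs an import of java.util.function.IntFunction and is otherwise functionally identical:

    Stream.of("test")
        .flatMap(s -> s.chars().mapToObj((IntFunction<Character>) i -> Character.valueOf((char)i)))
        .filter(Character::isLowerCase)
        .toArray();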

The need for such a workaround is unfortunate, as improved type inference is one of the major enhancements in Java 8. Hopefully a fix for this issue will be available soon.

Experiences with migrating from JBoss AS 7 to WildFly 8.1

24 June 2014

This month the first major update of WildFly 8 (the base for a future JBoss EAP 7) was released: WildFly 8.1.0.Final.

I tried to get zeef.com, which currently runs on JBoss AS 7, running on it. Zeef.com is a relatively new Java EE 6 based web application that was started about a year ago. As such there is not yet that much legacy code present in it.

This article is about the issues I encountered during this initial migration.

In broad strokes, the issues fell into the following categories:

  • Datasource
  • JASPIC
  • Tomcat/Undertow differences
  • Valves

Datasource

The first issue I ran into was a failed deployment. WildFly spat out page upon page of unintelligible mumbo jumbo. Upon a closer look there was something about our datasource in the middle of all of this. Apparently WildFly was trying to tell me that the datasource couldn’t be found. For zeef.com we define our datasource in application.xml (the Java EE standard way) and switch between stages using a delegating switchable datasource.

Moving the datasource definition to ejb-jar.xml solved the problem. WildFly 8.1 does support the definition of datasources in application.xml, as demonstrated by this test (although a small workaround is needed). It might be an issue with loading the SwitchableXADataSource from the EAR level, but I didn’t investigate this further.
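
As an aside, the standard alternative to XML for defining such a datasource is the @DataSourceDefinition annotation on a managed component. The sketch below is only an illustration with made-up values; the JNDI name and URL are hypothetical and the class name refers to the switchable datasource mentioned above:

import javax.annotation.sql.DataSourceDefinition;
import javax.ejb.Stateless;

@DataSourceDefinition(
    name = "java:app/env/testDS",              // hypothetical JNDI name
    className = "test.SwitchableXADataSource", // the delegating datasource class
    url = "jdbc:postgresql://localhost/test"   // hypothetical URL
)
@Stateless
public class DatasourceConfig {
    // Empty bean; it only serves as a carrier for the datasource definition
}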

JASPIC

The next problem concerned a number of issues related to JASPIC, the Java EE standard authentication API. JASPIC is an important but troublesome spec. It’s suspected that its TCK is very light, as (preview) implementations that don’t actually work (yet) have been certified.

The WildFly team however has done a fair amount of work to make sure its JASPIC implementation is reasonably correct, among other things by using an external set of tests designed to verify that JASPIC does the most basic things right (like actually authenticating). Unfortunately one case slipped through, and that mainly concerned the behavior of HttpServletRequest#authenticate.

Specifically the following issues occurred:

  • authenticate() does nothing and closes response (UNDERTOW-263)
  • authenticate() closes response when no authentication happened (UNDERTOW-259)
  • NullPointerExceptions right after request processing (WFLY-3514)
  • NullPointerExceptions for requests after a session is created (WFLY-3518)

The first two issues could be worked around by installing an HttpServletRequestWrapper, e.g. via an Undertow Handler as follows:

public class JaspicHandler implements HttpHandler {
 
	private Field authenticationStateField; 
	private HttpHandler next;
 
	public JaspicHandler(HttpHandler next) {
		this.next = next;
		try {
			authenticationStateField = SecurityContextImpl.class.getDeclaredField("authenticationState");
			authenticationStateField.setAccessible(true);
		} catch (NoSuchFieldException | SecurityException e) {
			throw new RuntimeException(e);
		}
	}
 
	@Override
	public void handleRequest(final HttpServerExchange exchange) throws Exception {
 
		ServletRequestContext context =	exchange.getAttachment(ATTACHMENT_KEY);
		if (context != null) {
			ServletRequest request = context.getServletRequest();
 
			HttpServletRequestWrapper wrapper = new HttpServletRequestWrapper((HttpServletRequest) request) {
				@Override
				public boolean authenticate(HttpServletResponse response) throws IOException, ServletException {
					if (response.isCommitted()) {
						throw MESSAGES.responseAlreadyCommited();
					}
 
					SecurityContext securityContext = exchange.getSecurityContext();
					securityContext.setAuthenticationRequired();
 
					if (securityContext instanceof SecurityContextImpl) {
						SecurityContextImpl securityContextImpl = (SecurityContextImpl) securityContext;
						try {
							// Perform the actual reset of the authentication state
							authenticationStateField.set(securityContextImpl, authenticationStateField.get(new SecurityContextImpl(null, null)));
						} catch (IllegalArgumentException | IllegalAccessException | SecurityException e) {
							throw new RuntimeException(e);
						}
					}
 
					if (securityContext.authenticate()) {
						if (securityContext.isAuthenticated()) {
							return true;
						} else {
							throw MESSAGES.authenticationFailed();
						}
					} else {
						// Just return false. The original method for some reason closes the stream here.
						 // see https://issues.jboss.org/browse/UNDERTOW-259
						return false;
					}
				}
			};
 
			context.setServletRequest(wrapper);
		}
 
		next.handleRequest(exchange);
	}
 
}

And then register this as an innerHandler:

public class UndertowHandlerExtension implements ServletExtension {
    @Override
    public void handleDeployment(final DeploymentInfo deploymentInfo, final ServletContext servletContext) {
        deploymentInfo.addInnerHandlerChainWrapper(handler -> new JaspicHandler(handler)); 
    }
}

The handler extension itself has to be registered by putting its fully qualified class name in /META-INF/services/io.undertow.servlet.ServletExtension.

For the other two issues JASPIAuthenticationMechanism had to be patched by inserting a simple guard around obtaining the so-called “cached account”:

cachedAccount = authSession == null? null : authSession.getAccount();

and by inserting another guard in secureResponse to check if a wrapper hadn’t been installed before:

ServletRequest request = exchange.getAttachment(ATTACHMENT_KEY).getServletRequest();
 
if (!TRUE.equals(request.getAttribute("JASPIAuthenticationMechanism.secureResponse.installed"))) {
    request.setAttribute("JASPIAuthenticationMechanism.secureResponse.installed", TRUE);
    // original code
}

(the fix for this last issue was committed rather fast by WildFly developers and fixes it in a better way)

Tomcat/Undertow differences

The next problems were about (small) differences between Tomcat/JBossWeb, which JBoss previously used, and the new Undertow that’s used in WildFly 8.

The first of those issues is about what HttpServletRequest#getRequestURI and HttpServletRequest#getServletPath return when a welcome file is requested. Tomcat will return the requested location for getRequestURI and the welcome file resource for getServletPath, while Undertow will return the welcome file resource for both calls.

E.g. with a welcome file declaration in web.xml as follows:

<welcome-file-list>
    <welcome-file>index</welcome-file>
</welcome-file-list>

and when requesting the context root of an application (e.g. http://localhost:8080 for a root deployment), the results are as follows:

                 getRequestURI  getServletPath
Tomcat/JBossWeb  /              /welcome
Undertow         /welcome       /welcome

The information about the requested resource can be used to redirect the user to the root (“/”) when for some reason the welcome file resource is directly requested; a minimal sketch of such redirect logic is shown below. With such a redirect the website will always display a ‘clean’ URL in the address bar. With the way Undertow does things there’s no way to distinguish a request to “/” from a request to “/welcome” and thus no opportunity to redirect. If a redirect was already in place based on the Tomcat/JBossWeb behavior, an endless redirect loop will be the result.
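
Such redirect logic could look something like the following sketch (a hypothetical plain Servlet filter; the “/welcome” path corresponds to the welcome file example above):

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

@WebFilter("/*")
public class WelcomeRedirectFilter implements Filter {

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain) throws IOException, ServletException {

        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        // On Tomcat/JBossWeb only a direct request to the welcome file reports
        // "/welcome" here; a request to "/" reports "/" and passes through.
        // On Undertow both report "/welcome", so this loops endlessly.
        if (request.getRequestURI().endsWith("/welcome")) {
            response.sendRedirect(request.getContextPath() + "/");
            return;
        }

        chain.doFilter(req, res);
    }

    @Override
    public void init(FilterConfig filterConfig) throws ServletException {
    }

    @Override
    public void destroy() {
    }
}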

A workaround here is to create another Undertow handler as follows:

public class RequestURIHandler implements HttpHandler {
 
    private HttpHandler next;
 
    public RequestURIHandler(HttpHandler next) {
        this.next = next;
    }
 
    @Override
    public void handleRequest(final HttpServerExchange exchange) throws Exception {
 
        String requestURI = exchange.getRequestURI();
 
        next.handleRequest(exchange);
 
        exchange.setRequestURI(requestURI);
    }
}

This handler too has to be registered like we did for the JaspicHandler, but this time as an initialHandler.

Another difference (likely a bug) between JBossWeb and Undertow is that for a root deployment the former will write the JSESSIONID cookie with the path set to “/”. Undertow however leaves the path empty.

An empty path is however interpreted by the browser as being the requested URI, meaning that the JSESSIONID cookie is set on each path where the user happens to do something that creates a session. It doesn’t require much imagination to understand that this causes chaos in an application.

As it appears, the culprit here is the fact that APIs for getting the context root often return the empty string for the root deployment (instead of “/”), but the path name for other deployments (e.g. “/foo”). Undertow is not the first one to fall for this; Mojarra once had an identical bug with respect to setting a cookie for the Flash.

We can work around this issue by setting the path explicitly in web.xml as follows:

<session-config>
    <cookie-config>
        <path>/</path>
        <http-only>true</http-only>
        <secure/>
    </cookie-config>
    <tracking-mode>COOKIE</tracking-mode>
</session-config>

Valves

Tomcat (and thus JBossWeb) has a low level mechanism called a Valve, which is a kind of Filter like element but at a much lower level and with access to some of the internal server APIs.

Ideally an application wouldn’t have to resort to using these, but sometimes there’s no choice. For instance, a Filter cannot change the outcome of the Servlet pipeline; it’s already fully established when the Filter is called. A Filter can redirect or forward, but these mechanisms have various side effects. There’s also no mechanism in Servlet to intercept ALL cookies being written. By wrapping the HttpServletResponse you can catch a lot, but not those emitted by lower level components like the server generated JSESSIONID and cookies set by e.g. a SAM.

For zeef.com we had to resort to using a few of these. As Valves are highly specific to Tomcat it’s only logical that Undertow can’t support them. It does have another construct called HttpHandler, which we already used above for some workarounds.

Those handlers are quite powerful. There are ones that are called before the request pipeline is set up (initial handlers) and ones that execute during this pipeline (inner and outer handlers).

One of the things we used a Valve for in Tomcat/JBossWeb is universal cookie rewriting. Unfortunately Undertow doesn’t have a way to rewrite a cookie right away. There is an opportunity to have a kind of listener called right before the response is being written, but it’s a bit non-obvious.

Via a handler and some reflective code we can do this more directly, as shown by the following code:

public class CookieRewriteHandler implements HttpHandler {
 
    private Field responseCookiesField;
    private final HttpHandler next;
 
    public CookieRewriteHandler(HttpHandler next) {
 
        this.next = next;
 
        try {
            responseCookiesField = HttpServerExchange.class.getDeclaredField("responseCookies");
        } catch (NoSuchFieldException | SecurityException e) {
            throw new RuntimeException(e);
        }
        responseCookiesField.setAccessible(true);
    }
 
    @Override
    public void handleRequest(final HttpServerExchange exchange) throws Exception {
        // Overwrite the map used to store cookies internally so we can intercept
        // the cookies being written.
 
        // Alternatively: there's a wrapper handler called just before the response
        // is being written. We could use that to iterate over the cookie map.
        responseCookiesField.set(exchange, new HashMap<String, Cookie>() {
 
            private static final long serialVersionUID = 1L;
 
            @Override
            public Cookie put(String key, Cookie value) {
                // *****************************
                // rewrite cookie here as needed
                // *****************************
                return super.put(key, value);
            }
 
        });
 
        next.handleRequest(exchange);
    }
}

Of course using reflection to hack into the internals of a server is rarely a good idea and this handler is at risk of breaking with every minor update to Undertow.

Conclusion

During the initial attempt to get our application running on WildFly I encountered a fair number of issues. It certainly wasn’t a case of just deploying the app and having everything work.

Of course we do have to realize that the Java EE standard datasource and authentication mechanisms are unfortunately still not used that much, as most vendors keep documenting their proprietary mechanisms first and foremost. This likely causes users to use those most, which in turn may cause these standard ways to get fewer testing hours.

The Tomcat/Undertow differences initially looked major, but turned out to be just small bugs after all.

The Valves issue is debatable. It’s a major change in the JBoss product, but most Java EE applications should maybe not have used Valves in the first place. Purely from the point of view of the Java EE spec it doesn’t matter that Red Hat changed this, but looking from the point of view of JBoss itself it does matter, and people have little choice but to rewrite their code if they want to upgrade.

Finally we have to realize that WildFly from a certain point of view is like an open beta. It has freely downloadable binaries, but the product is early in its lifecycle and there’s no commercial support available for it yet. When the WildFly branch transitions to JBoss EAP 7 many bugs that the community discovers now will undoubtedly have been fixed and commercial support will be available then. In a way this is the price we pay for a free product such as this. As David Blevins wrote “Open Source Isn’t Free”. Users “pay” by testing the software and producing bug reports and perhaps patches, which IMHO is a pretty good deal.

At any rate, there’s the eternal tradeoff to be made: adopt the new Java EE 7 spec early with WildFly 8 but be prepared to run into some issues, or wait a good deal longer for JBoss EAP 7 but then have a much more stable product to begin with. This is of course a choice everyone has to make for themselves.



Arjan Tijms

OmniFaces showcase and OpenShift app management

31 March 2014

At ZEEF.com we’re using OmniFaces pretty intensively (eating our own dog food).

We’re hosting an application that showcases a lot of OmniFaces’ features at OpenShift.

Mostly we’re very happy with it. Among its many environments OpenShift offers a JBoss EAP 6.x server that’s very regularly updated. JBoss EAP 6.x is Red Hat’s Java EE 6 implementation that has received many bug fixes over the years, so it’s rather stable at the moment. And even though Red Hat has a Java EE 7 implementation out (WildFly 8), the Java EE 6 server keeps getting bug fixes to make it even more stable.

Yesterday however both nodes on which we have our showcase app deployed appeared to be suddenly down. An attempt to restart the app via the web console didn’t do anything; it just sat there for a long time and then eventually said there was a technical problem, without providing any further details. This is unfortunately one of the downsides of OpenShift. It’s a great platform, but the web console clearly lags behind.

We then tried to log into our primary gear using ssh [number]@[our app name].rhcloud.com. This worked, however the JBoss instances are not running on this primary gear but on two other gears. We tried the “ctl_all stop” and “ctl_all start” commands, but these only seemed to restart the cartridges (ha-proxy and a by default disabled JBoss) on the gear where we were logged in, not on the other ones.

The next step was trying to log into those other gears. There is unfortunately little information available on what the exact address of those gears is. There used to be a document up at https://www.openshift.com/faq/can-i-access-my-applications-gear, but for some reason it has been taken down. Vaguely remembering that the URL address of the other gears is based on what [app url]/haproxy-status lists, we tried to ssh to that from the primary gear but “nothing happened”. It looked like the ssh command was broken. ssh’ing into foo (ssh foo) also resulted in nothing happening.

With the help of the kind people from OpenShift at the IRC channel it was discovered that ssh on the openshift gear is just silent by default. With the -v option you do get the normal response. Furthermore, when you install the rhc client tools locally you can use the following command to list the URL addresses of all your gears:

rhc app show [app] --gears

This returns the following:

ID         State    Cartridges               Size   SSH URL
[number1]  started  jbosseap-6 haproxy-1.4   small  [number1]@[app]-[domain].rhcloud.com
[number2]  started  jbosseap-6 haproxy-1.4   small  [number2]@[number2]-[app]-[domain].rhcloud.com
[number3]  started  jbosseap-6 haproxy-1.4   small  [number3]@[number3]-[app]-[domain].rhcloud.com

We can now ssh into the other gears using the [numberX]@[numberX]-[app]-[domain].rhcloud.com pattern, e.g.

ssh 12ab34cd....xy@12ab34cd....xy-myapp-mydomain.rhcloud.com

In our particular case, on the gear identified by [number2] the file system was completely full. Simply deleting the log files from /jbosseap/logs fixed the problem. After that we can use the gear command to stop and start the JBoss instance (ctl_all and ctl_app seem to be deprecated):

gear stop
gear start

And lo and behold, the gear came back to life. After doing the same for the [number3] gear, both nodes were up and running again and requests to our app were serviced as normal.

One thing that we also discovered was that by default OpenShift installs and starts a JBoss instance on the gear that hosts the proxy, but, for a reason that probably only the proverbial engineer who left long ago knows, no traffic is routed to that JBoss instance.

In the ./haproxy/conf directory there’s a configuration file with among others the following content:

server gear-[number2]-[app] ex-std-node[node1].prod.rhcloud.com:[port1] check fall 2 rise 3 inter 2000 cookie [number2]-[app]
server gear-[number3]-[app] ex-std-node[node2].prod.rhcloud.com:[port2] check fall 2 rise 3 inter 2000 cookie [number3]-[app]
server local-gear [localip]:8080 check fall 2 rise 3 inter 2000 cookie local-[number1] disabled

As can be seen, there’s a disabled marker after the local-gear entry. Simply removing it and stopping/starting or restarting the gear will start routing requests to this gear as well.

Furthermore we see that the gear’s SSH URL can indeed be derived from the number that we see in the configuration and output of haproxy. The above [number2] is exactly the same number as in the output from rhc app show showcase --gears.

This all took quite some time to figure out. How could OpenShift have done better here?

  • Not take down crucial documentation such as https://www.openshift.com/faq/can-i-access-my-applications-gear.
  • List all gear URLs in the web console when the application is scaled, not just the primary one.
  • Implement a restart in the web console that actually works, and when a failure occurs gives back a clear error message.
  • Have a restart per gear in the web console.
  • List critical error conditions per gear in the web console. In this case “disk full” or “quota exceeded” seems like a common enough condition that the UI could have picked this up.
  • Have a delete logs (or tidy) command in the web console that can be executed for all gears or for a single gear.
  • Don’t have ssh on the gear in super silent mode.
  • Have the RHC tools installed on the server. It’s weird that you can see and do more from the client than when logged-in to the server itself.

All in all OpenShift is still a very impressive system that lets you deploy completely standard Java EE 6 archives to a very stable (EAP) version of JBoss, but when something goes wrong it can be frustrating to deal with the issue. The client tools are pretty advanced, but the tools that are installed on the gear itself and the web console are not there yet.

Arjan Tijms

How to build and run the Mojarra automated tests (2014 update)

30 March 2014

At zeef.com we depend a lot on JSF (see here for details) and occasionally have the need to patch Mojarra.

Mojarra comes with over 8000 tests, but as we explained in a previous article, building it and running those tests is not entirely trivial. It’s not that difficult though if you know the steps, but the many outdated readmes and the many folders in the project can make these steps hard to find.

Since the previous article some things have changed, so we’ll provide an update here.

Currently the Mojarra project is in a migration state. Manfred Riem is working on moving the complicated and ancient ANT based tests to a more modern Maven based setup. For the moment this is a bit of an extra burden as there are now two distinct test folders, though eventually there should be only one. Since the migration is in full swing things can also still change often. The instructions below are valid for at least JSF 2.2.5 till 2.2.7-SNAPSHOT.

We’ll first create a separate directory for our build and download a fresh version of GlassFish 4 that we’ll use for running the tests. From e.g. your home directory execute the following:

mkdir mtest
cd mtest
wget http://download.java.net/glassfish/4.0/release/glassfish-4.0.zip
unzip glassfish-4.0.zip

Note that unlike in the 2012 instructions, it’s no longer needed to set an explicit password; the default “empty” password now works correctly. The readme in the project still says you need to install with a password, but this is no longer the case.

Next we’ll check out the Mojarra 2.2 “trunk”. Note here that the real trunk is dormant and all the action happens in a branch called “MOJARRA_2_2X_ROLLING”. Unfortunately Mojarra still uses SVN, but it is what it is. We’ll use the following commands:

svn co https://svn.java.net/svn/mojarra~svn/branches/MOJARRA_2_2X_ROLLING/
cd MOJARRA_2_2X_ROLLING
cp build.properties.glassfish build.properties

We now need to edit build.properties and set the following values:

jsf.build.home=[source home]
container.name=glassfishV3.1_no_cluster
container.home=[glassfish home]/glassfish
halt.on.failure=no

[source home] is the current directory where we just cd’ed into (e.g. /home/your_user/mtest/MOJARRA_2_2X_ROLLING), while [glassfish home] is the directory that was extracted from the archive (e.g. /home/your_user/mtest/glassfish4/).
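
With the example paths above, the resulting build.properties would thus contain something like the following (your actual paths will of course differ):

jsf.build.home=/home/your_user/mtest/MOJARRA_2_2X_ROLLING
container.name=glassfishV3.1_no_cluster
container.home=/home/your_user/mtest/glassfish4/glassfish
halt.on.failure=no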

If your OS supports all the following commands (e.g. Ubuntu does) you can also execute:

sed -i "s:<SET CURRENT DIRECTORY>:$(pwd):g" build.properties
sed -i "s:container.name=glassfish:container.name=glassfishV3.1_no_cluster:g" build.properties
sed -i "s:container.home=:container.home=$(readlink -f ../glassfish4/):g" build.properties

We’re now going to invoke the actual build. Unfortunately there still is a weird dependency between the main build task and the clean task, so the first time we can only execute “main” here. If the build needs to be done a subsequent time we can do “clean main” then. For now execute the following:

ant main

We can then run the ANT tests as follows:

export ANT_OPTS='-Xms512m -Xmx786m -XX:MaxPermSize=786m'
ant test.with.container.refresh

Just like the previous time, there are always a number of ANT tests that are already failing. Whether the “trunk” of Mojarra simply has failing tests all the time, or whether it’s system dependent, is something we didn’t investigate. Fact is however that after some three years of periodically building Mojarra and running its tests, on various different systems (Ubuntu, Debian, OS X), we’ve never seen it happen that out of the box all tests passed. In the current (March 28, 2014) 2.2.7-SNAPSHOT branch the following tests failed out of the box:

  1. jsf-ri/systest/src/com/sun/faces/composite/CompositeComponentsTestCase.java#testCompositeComponentResolutionWithinRelocatableResources
  2. jsf-ri/systest/src/com/sun/faces/facelets/FaceletsTestCase.java#FaceletsTestCase#testForEach
  3. jsf-ri/systest/src/com/sun/faces/facelets/ImplicitFacetTestCase.java#testConditionalImplicitFacetChild1727
  4. jsf-ri/systest/src/com/sun/faces/systest/DataTableTestCase.java#testTablesWithEmptyBody
  5. jsf-ri/systest/src/com/sun/faces/jsptest/ConverterTestCase.java#testConverterMessages
  6. jsf-test/JAVASERVERFACES-2113/i_mojarra_2113_htmlunit/src/main/java/com/sun/faces/regression/i_mojarra_2113/Issue2113TestCase.java#testBasicAppFunctionality

So if you want to test the impact of your own changes, be sure to run the tests before making those changes to get an idea of which tests are already failing on your system and then simply comment them out.

The ANT tests execute rather slowly. On the 3.2Ghz/16GB/SSD machine we used they took some 20 minutes.

The Maven tests are in a separate directory and contain only tests. To give those tests access to the Mojarra artifact we just built we need to install it in our local .m2 repo:

ant mvn.deploy.snapshot.local

(if we use this method on a build server we may want to use separate users for each test. Otherwise parallel builds may conflict since the .m2 repo is global to the user running the tests)

We now cd into the test directory and start by executing the “clean” and “install” goals:

cd test
mvn clean install

After the clean install we have to tell Maven about the location of our GlassFish server. This can be done via a settings.xml file or by replacing every occurrence of "C:/Glassfish3.1.2.2" in the pom.xml that’s in the root of the folder we just cd’ed into. A command to do the latter is:

sed -i "s#C:/Glassfish3.1.2.2#$(readlink -f ../../glassfish4/)#g" pom.xml

The test directory contains several folders with tests for different situations. Since JSF 2.2 can run on both Servlet 3.0 and Servlet 3.1 containers there’s a separate Servlet 3.1 folder with tests specific to that. It’s however not clear why there still is a Servlet 3.0 folder (probably a left-over from JSF 2.0/2.1). The most important test folder is the “agnostic” one. This runs on any server and should even run with every JSF implementation (e.g. it should run on MyFaces 2.2 as well).

The following commands are used to execute them:

cd agnostic/
../bin/test-glassfish-default.sh

The Maven tests run rather fast and should be finished in some 3 to 4 minutes. Instead of modifying the pom and invoking the .sh script we can also run Maven directly via a command like the following:

mvn -Dintegration.container.home=/home/your_user/mtest/glassfish4/ -Pintegration-failsafe,integration-glassfish-cargo clean verify

(replace “/home/your_user/mtest/glassfish4/” with the actual location of glassfish on your system)

The difference is that the script is a great deal faster. It does this by calling Maven 6 times with different goals. This will cause work to be done in advance for all tests instead of for each test over and over again. The fact that this needs to be done via a script instead of directly via Maven is maybe indicative of a weakness in Maven. Although understanding the script is not needed for the build and running the tests, I found it interesting enough to take a deeper look at it.

The pom uses a maven plug-in for the little known cargo project. Cargo is a kind of competitor for the much wider known Arquillian. Just as its more popular peer it can start and stop a large variety of Java EE containers and deploy application archives to those. Cargo has existed for much longer than Arquillian and is still actively developed. It supports ancient servers such as Tomcat 4 and JBoss 3, as well as the very latest crop like Tomcat 8 and WildFly 8.

The 6 separate invocations are the following:

  1. Copy a Mojarra artifact (javax.faces.jar) from the Maven repo to the GlassFish internal modules directory (profile integration-glassfish-prepare)
  2. Clean the project, compile and then install all tests (as war archives) in the local Maven repo (no explicit profile)
  3. Start GlassFish (profile integration-glassfish-cargo, goal cargo:start)
  4. Deploy all previously built war archives in one go to GlassFish (profile integration-glassfish-cargo, goal cargo:redeploy)
  5. Run the actual tests. These will do HTTP requests via HTML Unit to the GlassFish instance that was prepared in the previous steps (profile integration-failsafe, goal verify)
  6. Finally stop the container again (profile integration-glassfish-cargo, goal cargo:stop)

As said previously, the project is in migration status and things still change frequently. In the 2.2.7 “trunk” an additional glassfish-cargo profile appeared that’s basically a copy of the existing integration-glassfish-cargo, but without the embedded and unused copy goal (which we’d seen above was part of the integration-glassfish-prepare profile). There’s also a new glassfish-copy-mojarra-1jar goal that’s a copy of the integration-glassfish-prepare profile with some parametrized configuration items replaced by constants, etc.

With the constant change going on, documenting the build and test procedure is difficult, but hopefully the instructions presented in this article are up to date enough for the moment.

Arjan Tijms

Java 7 one-liner to read file into string

24 March 2014

Reading in a file in Java used to require a lot of code. Various things had to be wrapped, loops with weird terminating conditions had to be specified, and so forth.

In Java 7 we can do a lot better. The actual code to do the reading is just:

String content = new String(readAllBytes(get("test.txt")));

As a full program that echoes back the file’s content it looks like this:

import static java.lang.System.out;
import static java.nio.file.Files.readAllBytes;
import static java.nio.file.Paths.get;
 
public class Test {
    public static void main(String[] args) throws Exception {
	out.println(new String(readAllBytes(get("test.txt"))));
    }
}

Of course if we want to be careful that we don’t load a few gigabytes into memory, and if we want to pay attention to the character set (it’s the platform default now), we need a little more code, but for quick and dirty file reading this should do the trick.
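
For instance, taking care of the character set is just a matter of passing it to the String constructor; a minimal variant assuming a UTF-8 encoded file (this additionally needs an import of java.nio.charset.StandardCharsets):

String content = new String(readAllBytes(get("test.txt")), StandardCharsets.UTF_8);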

As a bonus, a version in Scala using the same JDK 7 APIs contributed by my fellow office worker Mark van der Tol:

import java.nio.file.Files.readAllBytes
import java.nio.file.Paths.get
 
object Main extends App {
    println(new String(readAllBytes(get("test.txt"))))
}

Arjan Tijms
