Archive for March, 2014

OmniFaces showcase and OpenShift app management

31 March 2014

At ZEEF.com we’re using OmniFaces pretty intensively (eating our own dog food).

We’re hosting an application on OpenShift that showcases a lot of OmniFaces’ features.

Mostly we’re very happy with it. Among its many environments OpenShift offers a JBoss EAP 6.x server that’s updated very regularly. JBoss EAP 6.x is Red Hat’s Java EE 6 implementation, which has received many bug fixes over the years, so it’s rather stable at the moment. And even though Red Hat has a Java EE 7 implementation out (WildFly 8), the Java EE 6 server keeps getting bug fixes to make it even more stable.

Yesterday, however, both of the nodes on which we have our showcase app deployed appeared to be suddenly down. An attempt to restart the app via the web console didn’t do anything: it just sits there for a long time and then eventually says there is a technical problem, without providing any further details. This is unfortunately one of the downsides of OpenShift. It’s a great platform, but the web console clearly lags behind.

We then tried to log into our primary gear using ssh [number]@[our app name].rhcloud.com. This worked; however, the JBoss instances are not running on this primary gear but on two other gears. We tried the “ctl_all stop” and “ctl_all start” commands, but these only seemed to restart the cartridges (ha-proxy and a by default disabled JBoss) on the gear where we were logged in, not on the other ones.
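
For reference, the sequence we tried looked like this (gear address schematic, as above):

ssh [number]@[our app name].rhcloud.com
ctl_all stop
ctl_all start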

Next step was trying to log into those other gears. There is unfortunately little information available on what the exact addresses of those gears are. There used to be a document up at https://www.openshift.com/faq/can-i-access-my-applications-gear, but for some reason it has been taken down. Vaguely remembering that the URL addresses of the other gears are based on what [app url]/haproxy-status lists, we tried to ssh to them from the primary gear, but “nothing happened”. It looked like the ssh command was broken. ssh’ing into foo (ssh foo) also resulted in nothing happening.

With the help of the kind people from OpenShift on the IRC channel it was discovered that ssh on an OpenShift gear is just silent by default; with the -v option you do get the normal response. Furthermore, when you install the rhc client tools locally you can use the following command to list the URL addresses of all your gears:

rhc app show [app] --gears

This returns the following:

ID State Cartridges Size SSH URL
[number1] started jbosseap-6 haproxy-1.4 small [number1]@[app]-[domain].rhcloud.com
[number2] started jbosseap-6 haproxy-1.4 small [number2]@[number2]-[app]-[domain].rhcloud.com
[number3] started jbosseap-6 haproxy-1.4 small [number3]@[number3]-[app]-[domain].rhcloud.com

We can now ssh into the other gears using the [numberX]@[numberX]-[app]-[domain].rhcloud.com pattern, e.g.

ssh 12ab34cd....xy@12ab34cd....xy-myapp-mydomain.rhcloud.com

In our particular case, on the gear identified by [number2] the file system was completely full. Simply deleting the log files from /jbosseap/logs fixed the problem. After that we could use the gear command to stop and start the JBoss instance (ctl_all and ctl_app seem to be deprecated):

gear stop
gear start

And lo and behold, the gear came back to life. After doing the same for the [number3] gear, both nodes were up and running again and requests to our app were serviced as normal.
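
For future reference, spotting and fixing this condition from a gear’s shell only needs standard Linux tools (the log path below is the one from our gear; it may differ per cartridge):

# show how full the gear's file system is
df -h ~
# see how much space the JBoss logs take up
du -sh ~/jbosseap/logs
# delete the logs if they are the culprit
rm ~/jbosseap/logs/*.log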

One thing we also discovered is that by default OpenShift installs and starts a JBoss instance on the gear that hosts the proxy, but, for some reason that probably only that one proverbial engineer who left long ago knows, no traffic is routed to that JBoss instance.

In the ./haproxy/conf directory there’s a configuration file with, among other things, the following content:

server gear-[number2]-[app] ex-std-node[node1].prod.rhcloud.com:[port1] check fall 2 rise 3 inter 2000 cookie [number2]-[app]
server gear-[number3]-[app] ex-std-node[node2].prod.rhcloud.com:[port2] check fall 2 rise 3 inter 2000 cookie [number3]-[app]
server local-gear [localip]:8080 check fall 2 rise 3 inter 2000 cookie local-[number1] disabled

As can be seen, there’s a disabled marker after the local-gear entry. Simply removing it and stopping/starting or restarting the gear will start routing requests to this gear as well.
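
With the marker removed, the local-gear entry looks as follows:

server local-gear [localip]:8080 check fall 2 rise 3 inter 2000 cookie local-[number1]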

Furthermore, we see that a gear’s SSH URL can indeed be derived from the number that appears in the configuration and output of haproxy: the [number2] above is exactly the same number as in the output from rhc app show showcase --gears.

This all took quite some time to figure out. How could OpenShift have done better here?

  • Not take down crucial documentation such as https://www.openshift.com/faq/can-i-access-my-applications-gear.
  • List all gear URLs in the web console when the application is scaled, not just the primary one.
  • Implement a restart in the web console that actually works, and when a failure occurs gives back a clear error message.
  • Have a restart per gear in the web console.
  • List critical error conditions per gear in the web console. In this case “disk full” or “quota exceeded” seems like a common enough condition for the UI to pick up.
  • Have a delete logs (or tidy) command in the web console that can be executed for all gears or for a single gear.
  • Don’t have ssh on the gear in super silent mode.
  • Have the RHC tools installed on the server. It’s weird that you can see and do more from the client than when logged-in to the server itself.

All in all OpenShift is still a very impressive system that lets you deploy completely standard Java EE 6 archives to a very stable (EAP) version of JBoss, but when something goes wrong it can be frustrating to deal with the issue. The client tools are pretty advanced, but the tools that are installed on the gear itself and the web console are not there yet.

Arjan Tijms

How to build and run the Mojarra automated tests (2014 update)

30 March 2014

At zeef.com we depend a lot on JSF (see here for details) and occasionally have the need to patch Mojarra.

Mojarra comes with over 8000 tests, but as we explained in a previous article, building it and running those tests is not entirely trivial. It’s not that difficult once you know the steps, but the many outdated readmes and the many folders in the project can make those steps hard to find.

Since the previous article some things have changed, so we’ll provide an update here.

Currently the Mojarra project is in the middle of a migration. Manfred Riem is working on moving the complicated and ancient ANT based tests to a more modern Maven based setup. For the moment this is a bit of an extra burden, as there are now two distinct test folders, though eventually there should be only one. Since the migration is in full swing, things can still change often. The instructions below are valid for at least JSF 2.2.5 through 2.2.7-SNAPSHOT.

We’ll first create a separate directory for our build and download a fresh version of GlassFish 4 that we’ll use for running the tests. From e.g. your home directory execute the following:

mkdir mtest
cd mtest
wget http://download.java.net/glassfish/4.0/release/glassfish-4.0.zip
unzip glassfish-4.0.zip

Note that unlike in the 2012 instructions it’s no longer needed to set an explicit password; the default “empty” password now works correctly. The readme in the project still says you need to install with a password, but this is no longer the case.

Next we’ll check out the Mojarra 2.2 “trunk”. Note that the real trunk is dormant and all the action happens in a branch called “MOJARRA_2_2X_ROLLING”. Unfortunately Mojarra still uses SVN, but it is what it is. We’ll use the following commands:

svn co https://svn.java.net/svn/mojarra~svn/branches/MOJARRA_2_2X_ROLLING/
cd MOJARRA_2_2X_ROLLING
cp build.properties.glassfish build.properties

We now need to edit build.properties and set the following values:

jsf.build.home=[source home]
container.name=glassfishV3.1_no_cluster
container.home=[glassfish home]/glassfish
halt.on.failure=no

[source home] is the current directory where we just cd’ed into (e.g. /home/your_user/mtest/MOJARRA_2_2X_ROLLING), while [glassfish home] is the directory that was extracted from the archive (e.g. /home/your_user/mtest/glassfish4/).
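
With the example paths above filled in, build.properties would thus contain:

jsf.build.home=/home/your_user/mtest/MOJARRA_2_2X_ROLLING
container.name=glassfishV3.1_no_cluster
container.home=/home/your_user/mtest/glassfish4/glassfish
halt.on.failure=no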

If your OS supports all the following commands (e.g. Ubuntu does) you can also execute:

sed -i "s:<SET CURRENT DIRECTORY>:$(pwd):g" build.properties
sed -i "s:container.name=glassfish:container.name=glassfishV3.1_no_cluster:g" build.properties
sed -i "s:container.home=:container.home=$(readlink -f ../glassfish4/):g" build.properties

We’re now going to invoke the actual build. Unfortunately there still is a weird dependency between the main build task and the clean task, so the first time we can only execute “main” here (the command for subsequent builds is shown right after). For now execute the following:

ant main
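
If the build needs to be done a subsequent time we can do:

ant clean main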

We can then run the ANT tests as follows:

export ANT_OPTS='-Xms512m -Xmx786m -XX:MaxPermSize=786m'
ant test.with.container.refresh

Just like the previous time, there are always a number of ANT tests that are already failing. Whether the “trunk” of Mojarra simply has failing tests all the time, or whether it’s system dependent, is something we didn’t investigate. Fact is, however, that after some three years of periodically building Mojarra and running its tests on various systems (Ubuntu, Debian, OS X), we’ve never seen all tests pass out of the box. In the current (March 28, 2014) 2.2.7-SNAPSHOT branch the following tests failed out of the box:

  1. jsf-ri/systest/src/com/sun/faces/composite/CompositeComponentsTestCase.java#testCompositeComponentResolutionWithinRelocatableResources
  2. jsf-ri/systest/src/com/sun/faces/facelets/FaceletsTestCase.java#testForEach
  3. jsf-ri/systest/src/com/sun/faces/facelets/ImplicitFacetTestCase.java#testConditionalImplicitFacetChild1727
  4. jsf-ri/systest/src/com/sun/faces/systest/DataTableTestCase.java#testTablesWithEmptyBody
  5. jsf-ri/systest/src/com/sun/faces/jsptest/ConverterTestCase.java#testConverterMessages
  6. jsf-test/JAVASERVERFACES-2113/i_mojarra_2113_htmlunit/src/main/java/com/sun/faces/regression/i_mojarra_2113/Issue2113TestCase.java#testBasicAppFunctionality

So if you want to test the impact of your own changes, be sure to run the tests before making those changes to get an idea of which tests are already failing on your system and then simply comment them out.

The ANT tests execute rather slowly. On the 3.2GHz/16GB/SSD machine we used they took some 20 minutes.

The Maven tests are in a separate directory and contain only tests. To give those tests access to the Mojarra artifact we just built, we need to install it in our local .m2 repo:

ant mvn.deploy.snapshot.local

(If we use this method on a build server we may want to use separate users for each build, since the .m2 repo is global to the user running the tests and parallel builds may otherwise conflict.)

We now cd into the test directory and start by executing the “clean” and “install” goals:

cd test
mvn clean install

After the clean install we have to tell Maven about the location of our GlassFish server. This can be done via a settings.xml file or by replacing every occurrence of “C:/Glassfish3.1.2.2” in the pom.xml that’s in the root of the folder we just cd’ed into. A command to do the latter is:

sed -i "s#C:/Glassfish3.1.2.2#$(readlink -f ../../glassfish4/)#g" pom.xml

The test directory contains several folders with tests for different situations. Since JSF 2.2 can run on both Servlet 3.0 and Servlet 3.1 containers, there’s a separate Servlet 3.1 folder with tests specific to that. It’s not clear, however, why there still is a Servlet 3.0 folder (probably a left-over from JSF 2.0/2.1). The most important test folder is the “agnostic” one. This runs on any server and should even run with every JSF implementation (e.g. it should run on MyFaces 2.2 as well).

The following commands are used to execute them:

cd agnostic/
../bin/test-glassfish-default.sh

The Maven tests run rather fast and should be finished in some 3 to 4 minutes. Instead of modifying the pom and invoking the .sh script we can also run Maven directly via a command like the following:

mvn -Dintegration.container.home=/home/your_user/mtest/glassfish4/ -Pintegration-failsafe,integration-glassfish-cargo clean verify

(replace “/home/your_user/mtest/glassfish4/” with the actual location of glassfish on your system)

The difference is that the script is a great deal faster. It achieves this by calling Maven 6 times with different goals, which causes work to be done in advance for all tests instead of over and over again for each test. The fact that this needs to be done via a script instead of directly via Maven is maybe indicative of a weakness in Maven. Although understanding the script is not needed for the build and running the tests, I found it interesting enough to take a deeper look at it.

The pom uses a maven plug-in for the little known cargo project. Cargo is a kind of competitor for the much wider known Arquillian. Just as its more popular peer it can start and stop a large variety of Java EE containers and deploy application archives to those. Cargo has existed for much longer than Arquillian and is still actively developed. It supports ancient servers such as Tomcat 4 and JBoss 3, as well as the very latest crop like Tomcat 8 and WildFly 8.

The 6 separate invocations are the following (a schematic reconstruction in Maven terms is shown after the list):

  1. Copy a Mojarra artifact (javax.faces.jar) from the Maven repo to the GlassFish internal modules directory (profile integration-glassfish-prepare)
  2. Clean the project, compile and then install all tests (as war archives) in the local Maven repo (no explicit profile)
  3. Start GlassFish (profile integration-glassfish-cargo, goal cargo:start)
  4. Deploy all previously built war archives in one go to GlassFish (profile integration-glassfish-cargo, goal cargo:redeploy)
  5. Run the actual tests. These will do HTTP requests via HTML Unit to the GlassFish instance that was prepared in the previous steps (profile integration-failsafe, goal verify)
  6. Finally stop the container again (profile integration-glassfish-cargo, goal cargo:stop)
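
Schematically these come down to something like the following Maven commands (a sketch based on the profile and goal names listed above; the exact goal bound to the prepare profile isn’t shown in the list):

# 1. profile integration-glassfish-prepare: copies javax.faces.jar into GlassFish
# 2. build and install all test wars into the local repo
mvn clean install
# 3. start GlassFish
mvn -Pintegration-glassfish-cargo cargo:start
# 4. deploy all previously built wars in one go
mvn -Pintegration-glassfish-cargo cargo:redeploy
# 5. run the actual tests against the running server
mvn -Pintegration-failsafe verify
# 6. stop GlassFish
mvn -Pintegration-glassfish-cargo cargo:stop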

As said previously, the project is in migration and things still change frequently. In the 2.2.7 “trunk” an additional glassfish-cargo profile appeared that’s basically a copy of the existing integration-glassfish-cargo profile, but without the embedded and unused copy goal (which, as we saw above, was part of the integration-glassfish-prepare profile). There’s also a new glassfish-copy-mojarra-1jar profile that’s a copy of the integration-glassfish-prepare profile with some parametrized configuration items replaced by constants, etc.

With the constant change going on, documenting the build and test procedure is difficult, but hopefully the instructions presented in this article are up to date enough for the moment.

Arjan Tijms

Java 7 one-liner to read file into string

24 March 2014

Reading in a file in Java used to require a lot of code. Various things had to be wrapped, loops with weird terminating conditions had to be specified, and so forth.
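
For comparison, a typical pre-Java 7 version looked roughly like this (a sketch, one of many variants; imports for FileInputStream, InputStreamReader and Reader omitted):

// Java 6 style: manual stream handling and a read loop
StringBuilder builder = new StringBuilder();
Reader reader = new InputStreamReader(new FileInputStream("test.txt"), "UTF-8");
try {
    char[] buffer = new char[8192];
    int read;
    // keep reading until the stream is exhausted
    while ((read = reader.read(buffer)) != -1) {
        builder.append(buffer, 0, read);
    }
} finally {
    reader.close();
}
String content = builder.toString();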

In Java 7 we can do a lot better. The actual code to do the reading is just:

String content = new String(readAllBytes(get("test.txt")));

As a full program that echoes back the file’s content it looks like this:

import static java.lang.System.out;
import static java.nio.file.Files.readAllBytes;
import static java.nio.file.Paths.get;
 
public class Test {
    public static void main(String[] args) throws Exception {
        out.println(new String(readAllBytes(get("test.txt"))));
    }
}

Of course if we want to be careful that we don’t load a few gigabytes into memory, and if we want to pay attention to the character set (it’s the platform default now), we need a little more code, but for quick and dirty file reading this should do the trick.
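
For instance, passing an explicit character set is just a second argument to the String constructor, using the StandardCharsets class that is also new in Java 7:

import static java.nio.charset.StandardCharsets.UTF_8;

String content = new String(readAllBytes(get("test.txt")), UTF_8);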

As a bonus, a version in Scala using the same JDK 7 APIs contributed by my fellow office worker Mark van der Tol:

import java.nio.file.Files.readAllBytes
import java.nio.file.Paths.get
 
object Main extends App {
    println(new String(readAllBytes(get("test.txt"))))
}

Arjan Tijms
