WildFly 8 benchmarked

14 February 2014, by: Jan Beernink

The final version of WildFly 8 was released this week. WildFly is the new Java EE 7 compliant application server from Red Hat and is the successor to JBoss AS 7. One of the major new features in WildFly is a new high-performance web server called Undertow, which replaces the Tomcat server used in previous versions of JBoss. As we’ve recently been benchmarking a new application, I was curious as to how WildFly 8 would perform. To find out, I decided to benchmark WildFly using this application and compare it against the latest version of JBoss EAP, version 6.2.

The application used for the benchmark was a simple JSF-based app. For each request, a JSF Facelets template, which pulls some data from a backing bean, is rendered on the fly. The backing bean in turn retrieves the data from a local cache, which is backed by a RESTful API and refreshed periodically. The refresh happens asynchronously, so as not to block any user requests. To improve performance, HTTP sessions were explicitly disabled for this application.
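The article doesn’t show the caching code, but the pattern described above (requests read from a local cache that a background task refreshes asynchronously) could be sketched roughly like this; all names here are hypothetical, and the REST call is stubbed out:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical sketch: user requests read the latest snapshot without
// blocking, while a single background task periodically replaces it.
public class DataCache {

    private final AtomicReference<String> snapshot = new AtomicReference<>("initial");
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void start() {
        // Refresh immediately, then every 60 seconds.
        scheduler.scheduleAtFixedRate(this::refresh, 0, 60, TimeUnit.SECONDS);
    }

    public void stop() {
        scheduler.shutdownNow();
    }

    private void refresh() {
        try {
            // Swap in the new data atomically; readers never see a partial update.
            snapshot.set(fetchFromRestApi());
        } catch (RuntimeException e) {
            // On failure, keep serving the old (stale) snapshot rather than
            // failing user requests.
        }
    }

    // Stand-in for the RESTful API call the article mentions.
    private String fetchFromRestApi() {
        return "fresh data";
    }

    // Called by the backing bean on each request; never blocks on the API.
    public String get() {
        return snapshot.get();
    }
}
```

The key point is that the request path only ever touches the `AtomicReference`, so a slow or failing backend call cannot stall rendering.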

JSF’s stateless mode was activated as well. Although the JSF page that was rendered did not contain any forms (and thus should not have any state to begin with), this did in fact seem to give a small performance boost. However, the boost was so small that it fell within the fluctuation range we saw between runs, and it’s therefore hard to say whether it really mattered.
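For reference, JSF 2.2’s stateless mode is enabled per view by marking the view transient in the Facelets template; a minimal sketch (the actual page used in the benchmark is not shown here):

```xml
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:f="http://xmlns.jcp.org/jsf/core">
    <f:view transient="true">
        <!-- page content; no view state is saved or restored for this view -->
    </f:view>
</html>
```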

JMeter was used for the benchmark itself. The application and JMeter were both run on the same computer: a 3.4 GHz quad-core Intel Xeon with 16 GB of RAM running Linux Mint 16. As the first release candidate of JDK8 was released last week, I decided to use both JDK7u45 and JDK8b128 in the benchmarks. Both JBoss EAP 6.2 and WildFly 8 were used out of the box; nothing was changed in standalone.xml or any other internal configuration file.

The benchmark itself was performed with 100 concurrent threads, each performing 2000 requests, for a total of 200,000 requests per run. For each application server and JDK version, four test runs were performed directly after one another. The results from the first run were discarded, as the JVM was still warming up, and the throughput in requests per second was averaged over the remaining three runs. You can see the average throughput below.

WildFly benchmark average throughput

These averages, however, do not paint the full picture. A closer look shows that the individual JBoss EAP runs fluctuate a lot more than the WildFly runs.

Throughput

JBoss EAP seems to perform best on the second test run in both cases, but this could be a coincidence. What is clear is that the WildFly team has done a great job in creating an application server that, while it might not be outright faster, achieves a similar level of performance with a greater level of consistency. For both JBoss EAP and WildFly, the JDK8 results still fall within the standard deviation of the JDK7 results, so JDK8 also seems to perform on a similar level to JDK7. It would be interesting to see how other application servers, like GlassFish, hold up against JBoss EAP and WildFly, so I may revisit this topic sometime soon.

4 comments to “WildFly 8 benchmarked”

  1. Tomaz Cerar says:

    What were the configuration options for both servers?
    Did you just use defaults?

    Can you try testing with this io subsystem configuration in standalone.xml for WildFly:

    http://fpaste.org/77436/

    Just a side note: with that setup, what you are actually comparing is the speed of the JSF framework, not really the speed of the servlet container…

  2. Arjan Tijms says:

    Tomaz, even though I wasn’t directly involved with the test (Jan did all the work), I do know that both servers used the defaults. This has been added to the text now. Thanks for the remark 😉

    Also interesting to note may be that the servers were started via Eclipse using JBoss Tools (non-debug mode). Still left to mention, I guess, are the exact memory settings that Jan used (-Xmx etc.).

    The intent of the test was to test the entire server, and indeed not only the Servlet container. It’s just that the Servlet container had a huge change between the two versions of the JBoss servers that were tested. JSF changed as well (Mojarra 2.1 vs 2.2), but a lot of the implementation changes that could matter for performance were backported to Mojarra 2.1. So my careful guess would be that this should not have such a big influence on performance levels.

  3. Stuart Douglas says:

    Is the source for this benchmark available anywhere?

  4. Enterprise Java Newscast - Episode 31 - Feb 2016 - Home - knowesis says:

    […] http://jdevelopment.nl/wildfly-8-benchmarked/ […]

