I was curious after reading about OpenJ9, a JVM promising high performance and a low memory footprint. I was working on a project whose environment consisted of around 30 Java applications: a few running on Tomcat, a few on WebLogic, and most on the Dropwizard microservice framework. A desirable goal for every developer was to start the complete platform on a local notebook, and therefore a VirtualBox image was built using Vagrant. With so many applications, each microservice consuming around 250 MB of RAM, and the number of services growing, we had already hit 24 GB for the VirtualBox image. I found the blog post https://codeburst.io/microservices-in-java-never-a7f3a2540dbb which describes the same issue and points out that if you run those Java microservices on a cloud infrastructure, you have to pay even more money just because of the memory footprint.
Out of curiosity I tested it myself and came to the same result: in terms of memory consumption, OpenJ9 offers a real improvement as a JVM alternative.
I picked three hello-world examples built from Maven archetypes for Dropwizard, Helidon and Spring Boot.
Then I compared memory consumption using application metrics and docker stats, which gave a rough idea of the differences. Here is the result of the Dropwizard app:
Whether your goal is running your complete software stack on a developer laptop or saving money running your services in a public cloud, OpenJ9 offers a way to reduce the memory footprint of your Java application by around 50%.
As mentioned in the blog post above, there are also a few downsides, but if you test everything and the requirements (in terms of computational performance) are met, you should give it a try.
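For most containerized services, trying OpenJ9 boils down to swapping the base image. Here is a minimal sketch assuming a fat jar at target/app.jar; the image name/tag and jar path are assumptions, so pick whatever OpenJ9 build fits your registry:

```dockerfile
# Base image with OpenJ9 instead of HotSpot -- often the only change needed.
# Image name and tag are placeholders; use the OpenJ9 build available to you.
FROM adoptopenjdk/openjdk11-openj9:jre

COPY target/app.jar /opt/app.jar

# -Xshareclasses and -Xtune:virtualized are OpenJ9-specific options that
# further reduce footprint and startup time in container environments.
ENTRYPOINT ["java", "-Xshareclasses", "-Xtune:virtualized", "-jar", "/opt/app.jar"]
```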
The metrics can easily be reported to a Graphite database and visualized, for example with Grafana.
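Wiring a Dropwizard MetricRegistry to Graphite might look like the following sketch, using the metrics-graphite module; host, port, prefix and the class name are placeholders of mine:

```java
import java.net.InetSocketAddress;
import java.util.concurrent.TimeUnit;

import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.graphite.Graphite;
import com.codahale.metrics.graphite.GraphiteReporter;

public class GraphiteSetup {

    public static GraphiteReporter start(MetricRegistry registry) {
        // Host, port and prefix are placeholders -- adjust to your environment.
        Graphite graphite = new Graphite(new InetSocketAddress("graphite.example.com", 2003));
        GraphiteReporter reporter = GraphiteReporter.forRegistry(registry)
                .prefixedWith("my-service")
                .convertRatesTo(TimeUnit.SECONDS)
                .convertDurationsTo(TimeUnit.MILLISECONDS)
                .build(graphite);
        reporter.start(1, TimeUnit.MINUTES); // push a snapshot every minute
        return reporter;
    }
}
```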
WYIIWYG – What you instrument is what you get!
On the other hand, a microservice often contains client libraries to access other services via HTTP. Feign is a client library that provides a wrapper and simplifies the API for communicating with the target services.
In contrast to the inbound metrics from the example above, it is also desirable to monitor the outbound metrics of each of the targeted operations.
Looking at the third-party libraries listed at http://metrics.dropwizard.io/3.2.3/manual/third-party.html, there are already options for retrieving metrics on the HTTP level. So if you are using okhttp as your HTTP client implementation you can use https://github.com/raskasa/metrics-okhttp and you will receive information about request durations and connection pools. The same holds for the Apache HttpClient instrumentation.
okhttp example
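A minimal sketch following the metrics-okhttp README; the factory class name is my own, and the wrapped client registers its metrics in the given registry:

```java
import com.codahale.metrics.MetricRegistry;
import com.raskasa.metrics.okhttp.InstrumentedOkHttpClients;
import okhttp3.OkHttpClient;

public class InstrumentedClientFactory {

    public static OkHttpClient create(MetricRegistry registry) {
        // Wraps a default OkHttpClient; request durations, connection-pool
        // and cache statistics are registered in the supplied registry.
        return InstrumentedOkHttpClients.create(registry);
    }
}
```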
As you can see, the provided metrics only give information on the HTTP level and do not really distinguish between different service endpoints. The only differentiation is available in the HttpClient metrics, which are broken down by host and HTTP method.
Closing the gap
What was missing, in my eyes, was a way to instrument metrics on the level of the interface that is passed to the Feign builder. In my example below I am calling the GitHub API on two different resource endpoints, contributors and repositorySearch. With instrumentation on the HTTP level, one is not able to see and monitor these individually.
Therefore I created a library which makes it possible to instrument metrics on the method or interface level using annotations, just as you do in Jersey resource classes.
Using this instrumentation you can retrieve metrics based on the interface and the methods the client is calling. For example, when you start reporting via JMX, you can see the metrics in jconsole.
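Turning on JMX reporting is a one-liner with Dropwizard Metrics; this sketch uses the Metrics 3.x package (in Metrics 4 the class moved to the metrics-jmx module):

```java
import com.codahale.metrics.JmxReporter;
import com.codahale.metrics.MetricRegistry;

public class JmxReporting {

    public static void main(String[] args) {
        MetricRegistry registry = new MetricRegistry();
        // Exposes every metric in the registry as a JMX MBean,
        // which jconsole can then browse under the "metrics" domain.
        JmxReporter reporter = JmxReporter.forRegistry(registry).build();
        reporter.start();
    }
}
```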
Usage of the library
To instrument the Feign interfaces you basically have to do three things:
1. add the Maven dependency to the pom.xml of your project
2. add FeignOutboundMetricsDecorator as invocationHandlerFactory in Feign.builder
3. add the metric annotations @Timed, @Metered and @ExceptionMetered to the interface you are using with Feign
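Putting these steps together might look like the following sketch. The GitHub interface is a cut-down version of the example described above, and the decorator's constructor arguments are an assumption on my part, so check the library's README for the exact signature:

```java
import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.annotation.ExceptionMetered;
import com.codahale.metrics.annotation.Timed;
import feign.Feign;
import feign.Param;
import feign.RequestLine;
// FeignOutboundMetricsDecorator is imported from the metrics-feign library

public class MetricsFeignExample {

    // Annotated like a Jersey resource class: each method gets its own timer.
    interface GitHub {
        @Timed
        @ExceptionMetered
        @RequestLine("GET /repos/{owner}/{repo}/contributors")
        String contributors(@Param("owner") String owner, @Param("repo") String repo);
    }

    public static void main(String[] args) {
        MetricRegistry registry = new MetricRegistry();
        GitHub github = Feign.builder()
                // constructor arguments are an assumption -- see the library's docs
                .invocationHandlerFactory(new FeignOutboundMetricsDecorator(registry))
                .target(GitHub.class, "https://api.github.com");
        github.contributors("OpenFeign", "feign");
    }
}
```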
I used to use Dropwizard's built-in metrics annotations with Graphite, but now I wanted to integrate them into my Java EE project, exposing the Prometheus metrics format. The main difference between Graphite and Prometheus is the push versus pull model: instead of pushing the metric data to a sink, the metrics are exposed via an HTTP servlet and the Prometheus server scrapes them from there.
There are two registry classes which we have to bring together. One is com.codahale.metrics.MetricRegistry, which holds all codahale metrics, and the other is io.prometheus.client.CollectorRegistry, which holds all metrics published to Prometheus. In our case, we want to receive all metrics from our JAX-RS resource classes annotated with com.codahale.metrics.annotation.Timed or com.codahale.metrics.annotation.ExceptionMetered.
The Prometheus library ships some default JVM metrics (DefaultExports), but I could not use them because they rely on some sun.* JDK classes which do not exist in OpenJDK, for example. Still, it is good to add JVM metrics in addition to the ones coming from our annotated JAX-RS resource classes. So this is the class:
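A sketch of such a class, assuming CDI 2.0 and the simpleclient_dropwizard bridge; the class name and the choice of JVM metric sets are my own:

```java
import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.context.Initialized;
import javax.enterprise.event.Observes;

import com.codahale.metrics.MetricRegistry;
import com.codahale.metrics.jvm.GarbageCollectorMetricSet;
import com.codahale.metrics.jvm.MemoryUsageGaugeSet;
import io.prometheus.client.CollectorRegistry;
import io.prometheus.client.dropwizard.DropwizardExports;

@ApplicationScoped
public class PrometheusBridge {

    private final MetricRegistry metrics = new MetricRegistry();

    public MetricRegistry getMetrics() {
        return metrics;
    }

    // Runs once when the CDI container initializes the application scope.
    public void init(@Observes @Initialized(ApplicationScoped.class) Object event) {
        // JVM metric sets from metrics-jvm, replacing Prometheus' DefaultExports
        metrics.register("jvm.memory", new MemoryUsageGaugeSet());
        metrics.register("jvm.gc", new GarbageCollectorMetricSet());

        // Bridge: publish all codahale metrics through the Prometheus registry
        CollectorRegistry.defaultRegistry.register(new DropwizardExports(metrics));
    }
}
```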
As you can see, with the help of CDI we are able to wire everything together during the startup phase of the application inside the Java EE container.
Then we add the servlet to the web.xml like this and we are done:
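A sketch of the web.xml entry, assuming the simpleclient_servlet module is on the classpath and /metrics is the desired scrape path:

```xml
<servlet>
    <servlet-name>prometheus</servlet-name>
    <servlet-class>io.prometheus.client.exporter.MetricsServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>prometheus</servlet-name>
    <url-pattern>/metrics</url-pattern>
</servlet-mapping>
```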