Background
Recently, Red Hat shut down its OpenShift v2 platform. Now only OpenShift 3 is available, which is based on Kubernetes and Docker.
So why do I care?
I had some JEE projects running on OpenShift 2, and now I was forced to migrate them to the new infrastructure. Basically, my applications consist of a Tomcat or WildFly server running next to a MySQL database. This setup was pretty easy, and I could run it at no cost as long as I could tolerate that the cartridges would be shut down if they idled for longer than 24 hours (meaning no HTTP request reached the server during that time). But ok…
So now I had to migrate my projects following the migration guide provided by Red Hat. But in parallel I was interested in whether I had other options, or whether any other PaaS providers offer a free tier for small personal projects. And yes, there are plenty of them, but as I found, they all have their limitations.
AWS is free for 1 year, then you pay. Google Cloud… I can't remember what was holding me back… Oracle Cloud gives you a certain amount of credit, but the evaluation phase is 3 months max. Microsoft… really?
Going with Heroku, but…
Then I found Heroku offering a “free dyno”, which also idles after 30 minutes of inactivity, but I wanted to give it a try. Later I found that if you want to use a database with Heroku, the limits on the “free” database are something like 5 MB or 100 rows, so even though I only have a few play-around datasets, that was too small.
Then I had the idea of connecting two PaaS providers: one giving me the application tier for free, the other giving me the database for free.
I ended up looking at how to connect to a database running on OpenShift from outside of the Docker environment. What I found is the same way an admin would connect to it: via a tunnel and/or port forwarding.
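To illustrate the idea, this is roughly how an admin would open such a tunnel by hand with the oc client (endpoint, pod name and credentials are just placeholders):
$ oc login https://api.starter-ca-central-1.openshift.com --token=<your-token>
$ oc port-forward mysql-1-weuoi 3306:3306
# in a second shell, while the tunnel is open:
$ mysql -h 127.0.0.1 -P 3306 -u <your-db-user> -p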
Oh my god, the latency between application and database will be huge!
Yes, that is possible, but since I am not proposing this setup for an enterprise application running in production, I am fine with it.
So here is my solution for connecting from an application running in a Heroku dyno to a MySQL database running on OpenShift.
Heroku provides a mechanism which allows you to pack anything you need to run your application into your dyno. If you need Java, the buildpack “heroku/java” is for you. If you need Node, there is a buildpack for Node, and so on. The nice thing about buildpacks is that you can also create your own, using the buildpack API.
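For example, attaching the official Java buildpack to an app is a single CLI call:
$ heroku buildpacks:add heroku/java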
A buildpack can contain a shell script (profile.d) which is executed during startup of the container. That is the perfect way to create a tunnel and provide access to my remote database. You can find the buildpack at https://github.com/mwiede/heroku-buildpack-oc
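In case you want to build something similar yourself: in its simplest form, a buildpack is just a bin/compile script that copies files into the app directory, and every script placed in .profile.d/ is sourced when the dyno boots. Here is a minimal sketch of that idea (a simplified, assumed layout, not the verbatim content of my buildpack; a real one also has to install the oc client binary):
#!/usr/bin/env bash
# bin/compile <build-dir> <cache-dir> <env-dir>
BUILD_DIR=$1
mkdir -p "$BUILD_DIR/.profile.d"
# Heroku sources everything in .profile.d/ at dyno startup
cp "$(dirname "$0")/../oc.sh" "$BUILD_DIR/.profile.d/oc.sh"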
So here is how you can create a Heroku application having access to a remote database (a complete example command sequence follows the list):
- install the Heroku CLI
- create an app
- add my buildpack
heroku buildpacks:add https://github.com/mwiede/heroku-buildpack-oc
- configure environment variables
$ heroku config:set OC_LOGIN_ENDPOINT=https://api.starter-ca-central-1.openshift.com
$ heroku config:set OC_LOGIN_TOKEN=askdjalskdj
$ heroku config:set OC_POD_NAME=mysql-1-weuoi
$ heroku config:set OC_LOCAL_PORT=3306
$ heroku config:set OC_REMOTE_PORT=3306
- deploy the app
- check the logs to see whether the connection works properly
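Putting the steps together, the whole flow looks roughly like this (the app name is just an example; token and pod name are the placeholders from above):
$ heroku create my-migrated-app
$ heroku buildpacks:add heroku/java
$ heroku buildpacks:add https://github.com/mwiede/heroku-buildpack-oc
$ heroku config:set OC_LOGIN_ENDPOINT=https://api.starter-ca-central-1.openshift.com OC_LOGIN_TOKEN=askdjalskdj OC_POD_NAME=mysql-1-weuoi OC_LOCAL_PORT=3306 OC_REMOTE_PORT=3306
$ git push heroku master
$ heroku logs --tail
The logs should tell you whether the tunnel came up before your application tried to connect to the database.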
Advanced usage
The profile.d script contains a loop, so whenever the tunnel connection shuts down, it tries to open it again.
From the perspective of OpenShift, the database runs in a so-called pod, and unfortunately its name can change.
I tried to make this as robust as possible, so OC_POD_NAME only needs to contain a prefix of the pod name; for instance, “mysql” is enough to detect the right one.
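For illustration, the core of such a script could look roughly like this (a simplified sketch, not the verbatim script from the repository):
# .profile.d/oc.sh (simplified sketch)
oc login "$OC_LOGIN_ENDPOINT" --token="$OC_LOGIN_TOKEN"
(
  while true; do
    # resolve the current pod name from the configured prefix
    POD=$(oc get pods | grep "^$OC_POD_NAME" | grep Running | awk '{print $1}' | head -n 1)
    if [ -n "$POD" ]; then
      # blocks while the tunnel is alive and returns when it drops
      oc port-forward "$POD" "$OC_LOCAL_PORT:$OC_REMOTE_PORT"
    fi
    sleep 5  # short pause before trying again
  done
) &
The loop runs in the background so the dyno can continue booting the actual application, which then simply connects to 127.0.0.1 on OC_LOCAL_PORT as if the database were local.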