Wednesday, 18 December 2019

jruby-launcher upgrade

Logstash uses JRuby. While trying to read the code related to an issue I was having in Logstash, I installed JRuby and set up multiple Ruby versions using rbenv.

To check whether the right version of Ruby was being used, I ran ruby -v. I got the following error.

2019-12-17T19:44:52.167+05:30 [main] WARN FilenoUtil : Native subprocess control requires open access to
Pass '--add-opens java.base/' or '=org.jruby.core' to enable.
java.lang.IllegalCallerException: is not open to module org.jruby.dist
at java.base/java.lang.Module.addOpens(
at org.jruby.dist/com.headius.backport9.modules.impl.Module9.addOpens(
at org.jruby.dist/com.headius.backport9.modules.Modules.addOpens(
at org.jruby.dist/$ReflectiveAccess.(
at org.jruby.dist/
at org.jruby.dist/
at org.jruby.dist/
at org.jruby.dist/
at org.jruby.dist/org.jruby.RubyIO.(
at org.jruby.dist/org.jruby.RubyFile.(
at org.jruby.dist/org.jruby.parser.Parser.parse(
at org.jruby.dist/org.jruby.Ruby.parseFileAndGetAST(
at org.jruby.dist/org.jruby.Ruby.parseFileFromMainAndGetAST(
at org.jruby.dist/org.jruby.Ruby.parseFileFromMain(
at org.jruby.dist/org.jruby.Ruby.parseFromMain(
at org.jruby.dist/org.jruby.Ruby.runFromMain(
at org.jruby.dist/org.jruby.Main.doRunFromMain(
at org.jruby.dist/org.jruby.Main.internalRun(
at org.jruby.dist/
at org.jruby.dist/org.jruby.Main.main(
jruby (2.5.3) 2019-04-09 8a269e3 OpenJDK 64-Bit Server VM 11.0.5+10 on 11.0.5+10 +jit [darwin-x86_64]

It didn't stop the version from being printed, though.

A bit of googling pointed to a discussion in the JRuby GitHub repo, which in turn pointed to another discussion in the jruby-launcher repo. I was not aware of why jruby-launcher exists, but it is an interesting take on a difficult situation.

From the discussion, it seemed that the fix was available in version 1.1.10. So, I looked at my Gemfile.lock to check the version I was using, but no version was mentioned there. However, jruby-launcher was present in the list of installed gems. I pinned the fixed version in the Gemfile and the issue was fixed.
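For reference, pinning a gem to a minimum version in a Gemfile looks something like this (the platforms constraint is my own addition, on the assumption that jruby-launcher is only useful under JRuby):

```ruby
# Pin jruby-launcher to the release containing the fix
gem 'jruby-launcher', '>= 1.1.10', platforms: :jruby
```

After editing the Gemfile, bundle install updates Gemfile.lock so the version is recorded from then on.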

Wednesday, 28 August 2019

Setting up jEnv

With OpenJDK, there is a need to have multiple JDK versions installed, as different projects use different releases, and more and more projects support multiple Java versions as well. This problem was not as prevalent in the Java ecosystem until recently, but the Ruby ecosystem has had it for a long time, and there are multiple tools to manage Ruby versions. The approach of rbenv is probably the simplest. Along the same lines, we have jEnv for Java.

To set up jEnv, the steps are simple:

  1. Install jEnv (brew install jenv)
  2. Set up the shell profile (~/.bash_profile):

     export PATH="$HOME/.jenv/bin:$PATH"
     eval "$(jenv init -)"

  3. Install Java
  4. Set local or shell version of Java (jenv local openjdk64-
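Concretely, the steps above look something like this on macOS. The JDK path and version name below are assumptions for illustration; jenv versions shows the exact names registered on your machine:

```shell
brew install jenv

# In ~/.bash_profile
export PATH="$HOME/.jenv/bin:$PATH"
eval "$(jenv init -)"

# Register an installed JDK with jEnv (the path varies by install method)
jenv add /Library/Java/JavaVirtualMachines/adoptopenjdk-11.jdk/Contents/Home

# List registered versions, then pin one for the current directory
jenv versions
jenv local openjdk64-11.0.2
```

jenv local writes a .java-version file in the current directory, while jenv shell only affects the current shell session.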

Thursday, 22 August 2019

Jacoco instrumentation error

In a Java project build, I was getting an error complaining that Jacoco had failed to instrument a particular class. The complete stack trace showed that an array index had gone out of bounds.

Caused by: java.lang.ArrayIndexOutOfBoundsException: 6
at org.jacoco.core.internal.BytecodeVersion.get(
at org.jacoco.core.instr.Instrumenter.instrument(
at org.jacoco.core.instr.Instrumenter.instrument(
... 25 more
Fortunately, a little bit of searching showed that it is related to the Java version. My initial version of Java was ''. The fix was backported to ''. I also had '' installed. So, I switched versions and tried a clean build. It worked fine.
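With a version manager like jEnv, the switch plus clean build is a couple of commands. The version name and the build command below are assumptions; use whatever JDK name jenv versions reports and whatever build tool the project uses:

```shell
jenv local openjdk64-11.0.2   # hypothetical version name; see jenv versions
mvn clean verify              # clean build so Jacoco re-instruments all classes
```

A clean build matters here because stale class files instrumented by the old setup can keep triggering the same error.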

Tuesday, 16 April 2019

Learning Puppet: Idempotence

I am learning Puppet, so I wanted to try out manifests on a local VM. I created a Vagrant-based Arch Linux VM with the following Vagrantfile.

Vagrant.configure("2") do |config|
  config.vm.box = "archlinux/archlinux"
  config.vm.hostname = ""
  config.vm.synced_folder "puppet/", "/home/vagrant/puppet"
end

When I tried to bring the VM up, I got the following error.

Vagrant was unable to mount VirtualBox shared folders. This is usually
because the filesystem "vboxsf" is not available. This filesystem is
made available via the VirtualBox Guest Additions and kernel module.
Please verify that these guest additions are properly installed in the
guest. This is not a bug in Vagrant and is usually caused by a faulty
Vagrant box. For context, the command attempted was:

mount -t vboxsf -o uid=1000,gid=1000 vagrant /vagrant

The error output from the command was:

: Invalid argument

It did not stop the VM from coming up, so I was able to test. I created the following manifest to start with, and it worked fine.
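For the record, the usual fix for this vboxsf mount error (which I did not need, since the VM still booted) is the vagrant-vbguest plugin, which installs guest additions matching the host's VirtualBox version. This is a suggestion based on the error text, not something I verified here:

```shell
vagrant plugin install vagrant-vbguest
vagrant reload
```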

file { '/tmp/motd':
  content => 'hello puppet',
}

I moved on to actually installing a package.

class logstash {
  package { 'logstash':
    name   => "logstash",
    alias  => "logstash",
    ensure => "7.0",
  }
}

I tried to apply it using the following:

puppet apply logstash.pp -v

The package was not installed. I soon figured out it was because the class was defined but never declared, so it was not being applied. I modified the manifest to the following:

package { 'logstash':
  name   => "logstash",
  alias  => "logstash",
  ensure => "7.0",
}

Now Puppet tried to install the package but it failed with the following error.

Error: Parameter ensure failed on Package[logstash]: Provider pacman must have features 'versionable' to set 'ensure' to '7.0' (file: /home/vagrant/logstash.pp, line: 2)

Arch Linux uses pacman as its package manager, and Puppet was trying to install a specific version of the package through it. However, pacman does not offer multiple versions of the same package, so Puppet can't ensure a specific version. I changed the manifest to the following:

package { 'logstash':
  name   => "logstash",
  alias  => "logstash",
  ensure => "installed",
}

Now, Puppet attempted to install the package again and hit the following error.

Error: Execution of '/usr/bin/pacman --noconfirm --needed --noprogressbar -Sy logstash' returned 1: error: you cannot perform this operation unless you are root.
Error: /Stage[main]/Main/Package[logstash]/ensure: change from 'absent' to 'present' failed: Execution of '/usr/bin/pacman --noconfirm --needed --noprogressbar -Sy logstash' returned 1: error: you cannot perform this operation unless you are root.

So, I ran it with sudo. The process was taking time, so after waiting a while I interrupted it and looked at the logs. The logs seemed fine, so I started again, and this time I got the following error.

Error: Execution of '/usr/bin/pacman --noconfirm --needed --noprogressbar -Sy logstash' returned 1: :: Synchronizing package databases...
error: failed to update core (unable to lock database)
error: failed to update extra (unable to lock database)
error: failed to update community (unable to lock database)
error: failed to synchronize all databases
Error: /Stage[main]/Main/Package[logstash]/ensure: change from 'absent' to 'present' failed: Execution of '/usr/bin/pacman --noconfirm --needed --noprogressbar -Sy logstash' returned 1: :: Synchronizing package databases...
error: failed to update core (unable to lock database)
error: failed to update extra (unable to lock database)
error: failed to update community (unable to lock database)
error: failed to synchronize all databases

This is specific to pacman. Because I had interrupted the execution, pacman had not cleaned up properly and the lock file was left behind. I manually removed /var/lib/pacman/db.lck and applied the manifest again. It completed successfully and Logstash was installed on the VM. So the script works, but it might need manual intervention. In other words, the manifest needs to be idempotent so that manual intervention is minimal.

To achieve that, I modified the manifest as follows.

package { 'logstash':
  name   => "logstash",
  alias  => "logstash",
  ensure => "installed",
}

file { '/var/lib/pacman/db.lck':
  path   => "/var/lib/pacman/db.lck",
  name   => "/var/lib/pacman/db.lck",
  ensure => "absent",
}

To test that the manifest is idempotent, I removed logstash and applied the manifest. Midway, I interrupted the execution and then re-applied the manifest. The re-application succeeded. The logs show that the lock file created by the previous, interrupted run was deleted when the manifest was applied again.

Info: Applying configuration version '1555432879'
Info: Computing checksum on file /var/lib/pacman/db.lck
Info: /Stage[main]/Main/File[/var/lib/pacman/db.lck]: Filebucketed /var/lib/pacman/db.lck to puppet with sum d41d8cd98f00b204e9800998ecf8427e
Notice: /Stage[main]/Main/File[/var/lib/pacman/db.lck]/ensure: removed
Notice: Applied catalog in 0.25 seconds
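As a sketch, the test procedure was roughly this (the removal command is pacman-specific):

```shell
sudo pacman -R logstash            # start from a clean slate
sudo puppet apply logstash.pp -v   # interrupt this midway with Ctrl-C
sudo puppet apply logstash.pp -v   # re-apply; the stale db.lck is cleaned up
```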

Key take-aways:
  1. Puppet is a framework that depends on providers to install packages, and its capabilities are only as good as those providers.
  2. Without idempotent behaviour, Puppet manifests will not achieve much automation.

Monday, 21 January 2019

Finding where a package is installed

Recently, I faced an issue on a CentOS box where I had to find the location of a package. My application was running on the box and I wanted to modify its config. I could not find the config in /etc, so I figured it must be in the folder where the application was installed.

The application was installed as a daemon, but it seems the PATH had not been updated. I searched the usual locations for binaries and packages but could not find it. The only option left was to find out where the package had been installed.

The following command lists all the files of the application package.

rpm -ql <my application>

From the list, I could see where my config file was placed during install.
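A couple of related rpm queries are handy in the same situation (the package and file names below are hypothetical):

```shell
# List every file installed by a package
rpm -ql mypackage

# List only the files the package marks as configuration
rpm -qc mypackage

# The reverse lookup: find which package owns a given file
rpm -qf /etc/mypackage/mypackage.conf
```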

Friday, 18 January 2019

Unexpected exit of Elasticsearch container

I was trying the Elasticsearch container image from Docker Hub. I was disappointed by the complete lack of an error message in the following scenario.

I had my setup defined in a compose file. When I tried to start the container, I got only the following log line.

OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.

As it is only a warning, I expected the container to start, but it had not. Digging a bit more, I found that the exit code was 137, which means the container was killed because it needed more memory than the Docker daemon was configured to allow. The fix is quite easy of course, but I think it could have been communicated better.
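Exit code 137 follows the shell convention of 128 plus the signal number: the process was killed with SIGKILL (9), which is what the kernel OOM killer sends. This is easy to reproduce outside Docker:

```shell
# A process killed by SIGKILL exits with status 128 + 9 = 137
sh -c 'kill -9 $$'
echo $?   # prints 137
```

The fix in my case was to give the Docker daemon more memory; capping the Elasticsearch heap via the ES_JAVA_OPTS environment variable in the compose file is another way to keep the JVM under the limit.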