Managing system-scoped JARs with Maven and the addjars plugin

When using third-party dependencies in a Maven project, you sometimes realize that a JAR is not available in any of the public Maven repositories (e.g. the Oracle JDBC driver, which is not available for copyright reasons)

When working on a team project where you are required to use a JAR of this type, you have mainly 3 options :

1. Using a Maven repository manager

There are a few Maven repository managers out there whose main purpose is to mirror distant Maven repositories such as Maven Central, storing the remote JARs locally and making them available inside the company

This is also where you can store your company's JARs and WARs, and any other JAR not available in the public repositories

I will not go into the details of the different repository managers, but there are mainly 3: Apache Archiva, Sonatype Nexus and JFrog Artifactory

I never used Archiva but I really like Nexus. There is a nice comparison matrix over here so you can make an educated choice

Anyhow, if you or your company have a repository manager, you can easily install the required JAR in your repository.
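
For example, once a repository manager is in place, the JAR can be deployed to it from the command line (a sketch; the URL and repositoryId below are placeholders for your own setup):

    mvn deploy:deploy-file -DgroupId=com.oracle \
                           -DartifactId=ojdbc6 \
                           -Dversion=11.2.0.3 \
                           -Dpackaging=jar \
                           -Dfile=/data/downloads/ojdbc6.jar \
                           -Durl=http://nexus.mycompany.com/repositories/thirdparty/ \
                           -DrepositoryId=mycompany-thirdparty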

Once the JAR is installed in the repository, it will be available to anyone that uses the repository, as long as their settings.xml or pom.xml files are properly configured.
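
For instance, a minimal pom.xml entry pointing to the internal repository could look like this (a sketch; the id and URL are hypothetical):

    <repositories>
        <repository>
            <id>mycompany-thirdparty</id>
            <url>http://nexus.mycompany.com/repositories/thirdparty/</url>
        </repository>
    </repositories>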

The main advantage of this approach is that once the JAR is downloaded and installed in the repository manager, it is available to all users and future projects

The main drawback is the fact that you need to install a repository manager

2. Installing the JAR in the Maven local repository

Installing a JAR in the user's local Maven repository is quite straightforward

a) Download the JAR to the local machine
b) Install the JAR in the local repository
    mvn install:install-file -Dfile=/data/downloads/ojdbc6.jar -DgroupId=com.oracle -DartifactId=ojdbc6 -Dversion=11.2.0.3 -Dpackaging=jar
c) Declare the JAR as a Maven dependency in your pom.xml

    <project>

        .....

        <dependencies>
            <dependency>
                <groupId>com.oracle</groupId>
                <artifactId>ojdbc6</artifactId>
                <version>11.2.0.3</version>
            </dependency>
        </dependencies>

    </project>

This is the quickest and simplest approach to implement

The major drawback is that every new developer will need to install the JAR in their local repository

3. Using the addjars-maven-plugin and system as dependency scope

Maven allows you to easily define dependencies with a system scope. But there is a problem with system-scoped JARs: when packaging the application (WAR, JAR, etc.), the system-scoped JARs will not be included in the packaged result. This is where the addjars plugin comes in handy: it allows you to include system-scoped JARs when packaging the application.
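
For reference, a system-scoped dependency declaration looks like this (a sketch; the systemPath assumes the JAR lives in an external/ folder at the project root, matching the layout shown below):

    <dependency>
        <groupId>com.oracle</groupId>
        <artifactId>ojdbc6</artifactId>
        <version>11.2.0.3</version>
        <scope>system</scope>
        <systemPath>${basedir}/external/ojdbc6.jar</systemPath>
    </dependency>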

Below is a quick example of a simple Java web app with a system-scoped JAR (ojdbc6.jar) :

my-app
|-- pom.xml
|-- external
|   `-- ojdbc6.jar
`-- src
    `-- main
        |-- java
        `-- webapp

As stated before, if I run mvn package on this project I will get a WAR file, but ojdbc6.jar will not be included under WEB-INF/lib

Thankfully the addjars plugin comes to the rescue here: it will include your third-party JARs in your project's classpath when packaging

You can see below a sample configuration for the plugin in the pom.xml file :


   
...
  <build>
    <plugins>
      <plugin>
        <groupId>com.googlecode.addjars-maven-plugin</groupId>
        <artifactId>addjars-maven-plugin</artifactId>
        <version>1.0.5</version>
        <executions>
          <execution>
            <goals>
              <goal>add-jars</goal>
            </goals>
            <configuration>
              <resources>
                <resource>
                  <directory>${basedir}/external/</directory>
                </resource>
              </resources>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
...

The main advantage is that once the pom.xml is configured with the dependency and the plugin, and the JAR is committed into source control, the build becomes transparent for the users

The main drawback comes from the fact that you will need to commit the JAR file to your source control repository and handle JAR versions manually

It's good to note that the addjars-maven-plugin requires Maven 3.0.3+

Reconfiguring a MongoDB replica set after a "loading local.system.replset config (LOADINGCONFIG)" error

A weird thing happened today: when I came to work, my MongoDB replica set was not working anymore after a system update

There was a mongod instance running, but when I checked the replica set status I got this error message :

> rs.status()

{
        "startupStatus" : 1,
        "ok" : 0,
        "errmsg" : "loading local.system.replset config (LOADINGCONFIG)"
}

Now, I have no idea what caused the problem, but I could see from the error message that it was related to the replica set configuration

The problem was actually pretty weird, since printing the replica set config showed an old configuration :

> rs.config()
{
        "_id" : "rps",
        "version" : 2,
        "members" : [
                {
                        "_id" : 0,
                        "host" : "192.168.13.200:27017"
                },
                {
                        "_id" : 1,
                        "host" : "192.168.13.201:27017"
                }
        ]
}

Since this is a development environment I didn't have a database backup, so I really needed the replica set to start running again

Thankfully, MongoDB (since version 2.0) offers a relatively easy way to reconfigure a replica set: the rs.reconfig() command

In my case, what I had to do was reconfigure my replica set with the only surviving member (itself), since the second one was under heavy maintenance.

This operation can be done in 6 steps :

1. Get the current replicaset configuration into a variable
> var cfg = rs.config()
2. Overwrite the members property with the remaining replica set nodes

> cfg.members = [{"_id" : 3, "host"  : "192.168.1.100"}]
[ { "_id" : 3, "host" : "192.168.1.100" } ]
3. Reconfigure the replica set with the new configuration
> rs.reconfig(cfg , {force : true})
{
        "msg" : "will try this config momentarily, try running rs.conf() again in a few seconds",
        "ok" : 1
}

4. Wait for the configuration to be applied

Give it a few seconds and run the conf command again to see if the new configuration has been properly applied

> rs.conf()
{
        "_id" : "rps",
        "version" : 77682,
        "members" : [
                {
                        "_id" : 3,
                        "host" : "192.168.1.100:27017"
                }
        ]
}
5. Restart your mongod instance

/etc/init.d/mongod restart
6. Check that everything is running properly

rps:PRIMARY> rs.status()
{
        "set" : "rps",
        "date" : ISODate("2013-05-23T07:54:47Z"),
        "myState" : 1,
        "members" : [
                {
                        "_id" : 3,
                        "name" : "192.168.1.100:27017",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 8,
                        "optime" : {
                                "t" : 1352365425,
                                "i" : 1
                        },
                        "optimeDate" : ISODate("2012-11-08T09:03:45Z"),
                        "self" : true
                }
        ],
        "ok" : 1
}

Sources :
Reconfigure a Replica Set with Unavailable Members

JSF 2 performance improvement on latest release (Mojarra)

According to this blog post, the latest version of Mojarra JSF (2.1.22) should improve performance dramatically when handling pages with a large quantity of components (1000+).

jsf-performance-mojarra-improves-dramatically

Still according to the blog's author, with the previous versions of Mojarra, rendering a page with 1000+ components would take up to 5 times longer compared to the other JSF implementation, Apache MyFaces.

This is great news and comes as a welcome improvement!

This improvement comes mainly from the resolution of this bug :

JAVASERVERFACES-2494

Custom database function using JPA, JPQL and EclipseLink

Sometimes it's useful to exploit some of the native functions provided by database vendors while using JPA and JPQL.

And although it's possible to execute native queries, you'll lose some of the JPA magic in the process.

Thankfully EclipseLink (and probably Hibernate) allows some degree of flexibility through special operators.

According to the official documentation, the FUNC operator allows a database function to be called from JPQL. It allows calling any database function not supported directly in JPQL, as well as user or library specific functions.


@Entity
public class Employee implements Serializable{
   
   @Id
   private Long id;   
   private String firstname;
   private String lastname;

  // Setters - Getters

}

Below is a quick example of how to use FUNC to call Oracle's TO_CHAR function and convert all the ids in the query to Strings.

Let's say that, for some obscure reason, we would like to get a list of all the employees' ids, not as a list of java.lang.Long but as a list of java.lang.String.

We could write the query like so :



List employeesId = 
     em.createQuery("SELECT FUNC('TO_CHAR', e.id) FROM Employee e ").getResultList();

The usage is pretty straightforward: the first argument is the name of the native function, followed by the function's arguments

More information about this can be found in EclipseLink's official documentation here

It's good to note that FUNC will be changed to FUNCTION in JPA 2.1
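
Under JPA 2.1 the same query would then become (a sketch based on the renamed operator):

    List employeesId = 
         em.createQuery("SELECT FUNCTION('TO_CHAR', e.id) FROM Employee e").getResultList();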

JPA EclipseLink optimizing query performance on a large resultset

Recently I had to look for ways of optimizing a very slow database request on a very large ResultSet (more than 5000 results with several @OneToMany associations).

While reading EclipseLink's documentation I stumbled upon an interesting piece of information regarding the handling of large result sets: EclipseLink recommends using streamed cursors.

According to the documentation: "Cursored streams provide the ability to read back a query result set from the database in manageable subsets, and to scroll through the result set stream."

The initial query would load practically the entire database into memory in one request (all the associations are marked with the QueryHints.LEFT_FETCH hint in order to eagerly fetch them upon the initial request).

I will not go into details regarding the process, since it's not the goal of this post, but basically the program would query an Oracle database, transform every JPA entity and store it as a somewhat different object in a MongoDB database

The process was really long (more than 1 hour between recovering all the objects from the Oracle database and processing them into the MongoDB database)

The DAO class

public CursoredStream getMyLargeResultset(){

   Query programsQuery = em.createNamedQuery("MyNamedQuery");
   // tell EclipseLink to return a CursoredStream ("eclipselink.cursor" hint)
   programsQuery.setHint("eclipselink.cursor", true);
   return (CursoredStream) programsQuery.getSingleResult();
}

The client


public void synchronizeDatabases(){

    final int PAGE_SIZE = 50;

    // get the cursor
    CursoredStream cursor = myDao.getMyLargeResultset();

    // iterate through the cursor
    while (!cursor.atEnd()) {

        // get the next batch of objects and
        // cast it to the target entity
        List<MyEntity> myEntities = (List<MyEntity>) (List<?>) cursor.next(PAGE_SIZE);
        processEntities(myEntities);
    }

    cursor.close();
}

By using this technique I was able to reduce the processing time of this operation by a factor of 10!

PS: I realize that the double cast on the List is not very pretty, and I could have used a different approach with the next() method without arguments, but the existing processEntities() method accepted a List as a parameter and I wasn't allowed to modify it
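
For the record, the element-by-element alternative would have looked something like this (a sketch; processEntity() is a hypothetical method that handles a single entity):

    while (!cursor.atEnd()) {
        // next() without arguments returns a single result
        MyEntity entity = (MyEntity) cursor.next();
        processEntity(entity);
    }
    cursor.close();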

Documentation sources


MongoDB $where clause to query array length

There is no direct way in MongoDB to return all the documents where a sub-collection has at least X entries. Consider the following documents :

  {
    "name" : "Terry Brooks",
    "books" : [ 
              "The Sword of Shannara", 
               "The Elfstones of Shannara",
               "The Wishsong of Shannara"
              ]
  },
 {
    "name" : "Oscar Wilde",
    "books" : [ 
              "The Picture of Dorian Gray"
             
              ]
  }

Let's say that I want all the authors that have written more than 1 book. There is no direct way in MongoDB to do this; it needs to be done either with map-reduce or perhaps with the new aggregation framework (see the sketch at the end of this post), but you cannot combine the $gt and $size operators like so :
    db.AUTHORS.find({"books" : {$size : {$gt : 1}}});
This doesn't work: you won't get any error message, just an empty result. MongoDB allows JavaScript evaluation through the $where operator; although it's significantly slower than the native operators, it's very flexible and a quick way of executing a query without resorting to map-reduce or other means :
    db.AUTHORS.find({$where : "this.books.length > 1"});
But when this query was executed, the following error kept coming up :
{
    "error": {
        "$err": "erroroninvocationof$wherefunction: JSError: TypeError: this.bookshasnopropertiesnofile_a: 0",
        "code": 10071
    }
}

The error is not very helpful (at least to me), and as it turns out the origin of the problem was that not all Author documents in my database had the "books" array. So, in order to execute a length query on the "books" array, it's necessary to ensure that the array field (books) exists :
    db.AUTHORS.find({"books" : {$exists: true}, $where : "this.books.length > 0"});

EclipseLink (JPA) Spring - No persistence exception translators exception

I recently stumbled upon a Spring exception while working with Spring Data JPA and attempting to deploy a WAR file on Glassfish. I have a JTA datasource defined in Glassfish, which is used by Spring (through JNDI) to instantiate my entity manager. The JPA configuration was pretty straightforward, but every time I tried to deploy the app on Glassfish I ran into this error :
org.springframework.beans.factory.BeanCreationException: 
Error creating bean with name 
'org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor#0': 
Initialization of bean failed; 
nested exception is java.lang.IllegalStateException: 
No persistence exception translators found in bean factory.
Cannot perform exception translation.
Apparently, when using Hibernate there is an easy fix for this: declaring a HibernateExceptionTranslator bean in the Spring beans config file.
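
Something along these lines (a sketch; with Hibernate 4 the class lives in the org.springframework.orm.hibernate4 package instead):

    <bean class="org.springframework.orm.hibernate3.HibernateExceptionTranslator"/>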

When using EclipseLink, though, this doesn't work... So I looked for implementations of the Spring interface :

org.springframework.dao.support.PersistenceExceptionTranslator 

and found out that there is no EclipseLinkExceptionTranslator (as there is for Hibernate), but there is an EclipseLinkJpaDialect :
   org.springframework.orm.jpa.vendor.EclipseLinkJpaDialect

which implements the interface. So I declared it as a bean in the Spring configuration file and was able to deploy the app on Glassfish.
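
The bean declaration itself is a one-liner (a sketch; the bean id is arbitrary):

    <bean id="eclipseLinkJpaDialect" class="org.springframework.orm.jpa.vendor.EclipseLinkJpaDialect"/>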

OSX show used ports or listening applications with their PID

On OSX you can display the applications listening on a given port using lsof. The commands described below will show listening applications along with their PID.
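
For example (a sketch; 8080 is just a sample port):

    # show every application listening on a TCP port, along with its PID
    sudo lsof -iTCP -sTCP:LISTEN -n -P

    # show the application(s) using a specific port
    sudo lsof -i :8080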