Wednesday, December 19, 2012

Generating Barcodes in PDFs with Flying-Saucer

Flying-Saucer is a nice library to generate PDF documents from within Java applications. Just generate a bunch of XHTML, throw it into the renderer and let it produce the desired document utilizing iText.

When it comes to barcodes, however, Flying-Saucer cannot access the built-in barcode functionality of iText (at least I didn't find any documentation for it).

However, since Flying-Saucer is open source and well designed, a single subclass is enough to get the job done: Flying-Saucer relies on a factory named ReplacedElementFactory, which can replace elements with custom objects. The same mechanism is used to embed images, as the class ITextReplacedElementFactory shows. So we can simply create a subclass which replaces images like the following with an appropriate barcode:


<img src="0123456789" type="code128" style="height: 1cm" />

One simply needs to override the createReplacedElement method like this (the whole code can be found here: BarcodeReplacedElementFactory.java (GitHub)):

    @Override
    public ReplacedElement createReplacedElement(LayoutContext c,
                                                 BlockBox box,
                                                 UserAgentCallback uac,
                                                 int cssWidth,
                                                 int cssHeight) {
        Element e = box.getElement();
        if (e == null) {
            return null;
        }

        String nodeName = e.getNodeName();
        if (nodeName.equals("img")) {
            if ("code128".equals(e.getAttribute("type"))) {
                try {
                    Barcode128 code = new Barcode128();
                    code.setCode(e.getAttribute("src"));
                    FSImage fsImage = new ITextFSImage(
                            Image.getInstance(
                                    code.createAwtImage(Color.BLACK, Color.WHITE),
                                    Color.WHITE));
                    if (cssWidth != -1 || cssHeight != -1) {
                        fsImage.scale(cssWidth, cssHeight);
                    }
                    return new ITextImageElement(fsImage);
                } catch (Throwable e1) {
                    return null;
                }
            }
        }

        return super.createReplacedElement(c, box, uac, cssWidth, cssHeight);
    }


Granted, "type" is no valid XHTML-Element for <img /> but as you can see in the code above, you could easily replace it with data-type or any other attribute. Flying-Saucer doesn't seem to care about this anyway.

Note: The code above can only handle Code128 barcodes, but it can easily be extended to handle EAN and the like (iText supports a whole bunch of barcodes by default).
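
For example, an additional branch for EAN-13 could look roughly like the following sketch. It is untested, the attribute value "ean13" is just something I made up for this example, and BarcodeEAN / Barcode.EAN13 are the corresponding iText classes:

    if ("ean13".equals(e.getAttribute("type"))) {
        // "ean13" is just an example attribute value used for this sketch.
        try {
            BarcodeEAN code = new BarcodeEAN();
            // Switch iText's EAN barcode implementation to EAN-13.
            code.setCodeType(Barcode.EAN13);
            code.setCode(e.getAttribute("src"));
            FSImage fsImage = new ITextFSImage(
                    Image.getInstance(
                            code.createAwtImage(Color.BLACK, Color.WHITE),
                            Color.WHITE));
            if (cssWidth != -1 || cssHeight != -1) {
                fsImage.scale(cssWidth, cssHeight);
            }
            return new ITextImageElement(fsImage);
        } catch (Throwable t) {
            return null;
        }
    }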

In order to make our factory work, we need to pass it to the renderer - which is pretty darn easy:
      
        ITextRenderer renderer = new ITextRenderer();
        renderer.getSharedContext().setReplacedElementFactory(
                new BarcodeReplacedElementFactory(renderer.getOutputDevice()));
        renderer.setDocumentFromString(inputAsString);
        renderer.layout();
        renderer.createPDF(outputAsStream);

Thursday, February 9, 2012

Keyboard Navigation between Input Fields using jQuery

If you have a form with several input fields, you can easily add keyboard-based navigation using jQuery. This means that if the user is in the first input field and presses the down key, the focus jumps to the next field, and so on.

To achieve this, we define two helper functions which find the element after or before a given element within a selection:

jQuery.fn.elementAfter = function(other) {
    for (var i = 0; i < this.length - 1; i++) {
        if (this[i] == other) {
            return jQuery(this[i + 1]);
        }
    }
    // Nothing found (or "other" is the last element): return an
    // empty jQuery object so that calling .focus() is a no-op.
    return jQuery();
};

jQuery.fn.elementBefore = function(other) {
    for (var i = 1; i < this.length; i++) {
        if (this[i] == other) {
            return jQuery(this[i - 1]);
        }
    }
    return jQuery();
};


To use this with input fields, we can add the following in the jQuery(document).ready function:

$('input').bind('keyup', function(evt) {
  if (evt.keyCode == 40) {
      // down key
      $('input').elementAfter(this).focus();
  } else if (evt.keyCode == 38) {
      // up key
      $('input').elementBefore(this).focus();
  }
});

There you go: once the focus is in any input field, you can navigate to the previous or next one using the up and down keys.

Monday, February 6, 2012

QSyntaxHighlighter - Colorized Braces

As I've written in one of my previous posts, I like the idea of coloring nested braces differently to improve overall readability.

Hacking this into QSyntaxHighlighter is, like almost everything in Qt, quite easy. I created a struct which stores the number of braces left open at the end of each block (paragraph):

struct BlockData : public QTextBlockUserData
{
    int numBraces;
};
Within the highlightBlock method, one can easily access this information, process all braces of the current paragraph, and save the number of open braces:
void Highlighter::highlightBlock(const QString &text)
{
    BlockData* data = dynamic_cast<BlockData*>(
                currentBlock().previous().userData());
    int numberOfOpenBraces = 0;
    if (data != NULL) {
        numberOfOpenBraces = data->numBraces;
    }

    // ... tokenize the block, update numberOfOpenBraces for every brace found ...

    BlockData* newData = dynamic_cast<BlockData*>(currentBlockUserData());
    if (newData == NULL) {
        newData = new BlockData();
    }
    newData->numBraces = numberOfOpenBraces;
    setCurrentBlockUserData(newData);
}




This is an example expression without colored braces:

This is the same example with colored braces (yes, I might want to change the colors, I know...):


The complete source code can be found here: Highlighter.cpp

Thursday, February 2, 2012

Better Syntax Coloring for IDEs

Modern IDEs like Eclipse already provide excellent syntax highlighting. Still I think there is room for improvement.

One of my ideas would be to color brackets differently. Instead of drawing them all in the same color, rotating through a color wheel would make it much easier to see where each bracket is closed.

So this example code:

Would look like this:



Of course it's a detail, but a visual cue like this helps when you try to extract the inner expression in FTFilter.notAnalyzed: just start your selection at the green bracket, extend it to the closing green bracket and you're done.

I already downloaded the JDT sources of Eclipse, but since syntax coloring there has to be fast and precise, the code is quite complex and I don't have the time to dig into it. Still, I believe a JDT developer could implement this feature very quickly - if you happen to know one, you might want to forward the URL ;-)

If you think this is a good idea, vote me up on DZone, retweet this or +1 it on Google... If there is enough positive feedback I'll go and file a bug for Eclipse - maybe it'll make it into one of the next versions...

Update 1: I received a lot of positive feedback, thanks everybody. Three other programs seem to already support this feature: Microsoft Excel, Pharo Smalltalk (I think only when the cursor is near a brace) and the Closure Plugin for Eclipse.

Additionally I hacked this functionality into a subclass of QSyntaxHighlighter for one of my Qt toy projects. As I've written here, this is quite simple.

Update 2: Just filed a Bug for Eclipse JDT-Text: https://bugs.eclipse.org/bugs/show_bug.cgi?id=370745

Monday, January 23, 2012

Modular Java Applications - A Microkernel Approach

Software Engineering is all about reuse. We programmers therefore love to split applications up into smaller components so that each of them can be reused or extended in an independent manner.

A keyword here is "loose coupling". Slightly simplified, this means each component should have as few dependencies on other components as possible. Most importantly, if I have a component B which relies on component A, I don't want A to know about B. Component A should just provide a clean interface which can be used and extended by B.

In Java there are many frameworks which provide exactly this functionality: JavaEE, Spring, OSGi. However, each of those frameworks comes with its own way of doing things and provides lots and lots of additional functionality - whether you want it or not!

Since we here at scireum love modularity (we build 4 products out of a set of about 10 independent modules) we built our own little framework. I factored out the most important parts and now have a single class with fewer than 250 lines of code and comments!

I call this a microkernel approach, since it nicely compares to the situation with operating systems: there are monolithic kernels like the one of Linux with about 11,430,712 lines of code, and there is the concept of a microkernel, like the one of Minix with about 6,000 lines of executable kernel code. There is still an ongoing discussion about which of the two approaches is better: a monolithic kernel is faster, while a microkernel has far less critical code (critical code meaning: a bug there will crash the complete system). If you haven't already, you should read more about microkernels on Wikipedia.

However one might feel about operating systems - when it comes to Java, I prefer fewer dependencies and, if possible, no black magic I don't understand, especially if this magic involves complex ClassLoader structures. Therefore, here comes Nucleus...

How does this work?
The framework (Nucleus) solves two problems of modular applications:
  • I want to provide a service to other components - but I only want to expose an interface, and they should be provided with my implementation at runtime without knowing (referencing) it.
  • I want to provide a service or callback for other components. I provide an interface, and I want to know all classes implementing it, so that I can invoke them.
Ok, we probably need examples for this. Say we want to implement a simple timer service. It provides an interface:

public interface EveryMinute {
    void runTimer() throws Exception;
}

All classes implementing this interface should be invoked every minute. Additionally, we provide some information - namely, when the timer was last executed.

public interface TimerInfo {
    String getLastOneMinuteExecution();
}

Ok, next we need a client for our services:


@Register(classes = EveryMinute.class)
public class ExampleNucleus implements EveryMinute {

    private static Part<TimerInfo> timerInfo = 
                                     Part.of(TimerInfo.class);

    public static void main(String[] args) throws Exception {
        Nucleus.init();

        while (true) {
            Thread.sleep(10000);
            System.out.println("Last invocation: "
                    + timerInfo.get().getLastOneMinuteExecution());
        }

    }

    @Override
    public void runTimer() throws Exception {
        System.out.println("The time is: "
                + DateFormat.getTimeInstance().format(new Date()));
    }
}

The static field "Part<TimerInfo> timerInfo" uses a simple helper class which fetches the registered instance from Nucleus on the first call and caches it in a private field. Accessing this part therefore has almost no overhead compared to a normal field access - yet we only reference an interface, not an implementation.
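
A rough sketch of such a helper could look like this. This is not the actual implementation from the repository, and the name of the lookup method on Nucleus (findPart) is an assumption made for this example:

    public class Part<P> {

        private final Class<P> clazz;
        private volatile P cached;

        private Part(Class<P> clazz) {
            this.clazz = clazz;
        }

        public static <P> Part<P> of(Class<P> clazz) {
            return new Part<P>(clazz);
        }

        public P get() {
            if (cached == null) {
                // Ask the Nucleus registry once (findPart is an assumed lookup
                // method) and keep the result, so that subsequent calls are
                // almost as cheap as a plain field access.
                cached = Nucleus.findPart(clazz);
            }
            return cached;
        }
    }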

The main method first initializes Nucleus (this performs the classpath scan etc.) and then simply goes into an infinite loop, printing the last execution of our timer every ten seconds.

Since our class carries a @Register annotation, it will be discovered by a special ClassLoadAction (not by Nucleus itself), instantiated and registered for the EveryMinute interface. Its runTimer method will then be invoked by our timer service every minute.

Ok, but what would our TimerService look like?

@Register(classes = { TimerInfo.class })
public class TimerService implements TimerInfo {

    @InjectList(EveryMinute.class)
    private List<EveryMinute> everyMinute;
    private long lastOneMinuteExecution = 0;

    private Timer timer;

    public TimerService() {
        start();
    }

    public void start() {
        timer = new Timer(true);
        // Schedule the task to wait 60 seconds and then run
        // every 60 seconds.
        timer.schedule(new InnerTimerTask(), 1000 * 60, 1000 * 60);
    }

    private class InnerTimerTask extends TimerTask {

        @Override
        public void run() {
            // Iterate over all instances registered for
            // EveryMinute and invoke their runTimer method.
            for (EveryMinute task : everyMinute) {
                try {
                    task.runTimer();
                } catch (Exception e) {
                    // runTimer declares "throws Exception" - a failing task
                    // must not break the timer or the remaining tasks.
                    e.printStackTrace();
                }
            }
            // Update lastOneMinuteExecution
            lastOneMinuteExecution = System.currentTimeMillis();
        }

    }

    @Override
    public String getLastOneMinuteExecution() {
        if (lastOneMinuteExecution == 0) {
            return "-";
        }
        return DateFormat.getDateTimeInstance().format(
                new Date(lastOneMinuteExecution));
    }
}




This class also carries a @Register annotation, so it will also be picked up by the ClassLoadAction mentioned above (the ServiceLoadAction, actually). As above, it will be instantiated and put into Nucleus (as the implementation of TimerInfo). Additionally, it carries an @InjectList annotation on the everyMinute field. This is processed by another class named Factory, which performs simple dependency injection. Since the constructor starts a Java Timer for the InnerTimerTask, from that point on all instances registered for EveryMinute will be invoked by this timer - as the name says - every minute.
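
To illustrate the idea, such a field injection could be implemented roughly like the sketch below. The real Factory in Nucleus handles more cases, and the lookup method Nucleus.findParts(...) is again an assumption made for this sketch:

    import java.lang.reflect.Field;
    import java.util.List;

    public class Factory {

        public static void inject(Object target) {
            // Look at every field of the target and fill those carrying
            // an @InjectList annotation with all registered instances.
            for (Field field : target.getClass().getDeclaredFields()) {
                InjectList annotation = field.getAnnotation(InjectList.class);
                if (annotation != null) {
                    try {
                        field.setAccessible(true);
                        // findParts(...) is an assumed name for the registry lookup.
                        List<?> parts = Nucleus.findParts(annotation.value());
                        field.set(target, parts);
                    } catch (IllegalAccessException e) {
                        throw new IllegalStateException(e);
                    }
                }
            }
        }
    }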

How is it implemented?
The good thing about Nucleus is that it is powerful on the one hand, but very simple and small on the other. As you can see, there is no inner circle of special or privileged services. Everything is built around the kernel - the class Nucleus. Here is what it does:

  • It scans the classpath and looks for files called "component.properties". Those need to be in the root folder of a JAR or in the /src folder of each Eclipse project respectively. 
  • For each identified JAR / project / classpath element, it then collects all contained class files and loads them using Class.forName.
  • For each class, it checks whether it implements ClassLoadAction; if so, it is put into a special list.
  • Each ClassLoadAction is instantiated and each previously seen class is passed to it via: void handle(Class<?> clazz)
  • Finally, each ClassLoadAction is notified that Nucleus is complete, so that final steps (like annotation based dependency injection) can be performed.
That's it. The only other thing Nucleus provides is a registry which can be used to register and retrieve objects for a class. (An in-depth description of the process above can be found here: http://andreas.haufler.info/2012/01/iterating-over-all-classes-with.html).
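
Based on that description, the ClassLoadAction contract presumably looks roughly like the following sketch. handle(...) is named in the list above; the name of the completion callback is my assumption:

    public interface ClassLoadAction {

        // Invoked once for every class found on the classpath.
        void handle(Class<?> clazz);

        // Invoked after all classes have been processed, so that final steps
        // like annotation based dependency injection can be performed.
        // (The method name is assumed here.)
        void complete();
    }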

Now, to make this framework usable as shown above, there is a set of classes around Nucleus. Most important is the class ServiceLoadAction, which instantiates each class carrying a @Register annotation, runs Factory.inject (our mini DI tool) on it, and throws it into Nucleus for the listed classes. What's important: ServiceLoadAction has no special rights or privileges, so you can easily write your own implementation which does smarter stuff.

Along with some annotations, there are three other handy classes when it comes to retrieving instances from Nucleus: Factory, Part and Parts. As noted above, Factory is a simple dependency injector. Currently only the ServiceLoadAction uses the Factory automatically, as all classes carrying the @Register annotation are scanned for required injections. You can, however, use this factory to run injections on your own classes, or write other ClassLoadActions which do the same as ServiceLoadAction. If you can't or don't want to rely on annotation-based dependency magic, you can use the two helper classes Part and Parts. These are used like normal fields (see ExampleNucleus.timerInfo above) and fetch the appropriate object or list of objects automatically. Since the result is cached, repeated invocations have almost no overhead compared to a normal field access.

Nucleus and the example shown above are open source (MIT license) and available here:
https://github.com/andyHa/scireumOpen/blob/master/src/examples/ExampleNucleus.java
https://github.com/andyHa/scireumOpen/tree/master/src/com/scireum/open/nucleus


If you're interested in using Nucleus, I could put the relevant sources into a separate repository and also provide a release JAR - just write a comment below and let me know.

Update: I moved Nucleus into a repository of its own: https://github.com/andyHa/nucleus - it even includes a distribution JAR.


This post is the fourth part of my series "Enterprisy Java", in which we share our hints and tricks for overcoming the obstacles of building several multi-tenant web applications out of a set of common modules.

Tuesday, January 10, 2012

Launching and Debugging Tomcat from Eclipse without complex plugins

Modern IDEs like Eclipse provide various plugins to ease web development. However, I believe that starting Tomcat as a "normal" Java application still provides the best debugging experience. This is because these tools usually launch Tomcat (or any other servlet container) as an external process and then attach a remote debugger to it. While you're still able to set breakpoints and inspect variables, other features like hot code replacement don't work that well.

Therefore I prefer to start my Tomcat just like any other Java application from within Eclipse. Here's how it works:

This article addresses experienced Eclipse users. You should already know how to create projects, change their build path and run classes. If you need any help, feel free to leave a comment or contact me.

We'll add Tomcat as an additional Eclipse project so that all paths remain platform independent. (I even keep this project in our SVN so that everybody works with the same setup.)

Step 1 - Create new Java project named "Tomcat7"




Step 2 - Remove the "src" source folder





Step 3 - Download Tomcat (Core version) and unzip it into our newly created project. This should now look something like this:




Step 4 - If you haven't already, create a new Test project which contains your sources (servlets, JSP pages, JSF pages...). Make sure you add the required libraries to the build path of the project.





Step 5.1 - Create a run configuration. Select our Test project as base and set org.apache.catalina.startup.Bootstrap as the main class.





Step 5.2 - Optionally specify larger heap settings as VM arguments. Important: select the "Tomcat7" project as the working directory (click on the "Workspace" button below the entry field).





Step 5.3 - Add bootstrap.jar and tomcat-juli.jar from the Tomcat7/bin directory as bootstrap classpath entries. Add everything in Tomcat7/lib as user entries. Make sure the Test project and all other classpath entries (e.g. Maven dependencies) are below those.






Now you can "Apply" and start Tomcat by hitting "Debug". After a few seconds (check the console output) you can go to http://localhost:8080/examples/ and check out the examples provided by Tomcat.


Step 6 - Add Demo-Servlet - Go to our Test project, add a new package called "demo" and a new servlet called "TestServlet". Be creative with some test output - like I was...




Step 7 - Change web.xml - Go to the web.xml of the examples context and add our servlet (as shown in the image). Below all servlet declarations you also have to add a servlet-mapping (not shown in the image). It looks like this:

    <servlet-mapping>
        <servlet-name>test</servlet-name>
        <url-pattern>/demo/test</url-pattern>
    </servlet-mapping>
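
For reference, the matching <servlet> entry (the part only visible in the screenshot) would look roughly like this, assuming the package and class names from step 6:

    <!-- servlet-class assumes the demo.TestServlet created in step 6 -->
    <servlet>
        <servlet-name>test</servlet-name>
        <servlet-class>demo.TestServlet</servlet-class>
    </servlet>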




Hit save and restart Tomcat. You should now see your test output when surfing to http://localhost:8080/examples/demo/test - you can now set breakpoints, change the output (thanks to hot code replacement) and do all the other fun stuff you know from other debugging sessions.


Hint: Do you keep your JSP/JSF files as well as your web.xml and other resources in another project? Just create a little ANT script which copies them into the webapps folder of the Tomcat project - and you get re-deployment with a single mouse click (see the sketch below). Even better (this is what we do): you can modify/override the ResourceResolver of JSF, so you can simply use the classloader to resolve your .xhtml files. This way, you can keep your Java sources and your JSF sources close to each other. I will cover that in another post - the fun really starts when running multi-tenant systems with custom JSF files per tenant. The JSF implementation of Sun/Oracle has some nice gotchas built in for that case ;-)
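
A minimal sketch of such an ANT target might look like this - the folder and file names are assumptions and depend on your project layout:

    <target name="deploy-web-resources">
        <!-- Copy JSP/JSF pages and web.xml into the examples webapp
             (source and target paths are examples only). -->
        <copy todir="../Tomcat7/webapps/examples" overwrite="true">
            <fileset dir="WebContent" includes="**/*.xhtml,**/*.jsp,WEB-INF/web.xml"/>
        </copy>
    </target>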

Friday, January 6, 2012

Iterating over all Classes with an Annotation or Interface

...is impossible unless you use JavaEE, right? Wrong!

With some tricks you can iterate over all classes which fulfill a given predicate, like implementing an interface or carrying an annotation. But why should you care?

Well, software engineering is all about reuse. For example, we split our software up into about five modules and currently build four different products on top of them. Since a module obviously has no knowledge of any of these products and their classes, we need to discover extensions, handlers and other hooks at runtime.

Yes, we could use OSGi or Spring for that, but when we started out, both of these "feature battleships" seemed far too large for our concerns. So we built our own little DI (dependency injection) framework (with about a handful of classes). Well, DI is probably not even the key aspect; it's really all about getting all classes implementing a given interface or carrying a given annotation. (Some concrete examples will follow in the next posts.)

So how do we get this magic list? Well, it's tricky - but it works like a charm in our setting:

As I said, we have several modules which each participate in the search for classes. Each module will eventually become a JAR file. Now what we do is place a file called "component.properties" in the root folder of this JAR (or in the root folder of the Eclipse project, respectively). This file contains some metadata like name, version and build date (filled in by ant) - but that's irrelevant for now.

Now when we want to discover our classes, we first get a list of all component.properties files on the classpath; thanks to the setup above, there will be one per module/JAR:

Enumeration<URL> e = Nucleus.class.getClassLoader().
                       getResources("component.properties");

We then use each of the returned URLs and apply the following (dirty) algorithm:

    /**
     * Takes a given url and creates a list which contains 
     * all children of the given url. 
     * (Works with Files and JARs).
     */
    public static List<String> getChildren(URL url) {
        List<String> result = new ArrayList<String>();
        if ("file".equals(url.getProtocol())) {
            File file = new File(url.getPath());
            if (!file.isDirectory()) {
                file = file.getParentFile();
            }
            addFiles(file, result, file);
        } else if ("jar".equals(url.getProtocol())) {
            try {
                JarFile jar = ((JarURLConnection) url.openConnection()).getJarFile();
                Enumeration<JarEntry> e = jar.entries();
                while (e.hasMoreElements()) {
                    JarEntry entry = e.nextElement();
                    result.add(entry.getName());
                }
            } catch (IOException e) {
                Log.UTIL.WARN(e);
            }
        }
        return result;
    }

    /**
     * Collects all children of the given file into the given 
     * result list. The resulting string is the relative path
     * from the given reference.
     */
    private static void addFiles(File file, 
                                 List<String> result, 
                                 File reference) 
    {
        if (!file.exists() || !file.isDirectory()) {
            return;
        }
        for (File child : file.listFiles()) {
            if (child.isDirectory()) {
                addFiles(child, result, reference);
            } else {
                String path = null;
                while (child != null && !child.equals(reference)) {
                    if (path != null) {
                        path = child.getName() + "/" + path;
                    } else {
                        path = child.getName();
                    }
                    child = child.getParentFile();
                }
                result.add(path);
            }
        }
    }


So what we now have is a list of files like:
    com/acme/MyClass.class
    com/acme/resource.txt
    ... 

We can now filter this list for class files and load each one; then we can check our predicates (implements an interface, carries an annotation):

...iterating over each result of getChildren(url)...:


if (relativePath.endsWith(".class")) {
  // Remove .class and change / to .
  String className = relativePath.substring(0,
                      relativePath.length() - 6).replace("/", ".");
  try {
     Class<?> clazz = Class.forName(className);
     for (ClassLoadAction action : actions) {
        action.handle(clazz);
     }
  } catch (ClassNotFoundException e) {
     System.err.println("Failed to load class: " + className);
  } catch (NoClassDefFoundError e) {
     System.err.println("Failed to load dependend class: " +
                        className);
  }
}

Now you have loaded all classes which match your predicates. We do this once on startup and fill a lookup map. Once a component wants to know all implementations of X, we simply query this map. Furthermore, we use the component.properties files to let the framework know which component depends on which, and we load the classes in that order. This is important, since while loading classes you can provide more implementations of ClassLoadAction - yes, you can extend the extension loader while loading extensions.
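
To illustrate, such a lookup registry can be as simple as the following sketch - class and method names are made up here, the real registry in Nucleus may look different:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical registry - names are chosen for illustration only.
    public class PartRegistry {

        private final Map<Class<?>, List<Object>> parts =
                new HashMap<Class<?>, List<Object>>();

        // Called once at startup (e.g. by a ClassLoadAction) for every
        // discovered implementation.
        public void register(Class<?> lookupClass, Object implementation) {
            List<Object> list = parts.get(lookupClass);
            if (list == null) {
                list = new ArrayList<Object>();
                parts.put(lookupClass, list);
            }
            list.add(implementation);
        }

        // Called whenever a component wants all implementations of a class.
        public List<Object> find(Class<?> lookupClass) {
            List<Object> list = parts.get(lookupClass);
            return list != null ? list : new ArrayList<Object>();
        }
    }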

I've made the framework open source. An article about it can be found here: http://andreas.haufler.info/2012/01/modular-java-applications-microkernel.html

This post is the third part of my series "Enterprisy Java", in which we share our hints and tricks for overcoming the obstacles of building several multi-tenant web applications out of a set of common modules.

Java: Caching without Crashing

When building larger Java applications you sooner or later stumble over the question "This is quite an intensive computation - should I really recompute it every time?". Most of the time the alternative is to take a Map<K,V> and cache the computed value. This works for short-lived objects where the total number of cached values can be predicted. But sometimes you just don't know how many values will be cached. So you say: we'd better take Apache Collections' LRUMap and limit the size of each cache, so things don't get out of hand. This is of course far better than the first solution. However, those caches still grow and grow until they reach their maximum size - and they never shrink!

We had such a solution running on our servers. After some days of operation, the JVM had always maxed out its heap. Well, it didn't crash - it was actually very stable - it just consumed a lot of expensive resources, since all caches were full, no matter whether they were currently used or not.

So what we needed and built is a CacheManager. Whenever you need a cache, you ask the CacheManager to provide one for you. Internally it still uses Apache Collections' robust LRUMap, along with some statistics and bookkeeping. It also lets you specify a maximum "time to live" (TTL) for each cache. The CacheManager then checks all caches regularly and cleans out unused entries. Using this, you can easily use caches here and there, knowing that once utilization goes down, the entries will be evicted and will no longer block expensive resources.

Here's how you'd use this class - see ExampleCache.java for a full example:

    Cache<String, Integer> test = CacheManager
            .createCache("Test", // Name of the cache
                         10,     // Max number of entries
                         1,      // TTL
                         TimeUnit.HOUR //Unit of TTL (hours) 
                         );

The Cache can then be used like a Map:

     value = test.get("key");
     test.put("key", value);

All you need to do is set up a Timer or another service which regularly invokes CacheManager.runEviction() to clean up all caches.
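
For example, a plain java.util.Timer is enough - the ten-minute interval below is an arbitrary choice for this sketch:

    // Trigger the eviction regularly - here every ten minutes (arbitrary interval).
    Timer evictionTimer = new Timer(true);
    evictionTimer.schedule(new TimerTask() {
        @Override
        public void run() {
            CacheManager.runEviction();
        }
    }, 10 * 60 * 1000, 10 * 60 * 1000);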

A complete example and the implementation can be found here (open source - MIT-License):

Each Cache can also provide statistics like size, hit rate, number of uses, etc:
Visualization of the provided usage statistics in our systems - captions are in German, sorry ;-)

This post is the second part of my series "Enterprisy Java", in which we share our hints and tricks for overcoming the obstacles of building several multi-tenant web applications out of a set of common modules.