Monday, February 2, 2015

A better MessageFormat for Java

The MessageFormat class is widely used in Java, especially when it comes to internationalisation. At first sight, using it is simple and straightforward. Define a pattern like "There are {0} files on {1}", either in Java - or better in a .properties file. Create a new instance of MessageFormat and supply arguments for the two parameters when calling the format method. This creates a formatted string like "There are 100 files on /dev/sda".
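
For example (using the standard java.text API; the values are made up for illustration):

 import java.text.MessageFormat;

 // Classic MessageFormat usage with positional parameters
 String pattern = "There are {0} files on {1}";
 String result = MessageFormat.format(pattern, 100, "/dev/sda");
 // result: "There are 100 files on /dev/sda"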

Couldn't be easier, right? Yep, but there's still room for improvement. One such improvement is replacing parameter indices with names. "There are ${numberOfFiles} files on ${disk}" provides way more context to the poor soul having to translate a properties file.

Another improvement is optional sections. Imagine you have to represent a person as a string. It can have a salutation, a first name and a last name. Depending on what a user entered, the salutation and/or the first name might be empty. Valid combinations could be "Mr. John Foo", "John Foo", "Mr. Foo". Using a pattern like "[${salutation} ][${firstname} ]${lastname}" is enough to create this output if optional patterns are supported. The idea is that blocks in square brackets are only output if at least one enclosed parameter is replaced with a non-null value.

All this is implemented by the Formatter class provided by Sirius. Using it is quite simple. For internationalisation, use NLS.fmtr("Property.key").set("paramName", value).format(). Note that Sirius automatically loads all properties files and makes them available using the NLS class. To use the smart formatting capabilities a formatter can be directly instantiated like this: Formatter.create("${foo}[ ${bar}]").set("foo", foo).set("bar", bar).smartFormat().
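
Putting the pieces together, the person example from above could look roughly like this (a sketch based on the API described in this post; I'm assuming that a parameter set to null simply drops its enclosing block):

 // Sketch only - assumes the Sirius Formatter API as described above
 String person = Formatter.create("[${salutation} ][${firstname} ]${lastname}")
                          .set("salutation", "Mr.")
                          .set("firstname", null) // nothing entered by the user
                          .set("lastname", "Foo")
                          .smartFormat();
 // person is now "Mr. Foo" - the block around ${firstname} is dropped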



Thursday, June 12, 2014

JavaMail can be evil (and force you to restart your app server)

JavaMail has always had an interesting approach when it comes to configuration. Basically you have to fill an untyped map or Properties structure and hope for the correct interpretation. Countless tutorials on the net show the minimal properties required to make it work (send / receive mails).

However, as we just painfully learned, there are some lesser-known properties you should probably take care of, namely the timeout settings for socket I/O. By default, JavaMail uses an infinite timeout for all socket operations (connect, I/O, ...)!

Now suppose you have a cluster of SMTP servers which handle outgoing mail, accessed via a DNS round robin. If one of those servers fails, and it happens to be the one JavaMail wants to connect to, your mail-sending thread will hang - forever! This is exactly what happened to us, and we needed to perform some real nasty magic to avoid tragedy.

Therefore, we now set timeouts for all operations:

  // Properties which will later be passed to the JavaMail Session
  Properties props = new Properties();

  String MAIL_SMTP_CONNECTIONTIMEOUT = "mail.smtp.connectiontimeout";
  String MAIL_SMTP_TIMEOUT = "mail.smtp.timeout";
  String MAIL_SMTP_WRITETIMEOUT = "mail.smtp.writetimeout";

  String MAIL_SOCKET_TIMEOUT = "60000";

  // Set a fixed timeout of 60s for all operations -
  // the default timeout is "infinite"
  props.put(MAIL_SMTP_CONNECTIONTIMEOUT, MAIL_SOCKET_TIMEOUT);
  props.put(MAIL_SMTP_TIMEOUT, MAIL_SOCKET_TIMEOUT);
  props.put(MAIL_SMTP_WRITETIMEOUT, MAIL_SOCKET_TIMEOUT);
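
For completeness, this is roughly how such a Properties object ends up in a JavaMail Session (host and addresses are placeholders, not real values):

  // Sketch only - host and addresses are placeholders
  props.put("mail.smtp.host", "smtp.example.com");
  Session session = Session.getInstance(props);

  MimeMessage message = new MimeMessage(session);
  message.setFrom(new InternetAddress("noreply@example.com"));
  message.setRecipients(Message.RecipientType.TO, "someone@example.com");
  message.setSubject("Test", "UTF-8");
  message.setText("Hello", "UTF-8");
  Transport.send(message);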


Also, if you plan to access DNS round robin based services (like Amazon S3) or, in our case, a mail cluster, don't forget to also configure the DNS cache timeout of Java (which is also infinite by default when running with a security manager, as in most app servers):

 // Only cache DNS lookups for 10 seconds
 java.security.Security.setProperty("networkaddress.cache.ttl","10");

And while we're at it, for us it turned out to be a good idea to set all encodings to UTF-8 (independent of the underlying OS) to provide a stable environment:

 System.setProperty("file.encoding", Charsets.UTF_8.name());
 System.setProperty("mail.mime.charset", Charsets.UTF_8.name());


...you don't want to care about stuff like this at all? Feel free to use our open source Java library SIRIUS, which takes care of all that by providing a neat fluent API for sending mails:
Sources on GitHub

An example usage can be found in the cluster manager:

    @Part
    private MailService ms;

    private void alertClusterFailure() {
        ...
        ms.createEmail()
          .useMailTemplate("system-alert", ctx)
          .toEmail(receiver)
          .send();
        ...
    }


Thursday, February 20, 2014

Multithreaded Java - Screencast on the synchronized keyword

synchronized is quite well known in the Java community. Due to its early implementations, which had a significant runtime overhead, it has quite a bad reputation. In modern JVMs this isn't the case anymore - still, there's something to look out for.
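
To recap what the keyword actually does, here is a minimal (made-up) example - without synchronized, concurrent calls to increment() could lose updates, because value++ is a read-modify-write sequence:

 public class Counter {
     private int value;

     // Only one thread at a time may execute a synchronized method
     // of the same instance, so the read-modify-write stays atomic.
     public synchronized void increment() {
         value++;
     }

     public synchronized int get() {
         return value;
     }
 }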

Watch the screencast to learn more:


Multithreaded Java - Screencast on the volatile keyword

volatile is probably one of the least known keywords in Java. Still it serves an important purpose - and not knowing about it might ruin your day...
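
As a minimal (made-up) example of why it matters: without volatile, the worker thread might never see the update made by the stopping thread and could loop forever:

 public class Worker implements Runnable {
     // volatile guarantees that writes to this flag become
     // visible to other threads
     private volatile boolean running = true;

     @Override
     public void run() {
         while (running) {
             // do some work
         }
     }

     public void stop() {
         running = false;
     }
 }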

Watch this screencast to learn more:


Monday, February 3, 2014

Version Numbering Scheme - Yet another approach


Version numbering schemes are probably one of the few things we software engineers have more of than sorting algorithms. However, there's always room for one more.

While the classic approach of MAJOR.MINOR.PATCH (e.g. 1.8.2) works quite well for libraries or products which are distributed in a broad manner, it is still not as easy as it seems. What is a major change? What is a minor one? What comes after 1.9: 2.0 or 1.10? There are tons of examples where this classic approach fails, Java being one of the most prominent.

On the other hand, this approach is almost perfectly suited for libraries, as the rules are quite obvious here:
  • increment minor version for every release (2.4 -> 2.5)
  • increment major version when a backward incompatible change was made (2.4 -> 3.0)
  • increment the patch level for each update which only fixes bugs but doesn't add functionality (2.4 -> 2.4.1)
However, for software which runs in the cloud or is only delivered to a number of customers, the distinction is not always this clear. As we do not distinguish between minor and major updates (ask our sales guys, each release is a major step forward), we ended up using the build numbers of our Jenkins build server as version numbers.

Although this approach works quite well, there are two problems with it:
  1. You need a build server which issues consecutive build numbers
  2. Without looking at the build server, you cannot tell the age of a release (How much older is BUILD-51 compared to BUILD-52?)
Therefore we have now started to switch to another approach for our SIRIUS-based products: inspired by the date codes placed on ICs, we started to use the same codes for our releases. A date code consists of four digits, the first two being the year and the second two being the week number. So this blog post would have 1406 as its version.

As we don't perform more than one release per week, a version number is always unique. Furthermore these numbers are quite short and easy to remember (compared to full dates like foo-20130527). Still they provide rough information about the release date.
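
For illustration, such a code could be computed with a few lines of Java (a sketch using java.time and ISO week numbering; the class and method names are made up for this example):

 import java.time.LocalDate;
 import java.time.temporal.WeekFields;

 public class DateCode {

     // Two digits for the (week-based) year, two digits for the ISO week number
     static String dateCode(LocalDate date) {
         int year = date.get(WeekFields.ISO.weekBasedYear()) % 100;
         int week = date.get(WeekFields.ISO.weekOfWeekBasedYear());
         return String.format("%02d%02d", year, week);
     }

     public static void main(String[] args) {
         System.out.println(dateCode(LocalDate.of(2014, 2, 3))); // prints "1406"
     }
 }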

Now as I said, this scheme is not superior to others. It's just a good solution for our problem. Use it if you like it, ignore it otherwise ;-)

Tuesday, January 7, 2014

Making HTTP content compression work in netty 4

Netty is really a great framework providing all the things needed to build a high performance HTTP server. The nice thing is that nearly everything comes out of the box and just has to be put together in the right way. And content compression (gzip or deflate) is no exception. However, when it comes to compressing static content, I stumbled quite a few times before everything worked as expected:

Update: First of all, widely used tools like wget use HTTP 1.0 and not HTTP 1.1 - therefore we cannot always deliver a chunked response (we have to live with disabling compression then). Also note that the Netty guys have since had pretty much the same idea: HttpChunkedInput. The problem with HTTP 1.0 or non-compressible responses (see SmartContentCompressor below) however remains...

Based on the http/file example provided by Netty, I used the following approach to serve static files (same as used in netty 3.6.6):

RandomAccessFile raf = new RandomAccessFile(file, "r");
HttpResponse response = new DefaultHttpResponse(HTTP_1_1, OK); 
ctx.write(response);

if (useSendFile) {
    ctx.write(new DefaultFileRegion(raf.getChannel(), 0, fileLength));
} else {
    ctx.write(new ChunkedFile(raf, 0, fileLength, 8192));
}
However, as soon as I added an HttpContentCompressor to the pipeline, Firefox failed with a message like "invalid content encoding".

As it turns out, the HttpContentCompressor expects HttpContent objects as input chunks to be compressed. The ChunkedWriteHandler, however, sent ByteBufs directly downstream. Sending a FileRegion (useSendFile=true) also left the content compressor unimpressed.

To overcome this problem I created a class named ChunkedInputAdapter which takes a ChunkedInput<ByteBuf> and represents it as a ChunkedInput<HttpContent>. However, two things still weren't satisfying: first, FileRegions and the zero-copy capability still couldn't be used, and second, already compressed files like JPEGs would be compressed again. Therefore I subclassed HttpContentCompressor with a class called SmartContentCompressor. This class checks whether a header "Content-Encoding: Identity", a specific content type, or a content length of less than 1 kB is present. In these cases content compression is bypassed.
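
The gist of SmartContentCompressor looks roughly like this (a sketch, not the original source; the content types shown and the 1 kB threshold are only examples of the checks described above):

public class SmartContentCompressor extends HttpContentCompressor {

    @Override
    protected Result beginEncode(HttpResponse response, String acceptEncoding) throws Exception {
        String contentEncoding = response.headers().get("Content-Encoding");
        if ("identity".equalsIgnoreCase(contentEncoding)) {
            return null; // returning null bypasses compression
        }
        String contentType = response.headers().get("Content-Type");
        if (contentType != null
                && (contentType.startsWith("image/jpeg") || contentType.startsWith("application/zip"))) {
            return null; // already compressed content
        }
        String contentLength = response.headers().get("Content-Length");
        if (contentLength != null && Long.parseLong(contentLength) < 1024) {
            return null; // too small to benefit from compression
        }
        return super.beginEncode(response, acceptEncoding);
    }
}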

Using this combination permits using both content compression when it is useful and the zero-copy capability if the file is already compressed.
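
For reference, the resulting pipeline looks roughly like this (handler names and the FileServerHandler are placeholders for whatever serves your files):

pipeline.addLast("codec", new HttpServerCodec());
pipeline.addLast("compressor", new SmartContentCompressor());
pipeline.addLast("chunkedWriter", new ChunkedWriteHandler());
pipeline.addLast("handler", new FileServerHandler());
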
All the sources mentioned above are open sourced under the MIT license and part of the SIRIUS framework.

Wednesday, December 25, 2013

Getting a simple CDC demo (serial port via USB) working with a PIC32 (PIC32MX575)

This is just a quick write-up to save others some debugging time. The adventure is called: Get the "cdc_serial_emulator" example supplied with Microchip Harmony (/harmony/v0_70b/apps/usb/device/cdc_serial_emulator) up and running on my custom PIC32 board.

Once one understands the inner workings of the firmware, the first steps are quite simple. First of all I modified the code which assumed we would run on an Explorer board, commenting out everything uncompilable and unneeded in bsp_sys_init.c.

Now that I was able to compile the project, I was ready to enter the USB descriptor hell. And believe it or not, my Windows 7 machine was the most helpful tool for my first steps. Every time I plugged my board in, those annoying "USB device connected" sounds let me know if the communication worked or not. Of course, it didn't work by default. The reason is the clock frequency of the PIC. The USB module has an internal PLL which generates the required 48 MHz. This PLL is fed by the main oscillator and therefore everything needs to be set up correctly. I used (for no special reason) a 20 MHz crystal instead of the 8 MHz crystal assumed by the example.

Therefore I needed to tweak the config settings found in system_init.c: 
OSCO Pin (OSCIOFNC)                         = Enable
Primary Oscillator Configuration (POSCMOD)  = External (High-speed)
Secondary Oscillator Enable (FSOSCEN)       = Disabled
Oscillator Selection Bits (FNOSC)           = Primary osc with PLL

#pragma config OSCIOFNC = ON, POSCMOD = HS, FSOSCEN = OFF, FNOSC = PRIPLL

PLL Input Divider (FPLLIDIV)                = Divide by 5   (20 MHz / 5 = 4 MHz)
PLL Multiplier (FPLLMUL)                    = Multiply by 20 (4 MHz * 20 = 80 MHz)
System PLL Output Clock Divider (FPLLODIV)  = Divide by 1   (80 MHz / 1 = 80 MHz system clock)
Peripheral Clock Divisor (FPBDIV)           = Divide by 1   (80 MHz / 1 = 80 MHz peripheral clock)
Watchdog Timer Enable (FWDTEN)              = Disabled
Clock Switching and Monitor Selection (FCKSM) = Clock Switch Enable, Fail Safe Clock Monitoring Enable

#pragma config FPLLIDIV = DIV_5, FPLLMUL = MUL_20, FPLLODIV = DIV_1
#pragma config FWDTEN = OFF, FCKSM = CSECME, FPBDIV = DIV_1

Enable the PLL for USB clock generation:

#pragma config UPLLEN = ON

Divide the external input clock by 5 before it is fed into the USB PLL:
20 MHz / 5 = 4 MHz, which is multiplied by 24 and then divided by 2 (see Reference Manual page 27-3), resulting in the desired 48 MHz USB reference clock.

#pragma config UPLLIDIV = DIV_5


As this only took three evenings to figure out, it can be considered the easy part... The problem was that neither Linux nor OS X would recognize the device as a serial port (the device itself was found). Therefore it was clear that the timing was OK, but the USB descriptor wasn't.

After days of debugging (mostly on OS X using USBProbe) I switched to Linux and tried "lsusb -v". And there you go: the CDC descriptor provided by Microchip was/is wrong! lsusb said something like "INVALID CDC(UNION): 0x04 0x24 0x06 0x00". Looking at the descriptor, this was really the input in system_config.c:


    // Size of the descriptor
    sizeof(USB_CDC_UNION_FUNCTIONAL_DESCRIPTOR_HEADER),
    // CS_INTERFACE
    USB_CDC_DESC_CS_INTERFACE,
    // Type of functional descriptor
    USB_CDC_FUNCTIONAL_UNION,
    //com interface number
    0,


Compared to any other CDC device I tried, the record (descriptor) has to look like LENGTH, TYPE, SUBTYPE, MASTER_INTERFACE, SLAVE_INTERFACE - so this was clearly missing one byte. Therefore I changed the code to:

0x05, // Size (5 bytes)
0x24, // DescriptorType: CS_INTERFACE
0x06, // DescriptorSubtype: Union Functional Descriptor
0x00, // MasterInterface
0x01, // SlaveInterface0


Of course, that didn't work out of the box - as I changed the descriptor by adding one byte, I had to fix the size at the beginning of the descriptor:

/* Configuration 1 Descriptor */
const uint8_t configDescriptor1[]={
   
    /* Configuration Descriptor */
    //sizeof(USB_CFG_DSC),    // Size of this descriptor in bytes
    0x09,
    // CONFIGURATION descriptor type
    USB_DESCRIPTOR_CONFIGURATION,
    // Total length of data for this cfg
    67,0, // This was 66 originally


Using this, both my Mac and Linux machines now recognize the device and supply a tty for it :-)