JavaMail has always had an interesting approach to configuration: basically, you fill an untyped map or Properties structure and hope for the correct interpretation. Countless tutorials on the net show the minimal properties required to make it work (send / receive mails).
However, as we just painfully learned, there are some lesser-known properties you should probably take care of, namely the timeout settings for socket IO. By default, JavaMail uses an infinite timeout for all socket operations (connect, read, write)!
Now suppose you have a cluster of SMTP servers which handle outgoing mail, accessed via DNS round robin. If one of those servers fails, and it happens to be the one JavaMail wanted to connect to, your mail-sending thread will hang - forever! This is exactly what happened to us, and we had to perform some real nasty magic to avert tragedy.
Therefore, we now set timeouts for all operations:
import java.util.Properties;

Properties props = new Properties();

String MAIL_SMTP_CONNECTIONTIMEOUT = "mail.smtp.connectiontimeout";
String MAIL_SMTP_TIMEOUT = "mail.smtp.timeout";
String MAIL_SMTP_WRITETIMEOUT = "mail.smtp.writetimeout";
String MAIL_SOCKET_TIMEOUT = "60000";

// Set a fixed timeout of 60s for all operations -
// the default timeout is "infinite"
props.put(MAIL_SMTP_CONNECTIONTIMEOUT, MAIL_SOCKET_TIMEOUT); // connect
props.put(MAIL_SMTP_TIMEOUT, MAIL_SOCKET_TIMEOUT);           // read
props.put(MAIL_SMTP_WRITETIMEOUT, MAIL_SOCKET_TIMEOUT);      // write
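For completeness, here is a minimal sketch of how these properties feed into an actual send. The host and addresses (smtp.example.com etc.) are placeholders for illustration, not values from our setup:

import java.util.Properties;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;

Properties props = new Properties();
props.put("mail.smtp.host", "smtp.example.com"); // placeholder host
props.put("mail.smtp.connectiontimeout", "60000");
props.put("mail.smtp.timeout", "60000");
props.put("mail.smtp.writetimeout", "60000");

Session session = Session.getInstance(props);
MimeMessage msg = new MimeMessage(session);
msg.setFrom(new InternetAddress("noreply@example.com"));
msg.setRecipient(Message.RecipientType.TO, new InternetAddress("ops@example.com"));
msg.setSubject("Timeouts configured");
msg.setText("This send now fails after 60s instead of hanging forever.");
Transport.send(msg); // throws MessagingException on failure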
Also, if you plan to access DNS round robin based services (like Amazon S3) or, in our case, a mail cluster, don't forget to also configure the DNS cache timeout of Java (which is also infinite by default):
// Only cache DNS lookups for 10 seconds
java.security.Security.setProperty("networkaddress.cache.ttl","10");
And while we're at it: for us it turned out to be a good idea to set all encodings to UTF-8 (independent of the underlying OS) to provide a stable environment:
import com.google.common.base.Charsets; // Guava

// Note: file.encoding is normally read once at JVM startup, so setting
// it here mainly affects code which re-reads the system property later.
System.setProperty("file.encoding", Charsets.UTF_8.name());
System.setProperty("mail.mime.charset", Charsets.UTF_8.name());
...you don't want to care about stuff like this at all? Feel free to use our open source Java library SIRIUS, which takes care of all that by providing a neat fluent API for sending mails:
Sources on GitHub
An example usage can be found in the cluster manager:
@Part
private MailService ms;

private void alertClusterFailure() {
    ...
    ms.createEmail()
      .useMailTemplate("system-alert", ctx)
      .toEmail(receiver)
      .send();
    ...
}
Thursday, February 20, 2014
Multithreaded Java - Screencast on the synchronized keyword
synchronized is quite well known in the Java community. Due to its early implementations, which had a significant runtime overhead, it has quite a bad image. In modern JVMs this is no longer the case - still, there is something to look out for.
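As a small appetizer, here is a minimal sketch of the classic use case; the Counter class is made up for illustration and not taken from the screencast:

public class Counter {
    private int count = 0;

    // count++ is a non-atomic read-modify-write sequence; without
    // synchronized, concurrent increments could be lost.
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }
}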
Watch the screencast to learn more:
Multithreaded Java - Screencast on the volatile keyword
volatile is probably one of the least known keywords in Java. Still, it serves an important purpose - and not knowing about it might ruin your day...
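As an appetizer, a minimal sketch (made up for illustration, not taken from the screencast) of the standard pitfall volatile solves - a stop flag written by one thread and read by another:

public class Worker implements Runnable {
    // Without volatile, the worker thread may cache this flag,
    // never see the update and loop forever.
    private volatile boolean running = true;

    @Override
    public void run() {
        while (running) {
            // ... perform one unit of work ...
        }
    }

    public void stop() {
        running = false; // guaranteed to become visible to the worker
    }
}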
Watch this screencast to learn more:
Monday, February 3, 2014
Version Numbering Scheme - Yet another approach
Version numbering schemes are probably one of the few things of which we software engineers have more variants than of sorting algorithms. However, there's always room for one more.
While the classic approach of MAJOR.MINOR.PATCH (e.g. 1.8.2) works quite well for products which are distributed in a broad manner, it is still not as easy as it seems. What is a major change? What is a minor one? What comes after 1.9 - 2.0 or 1.10? There are tons of examples where this classic approach fails, Java being one of the most prominent.
On the other hand, this approach is almost perfectly suited for libraries, as the rules are quite obvious here:
- increment minor version for every release (2.4 -> 2.5)
- increment major version when a backward incompatible change was made (2.4 -> 3.0)
- increment the patch level for each update which only fixes bugs but doesn't add functionality (2.4 -> 2.4.1)
Another popular approach is to simply use consecutive build numbers (BUILD-51, BUILD-52, ...). Although this works quite well, there are two problems with it:
- You need a build server which issues consecutive build numbers
- Without looking at the build server, you cannot tell the age of a release (How much older is BUILD-51 compared to BUILD-52?)
Our scheme therefore builds on the calendar week of the release: as we don't perform more than one release per week, such a version number is always unique. Furthermore, these numbers are quite short and easy to remember (compared to full dates like foo-20130527), yet they still provide rough information concerning the release date.
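A minimal sketch of how such a number can be derived - the exact YEAR.WEEK format below is an assumption for illustration; the point is merely that the calendar week makes a weekly release unique and roughly dates it:

import java.time.LocalDate;
import java.time.temporal.WeekFields;

public class VersionNumber {
    // Derives a version like "2014.6" (ISO week-based year plus
    // week of year) from the release date.
    public static String forDate(LocalDate releaseDate) {
        int year = releaseDate.get(WeekFields.ISO.weekBasedYear());
        int week = releaseDate.get(WeekFields.ISO.weekOfWeekBasedYear());
        return year + "." + week;
    }

    public static void main(String[] args) {
        System.out.println(forDate(LocalDate.of(2014, 2, 3))); // 2014.6
    }
}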
Now, as I said, this scheme is not superior to others. It's just a good solution for our problem. Use it if you like it, ignore it otherwise ;-)
Tuesday, January 7, 2014
Making HTTP content compression work in netty 4
Netty is really a great framework, providing all the things needed to build a high-performance HTTP server. The nice thing is that nearly everything comes out of the box and just has to be put together in the right way. Content compression (gzip or deflate) is no exception. However, when it comes to compressing static content, I stumbled quite a few times before everything worked as expected:
Update: First of all, widely used tools like wget use HTTP 1.0 and not HTTP 1.1 - therefore we cannot always deliver a chunked response (we have to live with disabling compression then). Also note that the netty guys have since had pretty much the same idea: HttpChunkedInput. The problem with HTTP 1.0 or non-compressible responses (see SmartContentCompressor below) remains, however.
Based on the http/file example provided by netty, I used the following approach to serve static files (same as used in netty 3.6.6):
RandomAccessFile raf = new RandomAccessFile(file, "r");
HttpResponse response = new DefaultHttpResponse(HTTP_1_1, OK);
ctx.write(response);
if (useSendFile) {
    // Zero-copy transfer: the kernel moves the file contents
    // directly to the socket.
    ctx.write(new DefaultFileRegion(raf.getChannel(), 0, fileLength));
} else {
    // Chunked transfer, reading the file in 8 KB blocks.
    ctx.write(new ChunkedFile(raf, 0, fileLength, 8192));
}
However, as soon as I added an HttpContentCompressor to the pipeline, Firefox failed with a message like "invalid content encoding".
As it turns out, the HttpContentCompressor expects HttpContent objects as input chunks to be compressed. The ChunkedWriteHandler, however, sent ByteBufs directly downstream. Sending a FileRegion (useSendFile=true) likewise left the content compressor unimpressed.
In order to overcome this problem, I created a class named ChunkedInputAdapter which takes a ChunkedInput<ByteBuf> and represents it as a ChunkedInput<HttpContent>. However, two things still weren't satisfying: first, FileRegions and the zero-copy capability still couldn't be used, and second, already compressed files like JPEGs would be compressed again. Therefore I subclassed HttpContentCompressor with a class called SmartContentCompressor. This class checks whether a "Content-Encoding: identity" header, a specific content type, or a content length of less than 1 kB is present. In these cases the content compression is bypassed.
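The real ChunkedInputAdapter is part of the SIRIUS sources; a rough sketch of the idea, written against the netty 4.0 ChunkedInput interface, looks like this:

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.http.DefaultHttpContent;
import io.netty.handler.codec.http.HttpContent;
import io.netty.handler.stream.ChunkedInput;

// Wraps each ByteBuf chunk in an HttpContent so that a downstream
// HttpContentCompressor recognizes and compresses it.
public class ChunkedInputAdapter implements ChunkedInput<HttpContent> {

    private final ChunkedInput<ByteBuf> input;

    public ChunkedInputAdapter(ChunkedInput<ByteBuf> input) {
        this.input = input;
    }

    @Override
    public boolean isEndOfInput() throws Exception {
        return input.isEndOfInput();
    }

    @Override
    public HttpContent readChunk(ChannelHandlerContext ctx) throws Exception {
        ByteBuf buffer = input.readChunk(ctx);
        return buffer == null ? null : new DefaultHttpContent(buffer);
    }

    @Override
    public void close() throws Exception {
        input.close();
    }
}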
This combination permits using both: content compression when it is useful, and the zero-copy capability when the file is already compressed.
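SmartContentCompressor ships with SIRIUS as well; the mechanism can be sketched as follows (the checks mirror the criteria above, while the concrete content types and the code itself are my approximation). Returning null from beginEncode tells netty's HttpContentEncoder to pass the response through unchanged:

import io.netty.handler.codec.http.HttpContentCompressor;
import io.netty.handler.codec.http.HttpHeaders;
import io.netty.handler.codec.http.HttpResponse;

public class SmartContentCompressor extends HttpContentCompressor {

    @Override
    protected Result beginEncode(HttpResponse response, String acceptEncoding) throws Exception {
        String contentEncoding = response.headers().get(HttpHeaders.Names.CONTENT_ENCODING);
        if (HttpHeaders.Values.IDENTITY.equalsIgnoreCase(contentEncoding)) {
            // The producer explicitly asked for an uncompressed transfer.
            return null;
        }
        String contentType = response.headers().get(HttpHeaders.Names.CONTENT_TYPE);
        if (contentType != null
                && (contentType.startsWith("image/jpeg") || contentType.startsWith("application/zip"))) {
            // Already compressed formats gain nothing from another pass.
            return null;
        }
        long contentLength = HttpHeaders.getContentLength(response, -1);
        if (contentLength >= 0 && contentLength < 1024) {
            // Less than 1 kB: the compression overhead outweighs the savings.
            return null;
        }
        return super.beginEncode(response, acceptEncoding);
    }
}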
All the sources mentioned above are open sourced under the MIT license and part of the SIRIUS framework.