Andy's Software Engineering Corner
Thoughts on Java, Language Design, Database- and Web Technologies...

Monday, October 26, 2020
This time a short post, as most of its contents live on GitHub:

Monday, August 29, 2016
OS X El Capitan hangs or freezes during boot or after login
I twice had the case that, after an upgrade or an update, my MacBook Pro froze during / after the login screen.
What didn't help - but what's always worth a try:
- Resetting the NVRAM (Cmd + Opt + P + R + Pwr)
- Booting in safe mode (shift + Pwr)
What did help:
- Boot into recovery mode (Cmd + R + Pwr)
- If you have FileVault enabled, open Disk Utility, select your main drive (which is grayed out) and select "Unlock..." in the menu
- Start a terminal
- Remount the file system writable:
mount -uw /
- Navigate to your Kernel Extensions folder:
cd /Volumes/System/Library/Extensions
- Look for new or non-standard extensions and move them into a subdirectory called "Unsupported":
mkdir Unsupported; mv Stuff.kext Unsupported/
The list of standard extensions can be obtained with: ls /Library/Extensions
- Good candidates in my case were Logitech drivers and a USB-to-serial driver
- Reboot
Note, however, that this tutorial omits the step of unlocking FileVault, which is essential when using an encrypted hard disk.
Monday, April 11, 2016
Using the Raspberry Pi to program a Microchip PIC (PIC24) device via ICSP
Why would someone want to do that? Well, basically for two reasons. First, the classic: "this should be possible" - so let's try it, spend the better part of three weekends and some nights, and finally get it working. Might sound stupid, but to me this is still the best way to really learn a technology or technical topic. Starting with basic C++ skills, I learned a lot about PIC assembler (yep, ICSP for PICs is basically sending a bunch of opcodes to the device), the PIC memory layout and, of course, PIC ICSP and controlling the GPIOs of the Raspberry Pi.
What's the second reason? In my case, I have a PIC connected to the Pi anyway, talking to each other via RS232 - so I figured I could use the spare pins on the connector to assign some GPIOs to the appropriate pins on the PIC, and I don't need a second connector for ICSP (the board is quite full anyway). Also, I can update the firmware without having to build a bootloader (which is a topic of its own).
In case you're curious: the whole project will eventually be a driver board for my 3-axis CNC. Currently I have a PC + Arduino + GRBL shield, but I'm not happy with the setup. So I'm planning to build a web-based controller on the Pi. This way I can control the machine using a cheap Galaxy Tab A or my mobile phone. Also, I can directly upload the G-code (produced by Fusion 360 - AWESOME!) from my laptop - either directly or via my NAS or Dropbox or the like (the two machines are quite a bit apart).
"OMG Andy, all this has already been built?!" Yes - but it is still fun to try it yourself and learn a whole lot of new things :-)
Because I found a lot of inspiration and help in the projects of others, I tried to document my code quite well. I also added a description of how it all works - so if you're into ICSP / PICs / GPIOs, have a look here: OpenCobra on GitHub
Saturday, February 6, 2016
C code always runs way faster than Java, right? Wrong!
So we all know the prejudice that Java, being interpreted, is slow and that C, being compiled and optimized, runs very fast. Well, as you might know, the picture is quite different.
TL;DR: Java is faster in constellations where the JIT can perform inlining, as all methods/functions are visible to it, whereas the C compiler cannot optimize across compilation units (think of libraries etc.).
A C compiler takes the C code as input, compiles and optimizes it, and generates machine code for a specific CPU or architecture. This leads to an executable which can be run directly on the given machine without further steps. Java, on the other hand, has an intermediate step: bytecode. The Java compiler takes Java code as input and generates bytecode, which is basically machine code for an abstract machine. For each (popular) CPU architecture there is a Java Virtual Machine, which simulates this abstract machine and executes (interprets) the generated bytecode. And that is as slow as it sounds. But on the other hand, bytecode is quite portable, as the same output will run on all platforms - hence the slogan "Write once, run everywhere".
With the approach described above it would rather be "write once, wait everywhere", as the interpreter would be quite slow. So what a modern JVM does is just-in-time compilation: the JVM internally translates the bytecode into machine code for the CPU at hand. But as this process is quite complex, the HotSpot JVM (the one most commonly used) only does this for code fragments which are executed often enough (hence the name HotSpot). Next to being faster at startup (the interpreter starts right away, the JIT compiler kicks in as needed), this has another benefit: the HotSpot JIT already knows which parts of the code are called frequently and which are not - so it can use that knowledge while optimizing the output - and this is where our example comes into play.
Before having a look at my tiny, totally made-up example, let me note that Java has a lot of features, like dynamic dispatch (calling a method on an interface), which also come with runtime overhead. So Java code is probably easier to write but will generally still be slower than C code. However, when it comes to pure number crunching, as in my example below, there are interesting things to discover.
So without further ado, here is the example C code:
test.c:
int compute(int i);
int test(int i);

int main(int argc, char** argv) {
    int sum = 0;
    for (int l = 0; l < 1000; l++) {
        int i = 0;
        while (i < 2000000) {
            if (test(i))
                sum += compute(i);
            i++;
        }
    }
    return sum;
}
test1.c:
int compute(int i) {
    return i + 1;
}

int test(int i) {
    return i % 3;
}
Now, what the main function actually computes isn't important at all. The point is that it calls two functions (test and compute) very often and that those functions are in another compilation unit (test1.c). Let's compile and run the program:
> gcc -O2 -c test1.c
> gcc -O2 -c test.c
> gcc test.o test1.o
> time ./a.out
real 0m6.693s
user 0m6.674s
sys 0m0.012s
So this takes about 6.6 seconds to perform the computation. Now let's have a look at the Java program:
Test.java:
public class Test {
    private static int test(int i) {
        return i % 3;
    }

    private static int compute(int i) {
        return i + 1;
    }

    private static int exec() {
        int sum = 0;
        for (int l = 0; l < 1000; l++) {
            int i = 0;
            while (i < 2000000) {
                if (test(i) != 0) {
                    sum += compute(i);
                }
                i++;
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(exec());
    }
}
Now let's compile and execute this:
> javac Test.java
> time java Test
real 0m3.411s
user 0m3.395s
sys 0m0.030s
So, taking 3.4 seconds, Java is quite a bit faster for this simple task (and that even includes the slow startup of the JVM). The question is why? The answer, of course, is that the JIT can perform code optimizations that the C compiler can't - in our case, function inlining. As we defined our two tiny functions in their own compilation unit, the compiler cannot inline them when compiling test.c (newer compilers can mitigate this with link-time optimization, e.g. gcc -flto, but the plain per-file compilation used here cannot). The JIT, on the other hand, has all methods at hand and can perform aggressive inlining, and hence the compiled code is way faster.
So is this a totally exotic, made-up example which never occurs in real life? Yes and no. Of course it is an extreme case, but think about all the libraries you include in your code. All those methods cannot be considered for optimization in C, whereas in Java it does not matter where the bytecode comes from: as it is all present in the running JVM, the JIT can optimize to its heart's content. Of course there is a dirty trick in C to lower this pain: macros. This is, in my eyes, one of the major reasons why so many C libraries still use macros instead of proper functions - with all the problems and headaches that come with them.
Now, before the flamewars start: both of these languages have their strengths and weaknesses, and both have their place in the world of software engineering. This post was only written to open your eyes to the magic and wonders that a modern JVM makes happen each and every day.
Monday, June 1, 2015
Highlighting Checkstyle Links using Maven and IntelliJ IDEA
Although IntelliJ IDEA has an excellent Maven integration, it does not recognize file references or file links in the output of Maven commands. One such generator of file links is Checkstyle, which generates output like this:
Now our lives would be a lot easier if we could just click on the message to fix the issue. Luckily, with a little hack, this is possible: IntelliJ provides a way to define custom output filters for "External Tools". Navigate to "Preferences > Tools > External Tools" and add a new one with "mvn" as command and "validate" (or whatever triggers Checkstyle) as parameter.
Then click on "Output Filters" and add a filter with an arbitrary name and "$FILE_PATH$:$LINE$(:$COLUMN$)?.*" as regular expression.
If you now choose "Tools > External Tools > Checkstyle", Maven will run again, producing a nicely linked output:
Wednesday, May 20, 2015
Fixing Logjam for Pound
The recently discovered "Logjam" problem in TLS (or, to be exact, in the Diffie-Hellman key exchange) is also present in Pound by Apsis - especially if you're using a pre-built binary via apt-get or rpm, as the DH parameters are built into the pound binary itself.
So, to block support for export-grade DH cipher suites, it is enough to specify or change the "Ciphers" setting:
ListenHTTPS
...
...
Ciphers "ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA"
End
However, to be absolutely on the safe side, I'd recommend compiling your own binary with 2048-bit DH params (the default ones are "just" 1024 bit anyway).
Luckily, the steps are quite simple and straightforward:
- wget http://www.apsis.ch/pound/Pound-2.7.tgz
- tar -xzf Pound-2.7.tgz
- cd Pound-2.7
- ./configure --with-dh=2048 --prefix= --exec_prefix=/usr
- make
- make install
Also consider adding "Disable SSLv3" (just above Ciphers), as SSLv3 is considered insecure.
Using all this will give you a solid A- on https://www.ssllabs.com/ssltest/analyze.html
Tuesday, April 21, 2015
Using Rhino with Java 8
Java 8 brings Nashorn as the new JavaScript implementation for JSR 223 (javax.script). While this is certainly great news (Nashorn is way faster than Rhino, as it directly generates bytecode), it comes with some challenges: Nashorn is not 100% compatible with Rhino.
Rhino had some extensions and somewhat different interpretations of how to combine the Java world with JavaScript, so you cannot simply replace Rhino with Nashorn. One case (which ruined our day) is that you cannot call static methods on instances. Therefore we had to get Rhino up and running on Java 8 until we get our scripts rewritten.
Although there is extensive documentation available on java.net, it is a bit confusing (some URLs are wrong, some steps are missing). So here are the steps which worked for us:
- Download Rhino: https://github.com/downloads/mozilla/rhino/rhino1_7R4.zip
- Download JSR-223: svn checkout https://svn.java.net/svn/scripting~svn
  (Yes, that is a ~ in the URL!)
- cd scripting~svn/trunk/engines/javascript/lib
- Copy the js.jar from rhino1_7R4.zip into this directory (replace the existing js.jar)
- cd ../make
- ant clean all
- Copy ../build/js-engine.jar AND js.jar (of Rhino) into your classpath
- Now change:
ScriptEngineManager manager = new ScriptEngineManager();
ScriptEngine engine = manager.getEngineByName("js");
to:
ScriptEngineManager manager = new ScriptEngineManager();
ScriptEngine engine = manager.getEngineByName("rhino");
That's all you need to get Rhino running on Java 8.
Update: Here's another tutorial on this Topic: Java 8 Features Tutorial