This is just a quick write-up to save others some debugging time. The adventure is called: Get the "cdc_serial_emulator" example supplied with Microchip Harmony (/harmony/v0_70b/apps/usb/device/cdc_serial_emulator) up and running on my custom PIC32 board.
Once one understands the inner workings of the firmware, the first steps are quite simple. First of all, I modified the code which assumed we would be running on an Explorer board: I commented out everything in bsp_sys_init.c that was uncompilable or unneeded.
Now that I was able to compile the project, I was ready to enter USB descriptor hell. And believe it or not, my Windows 7 machine was the most helpful tool for my first steps. Every time I plugged my board in, those annoying "USB device connected" sounds let me know whether the communication worked or not. Of course, it didn't work by default. The reason is the clock frequency of the PIC. The USB module has an internal PLL which generates the required 48 MHz. This PLL is fed by the main oscillator, and therefore everything needs to be set up correctly. I used (for no special reason) a 20 MHz crystal instead of the 8 MHz crystal assumed by the example.
Therefore I needed to tweak the config settings found in system_init.c:
OSCO Pin (OSCIOFNC) = Enabled
Primary Oscillator Configuration (POSCMOD) = External (High-speed)
Secondary Oscillator Enable (FSOSCEN) = Disabled
Oscillator Selection Bits (FNOSC) = Primary oscillator with PLL
#pragma config OSCIOFNC = ON, POSCMOD = HS, FSOSCEN = OFF, FNOSC = PRIPLL
PLL Input Divider (FPLLIDIV) = Divide by 5
(20 MHz / 5 = 4 MHz)
PLL Multiplier (FPLLMUL) = Multiply by 20
(4 MHz * 20 = 80 MHz)
System PLL Output Clock Divider (FPLLODIV) = Divide by 1
(80 MHz / 1 = 80 MHz system clock)
Peripheral Clock Divisor (FPBDIV) = Divide by 1
(80 MHz / 1 = 80 MHz peripheral clock)
Watchdog Timer Enable (FWDTEN) = Disabled
Clock Switching and Monitor Selection (FCKSM) = Clock Switch Enabled, Fail-Safe Clock Monitoring Enabled
#pragma config FPLLIDIV = DIV_5, FPLLMUL = MUL_20, FPLLODIV = DIV_1
#pragma config FWDTEN = OFF, FCKSM = CSECME, FPBDIV = DIV_1
Enable PLL for USB clock generation
#pragma config UPLLEN = ON
Divide external input clock by 5 before it is fed into the USB PLL.
20 MHz / 5 = 4 MHz (This is multiplied by 24 and then divided by 2 - see Reference Manual Page 27-3). This will result in the desired 48 MHz USB reference clock.
#pragma config UPLLIDIV = DIV_5
As this only took three evenings to figure out, it can be considered the easy part... The problem was that neither Linux nor OS X would recognize the device as a serial port (the device itself was found). So it was clear that the timing was OK, but the USB descriptor wasn't.
After days of debugging (mostly on OS X using USBProbe) I switched to Linux and tried "lsusb -v". And there you go: the CDC descriptor provided by Microchip was/is wrong! lsusb said something like "INVALID CDC(UNION): 0x04 0x24 0x06 0x00". Looking at the descriptor, this was indeed what system_config.c contained:
// Size of the descriptor
sizeof(USB_CDC_UNION_FUNCTIONAL_DESCRIPTOR_HEADER),
// CS_INTERFACE
USB_CDC_DESC_CS_INTERFACE,
// Type of functional descriptor
USB_CDC_FUNCTIONAL_UNION,
//com interface number
0,
Compared to any other CDC device I tried, this was missing one byte: the record (descriptor) has to look like LENGTH, TYPE, SUBTYPE, MASTER_INTERFACE, SLAVE_INTERFACE - so the slave interface byte was clearly missing. Therefore I changed the code to:
0x05, // Size (5 bytes)
0x24, // DescriptorType: CS_INTERFACE
0x06, // DescriptorSubtype: Union Functional Descriptor
0x00, // MasterInterface
0x01, // SlaveInterface0
Of course, that didn't work out of the box - as I changed the descriptor by adding one byte, I had to fix the total size at the beginning of the configuration descriptor:
/* Configuration 1 Descriptor */
const uint8_t configDescriptor1[]={
/* Configuration Descriptor */
//sizeof(USB_CFG_DSC), // Size of this descriptor in bytes
0x09,
// CONFIGURATION descriptor type
USB_DESCRIPTOR_CONFIGURATION,
// Total length of data for this cfg
67,0, // This was 66 originally
Using this, both my Mac and my Linux machine now recognize the device and supply a tty for it :-)
How to write one of the fastest expression evaluators in Java
Granted, the title is a bit of an attention grabber, but nevertheless true (of course you never trust a benchmark you didn't fake yourself - but that's another story).
So last week I was looking for a small and usable library to evaluate mathematical expressions. I almost immediately stumbled upon this stackoverflow post. The recommended library (Expr) is really quite fast and had almost everything I needed. However, what it didn't provide was the ability to limit the scope of variables (everything lives in one global namespace within the VM).
Therefore I did what one normally shouldn't do: I reinvented the wheel and wrote my own parser / evaluator. It was a rainy Saturday anyway, so I thought a small recursive descent parser, an AST which simplifies and eventually computes expressions, along with a little helper for managing variables, wouldn't be a big deal. And it wasn't. I had an initial implementation up and running quite fast. Once I had some tests giving me confidence that it computed everything the right way, I wanted to know how fast the evaluator was compared to the other libraries mentioned in the original post. Having not hand-optimized every inner loop, I didn't have high expectations - some of the libraries are commercial ones after all. So I was quite surprised when I looked at the results. The list below shows a micro benchmark which evaluates the same expression using the respective library. The measurements for parsii, which is my library, were done using the final version, which performs some simplifications, like pre-evaluating constant expressions. However, no "black magic" like bytecode generation or anything in that league is used.
For the performance measurement the expression "2 + (7 - 5) * 3.14159 * x^(12-10) + sin(-3.141)" was evaluated with x running from 0 to 1000000. This was done 10 times to warm up the JIT and then 15 more times, of which the average execution time was taken:
- PARSII: 28.3 ms
- EXPR: 37.2 ms
- MathEval: 7748.5 ms
- JEP: 647.0 ms
- MESP: 220.8 ms
- JFEP: 274.3 ms
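The measurement loop itself is nothing fancy. Below is a rough sketch of it, not the exact benchmark code - it assumes the Parser, Scope and Variable classes mentioned in this post, and the exact method names (especially how a Scope is obtained) may differ from the actual parsii API:

import parsii.eval.Expression;
import parsii.eval.Parser;
import parsii.eval.Scope;
import parsii.eval.Variable;

public class ParsiiBenchmark {
    public static void main(String[] args) throws Exception {
        // Note: obtaining the Scope via a factory method is an assumption;
        // depending on the parsii version a public constructor may be used instead.
        Scope scope = Scope.create();
        Variable x = scope.getVariable("x");
        Expression expr = Parser.parse("2 + (7 - 5) * 3.14159 * x^(12-10) + sin(-3.141)", scope);

        // 10 rounds to warm up the JIT, then 15 measured rounds.
        for (int round = 0; round < 25; round++) {
            long start = System.nanoTime();
            double sum = 0;
            for (int i = 0; i <= 1000000; i++) {
                x.setValue(i);
                sum += expr.evaluate();
            }
            long millis = (System.nanoTime() - start) / 1_000_000;
            if (round >= 10) {
                System.out.println(millis + " ms (checksum: " + sum + ")");
            }
        }
    }
}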
For those of you who are not too deep into compiler construction, here's a small outline of how it works:
Like any parser or compiler, parsii uses the classic approach of having a tokenizer, which converts a stream of characters into a stream of tokens. Therefore "4 + 3 *8", which is '4', ' ', '+', ' ', '3', ' ', '*', '8' as a character array, will be converted into:
- 4 (INTEGER)
- + (SYMBOL)
- 3 (INTEGER)
- * (SYMBOL)
- 8 (INTEGER)
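This is not parsii's actual tokenizer (which also handles decimal numbers, identifiers, multi-character operators and so on), but a minimal sketch of the idea could look like this:

import java.util.ArrayList;
import java.util.List;

// Minimal tokenizer sketch: turns "4 + 3 *8" into
// INTEGER(4), SYMBOL(+), INTEGER(3), SYMBOL(*), INTEGER(8)
class SimpleTokenizer {
    enum Type { INTEGER, SYMBOL }

    static class Token {
        final Type type;
        final String content;

        Token(Type type, String content) {
            this.type = type;
            this.content = content;
        }
    }

    static List<Token> tokenize(String input) {
        List<Token> tokens = new ArrayList<>();
        int i = 0;
        while (i < input.length()) {
            char c = input.charAt(i);
            if (Character.isWhitespace(c)) {
                i++; // whitespace only separates tokens and is skipped
            } else if (Character.isDigit(c)) {
                int start = i;
                while (i < input.length() && Character.isDigit(input.charAt(i))) {
                    i++; // consume all digits of the number
                }
                tokens.add(new Token(Type.INTEGER, input.substring(start, i)));
            } else {
                tokens.add(new Token(Type.SYMBOL, String.valueOf(c)));
                i++; // operators are single characters in this sketch
            }
        }
        return tokens;
    }
}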
The parser, which translates the given stream of tokens into an AST (Abstract Syntax Tree) that can then be evaluated, is a classic recursive descent parser. This is one of the simplest ways to build a parser, as it is completely written by hand and not generated by a tool. A parser like this basically contains one method for every syntax rule.
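To illustrate the "one method per syntax rule" idea, here is a sketch of such a parser for sums and products, building on the tokenizer sketch above. For brevity it computes the value directly instead of building an AST as parsii does:

import java.util.List;

// Sketch of a recursive descent parser.
// Grammar: expression := term { "+" term }
//          term       := number { "*" number }
class SimpleParser {
    private final List<SimpleTokenizer.Token> tokens;
    private int pos = 0;

    SimpleParser(List<SimpleTokenizer.Token> tokens) {
        this.tokens = tokens;
    }

    // expression := term { "+" term }
    double expression() {
        double result = term();
        while (isSymbol("+")) {
            pos++; // consume "+"
            result += term();
        }
        return result;
    }

    // term := number { "*" number }
    double term() {
        double result = number();
        while (isSymbol("*")) {
            pos++; // consume "*"
            result *= number();
        }
        return result;
    }

    // number := INTEGER
    double number() {
        return Double.parseDouble(tokens.get(pos++).content);
    }

    private boolean isSymbol(String symbol) {
        return pos < tokens.size()
                && tokens.get(pos).type == SimpleTokenizer.Type.SYMBOL
                && symbol.equals(tokens.get(pos).content);
    }
}

Calling new SimpleParser(SimpleTokenizer.tokenize("4 + 3 *8")).expression() yields 28.0, because term() binds the multiplication before the addition is applied.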
Again, a lot of tutorials for this kind of parser are available. However, what most examples leave out is proper error handling. Next to parsing an expression correctly and fast, good error handling is one of the central aspects of a good parser. And it's not that hard: as you can see in the source code, the parser never throws an exception while parsing the expression. All errors are collected and the parser continues as long as possible. Even though the resulting AST cannot be evaluated correctly after the first error, it is important to go on, as we can and should report as many errors as possible in one run. The same approach is used for the tokenizer, which reports malformed tokens, like decimal numbers with two decimal separators, to the same list of errors.
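The pattern itself is simple; parsii's real error handling is more elaborate, but a sketch of the idea looks like this:

import java.util.ArrayList;
import java.util.List;

// Sketch of the error-collecting pattern: problems are recorded instead of
// aborting the parse, so one run can report as many errors as possible.
class ErrorList {
    private final List<String> errors = new ArrayList<>();

    void addError(int position, String message) {
        errors.add("Error at position " + position + ": " + message);
    }

    boolean hasErrors() {
        return !errors.isEmpty();
    }

    List<String> getErrors() {
        return errors;
    }
}

The parser and the tokenizer then call something like errors.addError(pos, "Expected ')'") and simply continue with the next token instead of throwing.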
Evaluating an AST which is the result of a parsed expression is quite easy. Each node of the syntax tree has an evaluate method which is called by its parent node, starting from the root node. The result of calling evaluate on the root is the result of evaluating the whole expression. A basic example of this approach can be found in BinaryOperation, which represents operations like +, -, * and so on.
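Stripped down to the essentials (these are not parsii's actual classes), the approach looks like this; the simplify method already implements the constant folding described in the optimizations further down:

// Sketch of the AST: each node can evaluate itself and can return a simpler
// representation of itself via simplify() (used for the optimizations below).
interface Node {
    double evaluate();

    default Node simplify() {
        return this; // most nodes cannot be simplified any further
    }
}

// A constant number in the expression tree.
class Constant implements Node {
    final double value;

    Constant(double value) {
        this.value = value;
    }

    @Override
    public double evaluate() {
        return value;
    }
}

// Represents +, -, * and / with a left and a right child node.
class BinaryOperation implements Node {
    enum Op { ADD, SUBTRACT, MULTIPLY, DIVIDE }

    final Op op;
    final Node left;
    final Node right;

    BinaryOperation(Op op, Node left, Node right) {
        this.op = op;
        this.left = left;
        this.right = right;
    }

    @Override
    public double evaluate() {
        // evaluate both children first, then apply the operator
        double a = left.evaluate();
        double b = right.evaluate();
        switch (op) {
            case ADD: return a + b;
            case SUBTRACT: return a - b;
            case MULTIPLY: return a * b;
            default: return a / b;
        }
    }

    @Override
    public Node simplify() {
        // constant folding: if both children are constants, the whole
        // sub-expression can be replaced by one pre-computed Constant
        Node simpleLeft = left.simplify();
        Node simpleRight = right.simplify();
        if (simpleLeft instanceof Constant && simpleRight instanceof Constant) {
            return new Constant(new BinaryOperation(op, simpleLeft, simpleRight).evaluate());
        }
        return new BinaryOperation(op, simpleLeft, simpleRight);
    }
}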
In order to improve evaluation time a bit, three optimizations are performed:
First, after parsing, the AST is reduced by calling a method called simplify on the root node, which propagates to each child node. Each node then decides whether a simpler representation of its own sub-expression can be found. As an example: for binary operations, we check if both operands are constant (numbers). In that case, we evaluate the expression and return a new constant containing the result of the operation. The same is done for functions where all parameters are constant.
The second optimization concerns the use of variables in expressions. The naive approach here is to use a map and read or write the values of the variables when needed. While this certainly works, a lot of lookups would be performed. Therefore we have a special class called Variable which contains the name and the numeric value of the variable. When an expression is parsed, the variable is looked up once in the scope (which is basically just a map) and that instance is used from then on. As each lookup returns the same instance, variable access when evaluating expressions is as cheap as a field read or write, since we just access the value field of the Variable.
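The constant folding part is exactly what the simplify method in the BinaryOperation sketch above does. The variable access can be sketched like this (parsii's real Variable also stores its name):

// Sketch of the shared Variable instances (building on the Node interface above):
// the Scope is basically a map from name to Variable. The parser looks the
// variable up once and stores that very instance in the AST.
class Variable {
    private double value;

    void setValue(double value) {
        this.value = value;
    }

    double getValue() {
        return value;
    }
}

class VariableReference implements Node {
    private final Variable variable; // resolved once at parse time, then shared

    VariableReference(Variable variable) {
        this.variable = variable;
    }

    @Override
    public double evaluate() {
        return variable.getValue(); // as cheap as a plain field read
    }
}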
The third and last optimization probably won't come into play very often. But as it is simple to realize, it was implemented anyway. It goes by the name "lazy evaluation" and is used when calling functions. A function does not automatically evaluate all its arguments and then perform the call. Rather, it looks at the arguments and can decide by itself which arguments to evaluate and which not. An example where this is used can be found in the if function.
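Sketched with the Node interface from above (parsii's actual function interface looks different), a lazy if can be written like this:

import java.util.List;

// Sketch of lazy evaluation for function calls: the function receives the
// argument *expressions* (Nodes) instead of already computed values and
// decides itself which of them to evaluate.
interface LazyFunction {
    double call(List<Node> args);
}

class IfFunction implements LazyFunction {
    @Override
    public double call(List<Node> args) {
        // args: condition, then-branch, else-branch
        if (args.get(0).evaluate() != 0) {
            return args.get(1).evaluate(); // the else-branch is never evaluated
        } else {
            return args.get(2).evaluate(); // the then-branch is never evaluated
        }
    }
}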
parsii is licensed under the MIT license. All sources can be found on GitHub along with a pre-compiled jar.
(Speedy Gonzales Image-(c) by http://benztownbranding.wordpress.com/2011/11/29/become-speedy-gonzales-or-the-short-cuts-imperium/)