Tag Archive: Android

Setup Dagger with Eclipse

Setting up Square's dependency injection library Dagger for use with Eclipse is a bit different when you are not using Maven.

At the time of writing, there seems to be no complete explanation of what is needed to set everything up. I got it working by combining information from the comments by staxgr and arichiardi in issue 126.

  • Prepare the Eclipse project and create the directories libs and libs-compile.
  • Download the following libraries:
    • dagger-x.x.x.jar goes into the libs directory
    • dagger-compiler-x.x.x.jar goes into the libs-compile directory
    • the javawriter jar goes into libs-compile
    • javax.inject goes into libs
  • Enable annotation processing and add all four libraries to the factory path of the annotation processing settings:
    • Project Properties → Java Compiler → Annotation Processing
      • check "Enable project specific settings" and "Enable annotation processing"
    • Project Properties → Java Compiler → Annotation Processing → Factory Path
      • "Add JARs..." for each downloaded jar

When starting a build, the generated files should appear in .apt_generated. This directory should be automatically configured as a source folder after annotation processing has been enabled.
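To verify the setup, a tiny smoke test along these lines should compile (with the generated adapter appearing in .apt_generated) and run. The Car/Engine classes are made-up examples, not part of Dagger; only ObjectGraph, @Module, @Provides, and @Inject come from the jars installed above:

```java
import javax.inject.Inject;
import javax.inject.Singleton;

import dagger.Module;
import dagger.ObjectGraph;
import dagger.Provides;

public class DaggerSmokeTest {

    // Hypothetical example classes, just to exercise the annotation processor.
    static class Engine {}

    static class Car {
        final Engine engine;
        @Inject Car(Engine engine) { this.engine = engine; }
    }

    // The module must declare its injectable entry points for Dagger 1.
    @Module(injects = Car.class)
    static class CarModule {
        @Provides @Singleton Engine provideEngine() { return new Engine(); }
    }

    public static void main(String[] args) {
        ObjectGraph graph = ObjectGraph.create(new CarModule());
        Car car = graph.get(Car.class);
        System.out.println(car.engine != null); // should print true
    }
}
```

If the factory path is wrong, this still compiles but fails at runtime with a "No adapter found" style error, because the generated code is missing.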

StringEntity Doesn't Play by the Rules of Android

So here is the situation: you build up some JSON objects and pop them into a StringEntity so your HttpClient can send the stuff to the server. We know that on Android the default charset is UTF-8 and use that knowledge throughout the whole application, not considering that this fact might change in some places. But it does! Using a default StringEntity with data supposed to be encoded in UTF-8 won't give us the expected results on Android. A look into the code explains why:

// from StringEntity's constructor
// ...
if (charset == null) {
    charset = HTTP.DEFAULT_CONTENT_CHARSET;
}
// ...

And guess what: HTTP.DEFAULT_CONTENT_CHARSET does not take advantage of Charset.defaultCharset() but is instead hard-coded to "ISO-8859-1".

To get real UTF-8 data we need to do something like new StringEntity("âáàéèê", HTTP.UTF_8);.
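The difference is easy to demonstrate with plain Java, no HttpClient required. ISO-8859-1 happens to encode these accented characters in one byte each, so a server that decodes the request body as UTF-8 sees garbage:

```java
import java.nio.charset.StandardCharsets;

public class CharsetDemo {
    public static void main(String[] args) {
        String s = "âáàéèê";

        // UTF-8 needs two bytes per accented character, ISO-8859-1 only one.
        byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);        // 12 bytes
        byte[] latin1 = s.getBytes(StandardCharsets.ISO_8859_1); //  6 bytes
        System.out.println(utf8.length + " vs " + latin1.length);

        // A server decoding the ISO-8859-1 bytes as UTF-8 gets mojibake:
        // 0xE2 starts a 3-byte UTF-8 sequence, but no valid continuation
        // bytes follow, so the result is replacement characters.
        String roundTrip = new String(latin1, StandardCharsets.UTF_8);
        System.out.println(roundTrip.equals(s)); // prints false
    }
}
```

This is exactly what happens when the default StringEntity constructor silently falls back to ISO-8859-1.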

FPU Boost

I'm kind of thrilled right now.

I just updated JavaCV to the latest version (2011-07-05), which includes OpenCV 2.3.0 and some precompiled code for Android devices with an FPU (armeabi-v7a). Just by replacing JavaCV I gained an enormous computation boost. Well, of course I knew that hardware floating-point is better for heavy number crunching than software emulation. Nevertheless I'm surprised that it makes such a big difference even on a tiny smartphone (HTC Desire HD).

My current implementation detects Shi-Tomasi corners and then tries to track them in subsequent video frames. Where yesterday (without the "FPU code") my detector needed around 3 to 4 seconds to detect about 50 corners (even though it didn't matter much whether I detected 50 or, say, 250), it now needs about 170 milliseconds and very often even less. My tracker needed about 1000 milliseconds to find known corners in new frames -- now 20 milliseconds. The detector is about 18 times faster, the tracker about 50 times.
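The speedup figures follow directly from the timings (taking 3000 ms as the detector's old time, the lower end of the 3 to 4 second range):

```java
public class SpeedupMath {
    public static void main(String[] args) {
        // Timings from the measurements above, in milliseconds.
        double detectorBefore = 3000, detectorAfter = 170;
        double trackerBefore = 1000, trackerAfter = 20;

        // 3000 / 170 ≈ 17.6, i.e. roughly 18x; 1000 / 20 = 50x.
        System.out.printf("detector: %.0fx%n", detectorBefore / detectorAfter);
        System.out.printf("tracker:  %.0fx%n", trackerBefore / trackerAfter);
    }
}
```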

I know, I know ... this is kind of supposed to be like that ;-) . But I'm just surprised to see it working. As a result, I will simply target devices with an FPU. I'm not quite sure yet whether a certain Android version requires an FPU, so that I could just say my program works from, e.g., 2.3.0 and up, or whether I actually need to check this at runtime.
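If a runtime check turns out to be necessary, one option (a sketch, assuming the primary ABI is what matters here) is to look at android.os.Build, since armeabi-v7a implies hardware floating-point (VFP):

```java
import android.os.Build;

public final class FpuCheck {

    private FpuCheck() {}

    // Build.CPU_ABI reports the device's primary native ABI,
    // e.g. "armeabi-v7a" on the HTC Desire HD or "armeabi" on older devices.
    public static boolean hasHardwareFpu() {
        return "armeabi-v7a".equals(Build.CPU_ABI);
    }
}
```

Alternatively, shipping only armeabi-v7a native libraries in the APK keeps the app off devices that can't run them, though whether that filtering is acceptable depends on the target audience.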