Category Archives: Thesis

LaTeX Glossaries with TeXnicCenter

Well, sometimes LaTeX can be annoying. How do you create a glossary?

The glossaries package seems to be the state of the art. I set it up as follows on my Windows machine with TeXnicCenter:

% in the preamble
\usepackage[acronym]{glossaries} % the acronym option creates the separate .acn/.acr files processed by the makeacronyms steps below
\makeglossaries
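
Entries and acronyms then have to be defined and referenced somewhere, otherwise nothing shows up in the output. A minimal sketch (the labels, names and descriptions are of course just placeholders of mine):

% define one glossary entry and one acronym (labels and texts are placeholders)
\newglossaryentry{corner}{name=corner,
  description={An image point with strong intensity gradients in two directions}}
\newacronym{fpu}{FPU}{floating point unit}

% in the document body, \gls{...} references an entry and records it for makeindex
Tracking works on \glspl{corner}, and the \gls{fpu} makes it fast.

% at the end of the document
\printglossary[type=\acronymtype]
\printglossary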

Define a new output profile: Copy one of the existing profiles (e.g. "LaTeX > PDF") and add the following entries to the postprocessing tab:

    • makeglossaries #1
      Application: {full path to}/makeindex.exe
      Arguments: -s "%bm".ist -t "%bm".glg -o "%bm".gls "%bm".glo
    • makeacronyms #1
      Application: {full path to}/makeindex.exe
      Arguments: -s "%bm".ist -t "%bm".alg -o "%bm".acr "%bm".acn
    • pdflatex #2
      copy from the (La)TeX tab
    • makeglossaries #2
      Application: {full path to}/makeindex.exe
      Arguments: -s "%bm".ist -t "%bm".glg -o "%bm".gls "%bm".glo
    • makeacronyms #2
      Application: {full path to}/makeindex.exe
      Arguments: -s "%bm".ist -t "%bm".alg -o "%bm".acr "%bm".acn
    • pdflatex #3
      copy from the (La)TeX tab

This procedure avoids using makeglossaries.exe, which requires a Perl installation. The iterative calls are described in the LaTeX Wikibook, and it seems to work well this way. You can import my output profile: latex-pdf-glossary+acronym.tco

Mind that if you use the hyperref package, you have to load it before the glossaries package. This way, glossary entries are automatically linked in the PDF output.
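
So the relevant part of the preamble ends up looking roughly like this (the colorlinks option is only an example):

% hyperref first, then glossaries, so that \gls references become links
\usepackage[colorlinks]{hyperref}
\usepackage[acronym]{glossaries}
\makeglossaries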

FPU Boost

I'm kind of thrilled right now.

I just updated JavaCV to the latest version (2011-07-05), which includes OpenCV 2.3.0 and precompiled code for Android devices with an FPU (armeabi-v7a). Just by swapping in the new JavaCV I gained an enormous computation boost. Well, of course I knew that a hardware FPU handles heavy number crunching much better than software floating-point emulation on the CPU. Nevertheless, I'm surprised that this makes such a big difference even on a tiny smartphone (HTC Desire HD).

My current implementation detects Shi-Tomasi corners and then tries to track them in subsequent video frames. Where yesterday (without the "FPU code") my detector needed around 3 to 4 seconds to detect about 50 corners (it didn't matter much whether I detected 50 or, say, 250), it now needs about 170 milliseconds and very often even less. My tracker needed about 1000 milliseconds to find known corners in new frames -- now it takes 20 milliseconds. The detector alone is roughly 18 times faster.
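
For reference, the detect-then-track loop looks roughly like the sketch below. My actual code goes through the JavaCV wrappers; the sketch uses the official OpenCV Java bindings instead (Imgproc.goodFeaturesToTrack for Shi-Tomasi corners, Video.calcOpticalFlowPyrLK for pyramidal Lucas-Kanade tracking), and the class name and parameters (50 corners, quality 0.01, minimum distance 10 px) are just illustrative:

import org.opencv.core.*;
import org.opencv.imgproc.Imgproc;
import org.opencv.video.Video;

// Rough sketch of the detect-then-track pipeline (not my actual JavaCV code).
public class CornerTracker {
    private final Mat prevGray = new Mat();
    private MatOfPoint2f prevPts = new MatOfPoint2f();

    // Pass in the current frame as a single-channel grayscale Mat.
    public MatOfPoint2f process(Mat gray) {
        if (prevPts.empty()) {
            // Shi-Tomasi: pick the 50 strongest corners (quality 0.01, min distance 10 px)
            MatOfPoint corners = new MatOfPoint();
            Imgproc.goodFeaturesToTrack(gray, corners, 50, 0.01, 10);
            prevPts = new MatOfPoint2f(corners.toArray());
        } else {
            // Pyramidal Lucas-Kanade: find the previous corners in the new frame
            MatOfPoint2f nextPts = new MatOfPoint2f();
            MatOfByte status = new MatOfByte();
            MatOfFloat err = new MatOfFloat();
            Video.calcOpticalFlowPyrLK(prevGray, gray, prevPts, nextPts, status, err);
            prevPts = nextPts; // a real tracker would drop points whose status entry is 0
        }
        gray.copyTo(prevGray);
        return prevPts;
    }
}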

I know, I know ... this is kind of supposed to be like that ;-). But I'm just surprised to see it working. As a result, I will simply target devices with an FPU. I'm not quite sure yet whether a certain Android version requires an FPU, so that I could just say, e.g., that my program works from 2.3.0 and up, or whether I need to actually check this at runtime.
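
If a runtime check turns out to be necessary, the reported ABI string seems to be the simplest handle; a small sketch (the class and method names are made up, Build.CPU_ABI has been available since API level 4):

import android.os.Build;

public final class AbiCheck {
    // True if the device reports the armeabi-v7a ABI, which implies hardware floating point (VFP).
    public static boolean hasArmV7Abi() {
        String abi = Build.CPU_ABI;
        return abi != null && abi.startsWith("armeabi-v7a");
    }
}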

ARmsk

Just found yet another AR library for Android called ARmsk. The description sounds promising, but the demo video is kind of shocking: they get something like half a frame per second. The code looks like it was stitched together from examples, such as the OpenGL cube renderer from the Android API demos and some of the OpenCV-Android samples.

The last updates were in January this year, so it's quite up to date actually.

Oh, by the way, guys, why can't I find you when I search for "AR Android"? I mean, seriously, take care of your website keywords ;-)

If you are suddenly getting high framerates, you might be doing something wrong with the FPS calculation! ;-)
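
For what it's worth, counting frames over a full second of wall-clock time is hard to get wrong; a rough sketch (class and method names are made up):

// Counts rendered frames and recomputes FPS once a full second has passed.
public class FpsCounter {
    private long windowStart = System.currentTimeMillis();
    private int frames = 0;
    private double fps = 0.0;

    // Call once per rendered frame; returns the FPS measured over the last full window.
    public double tick() {
        frames++;
        long elapsed = System.currentTimeMillis() - windowStart;
        if (elapsed >= 1000) {
            fps = frames * 1000.0 / elapsed;
            frames = 0;
            windowStart = System.currentTimeMillis();
        }
        return fps;
    }
}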

AndEngine

AndEngine is a 2D OpenGL engine, actually a game engine, but it also has a small AR extension. I stumbled upon this project a little while ago, then somehow forgot about it, and have now rediscovered it. So far it looks promising (even though I already found some kind of "lazy bug" which prevents the AR example from running :? ). Especially the animation stuff and the sensor helpers look great.

Hopefully this can boost my project a little. We'll see.