Just another desperate attempt to get something OpenCV-related compiled under Windows.
AndroidExperimental looked promising at first. But of course everything just works on Linux without problems.
The step where
get_ndk_toolchain_linux.sh is referenced won't work on Windows, and you don't actually need everything in there anyway if you already have the NDK. So, assuming the NDK is already installed, just run the last portion of the script. The following is what worked for me with Cygwin.
ANDROID_NDK was defined in the Windows environment variables and therefore contained backslashes; somehow that wouldn't work, so I overwrote it with a forward-slash path.
NDK_TMPDIR is used in the shell script, but under Cygwin there was no write access to that folder. Pointing it at another folder worked.
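Concretely, the overrides looked something like this (the ANDROID_NDK path is from my setup and the NDK_TMPDIR location is just an example; adjust both):

```shell
# Override the Windows-defined ANDROID_NDK (which contains backslashes)
# with a forward-slash Cygwin-style path -- my NDK lives on drive D:.
export ANDROID_NDK=/cygdrive/d/android-ndk-windows

# Point NDK_TMPDIR at a folder Cygwin can actually write to
# ($HOME/ndk-tmp is just an example location).
export NDK_TMPDIR="$HOME/ndk-tmp"
mkdir -p "$NDK_TMPDIR"

echo "ANDROID_NDK=$ANDROID_NDK NDK_TMPDIR=$NDK_TMPDIR"
```

With those two exported, the commands below ran without the permission and backslash problems.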
$ANDROID_NDK/build/tools/make-standalone-toolchain.sh --platform=android-5 --install-dir=$ANDROID_NDK/android-toolchain --system=windows
$ ln -fs $ANDROID_NDK/android-toolchain /opt/android-toolchain
So much for the prep work. Unfortunately, compiling the
hello-cmake sample didn't work; it ends with:
-- Check for working C compiler: /cygdrive/d/android-ndk-windows/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86/bin/arm-linux-androideabi-gcc -- broken
CMake Error at /usr/share/cmake-2.8.4/Modules/CMakeTestCCompiler.cmake:52 (MESSAGE):
The C compiler "/cygdrive/d/android-ndk-windows/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86/bin/arm-linux-androideabi-gcc" is not able to compile a simple test program.
I'm not sure why this is happening. Maybe it's because the path to the project has spaces in it. Or whatever.
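One way to narrow it down is to try the compiler directly, outside of CMake (the compiler path below is the one from my error message; adjust it to your install):

```shell
# Try to compile a trivial program with the toolchain's gcc directly.
# If this already fails, the problem is the toolchain itself, not CMake.
CC=/cygdrive/d/android-ndk-windows/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86/bin/arm-linux-androideabi-gcc
printf 'int main(void) { return 0; }\n' > conftest.c
if "$CC" -o conftest conftest.c 2>/dev/null; then
    echo "compiler works outside CMake"
else
    echo "compiler is broken outside CMake too"
fi
```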
Trying something else for now.
Somehow there don't seem to be any good AR libraries that either work well or are even up to date. It looks like there was some buzz between 2007 and 2009. Quite a few people developed libraries and applications using marker-based AR. Like the ARToolkit, which was last updated in 2007. The successor (or at least an extended version) of that is ARToolkitPlus, which was actually updated in February this year. But both only work with markers, which I don't need at all. I thought I could hack the code a little to extend it to work with natural features (FAST corners or something like that), but the creepy C++ code is just ugly and would probably take too much time to work through. There is also AndAR, but that just uses ARToolkit internally and was last updated sometime in 2010, so no real help for me. NyARToolkit looked nice at first. It's similar to ARToolkit (and I somehow suspect that at least one person working on it also worked on the other library) but lags a bit more. I could work with that -- if the code weren't almost completely commented in Japanese. Again, no help for me.
But there is more. Qualcomm's QCAR augmented reality SDK is well up to date and works incredibly well. There is a huge problem, though: if you want to track images with natural features, you first need to upload your images, run them through Qualcomm's "Image Target System", download a binary file (which most likely contains interest points from the image) and then compile everything into the Android application. I mean, come on! How useful is that!? So I cannot use QCAR either, because I want to load interest points and other information dynamically at runtime.
What is left? I guess I'll have to try it on my own. OpenCV has all the functionality I need (there is an Android-optimized version that compiles nicely with the NDK). I "just" need to use it. JavaCV is a Java-based wrapper around the OpenCV API; it isn't commented at all, but hopefully it will be of some use to me.
If there is someone out there with (helpful) suggestions, please drop me a line in the comments.
The related work described here tracks motion using optical flow algorithms. Those seem to produce satisfying results but do not yet cover the full potential of an AR tracking system. Others use interest-point-based algorithms, which are commonly known to be computationally very expensive. SIFT descriptors are probably the most widely used ones, although they are likely among the most expensive as well. Nevertheless, some improvements have been achieved with SIFT and also with SURF.
Similar to PTAM, Wagner et al. also use separate detection and tracking systems. The detection system tries to find known targets in the current camera image using a modified SIFT algorithm. Instead of calculating the rather expensive Difference of Gaussians (DoG), they run FAST corner detection over multiple scales. Memory consumption is then reduced by using only 36-dimensional features instead of SIFT's original 128 dimensions. The found descriptors are matched against entries from multiple spill trees, a data structure similar to the k-d tree used in the original SIFT.
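For reference, the DoG that they avoid computing is the scale-space difference from the original SIFT paper: the input image is convolved with Gaussians at two nearby scales and the results are subtracted.

```latex
% Difference of Gaussians: the image I is smoothed with Gaussian kernels G
% at two nearby scales (separated by a constant factor k) and subtracted.
D(x, y, \sigma) = \big( G(x, y, k\sigma) - G(x, y, \sigma) \big) * I(x, y)
```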
-- unfinished --
JavaCV is a wrapper for a couple of libraries, including OpenCV. They also provide a pre-compiled OpenCV for Android. Looks promising; I'll take a look at it. I wonder whether this slows the application down somehow, since the wrapper presumably adds some extra method calls. But if it speeds up the development process, I can live with that for now.
On GitHub I found an optimized version of OpenCV for Android. Unlike the original Android version, this one can be compiled with the Android NDK (without those creepy workarounds described at opencv/AndroidTrunk).
Unfortunately, the build process took hours: some variables needed by the linker aren't set on Windows (but apparently are on Linux). So after looots of trying, it finally worked.