Tag Archives: Paper

Review: "Multiple Target Detection and Tracking with Guaranteed Framerates on Mobile Phones"

The related work described tracks motion using optical flow algorithms. These seem to produce satisfying results, but they do not yet exploit the full potential of an AR tracking system. Others use interest-point-based algorithms, which are commonly known to be computationally expensive. SIFT descriptors are probably the most widely used ones, although they also belong to the most expensive ones. Nevertheless, some improvements have been achieved with the SIFT and SURF algorithms.

Similar to PTAM, Wagner et al. also use a separate detection and tracking system. The detection system tries to find known targets in the current camera image using a modified SIFT algorithm. Instead of calculating the rather expensive Difference of Gaussians (DoG), they use FAST corner detection over multiple scales. Memory consumption is then reduced by using only 36-dimensional feature descriptors instead of SIFT's original 128 dimensions. Found descriptors are matched against entries from multiple spill trees, a data structure similar to the k-d tree used in the original SIFT.
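A minimal sketch of that detection idea, assuming OpenCV and SciPy, might look as follows. The descriptor database here is a random placeholder, and an ordinary k-d tree stands in for the paper's spill trees:

    # Sketch: FAST corners over an image pyramid replace SIFT's DoG step;
    # a k-d tree stands in for the spill trees used in the paper.
    import cv2
    import numpy as np
    from scipy.spatial import cKDTree

    def detect_multiscale_fast(gray, levels=4):
        """Run FAST on each pyramid level; return (x, y, level) corners."""
        fast = cv2.FastFeatureDetector_create(threshold=20)
        corners = []
        for level in range(levels):
            for kp in fast.detect(gray, None):
                x, y = kp.pt
                corners.append((x * 2**level, y * 2**level, level))
            gray = cv2.pyrDown(gray)  # halve the resolution for the next level
        return corners

    # Matching: 36-dimensional descriptors (instead of SIFT's 128) are
    # indexed once; each live descriptor then queries its nearest neighbour.
    database = np.random.rand(500, 36).astype(np.float32)  # placeholder entries
    tree = cKDTree(database)
    live = np.random.rand(10, 36).astype(np.float32)       # placeholder features
    distances, matches = tree.query(live, k=1)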

-- unfinished --

Review: "Parallel Tracking and Mapping on a Camera Phone"

Based on the PTAM project, which originally runs on a desktop computer (here called PTAM-d), Klein and Murray adapted the code to run on an iPhone 3G (here PTAM-m, for PTAM running on a mobile device such as a smartphone). Of course, everything needed to become more lightweight so that it could actually be handled by the smartphone.

Bringing computationally expensive programs to smartphones raises a couple of problems (or "challenges"). Obviously, the CPU in such a small device is rather slow compared to a desktop computer. The iPhone used has a CPU with approx. 400 MHz, which is not much but still quite a number for a small device (especially a couple of years ago). The next thing is the camera. It is also much slower compared to a webcam. The iPhone's camera has a frame rate below 15 Hz (where a webcam might have double that number), plus it has a rolling shutter (so you might not get the whole frame at once but only a part of it). Klein et al. also argue that the camera has a narrow field of view, and since the original PTAM requires a wide-angle camera (the demonstration videos look almost as if they were shot through fish-eye lenses), this meant even more adaptation work.

One of the changes is the reduction of calculated map points. The desktop version takes every point found into consideration and adds it to the map, which can sum up to over 1,000 map points. PTAM-m is limited to 35 points, which reduces processing costs significantly. To not lose too much accuracy, an image pyramid is calculated from each 240 by 320 pixel video frame, up to a fifth level (with a size of 15 by 20 pixels). On this image pyramid, corner detection (Shi-Tomasi corners) is performed. By limiting the number of simultaneous map points, this works fast enough to run smoothly on a low-end CPU. There are more reductions and limitations compared to PTAM-d.
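A minimal sketch of that pyramid step, assuming OpenCV; the frame size, the five levels, and the 35-point budget follow the numbers above, everything else is illustrative:

    # Build a 5-level image pyramid from a 240x320 frame and run
    # Shi-Tomasi corner detection on every level.
    import cv2
    import numpy as np

    frame = np.zeros((240, 320), dtype=np.uint8)  # placeholder greyscale frame
    pyramid = [frame]
    for _ in range(4):                 # 240x320 -> 120x160 -> ... -> 15x20
        pyramid.append(cv2.pyrDown(pyramid[-1]))

    corners_per_level = []
    for level, img in enumerate(pyramid):
        pts = cv2.goodFeaturesToTrack(img, maxCorners=35,  # Shi-Tomasi corners
                                      qualityLevel=0.01, minDistance=5)
        if pts is not None:            # a blank placeholder frame yields none
            corners_per_level.append((level, pts.reshape(-1, 2)))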

Tracking motion on a smartphone needs to address potential motion blur, which occurs quickly since the built-in cameras are tiny and not as light-sensitive as a webcam or even a real camera. The user will most likely move the device rather slowly, because the device is literally the display the user wants to look at. Nevertheless, when the lighting conditions are not perfect, the camera will produce blurry (and noisy) images. Where PTAM-d does quite extensive work to compensate for blurriness using feature search around FAST corners, PTAM-m omits this completely. Instead, PTAM-m performs a feature-point search in the first level of the image pyramid, evaluating a 4-pixel radius around a feature using a zero-normalised sum of squared differences against an 8x8-pixel template.
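A minimal sketch of such a search in plain NumPy; the 8x8 template and the 4-pixel radius follow the description above, while the function names and the exact normalisation are my assumptions, not PTAM's actual code:

    # Zero-normalised SSD: subtract each patch's mean before comparing,
    # so uniform brightness changes don't dominate the score.
    import numpy as np

    def znssd(patch, template):
        """Zero-mean sum of squared differences between two patches."""
        p = patch.astype(np.float32) - patch.mean()
        t = template.astype(np.float32) - template.mean()
        return np.sum((p - t) ** 2)

    def search_feature(image, template, cx, cy, radius=4):
        """Return the (x, y) inside the radius that best matches the template."""
        h, w = template.shape            # 8x8 in this setting
        best, best_xy = np.inf, (cx, cy)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                x, y = cx + dx, cy + dy
                patch = image[y:y + h, x:x + w]
                if patch.shape != template.shape:
                    continue             # skip positions falling off the image
                score = znssd(patch, template)
                if score < best:
                    best, best_xy = score, (x, y)
        return best_xy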

-- unfinished --

Review: "Parallel Tracking and Mapping for Small AR Workspaces"

The paper describes the methods and techniques used in the PTAM project.

Usually AR systems only work properly when the running system has a certain knowledge of the surrounding area. This knowledge could be pre-acquired information about an object or a known marker (a marker could, for example, be a monochrome pattern that is put, like a sticker, onto an object). Those AR systems can then recognize and track their known object in the video stream, but only as long as it is within the view area of the camera. As soon as the object moves away and is no longer visible, the tracking stops.

PTAM works in completely unknown environments and uses "extensible tracking" techniques to achieve this. It requires a static and somewhat small environment, meaning that it is not designed to track and augment constantly moving objects or to be moved (by the user) across a big area like a city. Other systems which perform tracking and mapping mostly do both tasks tightly coupled, at every video frame. Because most of those systems are used in the robotics field, this approach may be satisfying there, since robots tend to move slowly and in a predictable manner. But this is most likely not the case for a camera held by a user (who doesn't know or even care what is happening inside the device she is holding, but just wants it to work smoothly and accurately).

PTAM splits tracking and mapping into two independent tasks which run in different threads. With the mapping process separated, it is no longer necessary to work on every new frame, which usually means processing lots of redundant information; instead, the mapper can concentrate on a smaller set of keyframes and extract more useful and more accurate information from them. The approach within PTAM is kind of adopted from SLAM. Where the original SLAM methods (in the robotics field) use laser sensors, in this case we obviously have no lasers but only a single camera -- that's why we talk here about "monocular SLAM".
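A minimal sketch of that two-thread split, in Python for illustration; the queue-based hand-off, the keyframe heuristic, and all names are assumptions about the general architecture, not PTAM's actual implementation:

    # Tracking runs on every frame; mapping runs in its own thread and
    # only consumes the occasional keyframe handed over via a queue.
    import threading
    import queue

    keyframes = queue.Queue()   # tracker -> mapper hand-off
    shared_map = []             # map points, grown by the mapper

    def mapping_thread():
        """Refines the map from selected keyframes only, independently."""
        while True:
            frame = keyframes.get()      # blocks until the tracker adds one
            if frame is None:            # shutdown signal
                break
            shared_map.append(f"points from {frame}")  # stand-in for real mapping

    def tracking_loop(n_frames=100):
        """Handles every frame; occasionally promotes one to a keyframe."""
        for i in range(n_frames):
            # ... per-frame pose estimation against shared_map goes here ...
            if i % 20 == 0:              # toy heuristic for keyframe selection
                keyframes.put(f"frame {i}")
        keyframes.put(None)

    mapper = threading.Thread(target=mapping_thread)
    mapper.start()
    tracking_loop()
    mapper.join()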

 

Augmented reality technologies, systems and applications

This paper surveys the current state of the art of technology, systems and applications in Augmented Reality. It describes work performed by many different research groups, the purpose behind each new Augmented Reality system, and the difficulties and problems encountered when building Augmented Reality applications. It also surveys mobile Augmented Reality systems and the challenges and requirements for successful mobile systems. The paper summarizes current applications of Augmented Reality and speculates on future applications and on where current research will lead Augmented Reality's development. The challenges Augmented Reality faces in each of these applications on the way from the laboratory to industry, as well as the future challenges we can forecast, are also discussed.

Section 1 gives an introduction to what Augmented Reality is and the motivations for developing this technology. Section 2 discusses Augmented Reality technologies, including computer vision methods, AR devices, interfaces and systems, and visualization tools. Mobile and wireless systems for Augmented Reality are discussed in Section 3. Four classes of current applications that have been explored are described in Section 4; these were chosen as they are the most prominent types of applications encountered when researching AR apps. The future of Augmented Reality and the challenges it will be facing are discussed in Section 5.

http://www.springerlink.com/content/02244g887j35608h/