Posts

Showing posts from October, 2023

Monocular Visual Odometry

github: https://github.com/Juhyung-L/visual_odometry

Visual odometry is the process of estimating the change in a robot's position by processing the image frames from a moving camera. It works by taking individual frames from a prerecorded video or live feed and running each one through a processing pipeline.

The first step in the visual odometry pipeline is feature detection. A feature is a point in an image that can be easily identified in multiple views of the same scene.

Figure 1: Features extracted from an image of a chapel. Each colored circle is a feature.

These two images are pictures of the same building taken from two different viewpoints. There are parts of the building that can easily be identified in both pictures, for example, the three arches on the bottom floor or the circular pattern at the center of the building. For the human brain, features are usually encoded as whole structures just as I have listed a...
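To make "a point that can be easily identified" concrete, here is a minimal NumPy sketch of one classic feature detector, the Harris corner response (the repo may well use a different detector such as FAST or ORB; this toy version on a synthetic image is only meant to show the idea):

```python
import numpy as np

def harris_response(img, k=0.05):
    """Harris corner score per pixel: det(M) - k * trace(M)^2, where M is
    the 3x3-window sum of gradient products. High score = corner-like."""
    Iy, Ix = np.gradient(img.astype(float))     # simple image gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):                                  # 3x3 box-filter sum
        p = np.pad(a, 1)
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace

# Synthetic image: a bright square. Its corners are the strongest features;
# its straight edges score low because they look the same when slid along.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
y, x = np.unravel_index(np.argmax(R), R.shape)   # strongest feature
```

The strongest response lands on one of the square's four corners, which is exactly the property a feature needs: a corner is locally unique, while points along an edge or in a flat region are ambiguous across views.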

Iterative Closest Point Algorithm using Octree (CUDA Accelerated)

github: https://github.com/Juhyung-L/cuda_icp

Background

Scan matching algorithms are ubiquitous in the field of mobile robotics. Although there are countless variations, each with its own benefits and drawbacks, the overarching goal is the same: given two sets of points, find the optimal rigid body transformation (rotation and translation) that aligns one set of points onto the other. Scan matching is especially useful in localization (estimating the robot's pose), because finding the rigid transformation between two consecutive laser scans from the same robot means finding the change in the robot's position and orientation. One of the most commonly used variations of scan matching is the iterative closest point (ICP) algorithm.

ICP takes a 2-step approach to scan matching:
1. Finding the corresponding pairs of points in the two point sets
...
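The two-step loop can be sketched in a few lines of NumPy. This is a hedged, minimal 2D illustration, not the repo's implementation: it uses a brute-force O(N²) nearest-neighbor search where the repo uses a CUDA-accelerated octree, and it solves the alignment step with the standard SVD-based (Kabsch) closed form:

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: (1) match each source point to its nearest
    destination point, (2) solve for the rigid transform that best
    aligns the matched pairs (SVD / Kabsch)."""
    # Step 1: brute-force nearest-neighbor correspondences.
    # (An octree turns this O(N^2) search into a tree traversal.)
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]

    # Step 2: optimal rotation and translation for the matched pairs.
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s                     # dst ~= R @ src + t
    return R, t

# Demo: recover a small rotation + translation of a 5x5 grid of points,
# standing in for two consecutive laser scans from a moving robot.
dst = np.array([[i, j] for i in range(5) for j in range(5)], dtype=float)
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
c = dst.mean(0)
src = (dst - c) @ R_true.T + c + np.array([0.1, -0.05])

R, t = icp_step(src, dst)
aligned = src @ R.T + t
max_err = np.abs(aligned - dst).max()
```

Because the perturbation here is small relative to the point spacing, the nearest-neighbor step finds the true correspondences and a single iteration aligns the scans exactly; real scans need several iterations (and that recovered transform is precisely the robot's change in pose between the two scans).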