Visual Odometry Using SIFT Features
github: https://github.com/Juhyung-L/visual_odometry

Visual odometry is the process of estimating the change in a robot's position using image frames from a moving camera. It works by taking individual frames from a prerecorded video or a live camera feed and passing them through a processing pipeline.

The first step in the visual odometry pipeline is feature detection. A feature is a point in an image that can be easily identified across multiple views of the same scene. These two images are pictures of the same building taken from two different viewpoints. There are parts of the building that can be easily identified in both pictures, for example, the three arches on the bottom floor or the circular pattern at the center of the building. The human brain usually encodes features as whole structures, like the ones just listed, but for computers they are points, identified using mathematical techniques that operate on pixel color values. There are a variety of di