Posts

Stereo Visual Odometry (with Bundle Adjustment)

github: https://github.com/Juhyung-L/stereo_visual_odometry

Visual odometry is the process of estimating a camera's position by processing the individual images taken from a moving camera. It can be performed with different types of cameras: monocular (a single camera), stereo (two cameras), or depth (a camera with depth information). In this blog, I will explain the inner workings of a visual odometry system for a stereo camera setup.

Figure 1: My implementation of stereo visual odometry.

A stereo camera is a pair of cameras facing the same direction, separated by a known distance.

Figure 2: Stereo camera setup.

Unlike a monocular camera, a stereo camera can estimate the depth of a pixel in an image using a technique called triangulation. This is similar to how humans sense how far away an object is using both eyes. The system also includes bundle adjustment, a non-linear error minimization technique used to reduce the accumulation of error in the pose estimates. …
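The core of stereo triangulation can be sketched in a few lines: for a rectified stereo pair, a pixel's depth is inversely proportional to its disparity (the horizontal shift between the left and right images). This is a minimal sketch under ideal rectified-pinhole assumptions; the intrinsics and baseline below are made-up example values, not the repo's calibration.

```python
def triangulate(u_left, u_right, v, fx, fy, cx, cy, baseline):
    """Depth from disparity for a rectified stereo pair.
    fx, fy, cx, cy: pinhole intrinsics; baseline: camera separation in meters.
    (All values here are illustrative, not real calibration data.)"""
    disparity = u_left - u_right          # pixels; > 0 for points in front
    z = fx * baseline / disparity         # depth along the optical axis
    x = (u_left - cx) * z / fx            # back-project into the camera frame
    y = (v - cy) * z / fy
    return x, y, z

# A 64-pixel disparity with fx = 640 and a 0.1 m baseline gives z = 1.0 m.
print(triangulate(384.0, 320.0, 240.0, 640.0, 640.0, 320.0, 240.0, 0.1))
```

Note how depth resolution degrades with distance: a distant point has a small disparity, so a one-pixel matching error produces a large depth error. This is part of why bundle adjustment is needed to clean up the pose estimates.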

Behavior Tree Navigation

Github: https://github.com/Juhyung-L/behavior_tree_navigation

This is an extension of the blog https://juhyungsprojects.blogspot.com/2024/04/dynamic-window-approach-for-local-path.html . In that post, I explained my implementation of dynamic obstacle avoidance in a mobile robot. The strategy was to use two planners: global and local. The global planner uses the global costmap to find an optimal path from the robot's current pose to the goal pose (A* search). The local planner then takes this path and uses the local costmap to generate a velocity command that best follows the path while avoiding dynamic obstacles (Dynamic Window Approach). In this blog, I will explain how I added a recovery behavior that triggers when the mobile robot gets stuck.

Background

Navigating through real-life terrain is full of unexpected obstacles. Obstacles that are hard to catch with the onboard sensor cannot be avoided with just a local planner. …
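The "recover when stuck" pattern maps naturally onto a behavior tree's fallback node, which ticks its children left to right and only moves on when a child fails. Below is a minimal sketch of that idea; the node names, the stuck flag, and the recovery action are illustrative stand-ins, not the repo's actual API.

```python
# Minimal behavior-tree sketch of "navigate, and recover when stuck".
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Fallback:
    """Ticks children left to right; returns the first non-FAILURE status."""
    def __init__(self, children):
        self.children = children
    def tick(self):
        for child in self.children:
            status = child.tick()
            if status != FAILURE:
                return status
        return FAILURE

class Action:
    """Leaf node that wraps a plain function returning a status."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn
    def tick(self):
        return self.fn()

robot_stuck = True  # hypothetical world state for the demo

def navigate():     # fails when the local planner cannot make progress
    return FAILURE if robot_stuck else SUCCESS

def recover():      # e.g. back up and spin to clear the local costmap
    global robot_stuck
    robot_stuck = False
    return SUCCESS

tree = Fallback([Action("navigate", navigate), Action("recover", recover)])
print(tree.tick())  # navigation fails, so the recovery branch runs
print(tree.tick())  # robot is unstuck, so navigation now succeeds
```

The appeal of the fallback structure is that normal navigation and recovery stay decoupled: adding another recovery (e.g. clearing the costmap before backing up) is just another child appended to the fallback.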

Dynamic Obstacle Avoidance

Github: https://github.com/Juhyung-L/GNC

Several things come into play when a robot moves from point A to point B. Assuming the robot has a map of the environment and is localized inside that map, it first needs a GLOBAL path planner. The role of the global path planner is to find the shortest path from the robot's current pose to the goal pose while avoiding the obstacles defined by the global map. However, the global path planner alone is not sufficient for moving in a dynamic environment because the global map is static.
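A global planner of the kind described above can be sketched as A* over an occupancy grid. This is a minimal sketch, not the repo's implementation: it assumes a 4-connected grid, unit step costs, and a Manhattan-distance heuristic.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (1 = obstacle, 0 = free).
    Returns the path as a list of (row, col) cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]               # (f, g, cell, path)
    visited = set()
    while open_set:
        f, g, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return path
        if cur in visited:
            continue
        visited.add(cur)
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set,
                               (g + 1 + h((nr, nc)), g + 1, (nr, nc), path + [(nr, nc)]))
    return None

# A wall blocks the direct route, so the planner detours around it.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```

In a real stack the static global map would feed this grid, while the local planner handles anything the map does not know about.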

Bird's eye view for wheelchair

Github: https://github.com/Juhyung-L/bird_view

Background

For my university's capstone project, I was tasked with creating a device to help our client visualize the surrounding environment while driving his power wheelchair. Our client suffered from paralysis below the neck, which meant that his only source of mobility was his power wheelchair. He specifically needed a device that could assist him in visualizing the back half of his wheelchair. Our team took inspiration from the bird's eye view feature that some car models provide. The working principle behind the bird's eye view is to attach multiple wide-angle (fisheye) cameras to the sides of the car and computationally combine all the video streams into a top-down view of the car. Although the system does not look too difficult to implement at first, we quickly realized that making a system that is both durable and …
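The geometric core of a bird's eye view is a homography: after undistorting each fisheye image, a 3x3 matrix maps ground-plane pixels into the top-down view. A minimal sketch of applying such a mapping to one pixel follows; the matrix values are made up for illustration, since in practice the homography comes from calibration (e.g. four known ground-plane points).

```python
def apply_homography(H, u, v):
    """Map pixel (u, v) through a 3x3 homography H (row-major nested lists)."""
    x = H[0][0]*u + H[0][1]*v + H[0][2]
    y = H[1][0]*u + H[1][1]*v + H[1][2]
    w = H[2][0]*u + H[2][1]*v + H[2][2]
    return x / w, y / w  # perspective divide

# Illustrative matrix only; a real H is computed from calibration points.
H = [[1.0, 0.5, 10.0],
     [0.0, 2.0,  5.0],
     [0.0, 0.001, 1.0]]
print(apply_homography(H, 100.0, 200.0))
```

Warping every pixel of every camera this way (and blending the overlaps) yields the stitched top-down image.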

PID Controller for DC Motor

Background

This is part of a series of posts where I document my progress in building an autonomous mobile bot from scratch. The ultimate goal of this project is to build a robot that can accomplish the following tasks:
- Autonomously map an indoor space using SLAM and save the generated map
- Load the generated map file and localize the robot inside the map
- Move the robot around by sending move commands from a GUI that shows the map

During the first few months of the project, I developed an autonomy stack for the mobile robot in a simulated environment, and I documented part of the process in part 1 ( https://juhyungsprojects.blogspot.com/2023/08/autonomous-mobile-bot-part-1-autonomous.html ). The simulated environment consisted of an indoor space and a two-wheeled robot inside Gazebo, which is a simulation software able …
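The title's PID controller is short enough to sketch in full: the control output is a weighted sum of the error, its integral, and its derivative. The gains and the toy motor model below are illustrative numbers, not the ones tuned for the real robot.

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a toy first-order motor model (speed lags the command) to 100 RPM.
pid, speed, dt = PID(kp=0.5, ki=2.0, kd=0.01), 0.0, 0.01
for _ in range(1000):
    command = pid.update(100.0, speed, dt)
    speed += (command - speed) * dt * 5.0  # made-up motor dynamics
print(round(speed, 1))  # converges near the 100 RPM setpoint
```

The integral term is what removes the steady-state error a pure proportional controller would leave; for a real DC motor you would feed the measured encoder speed into `update` and send its output as PWM.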

Monocular Visual Odometry

github: https://github.com/Juhyung-L/visual_odometry

Visual odometry is the process of estimating the change in position of a robot by processing the image frames from a moving camera. It works by taking individual frames from a prerecorded video or live feed and putting them through a processing pipeline. The first step in the visual odometry pipeline is feature detection. A feature is a point in an image that can be easily identified in multiple views of the same scene.

Figure 1: Features extracted from an image of a chapel. Each colored circle is a feature.

These two images are pictures of the same building taken from two different viewpoints. There are parts of the building that can easily be identified in both pictures, for example, the three arches on the bottom floor or the circular pattern at the center of the building. The human brain usually encodes features as whole structures like the ones listed above, but they …
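In code, "easily identified in multiple views" usually means comparing small pixel neighborhoods. A minimal sketch of that idea, using brute-force sum-of-squared-differences template matching on a toy grayscale image (real detectors like ORB or FAST are far more selective and efficient than this):

```python
def ssd(a, b):
    """Sum of squared differences between two equal-size patches."""
    return sum((pa - pb) ** 2 for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def find_patch(image, patch):
    """Slide `patch` over `image`; return the top-left corner of the best match."""
    ph, pw = len(patch), len(patch[0])
    best, best_pos = float("inf"), None
    for r in range(len(image) - ph + 1):
        for c in range(len(image[0]) - pw + 1):
            window = [row[c:c + pw] for row in image[r:r + ph]]
            score = ssd(window, patch)
            if score < best:
                best, best_pos = score, (r, c)
    return best_pos

# A bright 2x2 "feature" embedded in an otherwise dark toy image.
image = [[0, 0, 0, 0],
         [0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 0, 0]]
patch = [[9, 9],
         [9, 9]]
print(find_patch(image, patch))  # -> (1, 2)
```

Matching the same feature across consecutive frames is what gives the pipeline the point correspondences it needs to estimate camera motion.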

Iterative Closest Point Algorithm using Octree (CUDA Accelerated)

github: https://github.com/Juhyung-L/cuda_icp

Background

Scan matching algorithms are ubiquitous in the field of mobile robotics. Although there are countless variations of the algorithm, each with its own benefits and drawbacks, the overarching goal is the same: given two sets of points, find the optimal rigid body transformation (rotation and translation) that aligns one set onto the other. They are especially useful in localization (estimating the robot's pose), because finding the rigid transformation between two consecutive laser scans from the same robot means finding the accurate change in the robot's position and orientation. One of the most commonly used variations of the scan matching algorithm is the iterative closest point (ICP) algorithm. ICP takes a two-step approach to scan matching:
1. Finding the corresponding pairs of points in the two point sets
2. Finding the optimal rigid transformation that aligns the corresponding pairs …
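The second step, the alignment, has a closed form once correspondences are fixed. A minimal 2D sketch of that single alignment step (a full ICP would alternate this with nearest-neighbor correspondence search, which is where the octree and CUDA acceleration come in):

```python
from math import atan2, cos, sin

def align_2d(P, Q):
    """Closed-form rigid transform (theta, tx, ty) minimizing
    sum ||R p + t - q||^2 over already-corresponded 2D pairs (P[i], Q[i])."""
    n = len(P)
    pcx = sum(p[0] for p in P) / n; pcy = sum(p[1] for p in P) / n
    qcx = sum(q[0] for q in Q) / n; qcy = sum(q[1] for q in Q) / n
    # Accumulate cross-covariance terms about the centroids.
    sxx = syy = sxy = syx = 0.0
    for (px, py), (qx, qy) in zip(P, Q):
        px, py, qx, qy = px - pcx, py - pcy, qx - qcx, qy - qcy
        sxx += px * qx; syy += py * qy
        sxy += px * qy; syx += py * qx
    theta = atan2(sxy - syx, sxx + syy)  # optimal rotation angle
    tx = qcx - (cos(theta) * pcx - sin(theta) * pcy)
    ty = qcy - (sin(theta) * pcx + cos(theta) * pcy)
    return theta, tx, ty

# Recover a 90-degree rotation plus a (1, 2) translation from 3 point pairs.
P = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
Q = [(-py + 1.0, px + 2.0) for px, py in P]  # q = R(90 deg) p + (1, 2)
print(align_2d(P, Q))
```

With noisy correspondences the recovered transform is only approximate, which is why ICP iterates: re-pair the points under the new transform, re-align, and repeat until convergence.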