Posts

Behavior Tree Navigation

Github: https://github.com/Juhyung-L/behavior_tree_navigation This is an extension of the blog https://juhyungsprojects.blogspot.com/2024/04/dynamic-window-approach-for-local-path.html . In that blog, I explained my implementation of dynamic obstacle avoidance for a mobile robot. The strategy was to use two planners: global and local. The global planner uses the global costmap to find an optimal path from the robot's current pose to the goal pose (A* search). The local planner then takes this path and uses the local costmap to generate a velocity command that best follows the path while avoiding dynamic obstacles (Dynamic Window Approach). In this blog, I will explain how I added a recovery behavior that triggers when the mobile robot gets stuck. Background Navigating through real-life terrain is full of unexpected obstacles. Obstacles that are hard to catch with the onboard sensor cannot be avoided with just a local planner. In my specific case, I only had a 2D LiDAR
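To make the local planner's role concrete, below is a minimal, hypothetical sketch of a DWA-style sampling-and-scoring loop in Python. The function names, cost weights, and clearance threshold are illustrative placeholders, not values taken from the repository.

```python
import numpy as np

def simulate(pose, v, w, dt=0.1, horizon=1.0):
    """Forward-simulate a (v, w) command from the current pose (x, y, yaw)."""
    x, y, yaw = pose
    traj = []
    for _ in range(int(horizon / dt)):
        yaw += w * dt
        x += v * np.cos(yaw) * dt
        y += v * np.sin(yaw) * dt
        traj.append((x, y, yaw))
    return np.array(traj)

def dwa_pick_command(pose, path, obstacles, v_samples, w_samples):
    """Score sampled (v, w) pairs and return the best command.

    Cost = distance of the trajectory endpoint to the global path
           + a penalty for passing too close to obstacles.
    The weights below are placeholders, not tuned values.
    """
    best_cmd, best_cost = (0.0, 0.0), float('inf')
    for v in v_samples:
        for w in w_samples:
            traj = simulate(pose, v, w)
            # how well the trajectory end tracks the global path (path is an Nx2 array)
            path_cost = np.min(np.linalg.norm(path - traj[-1, :2], axis=1))
            # closest approach to any obstacle along the trajectory (obstacles is an Mx2 array)
            clearance = np.min(np.linalg.norm(
                obstacles[:, None, :] - traj[None, :, :2], axis=2))
            obstacle_cost = float('inf') if clearance < 0.2 else 1.0 / clearance
            cost = 1.0 * path_cost + 0.5 * obstacle_cost
            if cost < best_cost:
                best_cost, best_cmd = cost, (v, w)
    return best_cmd
```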

Dynamic Obstacle Avoidance for Mobile Robot

Github:  https://github.com/Juhyung-L/GNC There are several things that come into play when a robot is moving from point A to point B. Assuming the robot has a map of the environment and is localized inside that map, it first needs a GLOBAL path planner. The role of the global path planner is to find the shortest path from the robot's current pose to the goal pose while avoiding the obstacles defined by the global map. However, the global path planner alone is not sufficient for moving in a dynamic environment because the global map is static.
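As an illustration of what the global planner does, here is a minimal sketch of A* search on a 2D occupancy grid (0 = free, 1 = occupied). It is a generic textbook version, not the planner code from the repository.

```python
import heapq
import itertools

def astar(grid, start, goal):
    """Minimal A* on a 2D occupancy grid; start/goal are (row, col) tuples."""
    rows, cols = len(grid), len(grid[0])
    h = lambda a: abs(a[0] - goal[0]) + abs(a[1] - goal[1])  # Manhattan heuristic
    counter = itertools.count()        # tie-breaker so the heap never compares cells
    open_set = [(h(start), next(counter), 0, start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, _, g, cell, parent = heapq.heappop(open_set)
        if cell in came_from:
            continue                   # already expanded with a lower cost
        came_from[cell] = parent
        if cell == goal:
            path = []
            while cell is not None:    # walk parents back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nbr = (cell[0] + dr, cell[1] + dc)
            if 0 <= nbr[0] < rows and 0 <= nbr[1] < cols and grid[nbr[0]][nbr[1]] == 0:
                ng = g + 1
                if ng < g_cost.get(nbr, float('inf')):
                    g_cost[nbr] = ng
                    heapq.heappush(open_set, (ng + h(nbr), next(counter), ng, nbr, cell))
    return None  # no path exists
```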

RGB-D Visual Odometry

Github: https://github.com/Juhyung-L/RGB-D_visual_odometry Background In a previous blog, I wrote an explanation of a visual odometry system I developed ( https://juhyungsprojects.blogspot.com/2023/10/visual-odometry-using-sift-feature.html ). The system used RGB image frames as its input to calculate the change in robot pose. A major shortcoming of that system was that the estimated change in the position of the robot was scale-ambiguous, meaning only the direction of motion could be estimated, but not its magnitude. This ambiguity is a physical limitation of trying to obtain 3D information (change in x, y, z position) from a 2D input (image frames). In this blog, I will explain another visual odometry system I developed, which uses an RGB-D camera instead of an RGB camera. An RGB-D camera is basically an RGB camera with depth sensors, which allows it to obtain a 3D point cloud of the scene in addition to the image. By utilizing the point cloud, th
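To show where the metric scale comes from, here is a small sketch of the standard pinhole back-projection that converts a depth image into a point cloud. The intrinsics fx, fy, cx, cy are assumed to come from camera calibration; this is a generic illustration rather than code from the repository.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into an Nx3 point cloud using the
    pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth."""
    v, u = np.indices(depth.shape)            # pixel row (v) and column (u) grids
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack((x, y, z), axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]           # drop pixels with no depth reading
```

Aligning the point clouds from two consecutive frames (for example with matched features or ICP) then yields the change in pose with true metric scale, which is exactly what the monocular system could not recover.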

Bird's eye view for wheelchair

Github:  https://github.com/Juhyung-L/bird_view Background For my university's capstone project, I was tasked with creating a device that could help our client visualize the surrounding environment while driving his power wheelchair. Our client suffered from paralysis below the neck, which meant that his only source of mobility was his power wheelchair. He specifically needed a device that could assist him in visualizing the back half of his wheelchair. Our team took inspiration from the bird's eye view that some car models provide. The working principle behind the bird's eye view is to use multiple wide-angle (fisheye) cameras attached to the sides of the car and computationally combine all of the video streams to produce a top-down view of the car. Although the system does not look too difficult to implement at first, we quickly realized that making a system that is both durable and reliable for long-term use while attached to a moving wheelchair was extremely difficult
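As a rough illustration of the working principle, here is a minimal OpenCV sketch that warps a ground-plane region seen by one (already undistorted) camera into a top-down view with a perspective transform. The pixel correspondences and file name are placeholders; in practice they come from calibration, and the per-camera results are blended onto one canvas.

```python
import cv2
import numpy as np

# Four ground-plane points as seen in the camera image, and where those same
# points should land on the top-down (bird's eye) canvas. Placeholder values.
src = np.float32([[420, 560], [860, 560], [1180, 720], [100, 720]])
dst = np.float32([[300, 100], [500, 100], [500, 400], [300, 400]])

H = cv2.getPerspectiveTransform(src, dst)      # homography: image -> top-down

frame = cv2.imread('rear_camera_frame.png')    # placeholder input frame
top_down = cv2.warpPerspective(frame, H, (800, 500))
# Repeating this for each camera and blending the warped images on a single
# canvas produces the combined bird's eye view around the wheelchair.
```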

Autonomous Mobile Bot Part 3: PID Controller for DC Motor

This is part of a series of posts where I document my progress in building an autonomous mobile bot from scratch. The ultimate goal of this project is to build a robot that can accomplish the following tasks: autonomously map an indoor space using SLAM and save the generated map, load the generated map file and localize the robot inside the map, and move the robot around by sending move commands from a GUI that shows the map. Background During the first few months of the project, I developed an autonomy stack for the mobile robot in a simulated environment, and I documented part of the process in parts 1 and 2 of this series. Part 1: https://juhyungsprojects.blogspot.com/2023/08/autonomous-mobile-bot-part-1-autonomous.html Part 2:  https://juhyungsprojects.blogspot.com/2023/09/autonomous-mobile-bot-part-2-node.html The simulated environment consisted of an indoor environment and a two-wheeled robot inside Gazebo, which is a simulation software able to accurately replicate real-world physics. The r

Visual Odometry Using SIFT Feature

github:   https://github.com/Juhyung-L/visual_odometry Visual odometry is the process of estimating the change in position of a robot using the image frames from a moving camera. It works by taking individual frames from a prerecorded video or a live camera feed and putting them through a processing pipeline. The first step in the visual odometry pipeline is feature detection. A feature in an image is a point that can be easily identified in multiple views of the same scene. These two images are pictures of the same building taken from two different viewpoints. There are parts of the building that can easily be identified in both pictures, for example, the three arches on the bottom floor or the circular pattern at the center of the building. For the human brain, features are usually encoded as whole structures like the ones I just listed, but for computers they are points, and they are identified using mathematical techniques involving the pixel color values. There are a variety of di
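To make the feature-detection step concrete, here is a minimal OpenCV sketch that detects SIFT features in two views of the same scene and keeps only the unambiguous matches using Lowe's ratio test. The image file names and the 0.75 ratio threshold are placeholders.

```python
import cv2

# Two views of the same scene (placeholder file names)
img1 = cv2.imread('view1.png', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('view2.png', cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)   # keypoints + 128-dim descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and keep only clearly-best matches (Lowe's ratio test)
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# The matched pixel coordinates feed the later pipeline stages
# (essential matrix estimation and pose recovery).
pts1 = [kp1[m.queryIdx].pt for m in good]
pts2 = [kp2[m.trainIdx].pt for m in good]
```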

Iterative Closest Point Algorithm using Octree (CUDA Accelerated)

github:  https://github.com/Juhyung-L/cuda_icp Background Scan matching algorithms are ubiquitous in the field of mobile robotics. Although there are countless variations of the algorithm, each with its own benefits and drawbacks, the overarching goal is the same: given two sets of points, find the optimal rigid body transformation (rotation and translation) that results in the least error. They are especially useful in localization, because finding the rigid transformation between two consecutive laser scans from the same robot means finding the accurate change in position and orientation of the robot. One of the most commonly used variations of the scan matching algorithm is the iterative closest point (ICP) algorithm. ICP takes a 2-step approach to scan matching: (1) finding the corresponding pairs of points in the two point sets, and (2) finding the rigid body transformation. Finding corresponding pairs means finding two points (one in each point set
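To make the two steps concrete, here is a minimal CPU-only Python sketch of point-to-point ICP that uses brute-force nearest-neighbor correspondences and an SVD-based rigid transform estimate. It is a generic illustration; the repository replaces the expensive correspondence search with an octree and accelerates it with CUDA.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch / SVD method for paired Nx3 points)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # fix a reflection if one sneaks in
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(src, dst, iterations=30):
    """Point-to-point ICP with brute-force nearest neighbors (O(N*M) per step)."""
    R_total, t_total = np.eye(3), np.zeros(3)
    current = src.copy()
    for _ in range(iterations):
        # Step 1: correspondence - nearest point in dst for every point in current
        d = np.linalg.norm(current[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[np.argmin(d, axis=1)]
        # Step 2: best rigid transform for those correspondences
        R, t = best_rigid_transform(current, matched)
        current = current @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```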