- I am deeply passionate about robotics technology that enables machines to autonomously navigate and perceive their environment across both physical and virtual spaces.
- My primary focus has been on sensor fusion, localization, and control systems, where I have actively pursued learning and hands-on project development.
- I find great satisfaction in solving complex problems and transforming ideas into code that brings real-world motion and intelligence to life.
(Dec 2024 ~ Oct 2025)
Led the perception team in developing an autonomous driving system for a real vehicle platform. Built two distinct sensor fusion packages that integrate camera and LiDAR data for accurate environmental perception, and maintained several competition-related perception packages for detecting construction cones on the circuit.
Key Contributions:
CALICO - High-Performance C++ Multi-Camera Late Fusion
- Designed and implemented a sensor fusion system that matches YOLO 2D detections with LiDAR 3D clusters using the Hungarian algorithm
- Achieved precise message synchronization with ApproximateTimeSynchronizer and stable 19+ Hz output
- Implemented UKF tracking to integrate color and position information with ID-based object tracking
- Achieved average processing time <10ms
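A minimal sketch of the late-fusion matching step, assuming LiDAR cluster centroids are already projected into the image plane; the box format and gate threshold are illustrative, and SciPy's `linear_sum_assignment` stands in for the C++ Hungarian solver:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_detections(yolo_boxes, projected_centroids, gate_px=80.0):
    """Match YOLO 2D box centers to LiDAR cluster centroids projected
    into the image, using Hungarian assignment on pixel distance."""
    box_centers = np.array([[(b[0] + b[2]) / 2, (b[1] + b[3]) / 2] for b in yolo_boxes])
    cost = np.linalg.norm(box_centers[:, None, :] - projected_centroids[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    # Reject pairs whose pixel distance exceeds the gate
    return [(int(r), int(c)) for r, c in zip(rows, cols) if cost[r, c] < gate_px]

# Hypothetical example: two boxes, two projected centroids
boxes = [(100, 100, 140, 160), (300, 90, 360, 170)]   # (x1, y1, x2, y2)
cents = np.array([[325.0, 128.0], [121.0, 131.0]])
print(match_detections(boxes, cents))  # [(0, 1), (1, 0)]
```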
PRISM - Real-time LiDAR Interpolation & Color Mapping
- Implemented a Catmull-Rom spline-based LiDAR upsampling algorithm, interpolating from 32 to 128+ channels (4x interpolation)
- Generated point-level features by projecting color information from dual camera system onto 3D points
- Optimized for high performance using OpenMP multi-threading and SOA (Structure of Arrays) memory layout (processing time 15-30ms)
- Implemented efficient memory management with zero-copy memory pool
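The interpolation core can be sketched as below; the ring layout, azimuth-alignment assumption, and edge handling are simplified relative to the OpenMP/SoA C++ implementation:

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, t):
    """Catmull-Rom spline point between p1 and p2 (t in [0, 1])."""
    t2, t3 = t * t, t * t * t
    return 0.5 * ((2 * p1) + (-p0 + p2) * t +
                  (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2 +
                  (-p0 + 3 * p1 - 3 * p2 + p3) * t3)

def upsample_rings(rings, factor=4):
    """Insert (factor - 1) interpolated rings between neighbouring
    channels; rings is (C, N, 3) with azimuth-aligned points.
    Boundary rings are clamped, so 32 channels yield 125 here."""
    C = rings.shape[0]
    out = []
    for i in range(C - 1):
        p0 = rings[max(i - 1, 0)]
        p1, p2 = rings[i], rings[i + 1]
        p3 = rings[min(i + 2, C - 1)]
        out.append(p1)
        for k in range(1, factor):
            out.append(catmull_rom(p0, p1, p2, p3, k / factor))
    out.append(rings[-1])
    return np.stack(out)

rings = np.random.rand(32, 1024, 3)   # hypothetical 32-channel scan
dense = upsample_rings(rings, factor=4)
print(dense.shape)                     # (125, 1024, 3)
```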
FRONTIER (Unused) - Frustum-based Direct 3D Fusion
- Developed a novel fusion methodology by back-projecting 2D bounding boxes into 3D view frustums
- Implemented IoU calculation between LiDAR 3D bounding boxes and frustums with Hungarian matching
- Supported multi-camera setup with camera-specific color visualization (20Hz real-time updates)
- Improved projection accuracy by precisely calculating frustum apex at camera lens center
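The back-projection idea can be sketched with a point-in-frustum test; the actual package computed IoU between frustums and 3D boxes, and the intrinsics and bbox values below are illustrative:

```python
import numpy as np

def bbox_to_frustum_planes(bbox, K, near=0.5, far=50.0):
    """Back-project a 2D bbox (x1, y1, x2, y2) into the four side planes
    of a view frustum whose apex sits at the camera lens center (origin
    of the camera frame). Returns inward-pointing unit normals."""
    x1, y1, x2, y2 = bbox
    Kinv = np.linalg.inv(K)
    # Rays through the bbox corners, clockwise in image coordinates
    corners = np.array([[x1, y1, 1], [x2, y1, 1], [x2, y2, 1], [x1, y2, 1]], float)
    rays = (Kinv @ corners.T).T
    normals = []
    for i in range(4):
        n = np.cross(rays[i], rays[(i + 1) % 4])   # plane through the apex
        normals.append(n / np.linalg.norm(n))
    return np.array(normals), near, far

def point_in_frustum(p, planes):
    normals, near, far = planes
    return bool(np.all(normals @ p >= 0) and near <= p[2] <= far)

K = np.array([[500., 0, 320], [0, 500., 240], [0, 0, 1]])
planes = bbox_to_frustum_planes((220, 140, 420, 340), K)
print(point_in_frustum(np.array([0., 0., 10.]), planes))   # True
print(point_in_frustum(np.array([50., 0., 10.]), planes))  # False
```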
Sensor Calibration
- Performed extrinsic/intrinsic calibration between Ouster OS1-32 LiDAR and dual Logitech webcams
- Established checkerboard-based calibration pipeline and validation
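One way to validate such a calibration is to project LiDAR points into the image with the estimated intrinsics and extrinsics and check alignment; a minimal sketch, with matrix values and image size purely illustrative:

```python
import numpy as np

def project_lidar_to_image(points, R, t, K, image_size):
    """Project LiDAR points into pixel coordinates using extrinsics
    (R, t) and intrinsics K; also returns a mask of points that land
    inside the image with positive depth."""
    cam = (R @ points.T).T + t            # LiDAR frame -> camera frame
    z = cam[:, 2]
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]           # perspective divide
    w, h = image_size
    mask = (z > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv, mask

K = np.array([[500., 0, 320], [0, 500., 240], [0, 0, 1]])
pts = np.array([[0., 0., 5.], [0., 0., -5.]])   # one point in front, one behind
uv, mask = project_lidar_to_image(pts, np.eye(3), np.zeros(3), K, (640, 480))
print(uv[0], mask.tolist())  # [320. 240.] [True, False]
```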
Tech Stack:
ROS2 Humble, C++, Python, OpenCV, PCL, Ouster OS1-32, YOLO
(Aug 2025 ~ Sep 2025)
Participated in the autonomous driving competition hosted by HL Mando and MORAI, responsible for the vehicle controller.
Designed and implemented MPC-based controllers in the MORAI simulation environment, successfully completing both qualification and final rounds.
Key Contributions:
Offline System Identification
- Collected vehicle response data for steering angle and velocity step inputs in the simulator
- Identified vehicle dynamics model parameters using least-squares-based parameter estimation
- Validated time response characteristics and accuracy of the estimated model
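The estimation step can be illustrated with a first-order discrete model fit by least squares; the model structure and numbers are a simplified stand-in for the actual vehicle dynamics identification:

```python
import numpy as np

# Hypothetical first-order discrete model v[k+1] = a*v[k] + b*u[k],
# identified from logged step-response data by least squares.
def identify_first_order(v, u):
    A = np.column_stack([v[:-1], u[:-1]])       # regressor matrix
    theta, *_ = np.linalg.lstsq(A, v[1:], rcond=None)
    return theta                                 # [a, b]

# Simulate a "ground-truth" plant to generate data, then recover it
a_true, b_true = 0.9, 0.5
u = np.ones(100)                                 # step input
v = np.zeros(100)
for k in range(99):
    v[k + 1] = a_true * v[k] + b_true * u[k]
a_hat, b_hat = identify_first_order(v, u)
print(round(a_hat, 3), round(b_hat, 3))          # 0.9 0.5
```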
MPC Controller Design & Implementation
- Derived linearization and state-space equations based on Kinematic Bicycle Model
- Lateral control: Steering angle MPC for path tracking (minimizing path deviation and heading error)
- Longitudinal control: Velocity MPC for target speed tracking (acceleration/deceleration control)
- Solved real-time QP problems and handled constraints using the cvxpy and osqp-python libraries
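The longitudinal MPC can be sketched as a condensed QP; this simplified version drops the input/state constraints and solves the unconstrained problem in closed form, whereas the competition controller formulated the constrained QP in cvxpy and solved it with OSQP:

```python
import numpy as np

def velocity_mpc(v0, v_ref, N=10, T=0.1, rho=0.05):
    """One MPC step for velocity tracking with model v[k+1] = v[k] + T*a[k].
    Minimizes sum (v_k - v_ref)^2 + rho * a_k^2 over a horizon of N.
    Horizon length, timestep, and weight here are illustrative."""
    L = np.tril(np.ones((N, N)))               # maps accels to future velocities
    H = T * T * (L.T @ L) + rho * np.eye(N)    # QP Hessian
    g = T * L.T @ ((v_ref - v0) * np.ones(N))  # linear term
    a = np.linalg.solve(H, g)                  # optimal acceleration sequence
    return a[0]                                # receding horizon: apply first input

a0 = velocity_mpc(v0=5.0, v_ref=10.0)
print(a0 > 0)  # accelerates toward the target speed: True
```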
State Estimation with EKF
- Implemented an EKF to fuse GPS, IMU, and dead-reckoning data (e.g., steering wheel angle and vehicle velocity)
- Analyzed sensor noise characteristics and tuned Kalman Gain
- Achieved stable vehicle state estimation (position, velocity, heading)
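A minimal predict/update sketch for such a filter; the state layout, motion model, and noise values are hypothetical, and the real filter also folded in IMU and dead-reckoning measurements:

```python
import numpy as np

class SimpleEKF:
    """Minimal EKF for state [x, y, yaw, v]: predict with a bicycle-style
    motion model, update with a GPS position fix."""
    def __init__(self):
        self.x = np.zeros(4)
        self.P = np.eye(4)
        self.Q = np.diag([0.01, 0.01, 0.005, 0.1])   # process noise (illustrative)
        self.R = np.diag([0.5, 0.5])                  # GPS noise (illustrative)

    def predict(self, dt):
        px, py, yaw, v = self.x
        self.x = np.array([px + v * np.cos(yaw) * dt,
                           py + v * np.sin(yaw) * dt, yaw, v])
        F = np.eye(4)                                 # motion-model Jacobian
        F[0, 2] = -v * np.sin(yaw) * dt
        F[0, 3] = np.cos(yaw) * dt
        F[1, 2] = v * np.cos(yaw) * dt
        F[1, 3] = np.sin(yaw) * dt
        self.P = F @ self.P @ F.T + self.Q

    def update_gps(self, z):
        H = np.array([[1., 0, 0, 0], [0, 1., 0, 0]])  # measure position only
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ (z - H @ self.x)
        self.P = (np.eye(4) - K @ H) @ self.P

ekf = SimpleEKF()
ekf.x[3] = 1.0                  # moving at 1 m/s along x
ekf.predict(0.1)
ekf.update_gps(np.array([0.11, 0.0]))
```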
Tech Stack:
ROS1 Noetic, C++, Python, MPC, EKF, cvxpy, osqp, MORAI Simulator
(Mar 2025 ~ Jul 2025)
Developed a hierarchical grid-based compression and localization system for efficient utilization of large-scale 3D LiDAR maps as a personal "Dream Semester" project. Built a campus-wide map using Ouster OS1-32 LiDAR, converted it into a lightweight 2.5D feature map, and implemented real-time localization.
Key Contributions:
2.5D Pillar Map Compression Algorithm
- Implemented spatial discretization technique to partition 3D point clouds into grid pillars
- Extracted Z-axis distribution statistics (mean, variance, max/min) within each pillar as features
- Reduced storage by tens of times while preserving structural characteristics
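The pillar compression can be sketched as follows; the cell size and dict-of-cells layout are illustrative, and the real pipeline stored the features in a compact array format:

```python
import numpy as np

def compress_to_pillars(points, cell=0.5):
    """Collapse a 3D cloud into 2.5D pillar features: per grid cell,
    keep z mean, variance, min, and max. Returns {(i, j): feature}."""
    ij = np.floor(points[:, :2] / cell).astype(int)
    pillars = {}
    for key in map(tuple, np.unique(ij, axis=0)):
        z = points[(ij[:, 0] == key[0]) & (ij[:, 1] == key[1]), 2]
        pillars[key] = np.array([z.mean(), z.var(), z.min(), z.max()])
    return pillars

cloud = np.random.rand(10000, 3) * [20, 20, 5]   # hypothetical 20 m x 20 m tile
pillars = compress_to_pillars(cloud)
# 10000 points x 3 floats collapse to at most 1600 cells x 4 floats
print(len(pillars) <= (20 / 0.5) ** 2)  # True
```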
Hierarchical Grid Search-based Global Localization
- Implemented efficient initial pose estimation through multi-resolution grid search (coarse-to-fine)
- Applied identical compression to real-time sensor scans and matched based on feature similarity
- Combined with ICP (Iterative Closest Point) algorithm for precise pose refinement
- Provided real-time localization through ROS2 service interface
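The coarse-to-fine idea can be illustrated with integer grid offsets and a simple overlap score; the real system scored pillar-feature similarity over SE(2) candidates and refined the winner with ICP:

```python
import numpy as np

def score(offset, scan_cells, map_cells):
    """Fraction of scan cells that hit an occupied map cell after shifting."""
    shifted = {(i + offset[0], j + offset[1]) for i, j in scan_cells}
    return len(shifted & map_cells) / len(scan_cells)

def coarse_to_fine(scan_cells, map_cells, radius=8, coarse=4):
    """Two-level grid search over integer (dx, dy) offsets: evaluate a
    coarse stride first, then exhaustively refine around the best
    coarse candidate."""
    best, best_s = (0, 0), -1.0
    for dx in range(-radius, radius + 1, coarse):
        for dy in range(-radius, radius + 1, coarse):
            s = score((dx, dy), scan_cells, map_cells)
            if s > best_s:
                best, best_s = (dx, dy), s
    cx, cy = best
    for dx in range(cx - coarse, cx + coarse + 1):
        for dy in range(cy - coarse, cy + coarse + 1):
            s = score((dx, dy), scan_cells, map_cells)
            if s > best_s:
                best, best_s = (dx, dy), s
    return best, best_s

# Hypothetical map: the scan is the map shifted by (3, -2)
map_cells = {(i, j) for i in range(30) for j in range(30)}
scan_cells = {(i - 3, j + 2) for i, j in map_cells}
print(coarse_to_fine(scan_cells, map_cells))  # ((3, -2), 1.0)
```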
System Development & Data Collection
- Built a mobile mapping system by mounting sensor equipment and processing hardware on a scooter
- Collected and preprocessed LiDAR map data covering approximately 1km of campus area
Tech Stack:
ROS2 Humble, Python, Open3D, NumPy, Ouster OS1-32
(Mar 2025 ~ Jun 2025)
Built a cooperative mission system where UGV and UAV work together in a Gazebo simulation environment. Implemented UAV offboard autonomous flight and ArUco marker-based target recognition with position sharing through PX4 SITL integration.
Key Contributions:
PX4 SITL Integration
- Interfaced ROS2 with PX4 SITL (Software-in-the-Loop) environment using uXRCE-DDS
- Implemented offboard control mode configuration and state machine-based flight control
- Developed control logic for stable altitude and position control
Waypoint Following & Target Detection
- Implemented navigation system for autonomous flight through predefined waypoint paths
- Detected and recognized ArUco markers using onboard camera at each waypoint
- Estimated 3D marker positions, transformed to global coordinates, and visualized in RViz2
- Implemented target position sharing interface for UGV cooperation
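The camera-to-global transformation chain can be sketched as below; the frame names, mount orientation, and poses are illustrative (the actual system used TF2 transforms from PX4 odometry):

```python
import numpy as np

def marker_to_global(p_cam, R_wb, t_wb, R_bc, t_bc):
    """Transform a marker position estimated in the camera frame into
    the global frame via the body pose (R_wb, t_wb) and the fixed
    camera-to-body mount (R_bc, t_bc)."""
    p_body = R_bc @ p_cam + t_bc
    return R_wb @ p_body + t_wb

# Hypothetical: downward-facing camera, marker 3 m below a hovering UAV
R_down = np.array([[1., 0, 0], [0, -1., 0], [0, 0, -1.]])  # camera z -> world -z
p = marker_to_global(np.array([0., 0., 3.]),   # marker 3 m along camera axis
                     np.eye(3), np.array([10., 5., 3.]),   # UAV at (10, 5, 3)
                     R_down, np.zeros(3))
print(p)  # [10.  5.  0.] -- marker on the ground directly below
```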
Tech Stack:
ROS2 Humble, Gazebo Harmonic, PX4 Autopilot, C++, Python, OpenCV
(Jan 2025 ~ Mar 2025)
Attempted to develop a cone-based Graph SLAM system for Formula Student Driverless.
Experimented with a novel approach by adding geometric constraints (Inter-landmark Constraints) between co-observed cones to the factor graph, in addition to traditional pose-landmark constraints.
Implementation Details:
GTSAM-based Factor Graph Optimization
- Implemented nonlinear optimization backend using GTSAM library
- Implemented traditional factors including odometry and cone observation factors
- Developed inter-landmark distance factors to add distance and angle constraints between cones
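The two residual forms can be sketched in NumPy; the actual backend implemented these as GTSAM factors in C++, so this only illustrates what each factor penalizes:

```python
import numpy as np

def landmark_obs_residual(pose, lm, meas):
    """Standard pose->landmark factor: predicted range/bearing minus the
    measurement. pose = [x, y, theta], lm = [lx, ly], meas = [r, phi]."""
    d = lm - pose[:2]
    r = np.linalg.norm(d)
    phi = np.arctan2(d[1], d[0]) - pose[2]
    err = phi - meas[1]
    return np.array([r - meas[0], np.arctan2(np.sin(err), np.cos(err))])

def inter_landmark_residual(lm_a, lm_b, measured_dist):
    """Inter-landmark factor: penalize deviation of the distance between
    two co-observed cones from its measured value."""
    return np.linalg.norm(lm_a - lm_b) - measured_dist

pose = np.array([0., 0., 0.])
a, b = np.array([3., 4.]), np.array([6., 8.])
print(landmark_obs_residual(pose, a, np.array([5., np.arctan2(4., 3.)])))  # [0. 0.]
print(inter_landmark_residual(a, b, 5.0))                                  # 0.0
```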
Tentative Landmark System
- Prevented false positives through observation buffering (minimum 3 observations required)
- Implemented color voting-based cone classification (yellow, blue, orange)
- Developed track ID-based data association
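The buffering and voting logic can be sketched as a small class; the observation threshold and color set match the description above, while the class layout itself is illustrative:

```python
from collections import Counter
import numpy as np

class TentativeLandmark:
    """Buffer cone observations until a landmark is confirmed: require a
    minimum number of hits and decide the color by majority vote."""
    MIN_OBS = 3   # minimum observations before confirmation

    def __init__(self, track_id):
        self.track_id = track_id
        self.positions = []
        self.colors = Counter()

    def add_observation(self, pos, color):
        self.positions.append(np.asarray(pos, float))
        self.colors[color] += 1

    @property
    def confirmed(self):
        return len(self.positions) >= self.MIN_OBS

    @property
    def estimate(self):
        """Mean position and majority color, once confirmed."""
        return np.mean(self.positions, axis=0), self.colors.most_common(1)[0][0]

lm = TentativeLandmark(track_id=7)
for pos, col in [((1.0, 2.0), "blue"), ((1.1, 2.1), "blue"), ((0.9, 1.9), "yellow")]:
    lm.add_observation(pos, col)
print(lm.confirmed)    # True
print(lm.estimate[1])  # blue
```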
ROS2 Integration & Visualization
- Implemented SLAM node for processing cone detection messages
- Real-time map and trajectory visualization using RViz2 markers
- Built simulation environment with dummy publisher
Challenges & Limitations:
- Failed to achieve real-time performance target (20Hz), currently at 10Hz
- Accumulated drift during long runs due to unimplemented loop closure
- Fixed-lag smoother not applied for bounded memory usage
- Optimization convergence instability due to cone sparsity and insufficient structural constraints
Through this project, I gained valuable experience in graph SLAM fundamentals, GTSAM usage, and sensor data processing pipelines, while learning about the gap between theory and practical implementation.
Tech Stack:
ROS2 Humble, C++17, GTSAM 4.2, Eigen3, Factor Graph Optimization
(Sep 2025 ~ Oct 2025)
Ported the LION (LiDAR-Only Instance Segmentation) deep learning framework to run in real time within a ROS2 environment. Built a PyTorch/CUDA-based inference pipeline and optimized the data conversion and preprocessing stages to improve processing speed.
Implementation Details:
ROS2 Integration Pipeline
- Implemented a real-time 3D object detection node subscribing to the /ouster/points topic
- Converted and published detection results as vision_msgs/Detection3DArray
- Visualized 3D bounding boxes using RViz2 markers
PointCloud2 Conversion Optimization
- Initial implementation: Python struct.unpack-based loop (118-256 ms, 48-68% of total time)
- Optimized: NumPy vectorization with structured arrays (1.3-1.5 ms)
- Achieved a ~100x conversion speedup, improving the overall pipeline rate from 3.2 Hz to 7.7 Hz
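The two parsing approaches can be compared on a synthetic buffer; the x/y/z/intensity field layout below is an assumption standing in for the actual PointCloud2 message fields:

```python
import struct
import numpy as np

# Hypothetical point layout: x, y, z, intensity as little-endian float32
def parse_loop(buf, n):
    """Original-style parser: struct.unpack per point (slow)."""
    return np.array([struct.unpack_from("<4f", buf, i * 16) for i in range(n)])

def parse_vectorized(buf, n):
    """Optimized parser: view the whole buffer as a structured array."""
    dt = np.dtype([("x", "<f4"), ("y", "<f4"), ("z", "<f4"), ("intensity", "<f4")])
    pts = np.frombuffer(buf, dtype=dt, count=n)
    return np.column_stack([pts["x"], pts["y"], pts["z"], pts["intensity"]])

n = 1000
buf = np.random.rand(n, 4).astype("<f4").tobytes()
print(np.allclose(parse_loop(buf, n), parse_vectorized(buf, n)))  # True
```

The vectorized version touches the buffer once instead of issuing one `struct.unpack_from` call per point, which is where the ~100x gain comes from.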
Data Preprocessing Pipeline
- Implemented voxelization and feature extraction (~2ms)
- Optimized batch generation and CUDA memory transfer for model input format
- Set maximum point count limit (150,000 points) for safety
Performance Analysis & Bottleneck Identification
- Measured processing time for each stage through systematic profiling
- Final bottleneck: Model inference time (~112-124ms, 95% of total time)
- Failed to achieve 10Hz+ real-time processing target (constrained by 100ms+ inference time)
Through this project, I gained experience in ROS integration of deep learning models, Python-NumPy optimization techniques, and real-time system performance analysis. I learned about the challenges of GPU inference optimization and the critical importance of real-time constraints.
Tech Stack:
ROS2 Humble, Python 3.10, NumPy, Open3D, Ouster OS1-32


















