We believe personalization is the key to unlocking real-world adoption of wearable robotic exoskeletons. Just as shoes come in different sizes, exoskeletons shouldn’t be one-size-fits-all. Yet today, most exoskeleton controllers are either generic or require long, resource-heavy calibration sessions. So how can we quickly extract user-specific information and generate meaningful, personalized data without expensive motion capture?

Instead of relying on full-body motion capture, we used minimal motion data from a new user to generate a digital twin through physics-informed biomechanical simulation. We then trained a speed-adaptive walking agent using adversarial imitation learning, creating a personalized virtual agent that walks like the user across a range of walking speeds. What’s powerful about this approach is not just its biomechanical plausibility, but the potential to use this synthetic, user-specific motion data to personalize the underlying exoskeleton control (a hedged training-loop sketch follows below).

Key innovations:
1. A synthetic gait generator built from open-source biomechanics data, producing realistic joint trajectories at variable speeds using minimal user input.
2. A training pipeline that combines imitation learning with curriculum learning to create adaptable locomotion policies.
3. An agent that achieves not only kinematic but also kinetic plausibility, opening the door to training user-specific exoskeleton models.

We’re now extending this work to more complex locomotor tasks (like stair ascent), refining biomechanical reward functions, and integrating this virtual agent into real exoskeleton control tuning pipelines.

This project was led by Yi-Hung (Bernie) Chiu and Ung Hee Lee in collaboration with Manaen Hu and Changseob Song, and was presented at ICORR Consortium RehabWeek.

Paper link: https://lnkd.in/e6nmnt3f

#WearableRobotics #Exoskeleton #ImitationLearning #Simulation #Biomechanics #MetaMobilityLab
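As a reading aid (not the authors' code), here is a minimal PyTorch sketch of what a GAIL-style adversarial imitation loop with a speed curriculum could look like for this setup; the module sizes, speed values, and random placeholder batches are all assumptions.

```python
# Minimal sketch of speed-adaptive adversarial imitation learning with a
# speed curriculum. All dimensions, speed values, and the data source are
# hypothetical placeholders, not the authors' implementation.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 32, 8  # assumed joint-state / joint-torque dimensions

class Discriminator(nn.Module):
    """Scores (state, action) pairs: high for user-like gait, low for policy rollouts."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + ACT_DIM, 128), nn.Tanh(),
            nn.Linear(128, 1),
        )

    def forward(self, obs, act):
        return self.net(torch.cat([obs, act], dim=-1))

disc = Discriminator()
opt = torch.optim.Adam(disc.parameters(), lr=3e-4)
bce = nn.BCEWithLogitsLoss()

def imitation_reward(obs, act):
    # GAIL-style reward: the policy is rewarded for fooling the discriminator.
    with torch.no_grad():
        return -torch.log(1.0 - torch.sigmoid(disc(obs, act)) + 1e-8)

# Speed curriculum: start near the user's preferred speed, widen gradually.
speed_range = [1.0, 1.0]  # m/s, hypothetical starting point
for stage in range(5):
    speed_range = [speed_range[0] - 0.1, speed_range[1] + 0.1]
    for _ in range(100):  # discriminator updates per curriculum stage
        # Placeholders for synthetic user-reference gait and policy rollouts.
        expert_obs, expert_act = torch.randn(64, OBS_DIM), torch.randn(64, ACT_DIM)
        policy_obs, policy_act = torch.randn(64, OBS_DIM), torch.randn(64, ACT_DIM)
        loss = (bce(disc(expert_obs, expert_act), torch.ones(64, 1))
                + bce(disc(policy_obs, policy_act), torch.zeros(64, 1)))
        opt.zero_grad()
        loss.backward()
        opt.step()
        # The RL update (e.g. PPO) would then maximise imitation_reward(...)
        # plus biomechanical terms, with target speeds sampled from speed_range.
```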
Imitation Learning Techniques for Complex Tasks
Explore top LinkedIn content from expert professionals.
-
Imitation learning has seen great success, but IL policies still struggle with out-of-distribution observations such as changes in camera pose. We designed a 3D backbone, Adapt3R, that can combine with your favorite imitation learning algorithm to enable zero-shot generalization to unseen embodiments and camera viewpoints!

Learning 3D representations is hard without 3D data. 💡 The key idea is to use a 2D foundation model to extract semantic features, and to use 3D information to localize those features in a canonical 3D space without extracting any semantic information from the 3D data. Adapt3R unprojects 2D features into a point cloud, transforms them into the end effector’s coordinate frame, and uses attention pooling to condense them into a single conditioning vector for IL (a rough sketch of this pipeline follows below). Notice that Adapt3R attends to the same points before and after the camera change! 2D features lifted into 3D are an effective representation for this scenario, and Adapt3R makes good use of them.

So, what did we observe empirically?
- Adapt3R is just as proficient as RGB-based baselines on in-distribution evaluations.
- Adapt3R is very good at embodiment transfer.
- Most importantly, Adapt3R handles viewpoint changes at test time! No more fixing the camera to match the training distribution!

Overall, this means Adapt3R provides 3D representations as a drop-in replacement for 2D RGB baselines.

This work was led by Albert Wilcox with help from Mohamed Ghanem, Masoud Moghani, Pierre Barroso, Benjamin Joffe and Animesh Garg. Check out more at the links below and play with the code in your next robot learning project.
🌐 Website: https://lnkd.in/dWneBJ5d
📄 Paper: https://lnkd.in/dvcbA_22
🖥️ Code: https://lnkd.in/dFKYjym7
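For readers who want a concrete picture, the sketch below follows the pipeline as described in the post: lift per-pixel 2D features into 3D using depth and camera intrinsics, re-express the points in the end-effector frame, and attention-pool them into one conditioning vector. The stand-in conv "backbone", shapes, and transforms are illustrative assumptions, not the released Adapt3R code.

```python
# Rough sketch of the Adapt3R idea: lift 2D semantic features into a point
# cloud, canonicalise the points in the end-effector frame, and attention-pool
# them into a single conditioning vector for an IL policy. The 2D "foundation
# model" here is a stand-in conv layer; all shapes and names are assumptions.
import torch
import torch.nn as nn

H, W, FEAT = 32, 32, 64

backbone = nn.Conv2d(3, FEAT, kernel_size=1)        # placeholder for a 2D foundation model
query = nn.Parameter(torch.randn(1, 1, FEAT + 3))   # learned attention-pooling query
attn = nn.MultiheadAttention(FEAT + 3, num_heads=1, batch_first=True)

def unproject(depth, K):
    """Back-project every pixel to a 3D point in the camera frame."""
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    x = (u - K[0, 2]) / K[0, 0] * depth
    y = (v - K[1, 2]) / K[1, 1] * depth
    return torch.stack([x, y, depth], dim=-1).reshape(-1, 3)      # (H*W, 3)

def adapt3r_features(rgb, depth, K, T_cam_to_ee):
    feats = backbone(rgb.unsqueeze(0))[0].permute(1, 2, 0).reshape(-1, FEAT)  # (H*W, FEAT)
    pts_cam = unproject(depth, K)                                             # (H*W, 3)
    pts_h = torch.cat([pts_cam, torch.ones(pts_cam.shape[0], 1)], dim=-1)
    pts_ee = (T_cam_to_ee @ pts_h.T).T[:, :3]       # canonical end-effector frame
    tokens = torch.cat([feats, pts_ee], dim=-1).unsqueeze(0)                  # (1, H*W, FEAT+3)
    pooled, _ = attn(query, tokens, tokens)         # attention pooling -> one vector
    return pooled.squeeze(0).squeeze(0)             # (FEAT+3,) conditioning vector for IL

# Example call with random tensors standing in for a camera observation.
rgb = torch.rand(3, H, W)
depth = torch.rand(H, W) + 0.5
K = torch.tensor([[40.0, 0.0, W / 2], [0.0, 40.0, H / 2], [0.0, 0.0, 1.0]])
T_cam_to_ee = torch.eye(4)
cond = adapt3r_features(rgb, depth, K, T_cam_to_ee)
```

Because the points are expressed in the end effector's frame rather than the camera's, a change in camera pose changes only the unprojection step, which is roughly why this kind of representation can tolerate viewpoint shifts at test time.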
-
3D Diffusion Policy

Imitation learning provides an efficient way to teach robots dexterous skills; however, learning complex skills robustly and generalizably usually consumes large amounts of human demonstrations. To tackle this challenging problem, we present 3D Diffusion Policy (DP3), a novel visual imitation learning approach that incorporates the power of 3D visual representations into diffusion policies, a class of conditional action generative models. The core design of DP3 is the use of a compact 3D visual representation, extracted from sparse point clouds with an efficient point encoder.

In our experiments involving 72 simulation tasks, DP3 successfully handles most tasks with just 10 demonstrations and surpasses baselines with a 55.3% relative improvement. In 4 real-robot tasks, DP3 demonstrates precise control with a high success rate of 85%, given only 40 demonstrations per task, and shows excellent generalization in diverse aspects, including space, viewpoint, appearance, and instance. Interestingly, in real-robot experiments, DP3 rarely violates safety requirements, in contrast to baseline methods, which frequently do and necessitate human intervention. Our extensive evaluation highlights the critical importance of 3D representations in real-world robot learning.
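To make the recipe concrete, here is a loose PyTorch sketch of the two ingredients named above: a PointNet-style encoder that compresses a sparse point cloud into a compact vector, and a diffusion-style action denoiser conditioned on it. Layer sizes, the toy noise schedule, and the random placeholder batch are assumptions, not the paper's exact architecture.

```python
# Loose sketch of the DP3 recipe: compact 3D representation from a sparse
# point cloud, used to condition a diffusion-style action denoiser.
# All shapes and the noising schedule are illustrative assumptions.
import torch
import torch.nn as nn

N_POINTS, ACT_DIM, EMB = 512, 7, 64

class PointEncoder(nn.Module):
    """PointNet-style encoder: per-point MLP followed by max pooling."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, EMB))

    def forward(self, pts):                        # pts: (B, N, 3)
        return self.mlp(pts).max(dim=1).values     # (B, EMB) compact 3D representation

class Denoiser(nn.Module):
    """Predicts the noise added to an action, given the 3D conditioning and timestep."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ACT_DIM + EMB + 1, 128), nn.ReLU(), nn.Linear(128, ACT_DIM))

    def forward(self, noisy_act, cond, t):
        return self.net(torch.cat([noisy_act, cond, t], dim=-1))

encoder, denoiser = PointEncoder(), Denoiser()

# One training step on placeholder data (random point clouds + demonstrated actions).
pts = torch.randn(8, N_POINTS, 3)          # sparse point clouds from a depth camera
act = torch.randn(8, ACT_DIM)              # demonstrated actions
t = torch.rand(8, 1)                       # diffusion timestep in [0, 1]
noise = torch.randn_like(act)
noisy_act = (1 - t) * act + t * noise      # toy linear noising schedule
cond = encoder(pts)
loss = ((denoiser(noisy_act, cond, t) - noise) ** 2).mean()
loss.backward()
```

At rollout time, the policy would start from random noise and iteratively denoise it into an action, conditioning every step on the point-cloud embedding; the sparse 3D input is what distinguishes this from an RGB-conditioned diffusion policy.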