Robot control has been studied extensively in the context of industrial robots and systems with well-defined environments. But when we add a human to the equation, robot control needs to be safer, better aligned with human movement, and consistent with human behavior patterns. Exoskeletons like Harmony typically use impedance control as a mid-level controller to ensure the wearer's safety. But the robot's movement goal depends on what the movement is meant to achieve. As part of ReNeu, I've worked on two projects focused on high-level control strategies for the Harmony exoskeleton.
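To make the mid-level impedance idea concrete, here is a minimal joint-space impedance law: the robot behaves like a spring-damper around a reference posture rather than rigidly tracking positions, which is what makes it safe for a wearer. This is a generic textbook form, not Harmony's actual controller; the gain matrices `K` and `D` and the reference posture `q_ref` are placeholders.

```python
import numpy as np

def impedance_torque(q, q_dot, q_ref, K, D):
    """Generic joint-space impedance law (illustrative, not Harmony's).

    q, q_dot: current joint angles and velocities
    q_ref:    reference posture the spring pulls toward
    K, D:     stiffness and damping gain matrices

    The wearer can push the arm away from q_ref and feel a gentle
    restoring torque instead of a rigid position servo.
    """
    return K @ (q_ref - q) - D @ q_dot
```

Lowering `K` makes the robot more compliant (safer, less assistance); raising it makes the behavior closer to position control.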
The first is an inverse kinematics (IK) method designed so the robot can follow an end-effector trajectory. This implementation isn't trivial because the Harmony exoskeleton has both unusual parallelogram linkages (which complicate the IK math) and joint coordination constraints. For example, at the shoulder, instead of the usual series connections between the different joints, the robot uses a parallelogram to allow the wearer to elevate/depress and protract/retract their shoulder girdle. On top of this complex mechanism, we impose (through robot control) the scapulohumeral rhythm (SHR), a natural joint coordination that is often lost after a neurological injury such as stroke. I assisted Stefano Gasperina (a visiting researcher at ReNeu) in his goal of solving this IK problem for Harmony. As you can see in the video, we are able to prescribe a trajectory to the robot's end effector and compute the joint angle behaviors needed for that trajectory. The method uses the null space of the robot's Jacobian, which allows us to maintain the SHR while performing the movement. Because of the optimization involved, this implementation required converting from end-effector space to joint-angle space offline. We are now working on making the same algorithm run in real time on the robot.
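A standard way to do this kind of null-space redundancy resolution is differential IK with a projected secondary task: the pseudoinverse term tracks the end-effector velocity, and the null-space projector lets a secondary objective (here, staying close to an SHR-consistent posture) act without disturbing the end-effector motion. The sketch below is my own illustration of the general technique, not Stefano's implementation; the SHR posture `q_shr` and gain `k` are hypothetical placeholders.

```python
import numpy as np

def resolve_redundancy(J, x_dot, q, q_shr, k=1.0):
    """Differential IK with a null-space secondary task (sketch).

    J:     6x7 task Jacobian of the arm
    x_dot: desired 6-D end-effector velocity
    q:     current joint angles (7,)
    q_shr: hypothetical posture satisfying the scapulohumeral rhythm
    k:     gain pulling the joints toward q_shr

    Returns joint velocities that track x_dot exactly while nudging
    the posture toward q_shr inside the one-dimensional null space.
    """
    J_pinv = np.linalg.pinv(J)                 # 7x6 Moore-Penrose pseudoinverse
    q_dot_0 = k * (q_shr - q)                  # secondary task: approach SHR posture
    N = np.eye(J.shape[1]) - J_pinv @ J        # null-space projector (I - J+ J)
    return J_pinv @ x_dot + N @ q_dot_0
```

The key property is that `J @ (N @ q_dot_0) = 0`, so the secondary SHR-seeking motion never perturbs the prescribed end-effector trajectory.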
The second control algorithm I want to mention is one I designed for my dissertation. The Harmony exoskeleton, like the human arm, is kinematically redundant: since the robot controls six-dimensional end-effector motion with seven degrees of freedom, there is an inherent redundancy in the system. Humans exploit this redundancy by coordinating their joints similarly for similar movements. We used this idea to implement a joint angle coordination control law that uses impedance control to enforce joint coordination in real time. The video shows the last three joints of the robot coordinated (J5: shoulder flexion/extension, J6: elbow flexion/extension, and J7: wrist pronation/supination). Moving any of the three joints independently causes the other two joints to move as well, so that the desired coordination is maintained. We plan to use this control law and others like it to design training interventions that teach people novel tasks. For more on the kinds of tasks we use for our training experiments, take a look at my blog post on gamified training environments.
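One minimal way to express such a coordination law is as an impedance that penalizes deviation from a coordinated-joint subspace: the spring acts only on the component of the joint configuration that breaks the coordination, so moving one joint drags the others along. This is an illustrative sketch under that assumption, not the actual dissertation controller; the synergy direction `s` and the gains `K` and `D` are placeholders.

```python
import numpy as np

def coordination_torque(q, q_dot, s, K=20.0, D=2.0):
    """Joint-coordination impedance law (illustrative sketch).

    q, q_dot: angles/velocities of the coordinated joints (e.g. J5-J7)
    s:        vector defining the desired coordination (synergy) direction
    K, D:     scalar stiffness and damping gains

    Applies a spring-damper torque that pulls the joints back onto the
    1-D coordinated subspace span{s}; configurations already on that
    subspace generate no spring torque, so coordinated motion is free.
    """
    s_hat = s / np.linalg.norm(s)
    e = q - s_hat * (s_hat @ q)    # deviation from the coordinated subspace
    return -K * e - D * q_dot      # stiffness on the error, damping on velocity
```

Because the error `e` is orthogonal to `s`, the controller never resists motion along the coordination itself, only departures from it.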