Research
using robots to perform difficult or complicated tasks over long distances
Sending robots to places where it is too dangerous or difficult to send a human, to do tasks that humans are incredible at. And trying to make robots incredible at them too.
remote tasks
My research centers around sending robots to remote locations to manipulate objects there. These locations could be places that we do not want to send humans to (nuclear reactors), places that are hard to send humans to (space), or places where it is simply annoying to send humans (offshore platforms).
In all cases, my focus is enabling those remote tasks to get done. Currently, I believe that means keeping a human operator in control of the robot while gradually adding more and more autonomy. Fully autonomous robots are the future, but we are not close enough to replacing human intuition and decision-making to risk deploying autonomous agents in potentially dangerous areas. Thus, I focus on remote semi-autonomous teleoperation.
Doing tasks remotely introduces a number of challenges. The two biggest are informing the operator about what is happening around the robot without under- or over-loading them with information, and the communication delay, which makes it infeasible for the operator to directly control the robot's motion. Many of my colleagues at UT Austin are working on the first problem (you can learn more here). I am working on the second, which I believe semi-autonomy addresses nicely.
affordances
I focus on "affordances": the idea that an environment offers, or affords, specific agents possible actions in an inherently coupled way. For example, a loosely tightened bolt allows my human fingers to turn it, but a very tight bolt may not. Robotics researchers use affordances to encode robot-environment interactions. With enough encoded interactions, an autonomous robot could reason at a very high level about the chain of actions it needs to take to accomplish a goal.
My research is lower level: I want to model generic affordances for contact tasks (opening doors, flipping switches, turning valves, etc.) in a way that is independent of any specific robot or its autonomous manipulation software. During my PhD, I created a robust framework that maps a contact task's parameters (position, required motion, required forces) to a manipulator's motion, so that contact tasks can be planned and executed with minimal operator involvement.
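To make that concrete, here is a minimal sketch of what such a robot-agnostic task description might look like. The names and fields are hypothetical illustrations, not my framework's actual API: the affordance captures only the task's geometry and force requirements, and a robot-specific layer turns it into end-effector motion.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ContactTaskAffordance:
    """Hypothetical robot-agnostic description of a contact task."""
    axis_point: np.ndarray  # a point on the motion axis (world frame, m)
    axis_dir: np.ndarray    # unit direction of the motion axis
    is_rotation: bool       # True: rotate about the axis (valve); False: translate along it (drawer)
    displacement: float     # required motion, rad (rotation) or m (translation)
    effort: float           # required force (N) or torque (Nm) along/about the axis

def rotation_about_axis(axis, angle):
    """Rodrigues' formula: 3x3 rotation matrix for `angle` about unit `axis`."""
    k = np.asarray(axis, dtype=float)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def end_effector_waypoints(task, grasp_pose, steps=20):
    """Carry a grasp pose (4x4 transform) along the task's required motion,
    yielding end-effector poses for a robot-specific planner to track."""
    for s in np.linspace(0.0, 1.0, steps):
        T = np.eye(4)
        if task.is_rotation:
            R = rotation_about_axis(task.axis_dir, s * task.displacement)
            T[:3, :3] = R
            T[:3, 3] = task.axis_point - R @ task.axis_point  # rotate about the axis line, not the origin
        else:
            T[:3, 3] = s * task.displacement * task.axis_dir
        yield T @ grasp_pose
```

A downstream, robot-specific layer would then track these waypoints with its own inverse kinematics and force control, applying the required effort along the task axis; nothing in the description above refers to a particular manipulator.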
Prior Work
compliant control
My Master's project involved using a mechanically compliant manipulator from HEBI. The project was to deploy it on a mobile base into an underground ventilation tunnel at a nuclear processing site, where the acidity of the process was eating away at the tunnel's cement walls and the radioactivity meant no human could go in to assess their structural integrity. My work combined compliant control methodologies with the robot's mechanical compliance to perform tasks like sample collection, camera positioning, sensor deployment, and potentially onboard maintenance.
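The flavor of compliant control can be illustrated with a generic Cartesian impedance law: command the end effector to behave like a spring-damper about a desired pose, so that contact errors produce bounded forces instead of a stiff position controller fighting the environment. This is a textbook sketch with made-up gains, not the project's actual controller:

```python
import numpy as np

# Hypothetical stiffness (N/m) and damping (N*s/m) per Cartesian axis;
# softer in z so the tool can press against a surface without excessive force.
K = np.diag([200.0, 200.0, 100.0])
D = np.diag([20.0, 20.0, 10.0])

def impedance_force(x, x_dot, x_des, x_dot_des=np.zeros(3)):
    """Commanded end-effector force: a virtual spring-damper pulling the
    tool toward the desired pose. Position error maps to bounded force,
    which is what makes contact-rich tasks tolerant of modeling error."""
    return K @ (x_des - x) + D @ (x_dot_des - x_dot)
```

Pairing a law like this with the arm's mechanical compliance gives softness at both the software and hardware level, which is forgiving when the robot contacts surfaces it cannot sense well.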
sample collection
Part of the tunnel inspection project was sample collection. We needed to collect wall scrapings and mud from the tunnel while isolating the samples from each other and shielding the human operators recovering the vehicle from the samples' potential radioactivity. We came up with a tool change system that was passive (no power required) and light, included a built-in camera at the toolpoint, met the isolation requirements, and provided a place to stow the manipulator when not in use.
12 DoF navigation
I worked on the control of a 12 degree-of-freedom mobile base for the tunnel inspection project. The complexity was dictated by the difficulty of the terrain in the tunnel. Each of the four wheel modules could "drive", "steer", and "lift", where each set of wheels could rotate to raise or lower the base. Controlling the platform posed a number of challenges, including potential self-collisions, changing contact modes (one wheel vs. two wheels down), wheel slip dynamics, and the possibility of full 3D motion (rotation and translation). I wrote a multimodal controller using instant centers of rotation, inverse kinematics, and collision checking to enable the robot to drive in interesting ways on level ground, position the manipulator in 3D, and climb obstacles, including stairs.
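As a taste of the instant-center math: for a desired planar body twist, each wheel's velocity is the body velocity plus a rotational term, and its steer angle points along that velocity, which is equivalent to steering perpendicular to the line from the instantaneous center of rotation to the wheel. Below is a minimal sketch with hypothetical module positions, ignoring the lift joints, slip, self-collision, and contact-mode handling:

```python
import numpy as np

# Hypothetical wheel module positions in the base frame (m), one per corner.
WHEELS = np.array([[ 0.4,  0.3],
                   [ 0.4, -0.3],
                   [-0.4,  0.3],
                   [-0.4, -0.3]])

def wheel_commands(vx, vy, omega):
    """Map a desired planar body twist (m/s, m/s, rad/s) to per-module
    (steer angle, drive speed) commands. Each wheel's velocity is
    v + omega x r, so its steering direction is perpendicular to the
    line joining it to the instant center of rotation."""
    cmds = []
    for px, py in WHEELS:
        wvx = vx - omega * py   # x component of omega x r is -omega * py
        wvy = vy + omega * px   # y component of omega x r is +omega * px
        cmds.append((np.arctan2(wvy, wvx), float(np.hypot(wvx, wvy))))
    return cmds  # [(steer_rad, speed_m_s), ...] per module
```

The real controller on top of this also has to respect steering limits, avoid self-collisions between modules, and handle the changing contact modes when wheels lift off the ground.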