What is Robotics?


My advisor, Dr. Lydia Kavraki, likes to define robots as “systems that observe their environment and change it in response”. If this seems too general to you, then you have begun to perceive the fundamental problem of robotics research: almost any mechanical or organic system can be thought of as a robot. And for all the things that can be called robots, there are as many definitions of the most important questions in robotics. We can’t even agree on the right approach! If you ask a roboticist with a mechanical engineering background to solve a problem, they will solve a very different version of that problem than a roboticist with a computer science or electrical engineering background.

Now, all of this is not to suggest that robotics is totally lost as a field. Indeed, there have been many successes to show that this enterprise is not some pipe dream, from the industrial manipulators that populate factory floors to, more recently, self-driving cars, which have become one of the most prominent commercial applications of robotics. From these examples, it is clear that robotics holds great promise for improving people’s lives—if only we could focus robotics research toward some consistent goal.

A First Definition: Autonomy

As anyone who’s spent any amount of time programming will tell you, human beings are hard to deal with. They’re slow, need their hands held with Graphical User Interfaces (GUIs), and tend to find bugs that you would rather not have to fix. These problems are magnified in robotics; now our bugs have heavy, metal limbs to swing around, and those slow humans might be in the way of the goal that our robot wants to accomplish. But there’s an easy way around these problems: we just never let human and robot meet.

This is not as outlandish a constraint as it might seem. In fact, this is precisely the way that the robots on factory floors operate today. Each robot is locked inside a huge metal cage, and before any human intrudes on this sanctuary they must double- and triple-check that the robot is inoperable. Plenty of tasks that we would like to automate fall into this category, where humans and robots can be blissfully unaware of each other’s existence (e.g., welding, firefighting, construction). Why not simply adopt the attitude that “robotics is everything that can be automated without human input?”

Well, I can think of two reasons. The first is pragmatic: many of the repetitious daily annoyances that people would like to get rid of—cleaning, dishwashing, cooking—require a robot to exist in human spaces where huge metal cages tend to be unwelcome. The second reason is a little meta: simply put, good research does not come from considering autonomy alone. In my estimation, this is because it is too easy to abstract away uncomfortable difficulties when you, as a researcher, are not beholden to human critics. It is too easy to engineer an ad hoc solution to a corner case or wave away the concern that the robot moves at 2 feet a minute. No, robotics must include “humans in the loop”.

A Second Definition: Collaboration

Alright, so we’ve decided that robotics can’t get away with ignoring people. Damn. That’s sort of the fun part of being a shut-in computer science PhD student. Maybe that means that robotics should only be human-robot collaboration? Human-Robot Interaction (HRI) is certainly an established and well-respected subfield of modern robotics (See, for instance, the work of Sonia Chernova or Aude Billard), so perhaps “robotics is everything an engineered system can do to assist a human directly?”

Let’s slow down a little bit. What do we mean by “assist”? After all, if a robot goes by itself to the store and buys groceries, isn’t it “assisting” me to prepare dinner, even if it never sets foot in the kitchen and just drops the purchased food into a bin that I can access from the other side of a protective metal cage? For the sake of establishing a false dichotomy, I am going to say for now that this would not fit our second definition of robotics. The robot was operating autonomously to and from the store, and that just doesn’t seem very collaborative to me. Maybe as a surrogate for our second definition of robotics, we could say that “robotics is everything a robot does within 5 unobstructed feet of a human being.”

Right away, this seems too restrictive. The reasons are fairly obvious: for certain tasks, such as manufacturing, humans would simply be babysitting robots. Clearly, their time is better spent (and human societal progress better achieved) if these tasks can be automated reliably without human attention. I would argue, however, that this second definition of robotics is a better motivator for robotics research. There are problems that researchers simply don’t consider in a vacuum that become patently obvious once human testers are placed near a robot, such as: How does the robot avoid running over a person’s foot? In light of our second definition of robotics, these problems are not distractions or annoyances but rather fundamental research questions, as they should be. Still, this is not a perfect solution; in HRI research, fundamental technical and algorithmic questions must take a backseat to human interface questions at some point. If only we could do both…

A Final Definition: Like Autonomy, but with People

Boom! False dichotomy busted. I’m so proud of what we’ve accomplished together. Yes, the answer here is that neither isolated, segregated robotic autonomy nor purely human-robot collaboration is desirable by itself. We must have both to fully reap the benefits of robotics. Look at Spot Mini, pictured here; this little guy really just wants to help you out with the dishwashing. We need robots that can go to the store by themselves, pick up the groceries, and then come home and help us around the house. We need self-driving cars that can also transfer control to a human driver. We need drones that can put out huge fires and rescue people from burning buildings. These and a million more applications are only possible if we understand (1) how to make robots operate well autonomously, (2) how to make robots collaborate well with humans, and (3) how to decide between these modes of operation.

But now we’re back to the problem that began this article: with such a broad definition of what robotics can and should be, how is a researcher supposed to decide on a consistent direction? To be honest, I’m not sure that a simple answer to that question exists. Robotics is somewhat adversarial to the typical kinds of beauty in research; by their nature robots are complex, tangled systems, whereas most researchers would like to seek concise, elegant solutions. But perhaps by utilizing a taxonomy of goals in robotics (and by recognizing that autonomy, HRI, and the interfaces between them are all important research directions) we can start to reduce the clutter. If I, as a researcher, am able to situate my work solidly in the autonomy camp, then the questions that I need to consider become much simpler.

I will end with one call to action. If this synthesis definition of robotics (which has been the implicit one for decades) is to work, then more attention must be paid to part (3) above. All of the autonomy and HRI research in the world will not matter if robotic systems continue to exhibit only one or the other. As a robotics community, we must be better about recognizing and encouraging work that seeks to integrate both autonomy and human-robot collaboration. Some examples of these projects exist (see interesting recent work by Vaibhav Unhelkar), but more work is clearly needed.

Published by Cannon Lewis on March 31, 2019