Alumni profile: Serena Booth, A.B. ’16

Helping humans understand robots

At MIT, computer science graduate student Serena Booth studies how humans understand and interact with robots.

Computer scientist Serena Booth spends a lot of her time thinking about humans.

“Many people do computer science to understand the computer better, make it faster, get things to work that didn’t before. I instead care mostly about people,” said Booth, A.B. ’16, who concentrated in computer science at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). “How does the human interact with the machine? How do they understand it and use it? How does it change their life?”

Now a graduate student pursuing a computer science Ph.D. at MIT, Booth is working to design better robot and AI collaborators by enabling machines to more effectively explain their behavior to humans.

It’s an area of research she first dipped her toes into during her undergraduate studies at SEAS. While taking “Privacy and Technology” (CS 105), taught by Jim Waldo, Gordon McKay Professor of the Practice of Computer Science, Booth was pushed to think about the relationship between people and technology in a different way.

That new perspective fed into her senior thesis project, in which she studied whether people put too much trust in robotic systems. Booth placed a wheeled robot outside several Harvard houses and tested the conditions under which students would let it into the access-controlled buildings.

“That project was really formative for my research now and the things that I’m interested in. Focusing on whether the person trusted the machine has led me into the research of interpretability,” she said. “How can we give people the information they need to make an informed decision about whether to trust this automation, whether to use it or not?”

Booth and Interactive Robotics Group labmates Yilun Zhou and Ankit Shah present an early version of Bayes-TrEx at the Association for the Advancement of Artificial Intelligence (AAAI) conference in 2020.

After graduation, she joined Google as an associate product manager, working on search and augmented reality projects. Pursuing a Ph.D. had always appealed to her, so after two years she left the company and enrolled at MIT.

“I was concerned about the level of power that Google had and my responsibility as an employee, and how to wield that power appropriately and responsibly,” she said. “I wanted to think about how to manage that kind of power, rather than just participate in it.”

Booth’s work in the Interactive Robotics Group, led by Julie Shah, a professor in MIT’s Department of Aeronautics and Astronautics, focuses on interpretability, or understandability. She has studied the best ways to present large logical formulas so that humans can understand them, and has used cognitive science to explore how people learn a robot’s capabilities and limitations.

One recent project, Bayes-TrEx, seeks to shed light on the notoriously murky inner workings of neural networks. The tool lets a researcher specify a model behavior they want to study and then automatically finds test cases in the data distribution that elicit that behavior.

“The idea is that a person can sit down with this tool and learn about how their model behaves in the world, even though they have a fixed amount of data and it is relatively small,” she said. “They can use this tool to generate new data and see if it matches their expectations as a human being for how this thing should behave.”

Booth and MIT CSAIL colleague Willie Boag, also a Ph.D. student, stand in front of a senator's office on one of their Science Policy Initiative trips to Washington, D.C.

Booth and her colleagues have recently been adapting this project to help humans understand robots better, too. Given a robot controller, they find scenes in which the controller underperforms relative to human expectations. People can then use these scenes as test cases to ensure they create robots that behave as expected.

A big challenge of her work comes from the interdisciplinary nature of the research, which touches many areas of cognitive science.

“Studying people is what inspires me, but it is also so hard. You want to assume that people are rational and consistent and honest and all of these things, and you find out that none of them hold, so you are constantly trying to handle massive amounts of noise in your data,” she said.

While she focuses on understanding how humans and machines interact, she’s also keenly interested in exploring the ethics behind the algorithms that make particularly significant decisions, such as whether an individual is qualified to receive a loan from a bank. For the past few summers, she has taught an ethics class for undergraduates and draws inspiration from the thoughtfulness with which her students approach such thorny problems.

She’s also gotten involved in the MIT Science Policy Initiative, which she will lead as president in the coming year. The group gives students an opportunity to consider the role of science in government and brings them to Congress for hands-on experience in science advocacy.

Booth and her partner enjoy a tandem bike ride.

Booth hopes those two interests will converge in her future career. She’d like to be a computer science faculty member who also works with government, helping regulators better understand the power and pitfalls of robots and AI.

Ensuring that robots are safe and beneficial for humanity will require collaboration between academics and policymakers, she said.

“There is a lot of disappointment in robotics because we have had this promise that we will create more creative jobs and give people freedom to live their lives in new ways, and I think so far what we’ve seen is really the opposite. We’ve created worse jobs for people, like repeatedly picking up and putting down objects where there are expectations on the number of objects that you pick up and put down, and the AI is monitoring you and telling you that you’re not performing well enough. These jobs are dull or harmful to people,” she said. “My long-term objective is to push this in a different direction. I want the robots to be useful to us, and I think a core piece of that is making sure we understand them and can interact with them safely.”