The past and future of AI: A chat with Barbara Grosz

Barbara Grosz has spent her career working to make human-computer interactions as fluent as human-to-human interaction.

Barbara Grosz, Higgins Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), has spent her career working to make human-computer interactions as fluent as human-to-human interaction. In pursuit of that goal, Grosz has explored the foundations of dialogue and the fundamental characteristics of teamwork, and has revolutionized the field of artificial intelligence not once but twice.

Her first major breakthrough was to create the first computational model of discourse, which spawned an entirely new field of research and influenced language-processing technologies. Next, with colleagues, she completely rethought how multi-agent systems collaborate as a team, once again leading the field to a new frontier of discovery. 

“Most researchers are bricklayers, moving the field forward step-by-step,” said Michael Wooldridge, head of the department of computer science at Oxford University. “But the real leaders are the people who see the big picture — the architects of the fields. Barbara is one of those architects. She has shaped the direction of the field in a fundamental way.”

Grosz was awarded the Research Excellence Award of the International Joint Conference on Artificial Intelligence at its meeting in Buenos Aires for her pioneering work in multi-agent systems and natural language processing.

“Barbara’s work brought the field of natural language processing some of its best, most innovative moments, each fundamental in its own right,” said Oliviero Stock of the Fondazione Bruno Kessler, a senior fellow at the Center for Information and Communication Technology in Italy. “Her work on dialogue and discourse processing shifted the field. Before her, the field was much more limited. After her, there were so many more avenues that could be explored in discourse processing.”

“Barbara’s work has had an enormous influence on so many of us in this field,” said Julia Hirschberg, chair of the computer science department at Columbia University.  “The models Barbara and her collaborators created have been central to the way we think about how people plan and communicate with each other about those plans.”

We sat down with Grosz, who is also the former dean of the Radcliffe Institute for Advanced Study, to talk about her work, her field, and building smart machines that work with, not against, people.

Throughout the history of artificial intelligence, many researchers have thought they could ignore the ways humans process language or perform other intelligent acts. As the thinking went, airplanes don’t fly like birds, so why should machines think like humans? But you took a different approach. Why?

If you’re going to build agents that interact with people, you have to think about people’s cognition and the ways they behave. That doesn’t necessarily mean you have to do cognitive modeling — although that is an interesting approach — but you do need to care about how people process information and communicate.

When I was working on speech understanding systems at SRI in the 1970s, other research team members were responsible for syntax and grammar — determining the structure and building a computer representation of the meaning of an individual sentence. Everyone involved in early speech understanding systems knew that wasn’t enough. When people talk, the context matters. They use pronouns and definite descriptions, and they depend on each other to interpret those imprecise expressions appropriately in context. For example, depending on the setting, “the cup” might mean my coffee cup or the cup you received as a gift. We knew that if we were going to have a system that could carry on a dialogue and handle the way people actually spoke, we needed a computational model of dialogue that could track context.

Many researchers thought if they sat in a chair and thought really hard, they could figure it out. I expected that wouldn’t work and devised a way to capture dialogue about the same topic from many different pairs of people. This was actually the first “Wizard of Oz” experiment in dialogue systems, though that name came later. I placed two people in separate rooms and had one give the other instructions on how to put together a piece of equipment — an air compressor. My analysis of the way they talked led to the first computational model of discourse.

We’ve come a long way from that first model of discourse to the era of Siri and Cortana. Has AI reached the goal of Turing’s Imitation Game, in which a ‘thinking machine’ carries on a dialogue well enough to be indistinguishable from a person?

These systems are major accomplishments, but they don’t come close to human dialogue capabilities. When Siri first came out, people said to me, ‘You have nothing left to do, right?’ So, I borrowed a phone with Siri, and it took me two questions to break the system. I asked, “Where are the nearest gas stations?” and then I asked, “Which ones are open?” It replied, “Would you like me to search the web for ‘which ones are open?’” It had no context, no discourse. Siri has improved since then, but it’s still pretty easy to break the system with a question that depends on dialogue context. No current system is thinking to the extent Turing imagined computers might be by now.
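To make that failure concrete, here is a minimal, illustrative sketch of the kind of context tracking a dialogue agent needs in order to interpret the second question. The class and method names (DialogueContext, resolve) are hypothetical, not part of any real assistant’s API or of Grosz’s discourse model; the point is only that a follow-up like “Which ones are open?” cannot be grounded unless the system remembers what the previous turn made salient.

```python
# Toy sketch of dialogue-context tracking (illustrative names, not a real API).

class DialogueContext:
    def __init__(self):
        self.salient_entities = []   # entities introduced by earlier turns

    def update(self, entities):
        # The most recent mentions become the most salient referents.
        self.salient_entities = list(entities)

    def resolve(self, followup):
        # Interpret a context-dependent phrase like "ones" against the
        # entities the previous turn made salient.
        if "ones" in followup and self.salient_entities:
            return self.salient_entities
        return None   # no stored context: the query cannot be grounded


context = DialogueContext()

# Turn 1: "Where are the nearest gas stations?"
stations = ["Shell on Main St", "Mobil on Elm St"]
context.update(stations)

# Turn 2: "Which ones are open?" -- without the stored context,
# "ones" is uninterpretable (the failure described above).
print(context.resolve("Which ones are open?"))
```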

You’ve recently suggested an updated “Turing Test” focused on collaboration, asking whether a computer-agent team member could behave in such a way that its team members would “not notice it’s not human.” What are the biggest challenges to a machine passing this test?

To clarify: I suggested that the way we use computers had changed so much, as had our knowledge of human cognition, that Turing himself might ask a different question now. My new question is rooted in our now knowing that collaboration is essential to intelligent behavior and seems to play a fundamental role in the ways infants learn. Can we design systems that behave so well that they pass for human? One big challenge, which my team is addressing in our research, is getting delegation to work well. Delegation of particular responsibilities to different team members is a hallmark of teamwork. To make teamwork work (or as we might say in computer science, to make it tractable), team members have to share information but not overwhelm each other with too much information. An enormous challenge for systems is to be able to determine what information to share with whom, and when.

For example, I’m making dinner with Bobby and Susie. Susie is assigned appetizers, Bobby is assigned the main dish and I’m assigned dessert. I don’t ask Bobby how he is making the main course because if he has to tell me everything he’s doing, it’s a huge cognitive load. That said, it’s still crucial to know certain things, such as whether we both need the same pan.
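As a rough illustration of that trade-off, the toy sketch below shares an update only with teammates whose own task touches the same resource, rather than telling everyone everything. The names, the resource lists, and the “shared resource” rule are invented for illustration; they stand in for, and greatly simplify, the kind of information-sharing decisions Grosz describes, not her group’s actual algorithms.

```python
# Toy sketch: decide who needs to hear about an update, based on the
# (assumed) rule that only teammates whose task uses the same resource care.

tasks = {
    "Susie":   {"dish": "appetizers",  "resources": {"oven"}},
    "Bobby":   {"dish": "main course", "resources": {"saute pan", "oven"}},
    "Barbara": {"dish": "dessert",     "resources": {"saute pan", "mixer"}},
}

def who_needs_to_know(actor, resource_in_use):
    """Share an update only with teammates whose own task uses that resource."""
    return [name for name, task in tasks.items()
            if name != actor and resource_in_use in task["resources"]]

# Bobby starts using the saute pan; only Barbara needs to hear about it,
# because her dessert also needs that pan. Susie is spared the detail.
print(who_needs_to_know("Bobby", "saute pan"))   # -> ['Barbara']
```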

This is a big challenge in healthcare, right?

Right. We’re working with a pediatrician at Stanford University Hospital whose patients have complex diseases; many of them see 10 to 15 doctors. The cognitive load of coordinating care among 15 people (turning the group into a real team) is enormous — no caregiver needs to see everything everyone else is doing, but they may need to know something about each other’s work. A key question is: when one member of the team learns something new about a patient, who should get that information, and when? Our goal is to build the foundations for smart computer care-coordination systems to help. To do that, we need to figure out how to effectively compute the information to be shared in the absence of detailed models of how people are carrying out their responsibilities. If we do this, we’ll also know how to build computer agents that are good teammates.

Computer systems in healthcare are notoriously taxing. What do you tell students, who will be designing these systems in the future, about usability?

One of the things I want students to learn is the importance of designing artifacts for the people who will use them. A computer system should make us feel smarter, not dumber, and work seamlessly with us, like a human partner. I tell students to look for limitations and cracks in a system and to think about the unintended consequences of those limitations. If you’re focused only on what you’re building, you’re blind to what a system may do that you hadn’t thought about.

Those unintended consequences have been in the news a lot recently, with Stephen Hawking and Elon Musk warning of the dangers of artificial intelligence. What do you make of that?

The fear of AI systems running amok or taking over the world is greatly exaggerated. Some of the predictions are based on lack of understanding of the current state of AI (or even of what’s actually computable).  Also, it’s important not to lose sight of who’s in charge:  people design AI systems, and they can design any number of plugs to pull. If we design systems to work with people — which has always been my goal — then the probability of them running amok is greatly lowered.

Even so, as the people who develop these systems, AI scientists and practitioners need to take responsibility for the uses to which AI capabilities are put. We should be clear about the limitations of the technology. Should we think, and talk, about potential negative or unintended consequences? Absolutely! Are these concerns reasons not to develop systems that are smart? Absolutely not.

On the contrary, a smart autonomous system is certainly preferable to a dumb one.

Note: This fall, Grosz is teaching a new course that explores many of these issues, Intelligent Systems: Design and Ethical Challenges.


Press Contact

Leah Burrows | 617-496-1351 | lburrows@seas.harvard.edu