
The Seattle MacArthur Fellow who teaches common sense to computers

UW researcher Dr. Yejin Choi has spent a career pursuing 'risky' AI research that bridges the communication gap between humans and technology.

by Hannah Weinberger

How might one feel when they win a MacArthur award?

Proud. Happy. Accomplished. Excited. Good. Might want to celebrate with family and friends.

That’s according to COMET, an experimental text-based artificial intelligence web application, when asked to think about the context behind the statement “[Person] wins a MacArthur award.” Dr. Yejin Choi nods knowingly at the application’s output on her shared Zoom screen: The program generates common-sense assumptions based on simple statements. She’s demonstrating the program, which stands for COMmonsEnse Transformers, for Crosscut on Wednesday, Oct. 19, a week after being announced by the John D. and Catherine T. MacArthur Foundation as one of 25 MacArthur Fellows.
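
Choi ran the demo in a web interface, but a rough sense of how a COMET-style model can be queried is sketched below in a few lines of Python using the open-source Hugging Face transformers library. This is a minimal sketch, not her team's actual code: the checkpoint name is a placeholder, and the event/relation prompt format and the "xReact" relation tag ("how does this person feel afterwards?") are assumptions about COMET-style sequence-to-sequence releases rather than details from her demonstration.

```python
# Minimal sketch: querying a hypothetical COMET-style seq2seq checkpoint
# with the Hugging Face transformers library.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "path/to/comet-style-checkpoint"  # placeholder, not an official model ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

# COMET-style models pair a simple event with a relation tag and generate
# the inferred common-sense consequence as short text.
event = "PersonX wins a MacArthur award"
relation = "xReact"  # assumed tag meaning "how does PersonX feel afterwards?"

inputs = tokenizer(f"{event} {relation} [GEN]", return_tensors="pt")
outputs = model.generate(
    **inputs,
    num_beams=5,
    num_return_sequences=5,
    max_new_tokens=16,
)

for seq in outputs:
    # Expected outputs are short phrases such as "proud" or "happy".
    print(tokenizer.decode(seq, skip_special_tokens=True))
```
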

Choi, a professor in the University of Washington’s Paul G. Allen School of Computer Science & Engineering, received the designation and an $800,000 “genius grant” for her groundbreaking work in natural language processing, a subfield of artificial intelligence that explores technology’s ability to understand and respond to human language.

Natural language processing research impacts all of us, whether or not we interact with artificial intelligence directly. Every time we ask a smart device like Siri or Alexa to remind us to buy milk, woozily type an early-morning text relying on AutoCorrect’s help or allow Google to autocomplete our search queries, we’re asking artificial intelligence programs to analyze our voices and keystrokes and correctly interpret our requests. And increasingly, this technology is key to global business strategy, involved in everything from supply chain management to healthcare.

But computers still take our requests literally, without understanding the “whys” behind our questions. The processors behind AI assistants don’t inherently understand ethics or social norms, slang or context.

“Human language, regardless of which country’s language, is fascinatingly ambiguous,” Choi said. “When people say, ‘Can you pass me the salt bottle?’, I’m not asking you whether you’re capable of doing so, right? So there’s a lot of implied meanings.”

At worst, creating AI algorithms based on content scraped from the internet can riddle them with racism and misogyny. That means they can be not only unhelpful at times, but also actively harmful.

Choi works at the vanguard of research meant to give artificial intelligence programs the context they need to figure out what we really mean and answer us in ways that are both accurate and ethical. In addition to COMET, she helped develop Grover, an AI “fake news” detector, and Ask Delphi, an AI advice generator that judges whether certain courses of action or statements are moral, based on what it’s processed from online advice communities.

Crosscut recently caught up with Choi to talk about her MacArthur honor, demo some of her research projects and discuss the responsibility she feels to help AI develop ethically. This conversation has been condensed and lightly edited for length and clarity.

Crosscut: How did you feel when you found out that you’d won this award?
Choi: I came a long way, is one way to put it. I consider myself more of a late bloomer: a bit weird and working on risky projects that may or may not be promising, but certainly adventurous.

The reason I chose to work that way wasn't necessarily because I anticipated an award like this in the end, but rather that I felt I’m kind of nobody, and if I try something risky and fail, nobody will notice. Even if I fail, maybe we will learn something from that experience. I felt that, that way, I could contribute better to the community than [by] working on what other, smarter people can do.

What first attracted you to AI research, especially the risky aspects you’ve mentioned?
I wanted to study computer programs that can understand language. I was attracted to language and intelligence broadly, and the role of language in human intelligence. We use language to learn, we use language to communicate, we use language to create new things. We conceptualize verbally, and that was fascinating to me, perhaps because I wasn't very good with language growing up. Now my job requires me to write a lot and speak a lot, so I became much better at it.

I had a hunch that intelligence is really important — but it was just a vague hunch that I had. I was gambling with my career.

It became a lot more exciting than I anticipated.

How much does AI understand us right now?
Computers are like parrots in the sense that they can repeat what humans said — much better than a parrot — but they don’t truly understand. That’s the problem: If you deviate a little bit from frequent patterns, that’s where they start to make strange mistakes humans would never make.

Computers can appear to be creative, maybe generating something a little bit weird and different, and humans tend to project meaning onto it. But the truth is, there’s no sentience or understanding.

Dr. Yejin Choi’s research focuses on ways to bridge the communication gap between humans and computers. (Amanda Snyder/Crosscut)

What has trying to teach computers revealed to you about how people learn?
When I think about how a child learns, they are able to ask a lot of questions about very simple things, and their caregiver will say things that they would never tell other adults — very basic declarative definitions or declarative explanations about how the world works. If you imagine that the child doesn't have access to that caregiver’s interactions at all, and if the child is only provided with YouTube, Reddit, The New York Times, I don't think that the child can actually learn successfully.

Humans invest so much effort in developing educational material for people, like different textbooks [that get regularly updated], whereas with AI, we just feed it raw data. What comes out of that is AI showing racism and sexism and all the bad stuff that's out there on the internet.

How does that make you feel about the responsibility that comes with training these programs?
There are a lot of responsibilities that are new to AI researchers, especially around ethical implications. Even several years ago, when AI didn't work as well, people didn't get as concerned about it because nothing worked, so it didn’t matter. But now it's becoming more integral to humans' everyday lives.

I do think that there has to be more policy and regulations governing how AI is developed and used. But at the same time, I don't think that alone will be enough. I think AI researchers also need to learn more about these issues and think about them, and also focus on developing AI, so that AI can learn to align better with humans.

Some of the AI programs you’ve designed are intended to model how humans decide what’s ethical. What are some of the challenges you’ve run into in the process?
The challenge is that humans may or may not agree on whether something is racist. It's fascinating that some people think a comment is just a matter of freedom of speech, or that there's no problem, when I tend to think of it as a clear case of a microaggression. But then, even if I think that way, human annotators [who label the information AIs train on to help them process information] may or may not be aligned with one's viewpoint. So it's a human challenge that AI is not necessarily going to solve, but at least we can try to make things better. That's my hope. And AI, in some sense, can really reveal what sort of challenges we have. It's an interesting mirror.
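
The disagreement Choi describes can be made concrete with a toy sketch. This is only an illustration of the annotation problem, not her team's actual pipeline; the labels and data below are invented for the example.

```python
# Toy illustration: several annotators label the same comments, and their
# labels may not agree. A common, simple way to handle this is to keep the
# majority label along with how split the vote was.
from collections import Counter

# Hypothetical data: each inner list holds the labels three annotators gave one comment.
annotations = [
    ["microaggression", "acceptable", "microaggression"],
    ["acceptable", "acceptable", "acceptable"],
    ["microaggression", "microaggression", "acceptable"],
]

for labels in annotations:
    counts = Counter(labels)
    majority_label, votes = counts.most_common(1)[0]
    agreement = votes / len(labels)  # fraction of annotators who agreed with the majority
    print(f"majority={majority_label!r}, agreement={agreement:.0%}, all votes={dict(counts)}")
```

Keeping the full vote split, rather than collapsing every example to a single "correct" answer, is one way a dataset can record where humans disagree instead of hiding it.
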

How could greater diversity in who develops these programs affect how ethical they become?
I think, minimally, AI could learn to recognize and respect when there's disagreement, as opposed to siding with one political view and then insisting on it to everybody. So part of the challenge is to avoid clearly unjust cases while at the same time respecting where humans currently disagree.

Can AI learn to be ethical or moral?
I don't think AI will necessarily be the most moral, especially when humans have never agreed upon a moral framework. I don't think we will ever arrive at one conclusion about what's right and wrong. Hopefully, AI should be able to align with that much, at least, like a minimal code of conduct.

So if we're deploying these tools that don't necessarily have common sense yet, it sounds like the people creating them need to be willing to bear some responsibility for them.
Yeah.