Colby Debates a Blueprint for an AI Bill of Rights

The campus hears from a coauthor of the White House’s statement of principles for artificial intelligence

Sorelle Friedler talks with students during Assistant Professor of Anthropology Farah Qureshi’s AI and Inequality class. Friedler is coauthor of the White House Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights.
By Abigail Curtis. Photography by Ashley L. Conti.
April 20, 2023

A blueprint is a guide to making something—a way to move forward, in other words. 

Colby students and faculty recently considered some of the ways the United States might ethically move forward in the field of artificial intelligence when the College hosted a conversation with computer scientist Sorelle Friedler, a coauthor of the White House Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights.

The document, which was published last fall, was conceived as a way to help guide the design, development, and deployment of artificial intelligence and other automated systems so they protect the rights of the American public. It can’t come soon enough for some. Since the blueprint’s publication, AI developers have introduced new technologies that write essays, create artwork, and produce fake photos that look real. 

The blueprint is a critical step, according to Alison Beyea, executive director of the Goldfarb Center for Public Affairs, which hosted the event in collaboration with the Davis Institute for Artificial Intelligence and the departments of computer science, anthropology, and science, technology, and society. 

“It would be a mistake to believe only the darkest warnings of science fiction or to overestimate the worst possible outcomes of a new technology,” Beyea, the former director of the ACLU of Maine, said before Friedler’s talk last week. “But it would be an even bigger mistake to ignore the warnings completely and race forward without doing the hard work of building a social and political framework that protects the common good.” 

The promise and peril of AI

Artificial intelligence, or AI, is a quickly evolving technology that mimics human intelligence to perform tasks. It has already had big effects in the fields of health care, transportation, manufacturing, and education, and experts think it will continue to make changes both large and small in the way we work, our expectations of privacy, and more. 

A Colby student demonstrated an AI program before the recent Blueprint for an AI Bill of Rights presentation on campus.

There’s both promise and peril with the use of AI technology. Just last year, AI models and algorithms helped fight climate change, diagnose diseases, protect biodiversity, and identify buried landmines from a safe distance. Those same technologies can surveil people in their neighborhoods, workplaces, and schools, and implement biased lending and hiring practices that further harm marginalized communities. 

The AI Bill of Rights contains no laws and is not a regulatory or enforceable document. Instead, it is a statement of principles, Friedler said, adding that there already are laws in place to protect people’s rights, opportunities, and civil liberties. 

“Our goal here was really to cut through a lot of that noise,” said Friedler, whose parents are Louis Friedler ’66 and Sharon Eschenbeck Friedler ’70. “There is, reasonably, lots of excitement when technology can do something new. But we can’t make policy in that sort of reactive, fast-moving way, and we don’t need to. There are plenty of things that are based in our values that we know hold true, even if these [automated] systems become more powerful.” 

A statement of values

The AI Bill of Rights may be a good starting point, but it certainly can’t address all the potential concerns about the ever-changing technology, according to Amanda Stent, director of the Davis Institute for Artificial Intelligence. For example, new regulations for AI have been adopted in the European Union and China, she said, but especially in China, those may not lead to more clarity. 

“The Chinese regulation says that the producers of generative AI have to make sure that the AI’s output is factual and not discriminatory, but what ‘factual and discriminatory’ mean in China may not match what you and I think those terms mean on the surface,” she said.

In the United States, a call for public comment on proposed AI regulation has her asking similar questions. If future AI regulation requires that output protect people’s privacy and be factual, there could be troubling ramifications. Who decides what is true? 

Students from different disciplines, including natural sciences, social sciences, and the humanities, came together to discuss the future of AI.

“It’ll come down to what do we as a society think is factual about things like birth control and racism,” Stent said. 

She and her students also spend a lot of time talking about what happens when values come into conflict with each other. 

“The AI Bill of Rights is really a statement of values, like privacy and autonomy,” Stent said. “Let’s say that safety comes into conflict with privacy. If somebody is planning an attack, and in order to figure out who’s planning the attack you have to poke into everybody’s cell phone messages, which of those values wins in that context? It’s going to be a subject for debate for a long time.” 

Using technology to help

For Beyea, having an honest conversation about AI, which she called a “massive technological advancement,” was important. So, too, she said, was the way the Friedler event brought together students and faculty from different disciplines, including natural sciences, social sciences, and the humanities, to think and talk about it. 

“To me, the advancement of AI is the kind of disruption and rapid change that will cause us to rethink the rules,” she said. “We’re going to need smart, talented Colby students to engage in developing meaningful policy to make sure that we’re regulating AI and that AI’s not regulating us.” 

Alexandra Gillespie ’25, a computational psychology major from New Canaan, Conn., hopes to do just that. She said she was inspired by Friedler, whom she described as a trailblazer in the field of ethical AI. 

“My long-term goal is to make sure that people are using technology to help people rather than to just improve technology for the sake of improving technology,” she said.

Gillespie doesn’t expect that to be easy. 

“The struggle is that a lot of computer scientists will get so wrapped up in being a computer scientist that they forget to be a person, which is understandable because what they’re doing is incredible,” she said. “But it’s also terrifying. If you lose that sense of humanity, and the goals that come along with that, you don’t stand to actually make any developments that are worthwhile.” 

Stent hopes that these and other student views will be shared with and heard by lawmakers and policymakers. 

“Young people today get to decide what is the future that they want, but they only get to decide it if they speak up,” she said. “I think the take-home message really should be that Colby’s students should figure out the world that they want to live in five or 10 or 15 years from now and speak up, because now is a time where they can really make a difference.” 
