Rethinking AI Responsibility

Francesca Tripodi is putting people at the heart of AI — challenging bias, teaching ethics, and shaping a more responsible future for technology.

Francesca Tripodi developed the data science and AI ethics course for the master's program in the UNC School of Data Science and Society. She studies how AI is reshaping search engines. (photo by Megan Mendenhall)
June 23rd, 2025

“With great power comes great responsibility.” Francesca Tripodi believes this iconic quote from Uncle Ben in “Spider-Man” captures the challenge of working with powerful technologies like artificial intelligence (AI).

As a socio-technologist, Tripodi studies how society and technology shape one another. Her journey into this field began unexpectedly in the mid-2000s, while she was a master’s student at Georgetown University. For her thesis, she studied a group of domestic workers learning how to use the internet at a Cisco Networking Academy in Costa Rica. Cisco measured success by job growth, but Tripodi found the gains went far beyond employment.

“It helped them feel more powerful in their current jobs,” Tripodi reflects. “They were able to look up their labor rights and call home using Skype. They didn’t have cell phones, but they could afford to go to an internet café to connect with their families.”

That experience sparked a deeper interest in how people use technology and how emerging technologies shape society. After graduating, Tripodi pursued a PhD in sociology at the University of Virginia, where she was mentored by media scholars Andrea Press and Siva Vaidhyanathan. She went on to conduct ethnographic research on platforms like Yik Yak, an anonymous messaging app popular on college campuses, and Wikipedia.

Today, as a professor in the UNC School of Information and Library Science, Tripodi studies how AI is reshaping search engines like Google — and how it can amplify the biases already present in the data used to train these systems. She also teaches “Governance, Bias, and Ethics in Data Science and AI,” a master’s-level course within the UNC School of Data Science and Society. Through her research and teaching, she unpacks how AI is changing how we access and understand information.

AI is rapidly expanding, with the global market projected to reach $1.34 trillion by 2030. It’s already being used in everything from self-driving cars to surgical tools to health apps. And while AI offers benefits like increased efficiency and improved decision-making, it also raises serious concerns.

Impact Report

While artificial intelligence (AI) is touted for benefits like increased efficiency and improved decision-making, it also raises concerns about environmental impact, data privacy, algorithmic bias, and workforce disruption.

Francesca Tripodi designed the ethics course for UNC-Chapel Hill’s Master of Applied Data Science program. The class examines the history of data science and AI, their impact on privacy, and how they can introduce bias in areas like policing, the judicial system, labor and employment, health, education, finance, government, and entertainment.

By 2026, AI data centers are expected to consume 1,050 terawatt-hours of electricity, roughly the amount Japan uses in a year. That level of demand could generate massive amounts of carbon emissions and strain the local water supplies used for cooling. And that’s just the environmental impact. Other challenges include data privacy, algorithmic bias, and workforce disruption.

The truth is: AI isn’t perfect. And while researchers at Carolina are using it to make life-changing discoveries, they’re also learning to navigate its flaws — and training the next generation of data scientists to do the same. This holistic approach encourages UNC-Chapel Hill students and professors like Tripodi to think critically as the technology evolves.

UNC Research Stories sat down with Tripodi to discuss these issues and why ethics must be at the heart of AI development.

You teach a master’s-level course on AI ethics. How did you develop the curriculum and what kind of information does it include?

Francesca Tripodi: Before designing my syllabus, I reached out to faculty at information schools across the U.S. who teach information or data ethics. I asked for their syllabi to identify overlaps and gaps. I also drew from “Data Feminism” by MIT’s Catherine D’Ignazio and Emory University’s Lauren Klein, which helped lay the groundwork for understanding data ethics.

I considered the role mathematicians play in AI and applied the idea of the Hippocratic Oath — the ethical principles used by doctors — to data science. We often think of math as objective, but mathematicians and data scientists hold a lot of power. Most people can’t do the level of math they can, and they should consider how they can do the most good while minimizing harm.

I also challenge the idea that ethics is purely a Western concept. We explore Buddhist, Confucian, Islamic, and Hindu thought, along with ethical theories like deontology, which judges actions by rules and duties, and consequentialism, which judges them by their outcomes.

Finally, we apply these frameworks to examine how data science and AI impact privacy and create biases in areas like policing, the judicial system, labor and employment, health, education, finance, government, and entertainment.

How do ethical considerations shape the way we collect and use data in AI systems?

FT: Ethics are messy. You can’t just “do” ethics; you have to keep incorporating them. Plus, ethical frameworks are often at odds with one another. In AI and data science, there’s this idea of creating unbiased automated decision-making. But I try to teach students that everything — from how you define the problem to the data you use — is shaped by human choices.

It’s easy to say, “Wouldn’t it be cool if AI could identify cancer?” And yes, I’m pro identifying cancer. But to train that model, you need medical data, often from people who’ve died of cancer.

That makes me think of Henrietta Lacks. Her cancer cells became one of the most important cell lines in medical research, advancing the study of countless diseases. But her cells were taken without her consent.

And so what concerns me is: How are we getting the data? Are we getting access to data from places with more lax consent procedures? Are we creating agreements with other countries where citizens don’t have the same data rights? What are the larger societal consequences?

Bias in AI is a widely discussed issue. Explain some of the complexities around it.

FT: One of my findings in studying Wikipedia is that women are underrepresented in English-language entries and that women’s biographies are more likely to be challenged as non-notable.

Wikipedia data is a major source for AI training because it’s free and openly licensed. If you ask ChatGPT to name 10 notable scientists, typically only one is a woman. What’s frustrating is when people respond, “Well, maybe that’s just accurate.” We know it’s not. These outputs reinforce stereotypes and reflect the absences in the training data.

Another example of this is search engines. Before AI, search engines would rank websites; now they offer overview summaries with links under each sentence. A postdoctoral scholar in my lab, Anna Beers, is auditing these citations and has found they often pull from free sources like Wikipedia, Reddit, and YouTube.

As AI tools become more prominent, I worry they’re pulling narratives without recognizing how bias in the original data shapes the output. It doesn’t erase the bias — it amplifies it.

What are the pros and cons of using AI tools in everyday life?

FT: I think all tools have the potential to help or harm. Take ChatGPT. I used to make camping lists and always forgot something. ChatGPT generated a checklist in seconds and saved me hours. AI can save time and increase clarity.

But let’s look at other applications. For example, new e-triage tools are being used to determine when someone sees a doctor. In theory, they reduce bias: patients might otherwise be seen out of order because of how they look or act, or because of underlying social biases. But those same biases may already be embedded in the AI’s training data, and nurses, doctors, and patients can’t easily override the algorithm when their experience or “gut instinct” tells them otherwise.

What worries me is that we’re investing heavily in machines and not in people. These systems are marketed as neutral, but they’re really about cutting costs — and what’s being cut is investment in human beings. For every task we automate, could we instead invest in human infrastructure?

People often think of AI in extremes — either as a miraculous tool or an existential threat. How do you approach this kind of polarization in your work?

FT: This kind of polarization has happened throughout history. Take the light bulb — it was revolutionary. People could stay up late, read after dark, and be more productive. But it also led to less sleep and more time indoors.

Every new technology has pros and cons. In our lifetimes, we’ve seen this with the internet. Some say it’s the best thing ever; others say it’s the worst. The truth is, we land somewhere in the middle.

There’s a lot of uncertainty around what AI means for us. We need to experiment with it and understand it before rushing to judgment. And I believe we can still foster critical thinking — especially in classrooms. We just need to rethink the assignment.

What role should private companies, governments, and universities play in setting the ethical boundaries for AI development and deployment?

FT: The corporate development of AI is key. Companies have a responsibility to approach it with integrity and caution — not just rush to monetize it without understanding the long-term impacts.

Governments also have a role to play. It’s disappointing that we still lack real legislation around data privacy and governance. The federal budget reconciliation bill is especially concerning. It would strip states of the ability to regulate data, which runs counter to the U.S. federal structure.

At the education level, we need to teach students how to use these tools responsibly, think ethically, and help improve them.

Francesca Tripodi is an associate professor in the UNC School of Information and Library Science, lead faculty within the UNC School of Data Science and Society, a principal investigator at the Center for Information Technology and Public Life, and an affiliate at the Data & Society Research Institute.