
FAQs and Glossary: “If We Don’t, Who Will?”

June 24, 2025
Stephanie Dinkins
Wondering how stories, data, and AI come together in Stephanie Dinkins’ public art project If We Don’t, Who Will? In the FAQs and glossary below, Dinkins shares how the project centers equity and care, and invites everyday people to shape the technologies that influence our lives.

If We Don’t, Who Will? is a public art project by artist and technologist Stephanie Dinkins. The project is a performative and functional artificial intelligence laboratory in a public plaza in Downtown Brooklyn, June 25 – September 28, 2025.

Frequently Asked Questions

1. Why are you collecting stories from the public?

We’re gathering self-determined stories as a way to build datasets from the ground up—datasets rooted in care, specificity, and the multitudes of human experience. These aren’t just stories. They’re interventions—acts of refusal against systems that flatten, erase, or misrepresent us. We believe the people most impacted by technology should also shape the foundations of it. If we don’t, who will?

2. How will my story be used?

Your contribution becomes part of a data commons—created by and for the public—that helps train experimental AI models made with values of care, transparency, and community input. The system doesn’t just consume your story; it reflects it back to you through generative visuals and shared dialogue. None of this is for commercial use. It’s for collective insight and critical play.

3. Do I have to be an expert in AI to participate?

Absolutely not. We believe everyone has expertise—on their lives, cultures, fears, and futures. This work is designed for people who’ve never thought about AI, who are skeptical of it, or who are deeply curious. Your presence here is the expertise. Together, we demystify the system by touching it, talking to it, resisting it, and remaking it.

4. How does this make AI more equitable?

AI is not neutral. It inherits and amplifies the histories and hierarchies encoded into its training data. By contributing stories that speak from the margins, the silenced, and the overlooked, we challenge that foundation. This isn’t about fixing AI by adding “diverse voices.” It’s about reimagining the systems entirely, from the data up, with community consent and care at the center. This is a living model for how justice might live in our technologies.

5. What kinds of stories are you looking for?

We’re not looking for perfect narratives—we’re looking for truths. Stories that speak to how you move through the world, what you fear, what you love, what you hope for. The unpolished, the fragmented, the poetic and the practical. Every offering contributes to a more nuanced collective archive of being.

6. What will you do with my data?

We handle all contributions with reverence and radical care. Your story—your data—is anonymized, randomized, and stripped of identifying information before it enters the system. We don’t extract; we reciprocate. We don’t collect; we co-create. You’re invited to share only what you feel good about releasing to the world, knowing it will help challenge the dominant stories machines are learning to tell.
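
To picture what “anonymized, randomized, and stripped of identifying information” can mean in practice, here is a minimal sketch in Python. It is an illustration only, not the project’s actual pipeline, and the field names (story_text, name, email) are hypothetical: identifying fields are simply never stored, and the order of entries is shuffled so a story cannot be traced back to when someone contributed it.

import random

# Hypothetical sketch, not the project's actual pipeline.
def prepare(story_text, name=None, email=None):
    # only the story itself is kept; name and email are never stored
    return {"story": story_text}

def build_commons(contributions):
    records = [prepare(**c) for c in contributions]
    random.shuffle(records)  # randomize order so entries can't be matched
                             # to the sequence in which people contributed
    return records

commons = build_commons([
    {"story_text": "My grandmother taught me to save seeds.", "name": "A."},
    {"story_text": "I worry about being misread by machines.", "email": "a@example.com"},
])
print(commons)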

7. What’s the AI lab experience like?

The lab is a container for reflection, creativity, and resistance: equal parts sculpture, studio, and salon, housed in an upcycled shipping container created collaboratively by Dinkins and LOT-EK. Here, you can share your story, see how AI interprets it, and ask deeper questions about what this technology reflects and what it leaves out.

8. Why is this called If We Don’t, Who Will?

Because silence is not an option. Because systems are being built with or without us—and the stakes are too high to stand by. This is the first chapter of The Stories We Tell Our Machines, a body of work that insists artists, neighbors, and communities—not just engineers—must shape the systems that are shaping us. If we don’t show up and make different futures possible, who will?

9. Isn’t AI harmful to the environment? How does this project address that?

Yes. The energy costs of large-scale AI systems are real and urgent. That’s why this project resists the logic of “scale at all costs.” Our models are bespoke, intentionally small, and experimental. We choose care over speed, local over global, and nuance over dominance. Environmental justice is inseparable from technological justice—and both must start with community, not capital.

Glossary: If We Don’t, Who Will?

AI Ecosystem

The interconnected network of technologies, data sources, policies, institutions, and people that shape how artificial intelligence operates and impacts our lives. Ecosystems are never neutral. They are made, and they can be remade.

Ancestral Time

A way of understanding time that connects the past, present, and future through a sense of intergenerational community and responsibility. It emphasizes the ongoing influence of ancestors on the present and the responsibilities we have towards future generations. 

As Ralph Ellison wrote in Invisible Man, “…the end is in the beginning and lies far ahead.”

Bias

An embedded, often invisible preference or assumption baked into data, systems, and institutions. Bias is not just a technical flaw—it is a reflection of social histories, hierarchies, and exclusions. We don’t pretend to remove bias—we name it and work to counterbalance it.

Co-Creation

The process of building systems, datasets, or experiences with—not just for—communities. Co-creation means design and decision-making are shared, distributed, and reciprocal. It refuses the top-down model of “innovation.”

Collaborative Dataset

A dataset built through open participation and guided curation. It reflects particular cultures, experiences, and identities—especially from those historically marginalized or misrepresented in mainstream data systems. These datasets are not objective—they are intentional.

Cultural Specificity

The opposite of generic. It refers to practices, languages, aesthetics, and meanings that emerge from particular communities and lineages. It means we recognize the difference between a wedding sari and a white gown—not just visually, but contextually.

Data Commons

A shared, community-created resource made up of stories, images, ideas, and terms offered by everyday people—not corporations or institutions. Data commons are rooted in principles of access, consent, transparency, and mutual benefit. Unlike most datasets that extract information, commons are about collective contribution and careful stewardship.

Generative AI

A type of artificial intelligence that creates images, text, or sounds by remixing patterns from existing data. It doesn’t think or feel—it predicts based on what it’s seen before.

In this project, Generative AI is a mirror. We use it to reveal what’s missing or distorted in mainstream data—and to show what’s possible when the dataset reflects our communities instead. Generative AI as an approach may soon be obsolete. Prepare for what’s next.
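
To make “it predicts based on what it’s seen before” concrete, here is a toy sketch, assuming the simplest possible generator rather than the models used in this lab: it counts which word follows which in a training text, then produces new text only by replaying those counts. Anything absent from the data can never appear in the output.

import random
from collections import defaultdict

# Toy illustration, not the project's models: a generator that only
# replays patterns it has already seen.
training_text = "we tell our stories so the machines learn our stories"
words = training_text.split()

follows = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)   # remember which word followed which

word = "we"
output = [word]
for _ in range(6):
    options = follows.get(word)
    if not options:
        break
    word = random.choice(options)  # predict the next word from past patterns
    output.append(word)

print(" ".join(output))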

Machine Learning

A branch of artificial intelligence where machines “learn” patterns from data in order to make predictions or generate new content. Most machine learning systems mirror the biases in the data they’re fed. Our work uses machine learning not to automate, but to expose, reflect, and question.
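
As a hypothetical illustration of that mirroring (our own toy example, not code from the project), the sketch below “learns” nothing more than word counts from a few labeled examples. Because the examples are skewed, its prediction inherits the skew.

from collections import Counter

# Minimal, hypothetical sketch: "learning" here is just counting which
# words appear with which label, so skewed examples yield skewed predictions.
examples = [
    ("the engineer fixed the server", "work"),
    ("the engineer wrote the code", "work"),
    ("the artist painted a mural", "leisure"),  # artists appear only at leisure
]

counts = {"work": Counter(), "leisure": Counter()}
for text, label in examples:
    counts[label].update(text.split())          # learn word/label patterns

def predict(text):
    scores = {label: sum(c[w] for w in text.split()) for label, c in counts.items()}
    return max(scores, key=scores.get)

print(predict("an artist built a studio"))      # "leisure": the skewed data
                                                # never showed an artist at work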

Nuance

The details, contradictions, and cultural context that make stories and identities complex. Tech often treats nuance as a problem. We treat it as the point.

Nuance is what makes a label more than a tag, and a story more than data. Without it, AI systems fail to see us clearly. With it, we build technology that respects who we really are.

Radical Care

An ongoing, intentional practice that values context, history, and humanity over efficiency. It is an ethic that challenges extractive systems by asking: Who is this for? Who is left out? Who is harmed? Radical care is not a feature—it’s a stance.

Technological Justice

A framework that understands technology as political. It demands that communities most affected by AI systems have the power to shape them. It centers equity, sustainability, and reparative action in all stages of tech development.

Training Data

The raw material that teaches an AI system how to function. If the data lacks diversity or nuance, the system will reflect that absence. In this project, you are the source of training data—on your own terms, through your own words and images.

Transparency

Not just telling people what’s being done, but showing them how it works. In AI, transparency means exposing the pipeline: where the data comes from, how decisions are made, who has power, and what gets left out.
