Conference Explores Inequality in the Age of AI
Held on March 17th in Sacramento, our 2026 conference brought together researchers from across the nation and beyond.

How will AI affect existing socioeconomic, health, and opportunity gaps? This and other pressing questions were explored by an international slate of academic researchers, members of the policy community, and industry representatives at our conference on March 17th, 2026.

Titled "Artificial Intelligence and Social Inequality" and held at the UC Student and Policy Center, a short distance from the California State Capitol, the event featured research presentations, keynote talks, and meaningful dialogue.

The conference was a great success. We caught up with some of the organizers, keynote speakers, and presenters to find out more.

 

JACOB HIBEL (Organizer)

Associate Professor of Sociology, UC Davis
Co-director, UC Davis Center for Poverty and Inequality Research

Why was it important to hold this conference now, in the spring of 2026?

If anything, this kind of convening is overdue. One thing we’ve observed is that a lot of the conversation around AI among academics falls into one of three buckets: discussions of how AI is impacting our students and our experiences as course instructors, sharing new developments around AI as a research tool, and exchanging high-level philosophical and ethical musings about the meaning of AI’s rapidly expanding influence on our lives. But now that we are several years into the LLM-driven AI revolution, social scientists are in a position to share empirical findings about how AI is reshaping core societal institutions like labor markets, health care, and education. We do not have to just wonder or guess about how AI is likely to shape patterns of social inequality; we can study its effects using our disciplines’ theoretical and empirical toolkits, and we wanted this conference to provide such a venue.

What were your goals for the event?

On the one hand, we aimed for the conference to serve the traditional function of academic conferences, bringing researchers together to share and provide feedback on cutting-edge research on artificial intelligence and social inequality. But we had another, equally important goal of engaging members of the policy community in bi-directional conversations. To design the best, most impactful research on these topics, it is important for researchers to understand policymakers’ goals and concerns. By the same token, I believe most researchers in the field of AI and social inequality hope that their findings will help inform sensible, forward-looking policy development, so we viewed bringing policymakers, advocates, tech sector representatives, and other non-academics into the conference space as essential. This is why we hosted the conference at the UC Student and Policy Center, just across the street from the state Capitol. We hope to get researchers and policymakers more accustomed to working closely together.

 

TINA LAW (Organizer)

Assistant Professor of Sociology, UC Davis

What made the conference a success?

As social scientists, we work hard to conduct research that we hope will shape public policy for the greater good, but often those aspirations take a very long time to make it out of the pages of peer-reviewed journals, if at all. With this conference, we sought to bring this cutting-edge research on AI directly to the footsteps of the Capitol. Instead of waiting for the policy community to come to us, we went to them. And hopefully this will, in the long run, help to improve both research and policy on AI.

What was your main takeaway?

Social scientists, computer scientists, and the policy community are eager to work together to ensure that AI is broadly beneficial, and they need more opportunities to get together to learn and collaborate.

 

GENEVIEVE MACFARLANE SMITH (Keynote Speaker)

Director, Responsible AI Initiative | Berkeley AI Research Lab
Professional Faculty | Haas School of Business
University of California, Berkeley

What did you address in your keynote speech?

Data reflects the world’s inequities. And choices that shape AI — what data to use, what to optimize for, what counts as fair — are not neutral. Technologies encode dominant norms and power hierarchies by default… This is not a glitch, but a predictable outcome of uncritical design. But design can be done differently. Four principles can guide this work: Start with people. Center communities. Reframe to equity. Build together.

What was your main takeaway?

I’m inspired that people continue to work on understanding the ways AI intersects with inequality. As the technology continues to rapidly advance and be integrated into daily life and work, much work remains. There are opportunities for AI to deepen or mitigate inequality. This is partially in the control of the developers and managers of powerful models and tools, but it is also up to us.

 

JULIANNE MCCALL (Keynote Speaker)

CEO, California Council on Science and Technology (CCST)

Why was it important to hold this conference now, in the spring of 2026?

The timing of this conference is critical because we are at an inflection point in the development and deployment of artificial intelligence. AI systems are already shaping decisions in job hiring, healthcare, education, and other areas that directly affect social and economic opportunity. At the same time, policymakers are being asked to make high-stakes decisions about governance, safety, and accountability—often without access to the full breadth of technical and social science expertise. For instance, with the Transparency in Frontier Artificial Intelligence Act (SB 53) having just gone into effect in January, policymakers are currently navigating the practicalities of oversight. Convening researchers, policymakers, and practitioners now creates an opportunity to align evidence with action at a moment when those decisions will have long-term consequences for equity and inclusion.

What did you address in your keynote speech?

I focused on the dual role AI can play in reducing or exacerbating social inequality. I emphasized that AI is not a neutral technology—it reflects the data, design choices, and institutional structures behind it. I highlighted how these dynamics are already playing out across labor markets, healthcare, and education, and underscored the importance of cross-sector collaboration to understand and address these impacts. I also shared how the California Council on Science and Technology (CCST) is working to support evidence-based policymaking through initiatives like our AI Academy for legislative staff, Science Advisor program for Governor’s Cabinet Secretaries, and Science & Tech Policy Fellowship training pipeline—all efforts designed to equip decision-makers with the knowledge and expertise needed to govern AI responsibly.

What was your main takeaway?

The clear urgency—and the real opportunity—for sustained collaboration across disciplines and sectors. The impacts of AI on inequality are not theoretical; they are already unfolding in real and measurable ways. What gives me optimism is the depth of expertise and shared commitment in the room. When technical experts, social scientists, and policymakers come together, we are far better positioned to ensure that AI is designed and deployed in ways that expand opportunity rather than limit it. The challenge ahead is to translate this collective insight into durable policy solutions that serve all communities.

 

GÜL SECKIN (Presenter)

Associate Professor of Medical Sociology, University of North Texas

What was the subject of your presentation?

My presentation explored how artificial intelligence (AI) in healthcare intersects with social inequality in the United States. It introduced the concept of algorithmic stratification, which explains how AI systems used for diagnosis, treatment, and risk prediction can exacerbate disparities in trust, access, care, and health outcomes across social groups. The talk positioned AI not just as a technological innovation, but as a governance and equity issue within modern health systems.

What key findings did you present?

The findings underscore several critical challenges for health policy and regulation:

  • Unequal trust in AI systems: Racial and ethnic minority groups—particularly Black and Latino populations—demonstrate significantly lower trust in AI-driven healthcare, reflecting both historical inequities and concerns about algorithmic bias.
  • Digital and socioeconomic divides: Individuals with higher income and digital access are more likely to benefit from AI-enabled care, while lower-income populations face heightened concerns about surveillance, data misuse, and reduced access—pointing to a need for equitable digital infrastructure policies.
  • Age-related disparities: Older adults are less comfortable with AI in clinical decision-making, raising questions about informed consent, autonomy, and patient-centered care standards.
  • Algorithmic ambivalence: While the public recognizes AI’s potential to reduce bias, there is widespread concern that it may instead institutionalize or worsen disparities if left unregulated.

These findings align with broader policy research emphasizing that without oversight, AI can reinforce structural inequities in healthcare systems.

What made the conference a success?

The conference brought together interdisciplinary researchers, fostering meaningful dialogue between researchers, policymakers, and practitioners. The exchange of ideas allowed participants to critically engage with emerging issues—such as AI in healthcare—not only as technical developments but as deeply social phenomena with real implications for equity and access.

What was your main takeaway?

AI must be approached as a core health equity issue, not just a technological innovation. Technological systems are not neutral; they reflect and can amplify structural inequalities. The conference highlighted the need for more inclusive, equitable approaches to the design, implementation, and governance of AI in healthcare and beyond. Discussions reinforced that without careful attention to structural inequalities, new technologies risk reproducing or amplifying existing disparities.

To stay up to date on CPIR events and publications, join our mailing list.