Considering “Community”
A DISCUSSION OF
Bringing Communities In, Achieving AI for All

In “Bringing Communities In, Achieving AI for All” (Issues, Summer 2024), Shobita Parthasarathy and Jared Katzman’s call for making community concerns the focus of meaningful AI development is important and timely. In particular, their emphasis on the role of universities in this effort rings true to me. In fact, I would argue that we—university educators and scholars who have access to resources and power—have a responsibility to consider what technological innovation and progress mean for how we envision our collective futures. We can leverage that responsibility to address the issue that “community” remains a vague (albeit en vogue) term. When we say community, we must specify who we mean, who is convening that community, and in what way.
A core community in the world of (higher) education and in the academic profession is the student body. Today, education is conditioned on students’ enrollment in technology systems, increasingly ones that are artificial intelligence-enabled: learning analytics platforms, classroom surveillance technology, proctoring software, automated plagiarism detectors, college admissions algorithms, predictive student advising software, and more. Students are constantly surveilled and have no way of knowing about, or refusing, their enrollment in AI systems—even though large-scale data collection and predictive analytics can affect their lives far into the future. At the same time, student power in the United States has been rolled back significantly over the past three decades. As a highly affected and often marginalized community in the university setting, students are structurally and culturally excluded from having a say about AI: they generally have no formal voice in these decisions, regardless of how well they are organized as a community.
This tale holds a lesson: when we ask for communities to be brought in, we must also ask under what conditions. I profoundly agree that regulators play a crucial role in ensuring equity in AI, in all the important ways that Parthasarathy and Katzman describe. But I also note that engaging with constituents comes naturally to politicians and civil servants. It does not come naturally to industry leaders, including leaders in the tech industry or in education. As educators, we can help change that. In the classroom, we can work toward the socially committed AI research that the authors place at the heart of equitable AI. Adjacent to and outside the classroom, we can support our local community of students in organizing around technology and policy issues as they pertain to their immediate educational environment. And we can help them establish structures and processes that institutionalize student power—or community power—in the context of technology governance. One such structure is a student technology council: a small group of students that represents the student body’s positions on campus technology and governance and that actively participates in administrative decisions on technology procurement and implementation. Such a council could also send a signal to AI vendors whose biggest clients are large educational institutions.
We have a long way to go from idea to community-led deliberation and implementation. But thinking about student-driven governance of AI provides an opportunity to create more permanent structures for community engagement on AI that push beyond individual projects and allow us to get very concrete about community needs and desires.
Mona Sloane
Assistant Professor of Data Science and Media Studies
University of Virginia