UW Games Institute holds panel on feminist and responsible software design


On Dec. 2, the Games Institute held a lunchtime panel on feminist and responsible design. The Games Institute is an interdisciplinary research centre at UW, composed of academics from multiple fields, that seeks to advance the study, design, and purpose of interactive and immersive technologies. The panel featured two guest speakers, followed by a question-and-answer forum. The event was open to the public in person and online through Microsoft Teams.

The event’s goal was to juxtapose work currently being done to support and uplift marginalized groups in digital spaces like social media with the work being done to future-proof up-and-coming technologies such as virtual reality (VR) by including ethical principles early in the design process.

The first speaker, Daniel Harley, is an assistant professor in Digital Media and Design at UW’s Stratford School of Interaction Design and Business. The thesis of Harley’s presentation was that “attempts to expand the design space of VR narratives [can] reveal opportunities to present richer and more explicit theoretical foundations” that he argues are sorely needed to tackle the ethical challenges of new technology like VR.

Harley highlighted the rise of commercial VR projects like the Metaverse, which emphasize “how the worst of online culture also exists in online VR spaces — now with the added threat of physical interaction.” Harley discussed his work on responsible design of VR narratives, along with the core concepts of design space and human-computer interaction (HCI). 

In this context, “design space” is a metaphor for the artificial boundaries given to a study or project rather than a physical location. “If designed objects create space around themselves, an important question for us is — what spaces do we want to create?” Harley said. Similarly, HCI refers to how users interact with computers and how we can design around them, not just from a user experience standpoint, but also from an ethical standpoint. “We’re essentially saying ‘Slow down. What are we leaving behind? Who are we leaving behind?’” 

As an example of how these principles are often ignored in VR research, Harley brought up a 2021 review paper he co-authored, which found that out of a pool of 71 publications looking into the issue of cybersickness — motion sickness induced by interacting with VR environments — most failed to account for the experiences of female-identifying individuals. Harley argued that despite the concept of HCI greatly expanding over the course of the last decade to account for queer, feminist, and intersectional perspectives, most of this expansion was superficial. “The adoption [is] mostly one-dimensional. Mostly crediting the work, but not really engaging in its steeper implications.”

To end his presentation, Harley emphasized the importance of fostering diversity within design research communities and acknowledging everyone’s contributions. “If we want to change how we design, we have to be very cognizant of the kinds of research design communities we create.”

The second speaker, Briana Wiens, is an assistant professor in Digital Media and Rhetoric at UW and co-director of Feminist Think Tank, an intersectional feminist design research group. Wiens’s work focuses on the role digital technology plays in maintaining communities and fighting existing power structures. Her presentation had a more practical angle and focused on her work co-opting the existing mechanics of social media platforms (particularly Instagram) to uplift otherwise marginalized voices within the system. 

“We recognize the affordances of Instagram and the culture of production that circulates around it, and then we try to repurpose those tools for other means,” Wiens said. “And so, we don’t buy into influencer culture. We don’t try to just produce endlessly because we’re supposed to, [but rather] try to question the circulation and the promotion of particular feelings, ideologies, and politics on the platform.” 

Wiens specifically called out a post that quoted actress Emma Watson (“There is no criteria for feminism. If you feel that you’re able to live a life of self-content and joy, then you don’t have to fit into any set of standards”) as an example of the kind of hollow feminism that is popular online. 

“Feminism is for everybody, but the criteria is that it’s a movement to end sexist repression. It’s not just about living contently. It’s not just about being joyful like that post there says,” Wiens said. “It’s really important to notice how long things stay trendy and what it means to be trendy. Feminism became popular, but not a kind of feminism we wanted to actually buy into. We don’t want this post-feminist, neoliberal white crap. We want intersectional feminism, [but] that hasn’t come into mainstream media yet.”

Wiens’s work attracted the attention of the Coalition of Muslim Women of Kitchener-Waterloo, which reached out to Feminist Think Tank to aid in the development of workshops aimed at young Muslim women, providing them with tools to deal with the rise of Islamophobia seen online in recent years. A key concept that arose from those workshops was counterspeech — the tactic of countering online hate speech or misinformation by presenting an alternative narrative, rather than outright censoring the offending speaker.

“The goal is not necessarily to directly address the person or social media bot who is guilty of Islamophobia, but instead, we’re trying to flood newsfeeds with more positive representations of Islam and Muslim people,” Wiens said. “It’s dissuading online audiences from wanting to contribute to the spread of vitriol, and it’s galvanizing more acts of counterspeech. So, when they see one act of counterspeech, others who are not necessarily connected to [the Coalition] actually pile on and add more of their own experiences to help the person who is being attacked.”

Sid Heeg, a PhD student at UW’s Faculty of Environment and a research assistant at the Games Institute, asked Wiens whether she ever thought that algorithm-driven social media systems are rotten to the core and should be scrapped altogether. In response, Wiens chose to highlight the positive community building that marginalized groups have been able to foster on platforms like Twitter when they would’ve otherwise found themselves silenced within real-world systems.

“We saw black feminists mobilizing online, we saw responses to Gamergate, there are all these kinds of responses to the hate that we see, and I think that’s what makes social media a place that is still deserving of research because there are still these spaces of hope.” 

Wiens also highlighted how Elon Musk’s recent buyout of Twitter conveniently coincided with pushback against the rise of progressive activism seen on social media in recent years, specifically within the context of the MeToo movement. “Before, we used to see maybe 10 years of good protesting before facing pushback. As of this October, it’s been exactly five years since MeToo went viral on social media, and suddenly we’re already regressing. I don’t know if that was [Musk’s] active thought, but I don’t think it’s a coincidence that these things overlap,” Wiens said.

Ali Rizvi, a PhD student studying HCI at UW and a former senior product manager for Amazon Alexa, brought a pro-tech perspective to the conversation. “A lot of times, we will make tech the bogeyman and take away the agency of the humans using it. Twitter inherently isn’t hateful; Facebook inherently isn’t hateful. If you’re getting invaded by something, and you’re engaging with it, [the algorithm is] amplifying that engagement, which is why now you have these buckets of Republican Facebook, Democrat Facebook, people who don’t really look outside their bubble. But even today [you] can find new connections where [you] don’t see that hate being amplified, [you] see those connections being amplified.”

In response, Wiens addressed the implicit biases of those who craft the algorithms even before users have had any chance to engage with them. “We know algorithms are created with the biases of their creators inherently baked into that technology, and that’s what’s getting applied. How is that affecting the ways that certain messages can or can’t actually be circulated in online messaging?” 

Meanwhile, Harley emphasized that the critiques being made aren’t necessarily anti-technology but rather aim to address known issues with social media as part of the design process. 

“How do we honestly recognize the fact that it might amplify hate better than it amplifies the opposite? How can we recognize the opportunities to change those kinds of things from the inside, right from the beginning with the algorithm, with the structure of the company?”