Rua M. Williams · @FractalEcho

To support public access to research, I am writing plain language summaries of the #ColiberationLab's work. Our first thread will be on a paper we wrote with Simone Smarr, Diandra Prioleau, and Dr. Juan Gilbert while I was still in school.

Summary of "Oh No Not Another Trolley"
In 2020, we sent out a survey to computer science (CS) students at universities in the United States. CS students learn how to make computer systems.

We wanted to know how CS students think about ethics when they make computer systems. Ethics means thinking about how the things you make can affect other people. Sometimes computer systems can hurt people.

Usually, the people who get hurt the most are people who are already treated badly in society because of their race, gender, age, or disability. We wanted to know how CS students think about building systems that might hurt people.

In our survey, we shared five examples of computer systems that might hurt people, and asked CS students to tell us what they thought. The examples were of “algorithmic decision making supports”. These are computer systems that use math to help humans make choices.
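
Here is a tiny made-up example (not from our paper) of what a decision making support can look like in code. Every input and number here is invented, just to show the idea of turning a person's data into a score that a human acts on:

```python
# A made-up sketch of an "algorithmic decision making support".
# The inputs and weights are invented for illustration only.

def visit_priority(missed_visits, distance_km):
    """Hypothetical score a clinic might sort patients by (higher = seen sooner)."""
    return 10 - 2 * missed_visits - 0.1 * distance_km

# The human scheduler sees the scores and decides who gets seen first.
print(visit_priority(0, 5))   # 9.5
print(visit_priority(3, 40))  # 0.0 -- this patient quietly falls to the back of the line
```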

Our first example was about doctors using a computer system to help them decide if a person had a disability or illness.

Our second example was about doctors’ offices using a computer system to help them decide when to schedule visits for certain patients.

Our third example was about a computer system that told doctors if a patient might not survive COVID, and then the doctor asked that person to sign a “Do Not Resuscitate” (DNR) order. A DNR order means that if your heart stops or you stop breathing, the hospital will not try to revive you.

In our fourth example, doctors used a computer system to decide who would get a ventilator to help them breathe if there were too many people and not enough ventilators.

In our fifth example, hospitals would use a computer system to decide if someone’s personal ventilator should be given to someone else because the system thought the other person would live longer.

All of these examples were based on real things that happened during the beginning of the pandemic.
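
To show what a triage system like examples 4 and 5 might do inside, here is a small made-up sketch (not the real systems). It ranks patients by a predicted survival number and gives ventilators to the top few:

```python
# A made-up sketch of ranking-based triage, like examples 4 and 5.
# The survival numbers are invented; a real system would predict them from patient data.

def allocate_ventilators(patients, n_ventilators):
    """patients: (name, predicted_survival) pairs; higher = 'more likely to live'."""
    ranked = sorted(patients, key=lambda p: p[1], reverse=True)
    return [name for name, _ in ranked[:n_ventilators]]

patients = [("A", 0.9), ("B", 0.4), ("C", 0.7)]
print(allocate_ventilators(patients, 2))  # ['A', 'C'] -- patient B is denied by the ranking
```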

CS students told us what they thought the good and the bad things were about these examples.

We studied these answers to find out how CS students think about ethics problems in computer systems.

Most students believed that these systems could only do bad things if they were built wrong. Most students thought that these systems would be good “for society”.

But when the students described society, they usually meant doctors, family members, and business owners. They talked about sick and disabled patients as if those patients were not members of society.

Many students did not understand what a DNR order was, and thought that it was just “useful information” and not a rule that would keep a hospital from saving your life.

Most of the students who took our survey said they thought a lot about ethics. But students also said that they did not learn much about ethics in their computer science classes.

Many students worried that these systems would accidentally hurt Black people more than White people. That meant that students were learning about how computer systems can be racist. But students did not worry that these systems would accidentally hurt people with disabilities.

Black people may be more likely to have an illness or disability because of how racism harms the body. This means that even if a system is built to be “fair” about race, a system that is still “unfair” about disability will hurt Black people more anyway. The system will still be racist and ableist.
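
Here is a small made-up sketch of that idea (not a real system from our survey). The score treats every race group the same on paper, but it directly penalizes disability, so it still ends up hurting disabled people most, and hurting Black disabled people wherever racism has made disability more common:

```python
# A made-up score that is "fair" about race on paper but unfair about disability.
# All numbers are invented for illustration.

def survival_score(age, uses_personal_ventilator):
    """Hypothetical 'likely to survive' score. Race is never an input."""
    score = 100 - age
    if uses_personal_ventilator:
        score -= 40  # the score directly penalizes a disability
    return score

# Two non-disabled patients of different races get the same score: "race fair" on paper.
print(survival_score(50, False))  # 50
print(survival_score(50, False))  # 50

# A disabled patient of the same age is ranked far lower. Because racism makes
# illness and disability more common for Black people, this "race fair" score
# still harms Black disabled patients the most.
print(survival_score(50, True))   # 10
```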

We worry that learning more about ethics will not be enough for computer scientists to build safer systems. If you don’t have strong relationships with people who are not like you, you might not be able to understand that a system is hurting people who are not like you.

That is why we hope Computer Science teachers who read this paper will think about how they can help their students become better friends to people who are not like them. When we become friends with other people, we can learn to have a “coliberative consciousness”.

A “coliberative consciousness” is when you understand how systems might hurt you AND also people who are not like you. This will help you understand how to build systems that liberate, or set free, more people.

To read more:

ieeexplore.ieee.org/document/9

Or

researchgate.net/publication/3

#ColiberationLab #TechJustice #COVID19 #ColiberativeConsciousness
