Rua M. Williams · @FractalEcho
517 followers · 978 posts · Server kolektiva.social

Another plain language summary from the #ColiberationLab.

“I, Misfit: Empty Fortresses, Social Robots, and Peculiar Relations in Autism Research.”

In this paper, I think about autistic people and robots. Ideas about autism as robotic have made researchers do very strange things. Many autism researchers write about how autistic people are like robots, and many robotics researchers write about how robots are like autistic people! There are even studies about how to use robots to help autistic people be more “human”. I think this is very silly. The paper that I wrote is very serious. But the truth is that I laughed a lot while writing it. I also cried a little. Sometimes things that are silly are also deeply sad.

There are many studies about how to use robots to take care of other people. Care Robots are designed for elder care, dementia care, and other kinds of care that might happen in a clinic, hospital, or nursing home. This is often called “Robot Assisted Therapy”. Researchers like Andreas Bischof and Arne Maibaum have studied robotics. They write about how most Care Robot research begins by asking “How can we use a robot to solve a problem here?” This is a silly way to begin research, because it assumes from the start that a robot is a good solution to the problem.

“Robotic Care Assistants” or Care Robots are usually made to do things that other people think are boring, like doing the same thing over and over. Ruha Benjamin writes about how we think of robots as slaves, and how this means we sometimes build technologies with racist ideas. When we build things with racist ideas, we build a racist world, even if we do not know it. I worry about how often Care Robots are built to be like servants. I worry about this because if we want robots to be servants, that means we want servants. If we want servants, we want something that hurts other people. I think some people feel that it’s okay because robots do not have feelings. I feel that it’s not okay, no matter what, because wanting servants makes us treat other people badly, even if we think we are only acting badly toward robots.

I look at studies where people try to use robots to make autistic people behave a certain way. I think the way that researchers talk about autistic people shows that they do not respect autistic people very much. In these studies, both the autistic people and the robots sometimes do something surprising. When this happens, it can show researchers that what they think about autistic people is wrong.

Around 60 to 70 years ago, there was a man who studied autism named Bruno Bettelheim. Bettelheim ran a school for “Emotionally Disturbed” children, where families brought their autistic children when they felt they could not take care of them. Bettelheim sometimes convinced families to send their children to his school. He told families their children were not well, emotionally or mentally, and that he could help them. Bettelheim wrote about how autistic children were empty inside. He wrote that they became empty inside because there was something wrong with how their families behaved toward them.

Most researchers now do not think that Bettelheim was right about autism. Even though Bettelheim was wrong, the things that he wrote were so popular that many people still believe them. Even researchers who know Bettelheim was wrong still write about autistic people being empty or “off” in some ways. When someone says that an autistic person is “off”, they mean that they feel the autistic person is not like other people, and that this is bad. Autistic people are often called “misfits”. Researchers write about how autistic people move funny, talk differently, and don’t understand other people. Often, they write about autistic people being like a robot, or a machine.

I think that it is silly that researchers talk about autistic people being like robots, and then want to use robots to make autistic people act more “normal”. Researchers often try to use a robot to teach an autistic person how to act when around other people. Researchers say that if they can make an autistic person act “normal”, then other people will treat them better. Why isn’t anybody teaching “normal” people how to be nice?

Researchers think they need to fix the autistic person because they do not think the autistic person is different; they think the autistic person is broken. Researchers write that if they can fix or change the autistic person, it will be good for other people. Researchers have decided two things: 1. that robots can teach autistic people to be like other people, and 2. that this is good because it will make things easier for everyone else.

Sometimes, the autistic people in these robot studies show the researchers that they are upset, that they don’t like the robot, or that they don’t like the activity. I think it’s important for researchers to notice when the autistic people in their studies are “misbehaving” because it can show the researchers that what they are studying is wrong.

It’s so funny that researchers have written that they can use the robots to notice when the autistic person is upset. The researchers know that they are bad at understanding autistic people. But then they write about how autistic people struggle to understand others! If we are willing to teach robots how to understand autistic people, why won’t we teach each other how to understand autistic people? Why do we work so hard to change only the autistic person?

In one study, an autistic person was not good at the task the researchers wanted him to do. He was not copying what the robot did the way the researchers wanted him to. Even though he was doing the task “wrong”, at the end of every appointment, he would kiss the robot on the head. I think about this a lot. There are a lot of stories about autistic people being loving toward robots. Even though researchers are wrong about autistic people being robotic, autistic people often feel friendship with robots because we are treated badly by the same people. Autistic people and robots are both treated like “Misfits.” Researchers should be more like autistic people, be more open to being friends with people who are different, and stop trying to change them.

To read more:

pdcnet.org/techne/content/tech

Or

researchgate.net/publication/3

Hashtag soup:

Edit: Deleted and reposted because I accidentally kept it unlisted. Sorry about the deleted replies.

#ColiberationLab #hri #humanrobotinteraction #HumanRobotRelations #RobotAssistedTherapy #SociallyAssistiveRobots #autism #AutismTherapy #actuallyautistic #plainlanguage #research #ResearchSummary #scicomm #ScholarComm #sts #cds #CriticalDisabilityStudies #criticalautismstudies

