Thought this was a fun late night video. Always charming to see humans being nice to inanimate objects.
#robots #humanrobotinteraction
Hi #hri2023 attendees!
Are you looking for a roommate? Or do you just want to connect with people also attending, prior to the conference?
We created a #hri2023 channel on the robotics-worldwide Slack: https://join.slack.com/t/robotics-worldwide/shared_invite/zt-1ns6f8opd-VA8FLaMEp5_LKOJkTkIHhA
#HRI2023 #hri #humanrobotinteraction
Call for Participation!
18th ACM/IEEE International Conference on Human-Robot Interaction #HRI2023
https://humanrobotinteraction.org/2023/
March 13-16, Stockholm, Sweden & Online
Registration is open. Early bird deadline: 31 Jan
Fee waiver deadline: 9 Feb
ACM SIGAI scholarship
#humanrobotinteraction #HRI #conference #robotics #robot
Another plain language summary from the #ColiberationLab.
“I, Misfit: Empty Fortresses, Social Robots, and Peculiar Relations in Autism Research.”
In this paper, I think about autistic people and robots. Ideas about autism as robotic have made researchers do very strange things. Many autism researchers write about how autistic people are like robots, and many robotics researchers write about how robots are like autistic people! There are even studies about how to use robots to help autistic people be more “human”. I think this is very silly. The paper that I wrote is very serious. But the truth is that I laughed a lot while writing it. I also cried a little. Sometimes things that are silly are also deeply sad.
There are many studies about how to use robots to take care of other people. Care Robots are designed for elder care, dementia care, and other kinds of care that might happen in a clinic, hospital, or nursing home. This is often called “Robot Assisted Therapy”. Researchers like Andreas Bischof and Arne Maibaum have studied robotics. They write about how most Care Robot research begins by asking “How can we use a robot to solve a problem here?” This is a silly way to begin research, because you are assuming from the start that a robot is a good solution to the problem at all.
“Robotic Care Assistants” or Care Robots are usually made to do things that other people think are boring, like doing the same thing over and over. Ruha Benjamin writes about how we think of robots as slaves, and how this means we sometimes build technologies with racist ideas. When we build things with racist ideas, we build a racist world, even if we do not know it. I worry about how often Care Robots are built to be like servants. I worry about this because if we want robots to be servants, that means we want servants. If we want servants, we want something that hurts other people. I think some people feel that it’s okay because robots do not have feelings. I feel that it’s not okay, no matter what, because wanting servants makes us treat other people badly, even if we think we are only acting badly toward robots.
I look at studies where people try to use robots to make autistic people behave a certain way. I think the way that researchers talk about autistic people tells us about how they do not respect autistic people very much. In these studies, both the autistic people and the robots sometimes do something surprising. When this happens, it can show researchers that what they think about autistic people is wrong.
Around 60 to 70 years ago, there was a man who studied autism named Bruno Bettelheim. Bettelheim ran a school for “Emotionally Disturbed” children, where families brought their autistic children when they felt they could not take care of them. Bettelheim sometimes convinced families to send their children to his school. He told families their children were not well, emotionally or mentally, and that he could help them. Bettelheim wrote about how autistic children were empty inside. He wrote that they became empty inside because there was something wrong with how their families behaved around them.
Most researchers now do not think that Bettelheim was right about autism. Even though Bettelheim was wrong, the things that he wrote were so popular that many people still believe them. Even researchers who know Bettelheim was wrong still write about autistic people being empty or “off” in some ways. When someone says that an autistic person is “off”, they mean that they don’t feel like the autistic person is like other people, and that this is bad. Autistic people are often called “misfits”. Researchers write about how autistic people move funny, talk differently, and don’t understand other people. Often, they write about autistic people being like a robot, or a machine.
I think that it is silly that researchers talk about autistic people being like robots, and then want to use robots to make autistic people act more “normal”. Researchers often try to use a robot to teach an autistic person how to act when around other people. Researchers say that if they can make an autistic person act “normal”, then other people will treat them better. Why isn’t anybody teaching “normal” people how to be nice?
Researchers think they need to fix the autistic person because they do not think the autistic person is different; they think the autistic person is broken. Researchers write that if they can fix or change the autistic person, it will be good for other people. Researchers have decided two things: 1. that robots can teach autistic people to be like other people, and 2. that this is good because it will make things easier for everyone else.
Sometimes, the autistic people in these robot studies show the researchers that they are upset, that they don’t like the robot, or that they don’t like the activity. I think it’s important for researchers to notice when the autistic people in their studies are “misbehaving”, because it can show the researchers that what they are studying is wrong.
It’s so funny that researchers have written that they can use the robots to notice when the autistic person is upset. The researchers know that they are bad at understanding autistic people. But then they write about how autistic people struggle to understand others! If we are willing to teach robots how to understand autistic people, why won’t we teach each other how to understand autistic people? Why do we work so hard to change only the autistic person?
In one study, an autistic person was not good at the task the researchers wanted him to do. He was not copying what the robot did the way the researchers wanted him to. Even though he was doing the task “wrong”, at the end of every appointment, he would kiss the robot on the head. I think about this a lot. There are a lot of stories about autistic people being loving toward robots. Even though researchers are wrong about autistic people being robotic, autistic people often feel friendship with robots because we are treated badly by the same people. Autistic people and robots are both treated like “misfits”. Researchers should be more like autistic people: be more open to being friends with people who are different, and stop trying to change them.
To read more:
https://www.pdcnet.org/techne/content/techne_2021_0999_10_19_147
Hashtag soup:
#HRI #HumanRobotInteraction #HumanRobotRelations #RobotAssistedTherapy #SociallyAssistiveRobots #Autism #AutismTherapy #ActuallyAutistic #PlainLanguage #Research #ResearchSummary #SciComm #ScholarComm #STS #CDS #CriticalDisabilityStudies #CriticalAutismStudies
Edit: Deleted and reposted because I accidentally kept it unlisted. Sorry about the deleted replies.
#introduction Hi everyone! My name is Malcolm, and I’m the lab head for Schindler (the elevator company) at #EPFL in #switzerland. My background is in mobile #robotics, and my lab’s research is on #machinelearning for sensor fusion and #humanrobotinteraction to improve Schindler’s #sustainability and workers’ safety. However, I’m interested in learning about everything, and if you have questions about what I do, don’t hesitate to ask :). One thing I can’t do, though, is fix your broken elevator :p.
For some more specific background on my work: I just published my first article on domain-specific and domain-general network engagement during human-robot interaction, together with Ruud Hortensius!
You can check it out in the European Journal of Neuroscience: https://onlinelibrary.wiley.com/doi/10.1111/ejn.15823
#Neuroscience #socialneuroscience #functionalconnectivity #fMRI #HRI #humanrobotinteraction
Inspired by @debruine and @dsquintana, this is my #introduction.
I'm part of the #TwitterMigration, woohoo! I'm an assistant professor at Utrecht University and plan to post about #humanrobotinteraction, #socialneuroscience, #socialcognition, #collaboration with students, #future of research(ers) and life in #academia in general, and probably about my new interest/hobby/job in construction (bought a house that requires a major revision).