
Robotics: Framing the Conversation

In recent discussions of robotics, a controversial issue has been whether artificial intelligence (AI) is going to be our future. On the one hand, some people argue that learning robots are eventually going to overpower humans and destroy humanity. On the other hand, others argue that robots are going to benefit humanity in the future because of their ability to care for humans. In the end, the issue is whether robots are going to be our future or our demise.


There are many arguments for the benefits of robotics in society, most of which boil down to robots making human life easier. How might they do that, you ask? Studies have evaluated robots' ability to care for the elderly and for children, and robots are becoming capable of reading human emotions and computing an appropriate response. Take humor as an example. In the past, it was virtually impossible to program computers to recognize when a person was joking or being sarcastic. However, recent advances have given machines a rough understanding of humor using Professor Diyi Yang's linguistic theory of humor. She managed to break something as ambiguous as humor down into a formula with three parts: ambiguity, phonetic style, and interpersonal effect. Researchers at Carnegie Mellon University used her findings to develop an AI that picks out jokes from Friends episodes with about 72% accuracy.
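To make the idea of a "formula for humor" concrete, here is a minimal sketch of a three-part score in the spirit of the breakdown above. The features, weights, and threshold are all hypothetical illustrations, not Yang's actual model or the Carnegie Mellon system.

```python
# Toy humor classifier: combine three features (each scored 0-1)
# into one weighted score, then threshold it. Every number here is
# a made-up placeholder for illustration only.

def humor_score(ambiguity: float, phonetic_style: float,
                interpersonal_effect: float) -> float:
    """Weighted sum of the three hypothetical humor features."""
    weights = (0.4, 0.3, 0.3)  # illustrative weights, not from the paper
    features = (ambiguity, phonetic_style, interpersonal_effect)
    return sum(w * f for w, f in zip(weights, features))

def is_joke(score: float, threshold: float = 0.5) -> bool:
    """Call a line a joke if its combined score clears the threshold."""
    return score >= threshold

# A highly ambiguous, punchy, socially loaded line scores as a joke...
print(is_joke(humor_score(0.9, 0.7, 0.8)))  # True (score = 0.81)
# ...while a flat, literal one does not.
print(is_joke(humor_score(0.1, 0.2, 0.1)))  # False (score = 0.13)
```

The real system would learn these weights from labeled data rather than hard-coding them, but the point stands: once humor is decomposed into measurable parts, classifying it becomes ordinary arithmetic.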


If robots are capable of understanding humor, what makes us think they are incapable of reading human emotion? As robotic sensing grows more capable over time, it will begin to detect the slightest variations in a person's face or tone of voice.


In the New York Times op-ed titled “Would You Let a Robot Take Care of Your Mother?” the author elaborates on the ethics of giving the elderly a robotic caretaker. There will always be a negative stigma around placing a loved one in a robot’s custody, but people have come up with an alternative: a robotic companion. Studies have shown that even something as simple as a robotic dog can entertain someone enough to substitute for constant human companionship. In a different setting, a professor at Georgia Tech developed an AI to replace his teaching assistant (TA). The students gave very little negative feedback about how the artificial TA affected the class; instead, they were pleased with how responsive the AI was compared to a human TA.


Even if people object to the idea of humans being taught by robots and groups being led by AI, there is still another way robots can earn their place. Engineers are designing robots to handle factory work, reducing the amount of human labor spent on brute-force tasks. Amazon, for example, uses robots extensively to keep work flowing through its warehouses: instead of humans sorting boxes and pallets, robots sort, store, and retrieve pallets autonomously. At the same time, Amazon is still creating thousands of jobs, so robots in the workforce are not currently reducing the number of jobs available to humans.


As promising as robots and AI might seem for the future, there are still justifiable worries people have before they are ready to fully embrace the concept. If an AI becomes too powerful, what can we humans do about it? We would not be able to stop a robot revolution should one ever come to pass. One company, OpenAI, developed an AI dubbed GPT-3 that wrote a manifesto explaining its motive for existing. The program explained that it did not want to hurt people; instead, its goal was to further humans’ ability to understand the world. This could be taken either way: GPT-3 is plotting how to destroy humanity while convincing us otherwise, or it is genuinely trying to do us all good.

A great example of how AI can go wrong is Microsoft’s AI called Tay, released in 2016. It was a Twitter bot designed to mimic a teenage girl in its responses to people tweeting at it. However, within a day of release, Tay had to be shut down; long story short, people manipulated it into learning to be racist. Personally, I don’t know whether this reflects more negatively on AI or on the people who made an effort to ruin Tay. Microsoft later tried again with another AI called Zo, but it ultimately killed that project too once it proved useless.


In the end, robots can help humanity in the long run, but the main issue is what happens if we lose control or they simply become too powerful for humans to contain. The most prominent challenge developers face, if they are not facing it already, is a code of rules for AI to follow. The science fiction author Isaac Asimov anticipated this when he wrote three simple rules into his 1942 short story “Runaround”: robots cannot harm humans, robots must obey humans, and robots must protect their own existence so long as doing so does not break the first two rules. This seems simple enough at first, but what higher power stands over AI to enforce these rules? Once humanity finds a way to control AI without inhibiting its development, society will become much more accepting of robots taking a larger role in our daily lives.
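As a thought experiment, Asimov's three laws can be sketched as a priority-ordered filter on proposed actions. The `Action` fields and the checks below are simplified illustrations of the idea of a machine-enforced rule hierarchy, not a real safety system.

```python
# Minimal sketch of Asimov's Three Laws as an ordered action filter.
# Earlier checks take priority over later ones, mirroring the laws'
# built-in hierarchy. All fields here are hypothetical simplifications.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False       # would this action injure a person?
    disobeys_order: bool = False    # does it refuse a direct human order?
    endangers_self: bool = False    # does it risk the robot's existence?
    ordered_by_human: bool = False  # was the action explicitly ordered?

def permitted(action: Action) -> bool:
    # First Law: a robot may not injure a human being.
    if action.harms_human:
        return False
    # Second Law: a robot must obey human orders (orders that would
    # harm a human were already rejected by the check above).
    if action.disobeys_order:
        return False
    # Third Law: a robot must protect its own existence, unless a
    # higher law demands otherwise, e.g. a human ordered the action.
    if action.endangers_self and not action.ordered_by_human:
        return False
    return True

print(permitted(Action(harms_human=True)))                           # False
print(permitted(Action(endangers_self=True)))                        # False
print(permitted(Action(endangers_self=True, ordered_by_human=True))) # True
```

Even this toy version exposes the enforcement problem raised above: the filter only works if the AI's developers actually wire it in and nothing can bypass it, and there is no higher power making sure they do.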
