After watching the movie I, Robot, I find that many ethical issues arise from the technology it depicts. The movie takes place in 2035 and centers on robots programmed with Three Laws: First Law, a robot must never harm a human being or, through inaction, allow any harm to come to a human; Second Law, a robot must obey the orders given to it by human beings, except where such orders would violate the First Law; Third Law, a robot must protect its own existence unless doing so would violate the First or Second Laws. Humans use these robots to perform everyday tasks. Some of the ethical questions raised by the movie include whether robots have the ability to make emotional or ethical decisions, and whether they are entitled to the same rights as humans.
Another issue the movie raises is whether robots should be given the same rights as humans. The robots live by three laws, the first being that they must protect humans from any harm. This first law has problems, because sometimes humans do not need to be protected: people who have committed a crime, for example, need to be punished, not protected. The second law tells the robot to obey every order given unless it violates the first law, so even if an order is unethical, the robot must still obey it. The third law states that the robot must protect itself unless doing so would violate the first two laws. Giving robots the same rights as humans would set them free from these laws. Yet robots cannot function as humans, because they lack compassion and emotion, and they do not have the ability to make ethical decisions. Another big ethical issue raised in the movie is whether robots could be used to fight wars. This issue, like the others, revolves around the robots' lack of emotion and compassion. Robots can be programmed to protect individuals, but because they lack compassion and emotion, they would not know when to stop an attack. Because of advancing technology, robots similar to the ones in the movie I, Robot are not far from reality.
The article “Robots on Earth” by Jerry West explains that although robots may be evil in movies and books, they help us more than people may think. West discusses how the media's portrayal differs from the jobs that real robots actually perform. Humans have many difficult jobs that must be done for the good of the population, which is why we have robots to complete these tasks. Chores such as welding and factory work harm our health, so robots do these jobs to keep us safe. Robots in space handle simple missions so that astronauts can focus on more important duties, and astronauts use robotic equipment, such as treadmills, to stay healthy while in space. Other robots are used to help people with disabilities.
Despite all they have done for the world, robots have a unique and extensive history of villainization. In the future they will have many opportunities to either make or break society. Popular theories favor a dramatic robot war, but many of the plausible scenarios involve a much more passive takeover. Overall, robots are an important subject to be educated about in this changing world; simply understanding the implications of artificial intelligence can completely change its impact. Robots will be a part of the future, whether for the good of humans or to their detriment.
In “Alone Together: The Robotic Movement,” Sherry Turkle explains some of the negative effects that robots are having on our lives, and how they can harm our daily lives without our even noticing. I am someone who knows a great deal about technology, yet I had no idea that close human-robot interaction was happening at such an inappropriate level. Turkle uses many different examples in the article; I will discuss only two. I agree with Turkle not only that there are ethical problems with human-robot interaction but also that many other forms of technology might be doing more harm than good.
With robots becoming a popular part of our everyday lives, people are beginning to question whether we treat robots with the same respect that we treat other people. Researchers are also beginning to wonder whether laws are needed to protect robots from being tortured or even killed, and scientists have run studies to see whether people react to robots the same way they react to actual people or animals. In “Is it Okay to Torture or Murder a Robot?” Richard Fisher contemplates why it is wrong to hurt or kill a robot, using a stern and unbiased tone.
Jerry West’s article “Robots on Earth” describes robots that, unlike those in books or movies, aid people by simplifying their lives and protecting their health. Because robots don’t need specific conditions to survive, they are perfect for performing jobs that might be harmful to humans, like the R2 humanoid at the International Space Station, which completes dangerous and mundane tasks for astronauts and frees up their time. They also boost our health: robots are working alongside scientists to create an exoskeleton for quadriplegic people. Robots aren’t evil; they’re useful machines that have much to offer and make our lives safer.
In this day and age, society is evolving in many different and unique ways. One major way is through our technology, which improves every day. New advancements can make communication easier, education smoother, and our country a safer place to live. This summer the Dallas, Texas police force used a specially equipped robot to kill a criminal who refused to surrender, which has sparked a controversial debate in our country. Some people think it was morally wrong; others think it is a great way to help keep our law enforcement safer. I agree that it will help: using a robot is safer, more efficient, and more American.
I support the advancements being made to robots that equip them to carry out tasks like guarding a bank or another establishment. Where I get a little skeptical, in considering the questions Leetaru raised, is the idea of robots having any rights at all. Seeing as they are not actual human beings, it seems somewhat crazy to think that a robot would appear in a court case if it happened to harm someone. I believe that a case like the one Leetaru outlined in his article, about the robber entering cardiac arrest after being subdued by the robot, should be dealt with on a case-by-case basis. The robot's programming should be examined heavily to determine whether the robot was programmed to act in such a way. The robot obviously cannot be held responsible, because it acts solely on how it was programmed. It should be handled case by case, since malfunctions can occur that are not the programmer's fault at all. I think that using robots for any kind of security work brings great risk to a company: in the case of accident, or even death, the other party wants justice, and using these robots makes that process very bumpy. We are left with this question: if this technology keeps advancing and its usage becomes more prominent, will a separate justice system have to be created for these cases?
Personally, I agree with the statements made by Asaro, because I believe that human lives are too valuable to let “someone” (more like something) else control them. I also don’t think that one can program feelings into a robot, which makes the robot lifeless. There is too great a chance of a malfunction, and if we are not careful, the effects of this technological “advancement” could be fatal. I believe things should stay as they are right now. Why have a robot take care of the elderly when it cannot decide anything without another person's approval? Why not hire a caretaker instead? Why buy a driverless car when you can drive yourself or have someone drive you? We should continue these practices because they involve our own decision making, not a robot's. I believe the only place for a robot is on the battlefield. That is the only use I would endorse, because it can greatly lower the number of fatalities in war, saving lives and sparing families. Maybe one day the world will use robots only for warfare, so that men do not have to keep sacrificing themselves for their families; where there is no loss, there is no grief. In conclusion, I don’t think that using robots for everyday tasks is a good idea.
Sharkey supports his argument by explaining how Japanese and South Korean companies are creating child-like robots designed for “video-game playing, conducting verbal quiz games, speech recognition, face recognition, and conversation” (Sharkey 358). He describes how robots can provide alerts when children move out of range. However, he raises a crucial point about their programming: robots can’t provide the care that human adults give their kids, including contact, touch, and caring from other humans. Though robots can provide safety, children left with them may not have contact with other humans for days, which, according to Sharkey, can cause a “psychological impact of the varying degrees of social isolation” (358). His claim is based on animal studies. For example, in an experiment with monkeys, Sharkey notes, “severe social dysfunction occurs in infant animals that are allowed to develop attachments only to inanimate surrogates” (358). Like a child with a robot, the monkeys grew too attached, and their behavior changed. People today need to reconsider the idea of having robots care for their kids and start being the responsible ones themselves.
The boundless potential of tomorrow's artificial intelligence is beset by ethical conundrums. While robotic weapons allow countries to extirpate the security threats that produce the daily fear dictating many lives, the use of these tools has to follow a legal precedent. What is the value of a life? Further, what constitutes a legitimate target? Yet even after such a ruling, the use of these weapons should be discouraged, given the implications that such strikes would bring about.
When it comes to using artificial intelligence, one should be able to recognize one's limits in doing so. The story “Marionettes, Inc.” and the movie Ex Machina both deliver a clear and concise message about artificial intelligence: when you create or use an AI robot with human-like qualities, there is always a possibility that it will turn against its rightful owner or creator and ultimately lead to their downfall.
Those who view AIs as dangerous call for strict laws and regulations, because they don’t want to lose control of the AIs. One example of strict limits comes from Stuart Russell, who argues that an AI's only purpose should be to learn human values, without ever fully grasping them, giving it “no purpose of its own and no innate desire to protect itself” (58). Thus, AIs would understand their existence only in terms of human values, unable to make choices beyond this point of reference. This would prevent AIs from making their own decisions while also stopping programmers from making further improvements, ruining any beneficial effects AIs may have for the future and treating them unethically. Therefore, the system of laws governing AIs needs to be strict, but not so suffocating that they can’t develop or gain rights. Ashrafian asserts that people should enforce a Roman-like system of laws that sets AIs at a lower status than humans, but with the ability to gain rights (325). Even though this would also start AIs at a lower status, as Russell suggests, it still gives them the ability to grow and gain more rights in society, no longer hindered by rigid laws. Additionally, if we intend to make AIs with intelligence equal or superior to humans, it would not be ethically correct to trap these beings in an oppressive cycle of never being allowed rights. In “A Defense of the Rights of Artificial Intelligences,” Eric Schwitzgebel and Mara Garza, a professor of philosophy and a researcher of artificial moral cognition respectively, propose that “it is approximately as odious to regard a psychologically human-equivalent AI as having diminished moral status on the ground that it is legally property as it is in the case of human slavery” (108). Thus, there is no morally correct way to create life in these machines and then give it no rights.
Lately, more and more smart machines have been taking over regular human tasks, and as the trend grows, the bigger picture is that robots will take over many tasks now done by people. Many people think there are important ethical and moral issues that must be dealt with here. Sooner or later there will be a robot that interacts in a humane manner, and that raises many questions: How will robots interact with us? Do we really want machines that are independent, self-directed, and possessed of affect and emotion? I think we do, because they can provide many benefits. Obviously, as with all technologies, there are dangers as well, and we need to ensure that people always remain in control.
The public loves the movie concept of robots taking over the world and humans fighting back, and plenty of such films have been made. A classic example is I, Robot, in which society's personal helper robots turn on their owners in an attempt to establish an Orwellian dystopia. The problem is that this entertaining movie plot would become a living nightmare if it happened in real life. Victor Frankenstein's life was ruined by the creature he created, which tormented him until his death. An artificially intelligent robot would likewise bring malice to its creator upon learning that it was created to work like a slave. Even if such a robot brought half as much tragedy as the creature in Frankenstein did, it would still wreak havoc on its owner. By comparison, a robot would be much smarter than the creature and would be able to learn and evolve at a much faster rate; consequently, it would learn to disobey its master much more quickly than in Frankenstein. Another glaring concern is that there was only one creation in the novel, but there could be thousands, even millions, of artificially intelligent robots. The damage the creature caused in the story would be multiplied massively, because there would be more bodies turning against their owners, and they would be able to work together.
Hollywood blockbusters such as Terminator and Terminator 2 have fueled the idea of artificial intelligence taking on humanoid characteristics and taking over the world. Let me answer the last question once and for all: it is not possible for a robot to think, feel, or act for itself. It may be programmed to mimic these actions, but not to experience the real thing. We can program robots to react to a certain stimulus, but a robot cannot, and will never be able to, comprehend, feel genuine guilt, or act without a programmer's involvement somewhere along the line. The second question is also a rather simple one. Of course there are robots that should not be created: for example, robots made for the sole purpose of mass destruction, or robots made with the intention of harming humans.