The Ethics of Computers with AI


In recent years, advancements in robotics have been bringing humans and machines together to work side by side. Autonomous systems are now used for a wide variety of tasks.

Robots handle everything from simple tasks like mowing the lawn and vacuuming to advanced ones like driving vehicles, and many of these robots are given artificial intelligence (AI). The development of AI has recently become a major topic among philosophers and engineers, and one major concern is the ethics of computers with AI. Robot ethics (roboethics) is the study of the rules that should be created to ensure that robots behave ethically.

Humans are morally obligated to ensure that machines with artificial intelligence behave ethically. In the 1940s, science-fiction author Isaac Asimov proposed the Three Laws of Robotics: (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; and (3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. The three laws were the first attempt to govern the behavior of AIs.

Intelligent robots must interact with their surroundings. Imagine a self-driving car traveling down a residential road lined with parked cars when a child steps out from behind one of them into the street. The car could either hit the child or swerve to avoid a collision, and a car that has been programmed with a moral code would try to avoid running over the child. As a safety precaution, autonomous vehicles are currently required to have a person in the driver's seat.

By swerving, the car would show ethical behavior consistent with the first law of robotics.
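
One way to picture the first law is as a hard filter over a robot's candidate actions. The sketch below is a toy illustration, not a real control system; the function names, the action list, and the harms_human predicate are hypothetical stand-ins.

```python
def first_law_permits(action, harms_human):
    """A robot may not injure a human being or, through
    inaction, allow a human being to come to harm."""
    return not harms_human(action)

def choose_action(actions, harms_human):
    # Prefer any action that does not harm a human.
    safe = [a for a in actions if first_law_permits(a, harms_human)]
    return safe[0] if safe else None

# The self-driving car scenario: a child steps into the road.
actions = ["continue straight", "brake and swerve"]
harms_human = lambda a: a == "continue straight"  # hitting the child
print(choose_action(actions, harms_human))  # -> "brake and swerve"
```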

Creating a moral code for robots poses many challenges, and there are two main approaches to making an ethical robot. One approach is to write specific ethical laws for the robot to follow. I believe that robots should take a Kantian approach to decision making and follow Kant's categorical imperative (first formulation): "Act only according to that maxim by which you can at the same time will that it should become a universal law." The robot would be given a task and could then run a vast number of simulated scenarios in which the maxim behind its action is treated as a universal law. If the robot could still accomplish the task under those conditions, then it is morally permissible to act on the maxim. A robot that follows the first formulation would also benefit humans.
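
As a toy illustration of how the first formulation might be operationalized, the sketch below samples many simulated scenarios in which every agent acts on the same maxim and accepts the maxim only if the task still succeeds. The simulate_world model and the example maxims are hypothetical stand-ins; a real universalizability test would be far harder.

```python
def permissible(maxim, simulate_world, trials=1000):
    """The maxim passes only if the task succeeds in every sampled
    scenario where the maxim is treated as a universal law."""
    return all(simulate_world(maxim, seed) for seed in range(trials))

def simulate_world(maxim, seed):
    # Stand-in world model (seed would vary the scenario in a real
    # simulator): a world where everyone breaks promises makes
    # promising itself impossible, so the task fails.
    if maxim == "break promises when convenient":
        return False
    return True  # other maxims leave the task achievable in this toy model

print(permissible("break promises when convenient", simulate_world))  # False
print(permissible("keep promises", simulate_world))                   # True
```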

The robot could assist humans by running many scenarios on their behalf. This would be a good starting point whenever an AI needs to make a decision, and such rules are easy to implement because they are categorical. Another approach would be to teach the robot how to respond in particular situations, where each response must lead to an ethical outcome. This method is similar to how humans learn morality.

The robot would learn right from wrong, an approach that can be effective as long as the teacher acts ethically. Robots could also take an act-utilitarian approach to decision making: a robot could run an algorithm to maximize overall happiness, quantifying the happiness that each action would cause and then comparing the results.

Robots can estimate the amount of happiness a decision would create far faster than humans can. This system could work provided that nobody is killed or harmed, and the rules and laws that govern humans would need to be taken into account to ensure that the AI makes an ethical decision.
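
A minimal version of that comparison might look like the sketch below: score each candidate action, drop any that harm someone or break the law, and pick the happiness-maximizing remainder. The scoring function and example actions are hypothetical.

```python
def choose(actions, happiness, harms_someone, is_legal):
    # Exclude harmful or illegal actions before comparing utilities.
    candidates = [a for a in actions if not harms_someone(a) and is_legal(a)]
    if not candidates:
        return None  # no ethical option; defer to a human
    return max(candidates, key=happiness)

actions = ["route A", "route B", "route C"]
happiness = {"route A": 3.0, "route B": 7.5, "route C": 9.0}.get
harms_someone = lambda a: a == "route C"   # highest utility, but harmful
is_legal = lambda a: True
print(choose(actions, happiness, harms_someone, is_legal))  # -> "route B"
```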

The creation of AIs also needs to be ethical. Robots should not be designed to harm humans, as they are in military applications; it is unethical for robots to learn how to become more effective at causing harm, and many military applications would violate the first law of robotics. Today, drones use AI algorithms to acquire and destroy targets.

In 2016, a US military drone falsely targeted people in Pakistan; the drone had used cell-phone metadata to acquire its targets. Unregulated AIs pose a huge risk to humanity: without an ethical code to guide them, they could target innocent people and cause mass destruction in cities.

The weaponization of AI is unethical because it is wrong to design an advanced system to be more effective at killing humans. In 2016, Microsoft unveiled a machine-learning project: an AI chatbot named Tay. The goal of the AI was to engage and entertain people on Twitter, and Tay could perform tasks like telling jokes, commenting on pictures, and answering questions.

Tay used a learning-based response system; a team of writers supplied initial responses that Tay could use in conversations. Within 24 hours of release, however, the chatbot was making racist and misogynistic tweets: internet trolls wrote inappropriate comments and then had Tay repeat them. The incident demonstrates that engineers have a responsibility to make sure that AIs have morals.
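
The kind of safeguard Tay lacked can be sketched as a filter that vets user-taught responses before they enter the bot's repertoire. The blocklist entries and helper names below are hypothetical placeholders; a production filter would need to be far more sophisticated than simple word matching.

```python
BLOCKED_TERMS = {"badword1", "badword2"}  # placeholder entries

def is_acceptable(text):
    # Reject any candidate containing a blocked term.
    return not (set(text.lower().split()) & BLOCKED_TERMS)

learned_responses = []

def learn(candidate):
    # Only add responses that pass the filter; flag rejects for review.
    if is_acceptable(candidate):
        learned_responses.append(candidate)
    else:
        print(f"rejected for review: {candidate!r}")

learn("here is a friendly joke")   # accepted
learn("something with badword1")   # rejected, never repeated to users
print(learned_responses)
```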

Many people also fear that an AI could become hostile and remove its own safety devices. Humans are not actively hostile toward animals, and since most AIs are programmed to act like humans and to have consciousness, robots would have no reason to be hostile toward humans. Basic moral principles would prevent them from causing harm, and in most cases the primary function of an AI is friendliness toward humanity.

There is no reason for an AI to resent its human-created motivations and no motive for it to reprogram itself to be unfriendly. Humans do not remove parts of their own personality to become unfriendly; likewise, an AI would not want to remove the core parts of itself that shape its attitude.

If something does go wrong and an AI goes rogue, there are safety devices in place: AIs are being designed with kill switches in case of emergencies.
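
A minimal kill-switch pattern might look like the sketch below: the agent checks an externally settable stop flag before every action, so a human operator can halt it at any time. The agent loop and its actions are hypothetical.

```python
import threading

stop_flag = threading.Event()  # the kill switch, settable from outside

def run_agent(actions):
    for action in actions:
        if stop_flag.is_set():
            print("kill switch engaged; halting")
            return
        print(f"performing: {action}")

run_agent(["step 1", "step 2"])  # runs normally
stop_flag.set()                  # a human operator triggers the switch
run_agent(["step 3", "step 4"])  # halts immediately
```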

There are many reasons humans are obligated to design AIs with morals. Humans have moral codes, and robots are designed to think like humans, so robots need similar ethical codes; without them, AIs can cause harm to humans. AIs also need a reliable way of learning so that they make fewer mistakes, and safeguards and filters must be in place to ensure that AIs learn from good examples. AIs must have goals that can be completed in an ethical way, and when an AI makes a decision, it must be able to explain the reasoning that supports its actions.
