Robotics technology has taken a new turn in its effort to improve human acceptance of technological products. Ethical robots are defined as robots that can determine what is right and what is wrong in a given circumstance or situation. However, numerous concerns have been raised about the downside of having robots that can, to a certain degree, think like a human. The primary objective of this business report is to provide guidance on the ethical issues surrounding ethical robots. The discussion draws on the study by Vanderelst and Winfield (2018), “The Dark Side of Ethical Robots,” and focuses on the risks that artificial intelligence and machine learning pose to businesses and society.
Artificial intelligence has received a mixed reception regarding its impact on the modern world. For AI enthusiasts, the ability to bring a science-fiction concept to life is a marvel. Critics, on the other hand, question the ethical, social and economic impact that robots carry. The looming issue concerns the ethics of robots operating in today’s world. According to Vanderelst and Winfield (2018), ethical robots have been invented to address this issue; even so, concerns remain about the moral knowledge such robots can possess in line with their duties. This topic interests me, and I find it fascinating that despite roboticists’ efforts to improve robots from an ethical perspective, persistent issues still surround the invention. The primary bone of contention for Vanderelst and Winfield (2018) is that ethical robots are programmed only to evaluate an action or a situation and determine the best outcome. Moreover, Vanderelst and Winfield (2018) conclude that if ethical robots can be built, this may pave the way for unethical robots. I concur with the researchers on the basis that for every good there must be an evil; it is the yin and yang of life.
The ethical issue that Vanderelst and Winfield (2018) discuss is the possibility that ethical robots may cause more harm than good. In the domain of ethics, a human being can decide whether to continue with an action or to stop. With a robot, however, and the ethical robot in particular, this may not be an option. The experiments carried out by Vanderelst and Winfield (2018) make it clear that a robot cannot freely decide whether to ‘stop’ or ‘continue.’ Based on my assessment of those experiments, a robot, which is a derivative of a computer, only has programmable options. This default setting in the computer-based humanoid permits the robot only to choose between going ahead on the left or going ahead on the right. These are the challenges highlighted in Vanderelst and Winfield’s (2018) study on ethical robotics.
Based on the experiments conducted by Vanderelst and Winfield (2018), it is evident that an ethical robot may not know how to decide in a situation by weighing what the consequences would be. For instance, if an ethical robot reaches a crossroads with an eight-year-old girl on the right and an elderly lady on the left, the robot is bound to choose one of the two options (Alaieri and Vellino, 2016). The consequence is that the ethical robot may not know which scenario is best from a moral perspective. In other words, an ethical robot is a fixed embodiment of a programmable right and wrong. To achieve what is humanly moral, Vanderelst and Winfield (2018) propose that robots would have to be trained on each given scenario to determine the best outcome.
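The crossroads dilemma can be made concrete with a minimal sketch. The scenario names and harm scores below are entirely hypothetical and not taken from the study: the point is that the robot does not reason morally, it merely compares numbers a human programmer assigned in advance.

```python
# Hypothetical sketch: an "ethical" robot choosing between
# pre-programmed options by comparing harm scores assigned by a
# human. The numbers are illustrative; choosing them is precisely
# the moral problem the robot cannot solve on its own.

def choose_action(options: dict) -> str:
    """Return the option with the lowest pre-assigned harm score."""
    return min(options, key=options.get)

# The crossroads scenario: the robot sees only numbers, not people.
scenario = {"swerve_left": 0.7, "swerve_right": 0.7}
print(choose_action(scenario))  # a tie resolves arbitrarily: "swerve_left"
```

Because the scores are fixed at programming time, the robot’s “decision” is only a lookup; changing its morality means changing the numbers.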
Artificial Intelligence and Machine Learning
The main risk that machine learning and artificial intelligence pose to society and businesses is unemployment. It is estimated that more than 70 million human jobs will be phased out to accommodate the new robotics. I believe this follows from the fact that robots are programmed and built to perform duties correctly and without mishap. For example, in the health care industry, machines are in use because of their precision in handling surgeries, according to Alaieri and Vellino (2016). Their efficiency has been compared with that of humans, and the results outrank even the most experienced surgeons in the world. The downside of empowering robots on behalf of humans is that the share of work left for humans shrinks relative to robotics. In time, it is estimated that nearly three-quarters of the positions in the job market will be held by robots, rendering human skills redundant.
From a societal perspective, it is clear, in my opinion, that most manufactured robots have particular facial features and color tones. This may imply that robots can be racist or sexist. Female-presenting robots overwhelmingly outnumber male-presenting ones. The tendency to manufacture more female robots probably stems from the association of femaleness with humility, a calm voice and nurturing tendencies, compared with male voices (Alaieri and Vellino, 2016). Moreover, I believe robots are also created with skin tones that favor the majority population races over the minority races. The phenomenon raises questions about AI bias and, further, about social unfairness. In the long run, this may cause problems with regard to people’s trust.
I also believe that machine learning and artificial intelligence in robotics may provoke security threats. As Vanderelst and Winfield (2018) mention in the study, the premise of building an ethical robot can give birth to an unethical robot. Cybersecurity threats in this modern age are persistent, and humans must constantly keep up with the latest ways to stay safe both virtually and physically. The question, therefore, concerns malicious infiltration through artificial intelligence, where robots could cause more harm than good.
Another issue is that, in business, robots may overtake humans in perception and the capacity to learn. It is evident that robots can attain greater intellectual competencies than humans. In my view, robots are programmed to learn, and their ability to learn can surpass both human learning and human intellect. The reason humans are on top today is their capacity to ensure that the species remains there (Alaieri and Vellino, 2016). But the growing drive to improve robots’ ability to decide at a moral and ethical level may soon bring competition.
Furthermore, machine learning and artificial intelligence are associated with artificial error. There is the common phrase that man is to failure as failure is to man. Considering that man is the creator of these machines, there is a possibility of limitations to their intelligence and machine learning capabilities. For instance, Vanderelst and Winfield (2018) point out that robots can only articulate what is right and wrong based on human guidance. I concur with the human-robot training experiment conducted by Vanderelst and Winfield (2018): robots can only understand what is ethical through repeated, programmable practice. Nonetheless, limitations remain, especially when programs and algorithms are placed on the table. In situations where a robot has made more than one error, including putting its owner in troubling circumstances, would that mean the robot is faulty?
In summary, artificial intelligence and machine learning can pose more risks to business and society than the intended benefits. The questions turn on whether humans feel comfortable with the advancement of artificial intelligence. As Vanderelst and Winfield (2018) show, ethical robots, for instance, may pave the way for unethical robots. The consequences of machine learning and artificial intelligence may prove too detrimental to man.
Designing Ethical Robots
Numerous methods have been applied to give robots an ethical sense from a human perspective. The prominent approach has been creating an autonomous robot that adopts human-like emotions to motivate behavioral modification (Alaieri and Vellino, 2016). Emotions within robots create a sense of guilt and satisfaction, which provides the ethical premise. The ethical regulation of robot actions is based on diverse sets of bounds specific to their purposes. Vanderelst and Winfield (2018) determine that the most critical emotion to build into robots is guilt. Guilt is an emotion that allows the robot to ‘feel’, produced whenever an ethical constraint is violated; guilt is therefore regarded as a critical motivator of moral behavior. In normal cases, according to Vanderelst and Winfield’s (2018) study on ethical robots, moral behavior is programmed through social teaching, in which behavioral modification follows from deciphering the consequences of previous actions. However, there is a limitation in this ideology: robots must not exceed their specified thresholds. The reason is that robots’ abilities should be temporary and therefore restricted. For instance, a military robot might not have access to certain weapons.
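The role of guilt described above can be illustrated with a small sketch. The class, the threshold and the severity values below are my own invention, not an implementation from Vanderelst and Winfield (2018): each violation of an ethical constraint raises a guilt level, and once the level crosses a threshold, further action is inhibited.

```python
# Illustrative sketch of "guilt" as a behavioral regulator:
# violations of ethical constraints accumulate, and past a
# threshold the robot refuses to act. Names and values are hypothetical.

class GuiltModel:
    def __init__(self, threshold: float = 1.0):
        self.guilt = 0.0
        self.threshold = threshold

    def record_violation(self, severity: float) -> None:
        """Each ethical-constraint violation adds to the guilt level."""
        self.guilt += severity

    def may_act(self) -> bool:
        """Behavior is inhibited once accumulated guilt crosses the threshold."""
        return self.guilt < self.threshold

robot = GuiltModel()
robot.record_violation(0.4)
print(robot.may_act())   # True: guilt 0.4 is below the threshold
robot.record_violation(0.8)
print(robot.may_act())   # False: guilt 1.2 now inhibits action
```

The design choice here mirrors the text: guilt is not understanding, only a numeric brake that a programmer has bounded in advance.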
In summary, an ethical robot has to operate within the confines of ethical regulations. This involves predicting the outcomes of possible actions and assessing the anticipated results against the rules. To understand the ethical rules, robots are subjected to programs that mimic human activities. According to Vanderelst and Winfield (2018), human motions can be used to program a robot for a range of possible actions. In my opinion, this means that a robot has to be trained for every action foreseeable by man. The time, money and effort involved are unfathomable.
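The predict-then-assess loop can be sketched as follows. The candidate actions, the toy world model and the single hard rule are simplified placeholders of my own, not the authors’ architecture: simulate each candidate action, reject any whose predicted outcome breaks a rule, and act only on what remains.

```python
# Simplified sketch of an ethical-evaluation loop: predict the
# outcome of each candidate action, check it against the rules,
# and keep only the permitted actions. All names are illustrative.

def predict_outcome(action: str) -> dict:
    # Placeholder world model: maps an action to a predicted outcome.
    outcomes = {
        "proceed": {"harms_human": True},
        "stop": {"harms_human": False},
        "reroute": {"harms_human": False},
    }
    return outcomes[action]

def violates_rules(outcome: dict) -> bool:
    # A single hard rule standing in for a full ethical rule set.
    return outcome["harms_human"]

def permitted_actions(candidates: list) -> list:
    return [a for a in candidates if not violates_rules(predict_outcome(a))]

print(permitted_actions(["proceed", "stop", "reroute"]))  # ['stop', 'reroute']
```

The unfathomable cost noted above shows up directly here: the placeholder `outcomes` table would have to enumerate every scenario the robot could ever meet.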
Ethical Consideration of Ethical Robots in Businesses
There is a need to put into perspective the ethical risks associated with robots in business, including human replacement. Introducing robots that lack or minimize human error is a plus for any company. In my opinion, robots can improve productivity and therefore the efficiency of business operations. On the other hand, the ethical issue is that robots will replace people, causing unemployment and a consequent economic downturn for workers. Phasing out employees could mean that most people cannot afford decent health care or decent housing. This may therefore be an economic setback in the long run.
Furthermore, the ethics can cut the other way: robots minimize the health risks and accidents associated with human labor. Robots are manufactured to perform consistently and correctly, which allows perfectionism in handling tasks such as industrial manufacturing processes. Businesses, by comparison, must grapple with issues of employee safety and the work environment, which can be tedious and exhaust resources, and this trickles down to revenue generation. With robots, businesses can reduce the cost of keeping the workplace safe and secure.
Nevertheless, the intelligence of robots compared with humans may cause a rift. My opinion is that robots are manufactured in a manner that replicates rational thinking in man; a robot can decipher a situation and determine the best action. However, Srivastava, Garg and Mishra (2017) note that critics question the probability of artificial error in robots. As noted earlier, error is to man as man is to failure, and this statement strongly points to the potential for deficiency in robots.
Also, understanding robots’ ethical issues means gauging whether they are a threat or a solution to the business. One warning is that, in time, robots may make executive decisions, which could also phase out executive officers within a company (Srivastava, Garg and Mishra, 2017). In advanced robotics, I expect that robots may in time know what to decide when placed in an executive position; in most cases, the robot would need all the information, including the statistics and balance sheets of the business, to determine the best solution. From an ethical perspective, therefore, robots may be more of a threat than a solution.
Robots will, in time, be omnipresent in workplaces and in society, owing to increased production to meet the demand for the technology. Robotics in the current world is advancing relentlessly in trying to ‘make life better.’ That statement is arguable, considering the numerous mishaps some robots have caused; nonetheless, the promise of robots is that they make life better. For instance, robots can be made as assistants (Salem et al., 2015): the owner programs the robot to perform home tasks such as cleaning the house or folding the laundry. The possibilities of what robotics can achieve have set the pace, and provided the platform, for motivating roboticists to create a robot for every scenario.
The current demand for robots is diverse, spanning industrial to home use. Roboticists have learned how to make different types of robots, from the competitive to the aggressive to the ethical. In my opinion, robots will reduce workload whether at home or in the workplace. Consider that, in the current world, most household chores already involve machine-assisted duties, from washing machines to dishwashers (Salem et al., 2015). It is therefore not difficult to imagine a robot taking over the task of aiding humans in their daily duties. Consequently, I support the statement that ‘robots will be ubiquitous in our workplaces and society.’
In my view, the future of robotics will be bigger and more advanced than what we are witnessing. The expectation is that engineers will design a next generation of robots with human-like physical features. The assumption is that these robots may look and feel human, which may make them appear charming and easy to work with from a human perspective. For instance, the robots may have realistic-looking hair, or embedded sensors that allow them to react naturally to their surroundings, much like their human counterparts. From a business standpoint, robotics may provide a long-lasting solution for replacing humans at certain workstations, or entirely; the current trend is that robots can replace man in areas from the service industry to manufacturing, based on the benefits and reduced costs associated with robots.
Moreover, robots may take on functions too dangerous for humans, such as rescue. For example, the prediction is that roboticists may produce terrain robots capable of working alongside humans, sharing locational information and providing search patterns. To an extent, this may be among the best ethical outcomes for robots in the future. The downside, nonetheless, is that robotics may endanger people through competition: in my view, robots that are made to think aggressively are conditioned to always act on that perspective. In summary, the future of robotics is bright, but there may, in my opinion, be more consequences than benefits.
Alaieri, F. and Vellino, A., 2016, November. Ethical decision making in robots: Autonomy, trust, and responsibility. In International Conference on Social Robotics (pp. 159-168). Springer, Cham. https://www.ruor.uottawa.ca/bitstream/10393/35163/4/Robots-Paper-Final.pdf
Salem, M., Lakatos, G., Amirabdollahian, F., and Dautenhahn, K., 2015, October. Towards safe and trustworthy social robots: ethical challenges and practical issues. In International Conference on Social Robotics (pp. 584-593). Springer, Cham. https://uhra.herts.ac.uk/bitstream/handle/2299/18650/ICSR2015_SalemEtAl_submitted2.pdf?sequence=2&isAllowed=y
Srivastava, M., Garg, R., and Mishra, P.K., 2017. Analysis of Robot Detection Approaches for Ethical and Unethical Robots on Web Server Log. International Journal of Advanced Research in Computer Science, 8(5).
Vanderelst, D. and Winfield, A., 2018, December. The dark side of ethical robots. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (pp. 317-322). ACM. https://arxiv.org/pdf/1606.02583