Today, artificial intelligence (AI) is widely used in a number of different domains to support human decision making. Although the intelligence of a decision-making machine is a questionable matter, since such systems could be seen as purely algorithmic processes, the benefits of using machines to aid our decisions are unquestionable. By combining uninterrupted operation, greater computing power and an appropriate algorithm, a machine can achieve far superior problem-solving ability. Applications include market prediction in the financial sector, weather forecasting and biological simulation in engineering and the exact sciences, as well as the management and medical fields. For example, computers are used in medicine to process magnetic resonance imaging (MRI) scans to recognize tumours, and in linguistics to aid understanding of the language acquisition process. Furthermore, artificial intelligence is used by space agencies such as NASA for automated lunar and planetary terrain analysis, and in aviation systems to aid navigation in poor visibility conditions (Zia-ur Rahman et al., 2007).
In many cases, we (humans) are exposed to information which can be considered confidential, sensitive or ethically charged. In those cases, the decision-making process involves moral judgements. According to Blay Whitby (2008), “these include systems that give advice on patient care, on social benefit entitlement, and even ethical advice for medical professionals”. Even in more traditional information technology domains, data such as usage statistics could be seen as ethically sensitive and needs to be handled appropriately. Therefore, the philosophical debate around, and the possible development of, a moral status for artificial intelligence should be examined carefully.
In 1942, Isaac Asimov proposed the “three laws of robotics”, which are considered by some as an “ideal set of rules for such machines” (Susan Leigh Anderson, 2008):
- A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the first law.
- A robot must protect its own existence as long as such protection does not conflict with the first or second law.
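The laws form a strict precedence hierarchy: the First Law overrides the Second, which in turn overrides the Third. As a purely illustrative sketch (the names, data structure and scoring rule below are hypothetical and not drawn from Asimov or the cited sources), such a hierarchy could be expressed as a lexicographic preference over candidate actions:

```python
# Illustrative sketch only: the Three Laws as a strict precedence ordering
# over candidate actions. All names and fields here are hypothetical.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool          # would the action injure a human?
    prevents_human_harm: bool  # does the action avert harm to a human?
    obeys_order: bool          # does the action follow a human order (Second Law)?
    preserves_robot: bool      # does the action protect the robot itself (Third Law)?

def permitted(action: Action, inaction_causes_harm: bool) -> bool:
    """First Law: never injure a human, and do not allow harm through inaction."""
    if action.harms_human:
        return False
    if inaction_causes_harm and not action.prevents_human_harm:
        return False  # failing to avert the harm is also ruled out
    return True

def choose(actions: list[Action], inaction_causes_harm: bool = False) -> Action | None:
    """Pick an action by lexicographic priority: First Law > Second Law > Third Law."""
    candidates = [a for a in actions if permitted(a, inaction_causes_harm)]
    if not candidates:
        return None  # no action satisfies the First Law
    # Among First-Law-compliant actions, prefer obedience, then self-preservation.
    return max(candidates, key=lambda a: (a.obeys_order, a.preserves_robot))

if __name__ == "__main__":
    options = [
        Action("follow order into danger", harms_human=False,
               prevents_human_harm=False, obeys_order=True, preserves_robot=False),
        Action("refuse order and stay safe", harms_human=False,
               prevents_human_harm=False, obeys_order=False, preserves_robot=True),
    ]
    best = choose(options)
    print(best.description if best else "no permissible action")
```

In this toy example the robot follows the dangerous order rather than preserving itself, since the Second Law takes precedence over the Third; the sketch is only meant to show that the laws can be read as an ordered decision procedure, not as a workable implementation of machine ethics.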
Bibliography
- Anderson, Susan Leigh. 2008. "Asimov’s “three laws of robotics” and machine metaethics." AI & Society 22, no. 4: 477-493. Computers & Applied Sciences Complete, EBSCOhost (accessed November 28, 2010).
- Bringsjord, Selmer. 2008. "Ethical robots: the future can heed us." AI & Society 22, no. 4: 539-550. Computers & Applied Sciences Complete, EBSCOhost (accessed November 28, 2010).
- Brookshear, J. G. 2007. Computer Science: An Overview. 9th ed. Boston: Pearson Education, 452-500.
- Whitby, Blay. 2008. "Computing machinery and morality." AI & Society 22, no. 4: 551-563. Computers & Applied Sciences Complete, EBSCOhost (accessed November 28, 2010).
- Rahman, Zia-ur, Daniel J. Jobson, and Glenn A. Woodell. 2007. Pattern Constancy Demonstration of Retinex/Visual Servo (RVS) Image Processing. NASA. http://dragon.larc.nasa.gov/VIP/pattern_constancy_demonstration_report.pdf (accessed November 28, 2010).