Sunday, November 28, 2010

The Benefits and Perils of Artificial Intelligence

Artificial intelligence is a field of computer science that seeks to build machines capable of completing complex tasks without human intervention (Brookshear 2007).
Today, artificial intelligence (AI) is widely used in a number of domains to support human decision making. Although whether a decision-making machine is genuinely intelligent is questionable, since its behaviour can be seen as a purely algorithmic process, the benefits of using machines to aid our decisions are not. Combining uninterrupted operation, greater computing power, and appropriate algorithms, a machine can have far superior problem-solving ability. Applications include market prediction in the financial sector, weather forecasting and biological simulation in engineering and the exact sciences, as well as uses in management and medicine. For example, computers are used in the medical field to process MRI images to recognise tumours, and in linguistics to aid understanding of the language acquisition process. Furthermore, artificial intelligence is used by space agencies such as NASA for automated lunar and planetary terrain analysis, and in aviation systems to aid navigation in poor visibility conditions (Rahman et al., 2007).
In many cases, we (humans) are exposed to information that can be considered confidential, sensitive or ethically charged. In those cases, the decision-making process involves moral judgements. According to Blay Whitby (2008), “these include systems that give advice on patient care, on social benefit entitlement, and even ethical advice for medical professionals”. Even in more traditional information technology domains, data such as usage statistics can be ethically sensitive and need to be handled appropriately. Therefore, the philosophical debate over, and the possible development of, a moral status for artificial intelligence should be examined carefully.
Isaac Asimov proposed the ‘‘three laws of robotics’’ (first stated in 1942), which are considered by some an “ideal set of rules for such machines” (Susan Leigh Anderson, 2008):
  1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the first law.
  3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.
Susan Leigh Anderson (2008) argues, however, that Asimov’s three laws are “an unsatisfactory basis for machine ethics”. The question of “meta-ethics”, and the development of moral judgement in artificial intelligence, can be seen as major obstacles still to be overcome.
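Whatever their merits as an ethical foundation, the three laws do describe a strict priority ordering: each law binds only insofar as it does not conflict with the laws above it. A minimal sketch in Python can make that structure explicit (all field names here are hypothetical illustrations, not any real robotics system):

```python
# Asimov's three laws modelled as a strict priority ordering.
# An action is rejected by the first law it violates; a lower-ranked
# law can never override a higher-ranked one. The dictionary keys
# ("harms_human", etc.) are invented for this illustration.

LAWS = [
    # First Law: a robot may not injure a human being.
    ("First Law", lambda a: not a["harms_human"]),
    # Second Law: obey orders, except where they conflict with the First Law.
    ("Second Law", lambda a: a["obeys_order"] or a["order_conflicts_first_law"]),
    # Third Law: protect own existence, unless that conflicts with laws above.
    ("Third Law", lambda a: not a["endangers_self"] or a["protection_conflicts_higher_law"]),
]

def evaluate(action):
    """Return 'permitted', or the name of the first law the action violates."""
    for name, rule in LAWS:
        if not rule(action):
            return f"forbidden by {name}"
    return "permitted"
```

For instance, refusing an order is permitted when the order itself conflicts with the First Law, because the Second Law's obligation to obey is explicitly subordinate. The difficulty Anderson points to is precisely that real moral situations rarely reduce to such clean boolean conditions.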


  • Anderson, Susan Leigh. 2008. "Asimov’s “three laws of robotics” and machine metaethics." AI & Society 22, no. 4: 477-493. Computers & Applied Sciences Complete, EBSCOhost (accessed November 28, 2010).
  • Bringsjord, Selmer. 2008. "Ethical robots: the future can heed us." AI & Society 22, no. 4: 539-550. Computers & Applied Sciences Complete, EBSCOhost (accessed November 28, 2010).
  • Brookshear, J. G. 2007. Computer Science: An Overview. 9th ed. Boston: Pearson Education, pp. 452-500.
  • Whitby, Blay. 2008. "Computing machinery and morality." AI & Society 22, no. 4: 551-563. Computers & Applied Sciences Complete, EBSCOhost (accessed November 28, 2010).
  • Rahman, Zia-ur, Daniel J. Jobson, and Glenn A. Woodell. 2007. Pattern Constancy Demonstration of Retinex/Visual Servo (RVS) Image Processing. NASA. Available from: (accessed November 28, 2010).
