Sunday, November 28, 2010

Benefits and Perils of Artificial Intelligence

Artificial intelligence is a field of computer science that seeks to build machines capable of completing complex tasks without human intervention (Brookshear, 2007).
Today, artificial intelligence (AI) is widely used in a number of different domains to support human decision making. Although the intelligence of a decision-making machine is questionable, since such systems can be seen as purely algorithmic processes, the benefits of using machines to aid our decisions are not. Combining uninterrupted operation, high computing power, and an appropriate algorithm, a machine can have far superior problem-solving ability. Applications include market prediction in the financial sector, weather forecasting and biological simulation in engineering and the exact sciences, as well as management and medicine. For example, computers are used in medicine to process MRI images and recognize tumours, and in linguistics to aid understanding of the language acquisition process. Furthermore, artificial intelligence is used by space agencies such as NASA for automated lunar and planetary terrain analysis, and in aviation systems to aid navigation during poor visibility conditions (Zia-ur Rahman et al., 2007).
In many cases, we (humans) are exposed to information that can be considered confidential, sensitive, or ethically charged. In such cases, the decision-making process includes moral judgements. According to Blay Whitby (2008), “these include systems that give advice on patient care, on social benefit entitlement, and even ethical advice for medical professionals”. Even in more traditional information technology domains, data such as usage statistics can be ethically sensitive and need to be handled appropriately. Therefore, the philosophical debate around, and the possible development of, the moral status of artificial intelligence should be examined carefully.
In 1942, Isaac Asimov proposed the “three laws of robotics”, which are considered by some an “ideal set of rules for such machines” (Susan Leigh Anderson, 2008):
  1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the first law.
  3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.
Susan Leigh Anderson (2008) argues that Asimov’s three laws of robotics are “an unsatisfactory basis for machine ethics”. The question of “meta-ethics” and the development of moral judgement in artificial intelligence can be seen as major obstacles still to be overcome.


  • Anderson, Susan Leigh. 2008. "Asimov’s “three laws of robotics” and machine metaethics." AI & Society 22, no. 4: 477–493. Computers & Applied Sciences Complete, EBSCOhost (accessed November 28, 2010).
  • Bringsjord, Selmer. 2008. "Ethical robots: the future can heed us." AI & Society 22, no. 4: 539–550. Computers & Applied Sciences Complete, EBSCOhost (accessed November 28, 2010).
  • Brookshear, J. G. 2007. Computer Science: An Overview. 9th ed. Boston: Pearson Education Inc., pp. 452–500.
  • Whitby, Blay. 2008. "Computing machinery and morality." AI & Society 22, no. 4: 551–563. Computers & Applied Sciences Complete, EBSCOhost (accessed November 28, 2010).
  • Zia-ur Rahman, Daniel J. Jobson, and Glenn A. Woodell. 2007. Pattern Constancy Demonstration of Retinex/Visual Servo (RVS) Image Processing. NASA. Available from: (accessed November 28, 2010).

Sunday, November 7, 2010

Interpreted vs. Compiled Code

Source code needs to be translated from a set of language primitives into instructions understood by the machine (CPU). A compiled program is translated into machine-specific (CPU-specific) instructions ahead of time, whereas an interpreted program is translated into machine instructions at runtime. This has a number of associated weaknesses but also allows certain functionality that cannot be implemented in a compiled program.
Interpreted programs are usually associated with slower runtime performance, because the code must be translated while the program runs. In addition, it is much more difficult to enforce intellectual property rights, since the source code itself must be distributed. Both issues are partially addressed by languages such as Java and C#, which translate the code into bytecode that is in turn executed by a runtime such as the Java Virtual Machine. As noted by Peter Haggar (2001), “bytecode is an important part of the size and execution speed of your code”.
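Python offers a convenient way to see this intermediate form: CPython also compiles source into bytecode before interpreting it, and the standard-library dis module can display it. A minimal sketch (the function add is just an illustrative example):

```python
import dis

def add(a, b):
    # CPython compiles this function's body to bytecode once;
    # the interpreter then executes that bytecode at runtime.
    return a + b

# Human-readable listing of the compiled instructions.
listing = dis.Bytecode(add).dis()
print(listing)

# The raw bytecode lives on the function's code object --
# this is what gets executed, not the original source text.
print(add.__code__.co_code)
```

The listing shows instructions such as RETURN_VALUE rather than the original source, which is exactly the intermediate representation the paragraph describes for Java and C#.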
On the other hand, interpreted languages allow software developers to do things that cannot be done in a compiled language. For example, since the source code is usually stored as text, it can be manipulated by the program at runtime, which allows the program to modify or mutate itself. In addition, interpreted languages are usually associated with an easier software development process, since it is not necessary to recompile the code every time it changes. This is particularly useful for operating system (OS) administrators, who often tweak a script executed by the OS itself to suit a specific need, without the recompilation step associated with compiled programs. Another benefit of an interpreted language is the portability of the resulting program or script, since the machine-level translation happens when the program is executed. This is demonstrated by the fact that a program written in C or C++ usually requires recompilation for every execution platform, while code written in Java does not.
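The self-modification point can be sketched in a few lines of Python, where source kept as a plain string is evaluated, rewritten, and re-evaluated at runtime (the greet function and its messages are invented for the example):

```python
# Build a function from a source string at runtime -- something a
# statically compiled program cannot do without shipping a compiler.
source = "def greet(name):\n    return 'Hello, ' + name\n"

namespace = {}
exec(compile(source, "<generated>", "exec"), namespace)
print(namespace["greet"]("world"))   # Hello, world

# The program can rewrite its own source and evaluate it again.
source = source.replace("Hello", "Goodbye")
exec(compile(source, "<generated>", "exec"), namespace)
print(namespace["greet"]("world"))   # Goodbye, world
```

Here the "program" literally mutates its own behaviour by editing the text it runs, which is the capability the paragraph attributes to interpreted languages.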
In reality, there is no clear distinction between compiled and interpreted software development languages. According to Wikipedia (n.d.), “Many languages have been implemented using both compilers and interpreters, including Lisp, Pascal, C, BASIC, and Python”; therefore, the discussion should focus on a specific implementation of a language rather than on the language itself.


Structured vs. Object Oriented Paradigm

According to K. Hopkins (2001), the structured, or imperative, programming paradigm is built on three fundamental constructs: sequence, decision, and repetition. Brookshear (2007) defines the structured programming paradigm as a “programming process to be developed of a sequence of commands that, when followed, manipulate the data to produce the desirable result”. It can therefore be summarized as the process of implementing an algorithm as a sequence of commands. In practice, the structured programming paradigm suffered from a number of crucial issues, such as memory leaks and deep, hard-to-maintain procedure hierarchies.
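The three constructs can be seen together in a tiny structured routine; the function and its input are illustrative only:

```python
def sum_of_evens(numbers):
    """Structured-programming sketch: an algorithm expressed as a
    sequence of commands plus decision and repetition constructs."""
    total = 0                 # sequence: commands run one after another
    for n in numbers:         # repetition: loop over the input
        if n % 2 == 0:        # decision: branch on a condition
            total += n
    return total

print(sum_of_evens([1, 2, 3, 4, 5, 6]))  # 12
```

Everything the routine does is expressed purely as control flow over data, with no objects involved, which is the essence of the imperative style described above.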
The object-oriented programming paradigm decomposes the system into a collection of objects, each capable of performing functionality related to itself. In addition, objects can interact with each other, forming a complex network of relationships to solve the problem at hand. The software development process under the object-oriented paradigm is described by Mohamed, Hegazy, and Dawood (2010) as one of orchestrating a data flow between a collection of objects, each manipulating the data to solve the software problem. This, in turn, allows better reuse of objects and decoupling between classes. The book published by Erich Gamma et al. (sometimes called the “Gang of Four”) describes common programming problems and patterns for solving them, allowing reuse not only of source code (objects) but of design patterns as well.
According to Brookshear (2007), the object-oriented programming paradigm is based on five key elements: class, abstraction, encapsulation, inheritance, and polymorphism.
  • Class defines a data object, including characteristics such as attributes and methods (operations). For example, a class Fish would include attributes such as eyes, mouth, and fins, and would be able to perform operations such as eat and swim.
  • Inheritance is a technique for describing similar yet distinct class characteristics. For example, a class Mammal would define attributes such as eyes, mouth, and limbs, and methods such as eat and walk. A class Dog would then inherit all attributes and methods of Mammal, with the addition of specific methods such as bark. This allows a software designer to reuse already implemented classes and extend them to solve a specific problem.
  • Abstraction, according to Wikipedia (n.d.), is “the act of representing essential features without including the background details or explanations”.
  • Encapsulation refers to the ability of a class to conceal or restrict access to certain internal properties or methods. For example, the method bark could use internal helper methods to implement its functionality, but those methods remain private (i.e., accessible only by the object itself).
  • Polymorphism allows inheriting objects to adapt functionality to type-specific behaviour. For example, the classes DobermanPinscher and GermanShepherd, both inheriting from Dog, could override the implementation of the method bark to match the distinct features of that specific breed.
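The Mammal/Dog examples from the list above can be sketched in Python; note that Python enforces encapsulation only by the leading-underscore convention, and the class and attribute names follow the illustrative examples rather than any real system:

```python
class Mammal:
    """Class: bundles attributes (data) with methods (operations)."""
    def __init__(self, name):
        self.name = name
        self._energy = 0       # underscore prefix: encapsulated by convention

    def eat(self):             # abstraction: callers need not know the details
        self._energy += 1

    def speak(self):
        return "..."

class Dog(Mammal):             # inheritance: Dog reuses Mammal's members
    def speak(self):           # polymorphism: override with dog behaviour
        return "Woof"

class GermanShepherd(Dog):     # further specialization, as in the bullet above
    def speak(self):
        return "Loud woof"

rex = GermanShepherd("Rex")
rex.eat()                      # inherited all the way from Mammal
print(rex.speak())             # Loud woof
```

Calling speak on a Dog reference works regardless of which subclass the object actually is, which is the polymorphism the last bullet describes.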
Based on these characteristics, it could be said that the object-oriented paradigm better reflects the human way of thinking, since each real-world item is represented as an object within the software, together with all relevant concepts (implemented as attributes and methods). On the downside, object-oriented programming is usually associated with reduced performance and higher memory consumption, since each object has to be instantiated and then destroyed at the end of its life cycle.
It is also important to note that new software programming paradigms are constantly being developed. For example, aspect-oriented programming “seeks new modularization of software systems in order to isolate secondary or supporting functions from the main program's business” (Mohamed et al., 2010). According to the aspect-oriented paradigm, cross-cutting concerns, or aspects, such as error handling, logging, and caching should be implemented in separate functional units to solve the problem of code scattering and tangling.
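Python decorators are not full aspect-oriented weaving, but they give a reasonable sketch of the idea: the logging aspect lives in one unit, and the business function stays free of scattered logging calls. The transfer function and its doubling rule are invented purely for the example:

```python
import functools

def logged(func):
    """A cross-cutting 'aspect' (logging) kept out of the business logic.
    Decorators only approximate aspect-oriented weaving, but they show
    the same separation of a secondary concern from the main program."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__} with {args}")
        result = func(*args, **kwargs)
        print(f"{func.__name__} returned {result}")
        return result
    return wrapper

@logged
def transfer(amount):
    # Pure business logic: no logging code tangled inside it.
    return amount * 2

transfer(21)
```

The same logged aspect can be applied to any number of functions without copying logging code into each of them, which is exactly the scattering problem the paragraph mentions.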