Sunday, December 19, 2010

Who Owns Our Data?

When considering the ownership of information about a person, we need to consider a number of factors. First, we need to establish who generated the information, as this affects the entitlement to store and access it. For example, a Social Insurance Number (SIN) is generated by Service Canada when a person is born in or immigrates to Canada, so Service Canada has a legitimate business need to store it in its database. On the other hand, if the information was generated by a different entity (even the person themselves), it is questionable whether the government has a legitimate need to access it. Although the legal complexity increases as technology advances and more and more information is stored, in the majority of countries the ownership of data is governed by legislation or a privacy act.
However, as Jones, Dardick, Davies, Sutherland and Valli (2009) note, “there has also been an increasing trend in the use of the same computer to process and store both the organisation’s and the individuals personal information”, so in a corporate environment the lines are blurrier. For example, if a person uses corporate resources to send and receive email messages containing private information, is the organization potentially entitled to that information?
To examine the complexity of data ownership more thoroughly, consider the following scenario. An employee is required to provide medical, credit and personal information (previous employment, skills, etc.) prior to employment. The employer then generates information about the employee, such as salary, weekly utilization and performance measurements. Finally, during the employment the employee generates information such as documents, software code, ideas and thoughts, which could be owned by the organization or by the employee. The last case is usually covered by an employment contract, but the other cases are not always defined by a contract or applicable legislation.

Bibliography

  • College, Mitchell A. 2010. "Disclosure and Secrecy in Employee Monitoring." Journal of Management Accounting Research 22, 187-208. Business Source Premier, EBSCOhost (accessed December 19, 2010).
  • Jones, Andy, Glenn S. Dardick, Gareth Davies, Iain Sutherland, and Craig Valli. 2009. "The 2008 Analysis of Information Remaining on Disks Offered for Sale on the Second Hand Market." Journal of International Commercial Law & Technology 4, no. 3: 162-175. Academic Search Complete, EBSCOhost (accessed December 19, 2010).
  • Yekhanin, Sergey. 2010. "Private Information Retrieval." Communications of the ACM 53, no. 4: 68-73. Business Source Premier, EBSCOhost (accessed December 19, 2010).

Monday, December 13, 2010

Privacy and Data Protection Laws in Canada

Garrie and Wong (2010) state that “users of social networking sites (SNS) and platforms are realising that their personal information, given for what was believed to be a “limited purpose”, has been hijacked, sold, repackaged, misused, abused and otherwise laid bare to the world”; it is therefore imperative that governments establish data protection frameworks to protect the personal information of their citizens.
On the federal level, Canada has two privacy laws: the Personal Information Protection and Electronic Documents Act (PIPEDA) and the Privacy Act. On the provincial level, laws such as the Personal Health Information Protection Act (Ontario), the Freedom of Information and Protection of Privacy Act (Ontario), the Personal Information Protection Act (Alberta) and An Act Respecting the Protection of Personal Information in the Private Sector (Quebec) are also in force.
PIPEDA applies to organisations “who collect, use or disclose personal information in the course of commercial activities” (Treasury Board of Canada Secretariat, 2003). The act, which came fully into force in 2004, is divided into five parts and covers information about an identifiable individual, including personal health information. It establishes ground rules for the collection, exchange and disclosure of the information it covers. The Office of the Privacy Commissioner of Canada (2005) summarizes PIPEDA as follows:
  • If your business wants to collect, use or disclose personal information about people, you need their consent, except in a few specific and limited circumstances.
  • You can use or disclose people's personal information only for the purpose for which they gave consent.
  • Even with consent, you have to limit collection, use and disclosure to purposes that a reasonable person would consider appropriate under the circumstances.
  • Individuals have a right to see the personal information that your business holds about them, and to correct any inaccuracies.
  • There's oversight, through the Privacy Commissioner of Canada, to ensure that the law is respected, and redress if people's rights are violated.
The main difference between PIPEDA and the Privacy Act is that PIPEDA is consent-based: you must have consent to collect, use or disclose information. The Privacy Act is authority-based: you must ensure that you have the legal authority to collect, use or disclose information (Treasury Board of Canada Secretariat, 2003).
While the majority of legislative bodies are still playing “catch up” (Garrie and Wong, 2010), the Office of the Privacy Commissioner of Canada (OPC) is proactively looking into technologies, and uses of those technologies, with potential privacy concerns. For example, a number of studies have been conducted to identify privacy issues related to the use of RFID and street-level imaging technology (e.g. Google Street View), as well as the handling of credit card numbers and the use of social networking sites. Furthermore, the Canadian Internet Policy and Public Interest Clinic (CIPPIC) filed a complaint against Facebook Inc. for noncompliance with PIPEDA. According to Denham (2009), the central issue in the investigation was “whether Facebook was providing a sufficient knowledge basis for meaningful consent by documenting purposes for collecting, using, or disclosing personal information and bringing such purposes to individuals’ attention in a reasonably direct and transparent way”.
Furthermore, Kong (2010) notes that “after assessing the Personal Information Protection and Electronic Documents Act (PIPEDA) of Canada, the European Commission deems the transfer of data to Canadian transferees subject to this Act legal”, which creates additional business opportunities between the EU and Canada.

Bibliography

  • Austin, Lisa M. 2006. "Reviewing PIPEDA: Control, Privacy and the Limits of Fair Information Practices." Canadian Business Law Journal 44, no. 1: 21-53. Business Source Premier, EBSCOhost (accessed December 12, 2010).
  • Denham, Elizabeth. 2009. Report of Findings into the Complaint Filed by the Canadian Internet Policy and Public Interest Clinic (CIPPIC) against Facebook Inc. Under the Personal Information Protection and Electronic Documents Act [online]. Office of the Privacy Commissioner of Canada. Available from: http://www.priv.gc.ca/cf-dc/2009/2009_008_0716_e.cfm (accessed December 12, 2010).
  • Garrie, Daniel B., and Rebecca Wong. 2010. "Social Networking: Opening the Floodgates to 'Personal Data'." Computer and Telecommunications Law Review 16, no. 6: 167-175.
  • Kong, Lingjie. 2010. "Data Protection and Transborder Data Flow in the European and Global Context." European Journal of International Law 21, no. 2: 441-456.
  • Office of the Privacy Commissioner of Canada (2005), Complying with the Personal Information Protection and Electronic Documents Act [online]. Available from: http://www.priv.gc.ca/fs-fi/02_05_d_16_e.cfm (accessed December 12, 2010).
  • Office of the Privacy Commissioner of Canada (2006), RFID Technology [online]. Available from: http://www.priv.gc.ca/fs-fi/02_05_d_28_e.cfm (accessed December 12, 2010).
  • Office of the Privacy Commissioner of Canada (2009), Captured on Camera - Street-level imaging technology, the Internet and you [online]. Available from: http://www.priv.gc.ca/fs-fi/02_05_d_39_prov_e.cfm (accessed December 12, 2010).
  • Office of the Privacy Commissioner of Canada (2009), Truncated Credit Card Numbers - Why stores should print only partial credit card information on customer receipts [online]. Available from: http://www.priv.gc.ca/fs-fi/02_05_d_44_tcc_e.cfm (accessed December 12, 2010).
  • Rivkin, Jennifer. 2005. "What's a Pipeda?." Profit 24, no. 2: 11. Business Source Premier, EBSCOhost (accessed December 12, 2010).
  • Treasury Board of Canada Secretariat (2003), Personal Information Protection and Electronic Documents Act [online]. Available from: http://www.tbs-sct.gc.ca/pgol-pged/piatp-pfefvp/course1/mod2/mod2-3-eng.asp (accessed December 12, 2010).

Sunday, December 12, 2010

Data Protection – For the Rich Only?

“Preventing improper information leaks is a greatest challenge of the modern society” state Aldini and Pierro (2008). There are virtually countless channels through which sensitive data can be leaked. First, there is the question of intent: data leakage can be intentional, for example through a disgruntled employee who wishes to take a “souvenir” home, or unintentional, the result of a simple misunderstanding of security best practices. Then, the technical and business environment should be evaluated and assessed to determine the most efficient and cost-effective way to safeguard the data.
When discussing data leakage and protection in the consumer market, the boundaries between intentional and unintentional leakage blur. Security-aware consumers do not publicly disclose information such as credit card numbers, bank accounts and birth dates, so when such information does appear it is safe to assume it was published either through a lack of understanding of security best practices or through malicious information theft.
Chichowski (2010) notes seven technologies that could prevent or limit data leakage for small and medium businesses. These include hosted email security, web/URL filtering, anti-malware software, patch management and whole disk encryption. Google (2010) provides a similar checklist of eighteen items for making sure information is secure. Applying the Pareto principle, implementing these technologies could reduce a consumer's overall risk of data leakage by roughly 80%. The question arises: are these technologies for the rich only?
Instead of using locally installed email security software capable of filtering spam, detecting phishing attacks and scanning for viruses, a consumer could use web-based email accounts such as Google, Live and Yahoo, which provide different levels of security. For example, Google Mail provides all of the above-mentioned capabilities in addition to free storage space.
A number of security software vendors, including segment leaders such as Symantec and Kaspersky, offer free anti-malware scans capable of detecting “viruses, Trojans, Spyware or other malicious codes” (Kaspersky, 2010). In addition, free security software such as McAfee SiteAdvisor and AVG LinkScanner allows users to check the reputation of each website before opening it in a browser.
Today, update or patch management technologies are an integral part of operating systems and consumer applications. For example, Microsoft Windows 7, Ubuntu and Mac OS X all come with a built-in update manager that informs the user when security and regular updates become available. On Ubuntu, the patch management software also updates applications managed by the operating system, such as OpenOffice, the Firefox web browser and Adobe Reader.
Full disk encryption technology is intended to provide last-resort protection in case a laptop or desktop is stolen. Encrypting the data stored on non-volatile memory devices such as hard drives, solid state disks or removable USB devices prevents malicious users from accessing the stored information. In addition to corporate solutions such as PGP Full Disk Encryption, McAfee Endpoint Encryption and Check Point Full Disk Encryption, there are free applications capable of protecting consumer data, such as Microsoft BitLocker Drive Encryption and TrueCrypt.
It is evident that security-aware businesses and consumers have a wealth of options when it comes to technological solutions for protecting sensitive or personal information. According to AVG Technologies (2010), only “46% of identity theft victims installed antivirus, anti-spyware, or a firewall on their computer after their loss”; the main problem therefore lies in the security awareness of users rather than in the availability or cost of data leakage prevention solutions. While in a large enterprise the Chief Information Security Officer (CISO) is required to provide employees with a security awareness program, the question that remains open is: who is responsible for educating the end user in the consumer market?

Bibliography

  • Aldini, Alessandro, and Alessandra Pierro. 2008. "Estimating the maximum information leakage." International Journal of Information Security 7, no. 3: 219-242. Business Source Premier, EBSCOhost (accessed December 12, 2010).
  • AVG Technologies (2010), AVG LinkScanner [online]. Available from: http://linkscanner.avg.com/ (accessed December 12, 2010).
  • Chichowski, Ericka. 2010. "Sound the Alarm." Entrepreneur 38, no. 6: 54-59. Business Source Premier, EBSCOhost (accessed December 12, 2010).
  • Google (2010), Gmail Security Checklist [online]. Available from: http://mail.google.com/support/bin/static.py?hl=en&page=checklist.cs&tab=29488&ctx=share (accessed December 12, 2010).
  • Kaspersky (2010), Free Virus Scan [online]. Available from: http://www.kaspersky.com/virusscanner (accessed December 12, 2010).

Saturday, December 11, 2010

Security and Ethical Impact of Technological Advancement

The advancement of computer technologies provides us with ever-changing capabilities such as fast Internet access, larger storage capacity, mobile computing, electronic financial transactions, smaller and faster processors, cloud-based computing and virtualisation. These in turn are utilised by consumers and businesses to expand their operations into previously unattainable domains. For example, in the banking sector computing resources are used for tasks such as calculating risk factors and facilitating monetary transactions. Complex models that once took hours to update can now be modified within seconds, and transactions which used to take days are now instantaneous. Another example: “Cloud infrastructure can save 40% to 50% in up-front costs, allowing pricing model flexibility, including paying per use, low or no up-front costs, no minimum spent and no long term commitment” (Tisnovsky, 2010).
Cloud computing is one of the fastest-growing technological and business segments in the IT industry. At the same time, both individuals and enterprises are questioning the controls in place to safeguard information stored outside the “secure” corporate boundaries. Subashini and Kavitha (2011) note that “security is one of the major issues which reduces the growth of cloud computing and complications with data privacy and data protection continue to plague the market.”
Additional concerns are privacy and compliance issues, especially for international enterprises. Different privacy acts and regulations require companies to safeguard their data and restrict its migration to different geographical locations. In addition, different countries and regions have different security standards and compliance models (such as GLBA, HIPAA, SOX and PCI) with which organizations are required to comply, so it is imperative that these aspects are reviewed and addressed. According to statistics published by Ernst & Young (2009), “Only 34% of polled entities indicated they had an established response and management process in regards to privacy related incidents, while 32% have a documented inventory of assets covered by privacy requirements”.
Furthermore, ownership and control are additional issues that companies are concerned about when discussing the implementation of cloud-based computing. Legal issues around data ownership and the lack of complete control over access to stored information create difficulties for organisations, manifesting themselves in a number of security-related issues such as backup and disaster recovery. Tisnovsky (2010) notes that “customers need formal contractual clauses to ensure data remains available if the supplier goes out of business or is acquired and for data redundancy across multiple sites”.
Finally, the consistency and accuracy of the information should be considered when migrating sensitive data to a cloud-based infrastructure. For example, the Data Protection Act (DPA) 1998 requires entities to review the information stored for accuracy. When factoring in the issues discussed previously, such as ownership of and control over the information, a process for ensuring accuracy and consistency of the stored information should be established and, in some cases, made part of the contractual obligation with the service provider.
Given the advantages cloud-based computing offers, enterprises should ensure that data and application migration follows security best practices and standards such as the Open Web Application Security Project (OWASP) “Cloud Top 10 Security Risks” and the Cloud Security Alliance (CSA) “Security Guidance for Critical Areas of Focus in Cloud Computing”. Understanding the security and ethical issues associated with a particular technology, adopting security frameworks and performing periodic risk assessments will reduce the enterprise's negative exposure.

Bibliography

  • Bodde, D. L. 2004. Intentional Entrepreneur: Bringing Technology and Engineering to the Real New Economy. M.E. Sharpe.
  • Bublitz, Erich. 2010. "Catching The Cloud: Managing Risk When Utilizing Cloud Computing." National Underwriter / Property & Casualty Risk & Benefits Management 114, no. 39: 12-16. Business Source Premier, EBSCOhost (accessed December 8, 2010).
  • Cloud Security Alliance (2009), Security Guidance for Critical Areas of Focus in Cloud Computing V2.1 [online]. Available from: http://www.cloudsecurityalliance.org/guidance/csaguide.pdf (accessed December 8, 2010).
  • Ernst & Young. (2009). Outpacing change. 12th Annual Global Information Survey [online]. Available from: http://www.ey.com/Publication/vwLUAssets/12th_annual_GISS/$FILE/12th_annual_GISS.pdf (accessed December 8, 2010).
  • Farrell, Rhonda. 2010. "Securing the Cloud-Governance, Risk, and Compliance Issues Reign Supreme." Information Security Journal: A Global Perspective 19, no. 6: 310-319. Business Source Premier, EBSCOhost (accessed December 8, 2010).
  • OWASP (2010), Cloud Top 10 Security Risks [online]. Available from: http://www.owasp.org/index.php/Category:OWASP_Cloud_%E2%80%90_10_Project (accessed December 8, 2010).
  • Subashini, S., and V. Kavitha. "A survey on security issues in service delivery models of cloud computing." Journal of Network & Computer Applications 34, no. 1 (January 2011): 1-11. Business Source Premier, EBSCOhost (accessed December 8, 2010).
  • Tisnovsky, Ross. 2010. "Risks Versus Value in Outsourced Cloud Computing." Financial Executive 26, no. 9: 64-65. Business Source Premier, EBSCOhost (accessed December 8, 2010).

Sunday, December 5, 2010

Professional Ethics and Responsibility

According to Deborah Johnson (2008), the distinction between “guns for hire” and professionals is that a gun for hire will do anything within his or her capabilities for the right price. By contrast, a professional will take responsibility for his or her actions.
In an ideal world, every professional adheres to a core set of values of the profession. Such core values include the value of human life in medicine, accuracy in auditing, integrity (among others) in the military and safety in engineering. A number of professional bodies, such as the BCS, ACM and IEEE, publish and maintain codes of ethics and conduct, but due to the variability of the computing field it is difficult to define the ethical behaviour of a computing professional.
Since so many non-experts rely on a computer professional's expertise, the computer professional is placed in a position of power. Furthermore, the results of work conducted by a computer expert have direct and indirect impacts on the users of the product. “Computer experts generally work either as employees in organizations (including corporations, government agencies, and nongovernmental organizations) or as consultants hired to perform work for clients. Often their employer or client does not have the expertise to understand or evaluate the work being performed” (Deborah Johnson, 2008).
When we consider the role of the company (employer) and the impact of corporate policy and goals on a computer expert, the equation becomes even more complicated. Deborah Johnson (2008) explains that “a computer experts might think of themselves as merely agents. They might presume that their client, employer, or supervisor is in charge and the expert’s role is merely to implement the decisions made by those higher up”. Furthermore, the role of a computer expert has a direct impact on the way an organization conducts business, and certain roles can conflict with business goals. For example, a security consultant should accurately identify security vulnerabilities and provide objective (vendor-neutral) recommendations without being compensated for up-selling a service or hardware solution.
Deborah Johnson (2008) summarizes that “computer experts aren’t just building and manipulating hardware, software, and code; they are building systems that help to achieve important social functions, systems that constitute social arrangements, relationships, institutions, and values.”


Internet Communication

When a language is used by non-native speakers, they tend to simplify it. It is evident that English, as a lingua franca, “a language systematically used to communicate between persons not sharing a mother tongue” (Wikipedia, n.d.), is evolving through the development of new abbreviations and slang and the easing of grammatical structures. Bill Templer (2009) notes that “we need a slimmer, sustainable lingua franca specially for trans-cultural working-class communication needs, a kind of 'convivial' English for the Multitude counterposed to English for Empire”.
At the same time, as human beings we constantly express emotions through facial expressions and gestures. In conversation, words as well as body language are used to express our feelings, and in many cases slang is used to convey the message.
A similar process is happening to the language used for online communication. For example, in China, “as a result of the rapid development of computer-mediated communication, there has emerged a distinctive variety of Chinese language, which is generally termed Chinese Internet Language” (Gao, 2006). English is no exception, and new abbreviations such as LOL (Laughing Out Loud), ROFL (Rolling On Floor Laughing), WILCO (Will Comply), *$ (Starbucks) and W8 (Wait) are widely used in online forums and chat rooms. Furthermore, according to Alla Markh (2004), different communicative situations involve different styles of language. Lexical and stylistic items of one style can be transferred to another; in other words, styles can influence each other to a certain extent. For example, in modern English the abbreviation LOL appears not only in the chats from which it derives, but also in written and spoken language.
David Crystal (2009) explains that “abbreviations are a natural, intuitive response to a technological problem”. He argues that texters would not be able to use the technology (mobile phones, forums and chats) at all without at least a basic knowledge of the standard English writing system. In addition, the creation of abbreviations and slang should not be attributed solely to a younger generation influenced by the Internet. For example, the words “wot” (“what”) and “cos” (“because”) are part of the English literary tradition and were used by Charles Dickens and Mark Twain. Furthermore, the fact that those words were given entries in the Oxford English Dictionary in the 19th century demonstrates that the language evolves over time to keep pace with society.

Bibliography

  • Crystal, David. 2009. "Txtng: frNd or foe?" The Linguist, The Threlford Memorial Lecture, 47 (6), 8-11. Available from: http://www.davidcrystal.com/DC_articles/Internet16.pdf (accessed December 5, 2010).
  • Gao, Liwei. 2006. "Language contact and convergence in computer-mediated communication." World Englishes 25, no. 2: 299-308. Academic Search Complete, (accessed December 5, 2010).
  • Markh, Alla (2004), Nonverbal Means of Expressiveness in Internet Communication. On the Material of English and Russian Chats. International Higher School of Practical Physiology.
  • Netlingo (2010), The List of Chat Acronyms & Text Message Shorthand [online]. Available from: http://www.netlingo.com/acronyms.php (accessed December 5, 2010).
  • Templer, Bill. 2009. "A Two-Tier Model for a More Simplified and Sustainable English as an International Language." Journal for Critical Education Policy Studies (JCEPS) 7, no. 2: 187. EDS Foundation Index, EBSCOhost (accessed December 5, 2010).
  • Wikipedia (n.d.), Lingua franca [online]. Available from: http://en.wikipedia.org/wiki/Lingua-franca (accessed December 5, 2010).

Sunday, November 28, 2010

Benefits and the Perils of Artificial Intelligence

Artificial Intelligence is a field of computer science which seeks to build machines that can complete complex tasks without human intervention (Brookshear, 2007).
Today, artificial intelligence (AI) is widely used in a number of different domains to support human decision making. Although the intelligence of a decision-making machine is questionable, since it can be seen as merely an algorithmic process, the benefits of utilizing machines to aid our decisions are not. Combining uninterrupted operation, higher computing power and appropriate algorithms, a machine can have far superior problem-solving ability. Applications include market prediction in the financial sector, weather forecasting and biological simulations in engineering and the exact sciences, as well as the management and medical fields. For example, computers are used in medicine to process MRI images and recognize tumours, and in linguistics to aid understanding of the language acquisition process. Furthermore, artificial intelligence is used by space agencies such as NASA for automated lunar and planetary terrain analysis, and in aviation systems to aid navigation during poor visibility conditions (Zia-ur Rahman et al., 2007).
In many cases, we (humans) are exposed to information which can be considered confidential, sensitive or ethically charged; in those cases the decision-making process includes moral judgements. According to Blay Whitby (2008), “these include systems that give advice on patient care, on social benefit entitlement, and even ethical advice for medical professionals”. Even in more traditional information technology domains, data such as usage statistics can raise ethical concerns and needs to be handled appropriately. Therefore, the philosophical debate on the possible development of a moral status for artificial intelligence should be examined carefully.
Isaac Asimov famously proposed the ‘‘three laws of robotics’’, which are considered by some as an “ideal set of rules for such machines” (Susan Leigh Anderson, 2008):
  1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the first law.
  3. A robot must protect its own existence as long as such protection does not conflict with the first or second law.
Susan Leigh Anderson (2008) argues that Asimov's three laws of robotics are “an unsatisfactory basis for machine ethics”. The questions of “meta-ethics” and of developing moral judgement in artificial intelligence can be seen as major obstacles still to be overcome.

Bibliography

  • Anderson, Susan Leigh. 2008. "Asimov’s “three laws of robotics” and machine metaethics." AI & Society 22, no. 4: 477-493. Computers & Applied Sciences Complete, EBSCOhost (accessed November 28, 2010).
  • Bringsjord, Selmer. 2008. "Ethical robots: the future can heed us." AI & Society 22, no. 4: 539-550. Computers & Applied Sciences Complete, EBSCOhost (accessed November 28, 2010).
  • Brookshear J.G. (2007) Computer Science: An Overview, (9th Ed). Boston: Pearson Education Inc. P452-500
  • Whitby, Blay. 2008. "Computing machinery and morality." AI & Society 22, no. 4: 551-563. Computers & Applied Sciences Complete, EBSCOhost (accessed November 28, 2010).
  • Zia-ur Rahman, Daniel J. Jobson, and Glenn A. Woodell (2007), Pattern Constancy Demonstration of Retinex/Visual Servo (RVS) Image Processing, NASA. Available from: http://dragon.larc.nasa.gov/VIP/pattern_constancy_demonstration_report.pdf (accessed November 28, 2010).

Sunday, November 7, 2010

Interpreted vs. Compiled Code

Source code needs to be translated from a set of primitives into a language understood by the machine (CPU). A compiled program is translated into machine-specific (CPU-specific) instructions ahead of time, whereas an interpreted program is translated to machine instructions at runtime. This has a number of associated weaknesses but also allows certain functionality which cannot be implemented in a compiled program.
Interpreted programs are usually associated with slower execution because the code needs to be translated at runtime. In addition, it is much more difficult to enforce intellectual property rights, since the source code itself needs to be distributed. Both issues were partially solved by programming languages such as Java and C#, which translate the code into bytecode that is in turn executed by a runtime such as the Java Virtual Machine. As noted by Peter Haggar (2001), “bytecode is an important part of the size and execution speed of your code”.
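The same idea can be seen in CPython, which also compiles source code to bytecode that its virtual machine then interprets. A minimal sketch using the standard dis module (the greet function is just an illustration):
import dis

def greet(name):
    return "Hello, " + name

# CPython has already compiled greet() to bytecode; dis prints the
# instructions that the interpreter's virtual machine will execute.
dis.dis(greet)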
On the other hand, interpreted languages allow software developers to do things that cannot be done in a compiled language. For example, since the source code is usually stored in a text file, it can be manipulated by the program at runtime, allowing the program to modify or mutate itself. In addition, interpreted languages are usually associated with an easier software development process, since the code does not have to be recompiled every time it changes. This feature is particularly useful for operating system (OS) administrators, who often tweak a script executed by the OS itself to suit a specific need without the recompilation associated with compiled programs. Another benefit of an interpreted language is the portability of the developed program or script, since the machine-level translation happens at execution time. This is demonstrated by the fact that a program written in C or C++ (usually) requires recompilation for every execution platform, while code written in Java does not.
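As a small illustration of that self-modification point, the following Python sketch builds a function from a source string at runtime and then rewrites it; the discount function and the numbers are purely hypothetical:
source = "def discount(price): return price * 0.9"

namespace = {}
exec(source, namespace)                  # the code is translated and loaded at runtime
print(namespace["discount"](100))        # 90.0

# The program can rewrite its own source text and re-load it ("mutate itself").
patched = source.replace("0.9", "0.8")
exec(patched, namespace)
print(namespace["discount"](100))        # 80.0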
In reality, there is no clear distinction between compiled and interpreted software development languages. According to Wikipedia (n.d.), “Many languages have been implemented using both compilers and interpreters, including Lisp, Pascal, C, BASIC, and Python”; the discussion should therefore focus on a specific implementation of a language rather than on the language itself.


Structured vs. Object Oriented Paradigm

According to K. Hopkins (2001), the structured, or imperative, programming paradigm is built on three fundamental constructs: sequence, decision and repetition. Brookshear (2007) defines the structured programming paradigm as a “programming process to be developed of a sequence of commands that, when followed, manipulate the data to produce the desirable result”. It can therefore be summarized as the process of implementing an algorithm as a sequence of commands. The structured programming paradigm suffered from a number of crucial issues, such as memory leaks and deep hierarchies.
The object-oriented programming paradigm decomposes the system into a collection of objects, each of which is capable of performing functionality related to itself. In addition, objects can interact with one another, creating a complex network of relationships to solve the problem at hand. The software development process under the object-oriented paradigm is described by Mohamed, Hegazy and Dawood (2010) as a process of orchestrating the data flow between a collection of objects, each manipulating the data to solve the software problem. This, in turn, allows better reuse of objects and decoupling between classes. The book published by Erich Gamma et al. (sometimes called the “Gang of Four”) describes common programming problems and patterns for solving them, allowing reuse not only of source code (objects) but of design patterns as well.
According to Brookshear (2007), the object-oriented programming paradigm is based on five key elements: class, abstraction, encapsulation, inheritance and polymorphism; a brief code sketch illustrating these elements follows the list.
  • A class defines a data object, including characteristics such as attributes and methods (operations). For example, a class Fish would include attributes such as eyes, mouth and fins, and would be able to perform the following operations/methods: eat and swim.
  • Inheritance is a technique for describing similar yet different class characteristics. For example, a class Mammal would define attributes such as eyes, mouth and limbs and methods such as eat and walk; a class Dog would then inherit all the attributes and methods of Mammal, with the addition of specific methods such as bark. This allows the software designer to reuse already implemented classes and extend them to solve a specific problem.
  • Abstraction, according to Wikipedia (n.d.), is “the act of representing essential features without including the background details or explanations”.
  • Encapsulation refers to the ability of a class to conceal or restrict access to certain internal properties or methods. For example, the method bark could internally use helper methods to implement its functionality, but those methods are considered private (i.e. accessible only by the object itself).
  • Polymorphism allows inheriting classes to alter functionality to suit type-specific behaviour. For example, the classes Doberman Pinscher and German Shepherd, both inheriting from class Dog, could override the implementation of the method bark to match the distinct features of each specific breed.
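A minimal Python sketch of these elements, using the class and method names from the examples above (the printed strings are placeholders):
class Mammal:
    def __init__(self, name):
        self.name = name                     # attribute

    def eat(self):                           # method shared through inheritance
        return self.name + " is eating"

class Dog(Mammal):                           # inheritance: Dog reuses Mammal
    def bark(self):                          # public behaviour; helpers could stay private
        return "Woof!"

class DobermanPinscher(Dog):                 # polymorphism: breed-specific bark
    def bark(self):
        return "sharp, clipped bark"

class GermanShepherd(Dog):
    def bark(self):
        return "deep, loud bark"

for dog in (DobermanPinscher("Rex"), GermanShepherd("Max")):
    print(dog.eat(), "-", dog.bark())        # same call, breed-specific behaviour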
Based on these characteristics, it could be said that the object-oriented paradigm better reflects the human way of thinking, because each real-world item is represented as an object within the software together with all relevant concepts (implemented as attributes and methods). On the downside, object-oriented programming is usually associated with reduced performance and higher memory consumption, because each object has to be instantiated and then destroyed at the end of its life cycle.
It is also important to note that new programming paradigms are constantly being developed, for example aspect-oriented programming, which “seeks new modularization of software systems in order to isolate secondary or supporting functions from the main program's business” (Mohamed et al., 2010). According to the aspect-oriented programming paradigm, cross-cutting concerns or aspects such as error handling, logging and caching should be implemented in separate functional units to solve the issue of code scattering and tangling.
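Python decorators are not full aspect-oriented programming, but they give a flavour of isolating a cross-cutting concern such as logging from the business logic; the following is only a sketch, and the transfer_funds function is hypothetical:
import functools
import logging

logging.basicConfig(level=logging.INFO)

def logged(func):
    """Cross-cutting logging concern, kept out of the business code."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logging.info("calling %s args=%s kwargs=%s", func.__name__, args, kwargs)
        try:
            return func(*args, **kwargs)
        except Exception:
            logging.exception("%s failed", func.__name__)
            raise
    return wrapper

@logged
def transfer_funds(source, target, amount):
    # core business logic only; the logging "aspect" is applied from outside
    return "moved %s from %s to %s" % (amount, source, target)

print(transfer_funds("acct-1", "acct-2", 100))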


Saturday, October 30, 2010

Software Tuning vs. Verification Process

According to Wikipedia (n.d.), performance tuning is the process of “modifying a software system to make some aspect of it work more efficiently or use fewer resources”. This is a distinctly different process from program verification, which Wikipedia (n.d.) describes as “the act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property”. The difference can be summarized as performance versus functionality of the software. In addition, while the verification process assesses the software under certain preconditions to verify functionality against design requirements without changing the source code, the tuning process modifies the program to better utilize the allocated resources (such as memory and CPU). As a result, the two topics are discussed separately.
The program verification process ensures that the software functions as per the design requirements. This can be verified by inspection of the source code, or by running pre-defined (as per the software requirements) test cases. The process is semi-automated through the use of automated testing software (for example, HP WinRunner) to run test case scenarios against the assessed software, but these still need to be configured and loaded by a human operator. Recently, due to the rising number of security exploits in applications, software vendors have begun using automated tools capable of assessing an application for flaws leading to security vulnerabilities such as Cross-Site Scripting (XSS), Buffer Overflow and Cross-Site Request Forgery (CSRF). Although automated tools are very efficient at running pre-defined test case scenarios, human interaction is required to define the test cases. Furthermore, a number of functional flaws are the result of logical incorrectness and require assessment by a human capable of understanding the application, as demonstrated by the following code:
  1. procedure login(user, password)
  2. (
  3. success <--- true
  4. if call authenticate(user, password) = true
  5. then (...)
  6. else (success <--- false)
  7. return success
  8. ) end procedure
The functional requirement of the procedure is to authenticate the user using the provided values. Since the requirement is simply to handle two inputs and return a Boolean output, it is fair to assume that an automated tool or an inexperienced assessor will not try to mutate the input to the point where the authenticate call fails on line 4, which could under certain conditions result in successful authentication (the procedure returns the value true, since success was initialized to true). Verifications such as this require not only a human understanding of the application but also experience, and will therefore remain a “work of art”.
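A hedged Python sketch of the same fail-open pattern; the authenticate check and the malformed-input condition are invented purely to show why the default value of success matters:
def authenticate(user, password):
    if "\x00" in user:                        # hypothetical back-end quirk
        raise ValueError("malformed user name")
    return (user, password) == ("alice", "s3cret")

def login_fail_open(user, password):
    success = True                            # mirrors line 3 of the pseudocode
    try:
        if authenticate(user, password):
            pass                              # ... create session, etc.
        else:
            success = False
    except ValueError:
        pass                                  # error swallowed: success is still True
    return success

def login_fail_closed(user, password):
    success = False                           # safer default
    try:
        success = bool(authenticate(user, password))
    except ValueError:
        success = False
    return success

print(login_fail_open("bob\x00", "x"))        # True (unintended bypass)
print(login_fail_closed("bob\x00", "x"))      # False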
Performance tuning, on the other hand, can achieve fairly good results using automated tools. Of course, an experienced software architect can make a difference at the design stage by choosing efficient algorithms and implementation technologies, but reusable designs and patterns can compensate for a lack of experience. Additional tuning can be achieved by optimizing at the source code level and by using an optimizing compiler, which relies on techniques such as loop optimization, common sub-expression elimination and redundant store elimination. This can be demonstrated by the following code:
  1. procedure calculate(a ,b)
  2. (
  3. c <--- 4
  4. result <--- (a+b)-(a+b)/4
  5. return result
  6. ) end procedure
Tuning could be achieved by removing the redundant store on line 3 and calculating the expression a+b in line 4 only once. The result of compiler optimization could be:
  1. procedure calculate (a,b)
  2. (
  3. c <--- a + b
  4. result <--- c – c/4
  5. return result
  6. ) end procedure
Further optimization could be achieved by performing run-time optimization, “exceeding the capability of static compilers by dynamically adjusting parameters according to the actual input or other factors” (Wikipedia, n.d.).
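The same common sub-expression elimination, applied by hand, can be sketched and timed in Python (CPython performs little of this automatically, so the manual rewrite is what is being measured; the exact numbers will vary by machine):
import timeit

def calculate_original(a, b):
    return (a + b) - (a + b) / 4      # (a + b) evaluated twice

def calculate_optimized(a, b):
    c = a + b                         # common sub-expression computed once
    return c - c / 4

assert calculate_original(3, 5) == calculate_optimized(3, 5)

print(timeit.timeit("calculate_original(3, 5)", globals=globals()))
print(timeit.timeit("calculate_optimized(3, 5)", globals=globals()))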


Sunday, October 24, 2010

Network as Super Operating System

It is unarguable that many of the applications developed today are web enabled. Furthermore, more and more of the services offered are cloud-based, distributed or network aware. Examples of this trend are Google Apps, a “Web-based word processor, spreadsheet, presentation, form, and data storage service offered by Google” (Wikipedia, n.d.), and Amazon Web Services, which provides cloud-based services such as Amazon Elastic Compute Cloud, Elastic Load Balancing, Amazon Relational Database Service, Amazon Simple Storage Service and the Alexa Web Information Service. Another example is Cloud OS which, according to Good OS (2010), “is a web browser plus operating system, enabling the browser to perform everything that the desktop is able to perform”. The extent of the offered services goes well beyond purely technological services, as demonstrated by Amazon Mechanical Turk, which “enables companies to programmatically access this marketplace and a diverse, on-demand workforce”. These services, like countless others, are based on standard, mature protocols such as SOAP, HTTP and XML, which allow interoperability between different operating systems and web browsers, so a heterogeneous environment is not an operational issue. But can the network be seen as a super operating system?
The main obstacle to the network as a “true operating system” is the fact that the network is not, in fact, available everywhere and to everyone. For example, recent (June 30, 2010) statistics published on www.internetworldstats.com by Miniwatts Marketing Group (2010) show that although 77.4 percent of the population of North America uses the Internet, only 10.9 percent of the African population and 21.5 percent of the Asian population have Internet access. In total, less than a third (only 28.7 percent) of the world population uses the Internet in one way or another.
It cannot be denied that the world is moving towards network-enabled, distributed services. The increasing availability of the Internet, technological innovation and the falling cost of personal computers drive further penetration of the network for personal and corporate use. According to Miniwatts Marketing Group (2010), the growth in total Internet users is a staggering 2,357.3 percent in Africa, 1,825.3 percent in the Middle East and 1,032.8 percent in Latin America. Regardless, at the moment the network cannot be seen as a super operating system, simply because it is not accessible to the large majority of the world population.
According to Brookshear (2007), “an operating system is the software that controls the overall operation of a computer”, allowing users to interact with peripheral devices and the external environment via a set of drivers, utilities and applications. The network can be seen as an information exchange medium, connecting a single instance of an operating system, and its user, to external environments such as an intranet or the Internet. To some degree, the network can be substituted with other information exchange methods, such as removable media (CD, DVD, etc.) and paper printouts, and in many cases those would be the only available methods of exchanging information.
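As a small illustration of the network as an information exchange medium built on those standard protocols, the sketch below fetches and parses an XML response over HTTP using only the Python standard library; the URL and element names are placeholders rather than a real service:
import urllib.request
import xml.etree.ElementTree as ET

URL = "https://example.com/api/status.xml"    # placeholder endpoint

with urllib.request.urlopen(URL, timeout=10) as response:
    document = ET.fromstring(response.read()) # plain HTTP plus XML, nothing OS-specific

for service in document.iter("service"):      # element/attribute names are assumptions
    print(service.get("name"), service.findtext("state"))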


Saturday, October 16, 2010

Comparison of CISC vs. RISC Architecture

Both RISC (Reduced Instruction Set Computing) and CISC (Complex Instruction Set Computing) are CPU architectures with different design philosophies. The RISC design principle is based on the idea that it is more efficient to execute a large number of simplified instructions instead of a single complex instruction. One famous example which inspired the RISC philosophy is the DEC VAX index instruction which was, according to John Bayko (2003), 45% to 60% faster when replaced by a number of simple instructions in a loop. As a result of the larger number of instructions, executable programs were larger and required more main memory space. Examples of CPUs based on the RISC design are DEC Alpha, ARC, ARM, AVR, MIPS, Power and SPARC.
CISC, on the other hand, is the direct counterpart to RISC. It arose from the historical scarcity of main memory available to store large numbers of instructions when executing a program. Instead, a single instruction results in a number of low-level operations, such as a memory read, an arithmetic operation and a memory write, producing denser code. In addition, high-level programming languages were not available in the early days of computing, so hardware designers tried to architect sets of complex instructions that would do as much as possible on behalf of compilers. CISC as a term was mainly coined in contrast to the RISC architecture and is used to describe computer architectures such as the IBM S/370, DEC VAX, Intel x86 and Motorola 68000.
The main strengths of the RISC architecture are the number of available registers and the computation speed of simple instructions. The disadvantage, as noted previously, is a larger code size resulting in higher memory requirements. Although RISC-based architectures were previously associated with a more complex development process, this has slowly become irrelevant with the introduction of higher-level software development languages. Since the RISC architecture excels at high throughput of arithmetic operations (both integer and floating point), it is more applicable to systems requiring high computing power, such as image processing, biological and geographical simulations, and trading applications.
The fundamental advantage of the CISC architecture is denser code, which requires fewer accesses to main memory; although there are constant advances in RAM technology, in both clock speed and size, main memory is still slower than CPU registers. The disadvantages are a more complex hardware architecture and the need to decode even a single simple instruction, which can make CISC processors less efficient than RISC. Based on that, CISC processors are more applicable to desktop environments where frequent access to main memory is required.
However, according to Gao Y., Tang S. and Ding Z. (n.d.), “In 90's, the trend is migrating toward each other, RISC machines may adopt some traits from CISC, while CISC may also do it vice versa”. This is evident in the evolution of Intel microprocessors which, starting with the Pentium Pro family, convert CISC instructions into micro-ops (RISC-like instructions).


Friday, October 15, 2010

Effects of Development in Non-Volatile Memory Technology

According to Wikipedia (n.d.), non-volatile memory, or NVM, is computer memory capable of storing data without a constant power supply. Previously, non-volatile memory was associated with rotating hard drives, characterized by slow performance, bulky size and high power demand, and with electrically erasable programmable read-only memory (EEPROM), characterized by low density (small storage size) and slow write speeds.
Advances in non-volatile memory technology such as ferroelectric random access memory (FeRAM), phase-change random access memory (PRAM), resistive random access memory (RRAM) and magnetoresistive random access memory (MRAM) attempt to “achieve high speed, high density and low cost while incorporating non-volatility with robust endurance and retention characteristics” (Mitchell Douglas, 2010).
A number of different opinions exist with regard to the most promising technology. According to Mitchell Douglass (2010), MRAM is the most promising of the technologies listed above, while Seong N., Woo D. and Lee H. (2010) believe that “compared to other non-volatile memory alternatives, PCM is more matured to production, and has a faster read latency and potentially higher storage density”.
The availability of increased memory storage, combined with smaller physical size and more efficient power utilization, will have a direct impact on mobile devices such as personal digital assistants (PDAs), smartphones and notebooks, all of which have limited resources, including power (battery) and processing capability. The fact that mobile devices will be able to store more information while consuming less power will allow system architects to design mobile applications with additional functionality and features, directly impacting the end user. In addition, it will make data processing and storage more distributed across the broader network.
On the other hand, the availability of non-volatile memory has a direct effect on the security of the data. Where data was previously stored in a relatively secure enterprise environment, today it is quite common to find it stored on mobile devices. Furthermore, as noted by Enck W., Butler K., Richardson T. and McDaniel P. (2006), “sensitive data written to main memory is now available across system reboots and is vulnerable as the system is suspended”. Techniques for safeguarding the data will have to evolve to support hundreds of terabytes of data stored within distributed environments.
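One such technique is simply to keep the data encrypted at rest on the non-volatile device. A minimal sketch using the third-party Python cryptography package (the file name and record contents are made up, and a real deployment would need proper key management):
from cryptography.fernet import Fernet

key = Fernet.generate_key()                   # must itself be protected, e.g. in a key store
cipher = Fernet(key)

token = cipher.encrypt(b"customer record: card ending 4242")
with open("record.bin", "wb") as fh:          # only ciphertext reaches the flash/SSD
    fh.write(token)

with open("record.bin", "rb") as fh:
    print(cipher.decrypt(fh.read()))          # readable only with the key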


Saturday, October 9, 2010

PCI DSS & PA DSS version 2.0

Slowly but surely, the PCI (Payment Card Industry) standards are getting more and more mature. As it stands today, the PCI SSC (Security Standards Council) maintains three security standards:
  • Payment Card Industry Data Security Standard (PCI DSS)
  • Payment Application Data Security Standard (PA DSS)
  • PIN Transaction Security (PTS)

Those who expected the council to release minor updates of PCI DSS and PA DSS (i.e. version 1.2.1 to version 1.3) and are now dreading the “oh mighty” version 2.0 can rest at ease, as the majority of the changes are no more than clarifications and a restructuring of the standards.

The biggest change is to the Approved Scanning Vendor (ASV) program, where vendors who previously could offer simple vulnerability scanning services are now required to adopt a more comprehensive approach. Scanning vendors will be required to put their employees through the PCI SSC training and certification process, work closely with merchants and service providers through remediation and rescanning, and provide their customers with a standardized report and Attestation of Scanning Compliance (AoSC). Not only that, vendors are expected to include environment discovery and verification of the scanning scope with the customer. From a security standpoint, these are welcome changes.

Friday, October 8, 2010

Future of the Information Security Expert

From the beginning of time, there have been individuals and groups who had something which others desired. The object of desire has changed throughout the centuries, reflecting the state and norms of human society at the time, but there has always been a need to safeguard it.
The association between information and power has existed since Biblical times, and so has the need for information security. One means of protecting information is encryption, defined by the Oxford dictionary as the action of “convert (information or data) into a cipher or code, especially to prevent unauthorized access”. According to Fred Cohen (1995), “cryptography probably began in or around 2000 B.C. in Egypt, where hieroglyphics were used to decorate the tombs of deceased rulers and kings”. Based on that, we can safely assume that the need to protect information such as intellectual property, financial data and medical records will remain in the foreseeable future, and therefore the position of information security expert will exist as well, to make sure information remains confidential, accurate and available.
The skill set of the information security expert will have to evolve along with the information itself and the methods used to store and access it. For example, where information was previously captured on printed material and its storage required physical security, today information security experts deal mainly with electronic data. In addition, the methods used to access the information, both legitimate and malicious, will affect the role of the information security expert. For example, the number of attacks conducted through web applications has increased significantly since 2000, as confirmed by the Cenzic (2008) report stating that “the percentage of Web application vulnerabilities went up to a staggering 80 percent”. The same could be said about the training required: it will have to evolve to provide information security experts with the necessary skill set.
Automation will also have a major influence on the role of the information security expert. Where network-based scans and attacks were previously conducted manually, today numerous tools such as Nessus, nmap, nCircle and SAINT automate the task. The same trend applies to web application security: security tools are catching up with the industry to identify (and exploit) web application vulnerabilities automatically. Naturally, automated tools have their limitations, and that is where the information security expert will have to fill the gap. As of today, assessments such as the analysis of logical application flow cannot be done by a computer, due to the need to understand the application, at least until a computer can pass a Turing test.
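To make the automation point concrete, even a few lines of Python can reproduce the most basic task of such tools, a TCP connect scan; the address below is a documentation placeholder, and such a sketch should only ever be run against systems you are authorized to test:
import socket

HOST = "192.0.2.10"                 # TEST-NET placeholder, not a real target
PORTS = [22, 80, 443, 3389]

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        is_open = s.connect_ex((HOST, port)) == 0   # 0 means the TCP handshake succeeded
        print("%d/tcp %s" % (port, "open" if is_open else "closed/filtered"))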


Monday, April 12, 2010

Microsoft Security Development Lifecycle (SDL) - Version 5.0

Microsoft has released the fifth version of its Security Development Lifecycle (SDL) document. It provides guidance on, and illustrates, the way Microsoft applies the SDL to its products and technologies. In addition, it includes security and privacy requirements and recommendations for secure software development at Microsoft. It covers SDL guidance for Waterfall and Spiral development, Agile development, web applications and Line of Business applications.

It can be downloaded from http://go.microsoft.com/?linkid=9724944.

Thursday, April 8, 2010

Screen for more productivity

Today, the majority of people are using Windows, but what I’m going to talk about is Screen.

Screen is a GNU utility that allows you to use multiple windows (virtual VT100 terminals) in Unix/Linux. Although you could simply spawn multiple terminals if you have console access, there are two features I would like to highlight.

First is the fact that Screen stays active even when the SSH session is terminated. All processes initiated will keep running and can be re-attached once the SSH connection is re-established. Furthermore, since a screen session runs as a separate process rather than a login session, it is more resource efficient.

In addition, using Screen it is possible to share sessions between multiple users and/or protect them with a password. For example, you create a screen session and run a command; another person can then list the existing screen sessions (screen -ls) and attach one to their terminal (screen -r). Of course, that is not very secure, so it is also possible to protect the screen session with a password.
jmarkh@ubuntu-01:~$ screen -S nmap
[detached]
jmarkh@ubuntu-01:~$ screen -S nessus
[detached]
jmarkh@ubuntu-01:~$ screen -ls
There are screens on:
15833.nessus (10-04-08 10:52:20 AM) (Detached)
15813.nmap (10-04-08 10:52:10 AM) (Detached)
15620.pts-0.ubuntu-01 (10-04-08 10:29:38 AM) (Detached)
3 Sockets in /var/run/screen/S-jmarkh.
Here are some commands/shortcuts that could be used with Screen (note that every screen command begins with Ctrl-a):
Ctrl-a c - Create a new window (shell)
Ctrl-a k - Kill the current window
Ctrl-a Ctrl-x - Lock this terminal
Ctrl-a w - List all windows (the current window is marked with "*")
Ctrl-a 0-9 - Go to the window numbered 0-9
Ctrl-a n - Go to the next window
Ctrl-a Ctrl-a - Toggle between the current and previous window
Ctrl-a [ - Start copy mode
Ctrl-a ] - Paste copied text
Ctrl-a ? - Help (display a list of commands)
Ctrl-a Ctrl-\ - Quit screen
Ctrl-a D (Shift-d) - Power detach and logout
Ctrl-a d - Detach but keep the shell window open

The man pages for screen are quite readable and make a good tutorial.
man screen

Wednesday, January 6, 2010

The WASC Threat Classification v2.0 is Out

The WASC Threat Classification v2.0 was released two days ago. It is an effort to classify the weaknesses and attacks that can lead to the compromise of a website, its data, or its users.
For more information, see http://projects.webappsec.org/Threat-Classification.