Saturday, October 30, 2010

Software Tuning vs. Verification Process

According to Wikipedia (n.d.), performance tuning is the process of “modifying a software system to make some aspect of it work more efficiently or use fewer resources”. This is a distinctly different process from program verification, which Wikipedia (n.d.) describes as “the act of proving or disproving the correctness of intended algorithms underlying a system with respect to a certain formal specification or property”. The difference could be summarized as the performance versus the functionality of the software. In addition, the program verification process assesses the software under certain preconditions to verify functionality against the design requirements without changes to the source code, while the tuning process modifies the program to better utilize the allocated resources (such as memory and CPU). As a result, the two issues will be discussed separately.
The program verification process makes sure that the software functions as per the design requirements. This could be verified by inspection of the source code, or by running pre-defined (as per the software requirements) test cases. The process is semi-automated through the use of automated testing software (for example, HP WinRunner) to run test case scenarios against the assessed software, but those still need to be configured and loaded by a human operator. Recently, due to the rising number of security exploits in applications, software vendors began using automated software capable of assessing an application for flaws leading to security vulnerabilities such as Cross Site Scripting (XSS), Buffer Overflow and Cross Site Request Forgery (CSRF). Although the automated tools are very efficient at running the pre-defined test case scenarios, human interaction is required to define the test cases. Furthermore, a number of functional flaws are the result of logical incorrectness and will require assessment by a human capable of understanding the application, as demonstrated by the following code:
  1. procedure login(user, password)
  2. (
  3. success <--- true
  4. if call authenticate(user, password) = true
  5. then (...)
  6. else (success <--- false)
  7. return success
  8. ) end procedure
The functional requirement of the procedure is to authenticate the user using the provided values. Since the functional requirement is to handle two inputs and provide a Boolean output, it would be fair to assume that an automated tool or an inexperienced assessor will not try to mutate the input to the point of causing the authenticate procedure on line 4 to fail, which could under certain conditions result in a successful authentication (the procedure returns the value true). Verifications such as this require not only a human understanding of the application but also experience, and will therefore remain a “work of art”.
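A minimal Python sketch of the same fail-open pattern is shown below; the USERS store and the authenticate helper are hypothetical stand-ins, but they illustrate why an automated tool that only runs the expected test cases would report the procedure as correct:

# Hypothetical stand-ins for the user store and the authenticate procedure.
USERS = {"alice": "s3cret"}

def authenticate(user, password):
    # Raises KeyError for an unknown user instead of returning false.
    return USERS[user] == password

def login(user, password):
    success = True                   # optimistic initialisation, as on line 3
    try:
        if authenticate(user, password):
            pass                     # proceed with session setup
        else:
            success = False
    except Exception:
        pass                         # a failure inside authenticate leaves success == True
    return success

print(login("alice", "wrong"))       # False - the expected test case passes
print(login("mallory", "anything"))  # True  - the mutated input slips through

An assessor who understands that authenticate can fail with an error, rather than simply return false, would add that case to the test plan; an automated tool working only from the stated requirement has no reason to.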
Performance tuning, on the other hand, can achieve fairly good results using automated tools. Of course, an experienced software architect can make a difference at the design stage by choosing efficient algorithms and implementation technologies, but reusable designs and patterns can compensate for a lack of experience. Additional tuning could be achieved by optimization at the source code level and by using an optimizing compiler, which relies on different tuning techniques such as loop optimization, common sub-expression elimination and redundant store elimination. It could be demonstrated by the following code:
  1. procedure calculate(a ,b)
  2. (
  3. c <--- 4
  4. result <--- (a+b)-(a+b)/4
  5. return result
  6. ) end procedure
Program tuning could be achieved by removing the redundant store on line 3 and calculating the expression a+b in line 4 only once. The result of compiler optimization could be:
  1. procedure calculate (a,b)
  2. (
  3. c <--- a + b
  4. result <--- c - c/4
  5. return result
  6. ) end procedure
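The same source-level transformation can be sketched in Python; this is only a rough illustration under the assumption that no optimizing compiler is involved (in a language such as C the compiler would typically perform both eliminations automatically), and the timeit figures will vary between interpreters:

import timeit

def calculate_naive(a, b):
    c = 4                            # redundant store: c is never used
    return (a + b) - (a + b) / 4     # a + b is evaluated twice

def calculate_tuned(a, b):
    c = a + b                        # common sub-expression computed once
    return c - c / 4

# Rough micro-benchmark; both versions return the same result.
print(timeit.timeit("calculate_naive(3, 5)", globals=globals()))
print(timeit.timeit("calculate_tuned(3, 5)", globals=globals()))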
Further optimization could be achieved by performing run-time optimization, “exceeding the capability of static compilers by dynamically adjusting parameters according to the actual input or other factors” (Wikipedia, n.d.).

Bibliography

Sunday, October 24, 2010

Network as Super Operating System

It is unarguable that many of the applications developed today are web enabled. Furthermore, more and more of the services offered are cloud based, distributed or network aware. Examples of that trend are Google Apps, which is a “Web-based word processor, spreadsheet, presentation, form, and data storage service offered by Google” (Wikipedia, n.d.), and Amazon Web Services, which provides cloud based services such as Amazon Elastic Compute Cloud, Elastic Load Balancing, Amazon Relational Database Service, Amazon Simple Storage Service and Alexa Web Information Service. Another example is Cloud OS which, according to Good OS (2010), “is a web browser plus operating system, enabling the browser to perform everything that the desktop is able to perform”. The extent of the offered services goes well beyond purely technology-based services, as demonstrated by Amazon Mechanical Turk, which “enables companies to programmatically access this marketplace and a diverse, on-demand workforce”. These services, like countless others, are based on standard and mature protocols such as SOAP, HTTP and XML which allow interoperability between different operating systems and web browsers, making a heterogeneous environment a non-issue operationally. But can the network be seen as a super operating system?
The main obstacle to the network being a “true operating system” is the fact that the network itself is not available everywhere, all the time. For example, recent (June 30, 2010) statistics published on www.internetworldstats.com by Miniwatts Marketing Group (2010) show that although 77.4 percent of the population in North America uses the Internet, only 10.9 percent of Africa's population and 21.5 percent of Asia's population have Internet access available. In total, less than a third (only 28.7 percent) of the world population uses the Internet in one way or another.
It cannot be denied that the world is moving towards network enabled, distributed services. The increasing availability of the Internet in developed countries, technological innovation and the lowering cost of personal computers drive the penetration of the network for personal and corporate usage further. According to Miniwatts Marketing Group (2010), the growth in total Internet users is a staggering 2,357.3 percent in Africa, 1,825.3 percent in the Middle East and 1,032.8 percent in Latin America. Regardless, at the moment the network cannot be seen as a super operating system simply because it is not accessible to the large majority of the world population.
According to Brookshear (2007), “an operating system is the software that controls the overall operation of a computer”, allowing users to interact with peripheral devices and the external environment via a set of drivers, utilities and applications. The network could be seen as an information exchange medium, connecting a single operating system instance and its user to external environments such as the intranet and the Internet. To some degree, the network can be substituted with different information exchange methods such as removable media (CD, DVD, etc.) and paper printouts, and in many cases those would be the only available methods to exchange information.

Bibliography

Saturday, October 16, 2010

Comparison of CISC vs. RISC Architecture

Both RISC (Reduced Instruction Set Computing) and CISC (Complex Instruction Set Computing) are CPU architectures with different design philosophies. The RISC design principle is based on the idea that it is more efficient to execute a larger number of simplified instructions instead of a single complex instruction. One famous example which inspired the RISC architecture philosophy is the DEC VAX index instruction, which, according to John Bayko (2003), ran 45% to 60% faster when replaced by a number of simple instructions in a loop. As a result of the larger number of instructions, executable programs were longer and required more main memory space. Examples of CPUs based on the RISC design are the DEC Alpha, ARC, ARM, AVR, MIPS, Power, and SPARC.
CISC, on the other hand, is a direct competitor to RISC. It is rooted in the general scarcity of main memory available to store the large number of instructions needed to execute a program. Instead, a single instruction would result in a number of low level operations such as a memory read, an arithmetic operation and a memory write, resulting in denser code. In addition, high level programming languages were not available in the early days of computer history, so hardware designers tried to architect a set of complex instructions that would do as much as possible on behalf of compilers. CISC as a term was mainly used in contrast to the RISC architecture and basically described computer architectures such as the IBM S/370, DEC VAX, Intel x86, and Motorola 68000 family.
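As a rough, hypothetical illustration (Python standing in for machine code, with made-up addresses), the sketch below contrasts a single CISC-style memory-to-memory add with the explicit load, add and store sequence a RISC program would spell out:

# Toy machine model: a memory dictionary and two general-purpose registers.
memory = {0x10: 7, 0x14: 5, 0x18: 0}
registers = {"r1": 0, "r2": 0}

def cisc_add_mem(dst, src1, src2):
    # One CISC-style instruction: ADD [dst], [src1], [src2].
    # Internally it still performs two loads, an add and a store.
    memory[dst] = memory[src1] + memory[src2]

def risc_program(dst, src1, src2):
    # RISC-style equivalent: every step is a separate simple instruction.
    registers["r1"] = memory[src1]                        # LOAD r1, [src1]
    registers["r2"] = memory[src2]                        # LOAD r2, [src2]
    registers["r1"] = registers["r1"] + registers["r2"]   # ADD  r1, r1, r2
    memory[dst] = registers["r1"]                         # STORE [dst], r1

cisc_add_mem(0x18, 0x10, 0x14)   # dense code: one instruction
risc_program(0x18, 0x10, 0x14)   # same effect: four simple instructions
print(memory[0x18])              # 12 in both cases

The CISC version yields denser code at the cost of more complex decoding hardware, while the RISC version exposes every step to the compiler and the instruction scheduler.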
The main strengths of the RISC architecture are the number of available registers and the computation speed (of simple instructions). The disadvantage, as noted previously, is a larger code size resulting in higher memory requirements. Although the RISC architecture was previously associated with a more complex development process, this is slowly becoming irrelevant with the introduction of higher level software development languages. Since the RISC architecture excels at a higher throughput of arithmetic operations (both integer and floating point), it is more applicable to systems requiring high computing power such as image processing, biological and geographical simulations, and trading applications.
The fundamental advantage of the CISC architecture is denser code, which requires fewer accesses to main memory; and although there are constant advancements in RAM technology, both in clock speed and size, main memory is still slower than CPU registers. The disadvantages are a more complex hardware architecture and the need to translate (decode) even a single simple instruction, which makes CISC processors less efficient than RISC. Based on that, CISC processors are more applicable to a desktop environment where frequent access to main memory is required.
However, according to Gao Y., Tang S. and Ding Z. (n.d.), “In 90's, the trend is migrating toward each other, RISC machines may adopt some traits from CISC, while CISC may also do it vice versa”. This is evident in the evolution of the Intel microprocessors which, starting with the Pentium Pro family, convert CISC instructions into micro-ops (RISC-like instructions).

Bibliography

Friday, October 15, 2010

Effects of Development in Non-Volatile Memory Technology

According to Wikipedia (n.d.), non-volatile memory, or NVM, is computer memory capable of storing data without a constant power supply. Previously, non-volatile memory was associated with rotating hard drives, characterized by slow performance, bulky size and high power consumption, and with electrically erasable programmable read-only memory (EEPROM), characterized by low density (small storage size) and slow write speeds.
Advances in non-volatile memory technologies such as ferroelectric random access memory (FeRAM), phase-change random access memory (PRAM), resistive random access memory (RRAM) and magnetoresistive random access memory (MRAM) attempt to “achieve high speed, high density and low cost while incorporating non-volatility with robust endurance and retention characteristics” (Mitchell Douglas, 2010).
A number of different opinions exist with regards to the most promising technology. According to Mitchell Douglass (2010), MRAM is the most promising of the technologies listed above, while Seong N., Woo D., and Lee H. (2010) believe that “compared to other non-volatile memory alternatives, PCM is more matured to production, and has a faster read latency and potentially higher storage density”.
The availability of increased memory storage, combined with smaller physical size and more efficient power utilization, will have a direct impact on mobile devices such as Personal Digital Assistants (PDAs), smart phones and notebooks, all of which have limited resources, including power (battery) and processing capabilities. The fact that mobile devices will be able to store more information while consuming less power will allow system architects to design mobile applications with additional functionality and features, directly impacting the end user. In addition, it will make data processing and storage techniques more distributed across the broader network.
On the other hand, the availability of non-volatile memory has a direct effect on the security of the data. Where previously data was stored in a relatively secure enterprise environment, today it is quite common to find it stored on mobile devices. Furthermore, as noted by Enck W., Butler K., Richardson T., and McDaniel P. (2006), “sensitive data written to main memory is now available across system reboots and is vulnerable as the system is suspended”. Techniques to safeguard the data will have to evolve to support hundreds of terabytes of data stored within distributed environments.

Bibliography

Saturday, October 9, 2010

PCI DSS & PA DSS version 2.0

Slowly but surely, the PCI (Payment Card Industry) standards are getting more and more mature. As it stands today, the PCI SSC (Security Standards Council) maintains three security standards:
  • Payment Card Industry Data Security Standard (PCI DSS)
  • Payment Application Data Security Standard (PA DSS)
  • PIN Transaction Security (PTS)

Those who expected the council to release minor updates of PCI DSS and PA DSS (i.e. version 1.2.1 to version 1.3) and are now dreading the “oh mighty” version 2.0 can rest at ease, as the majority of the changes are no more than clarifications and a restructuring of the standards.

The biggest change is to the Approved Scanning Vendor (ASV) program, where vendors, who previously could offer simple vulnerability scanning services, are now required to adopt a more comprehensive approach. The scanning vendors will be required to educate their employees through the PCI SSC training and certification process, work closely with merchants and service providers through remediation and rescanning, and provide their customers with a standardized report and an Attestation of Scanning Compliance (AoSC). Not only that, the vendors are expected to include environment discovery and verification of the scanning scope with the customer. From a security standpoint, those are welcome changes...

Friday, October 8, 2010

Future of the Information Security Expert

From the beginning of time, there were individuals and groups who had something which others desired. The object of desire changed throughout the centuries, reflecting the state and the norms of human society at the time, but there was always a need to safeguard it.
The association between information and power has existed since Biblical times, and with it the need for information security. One of the means to protect information is encryption, which the Oxford dictionary defines as to “convert (information or data) into a cipher or code, especially to prevent unauthorized access”. According to Fred Cohen (1995), “cryptography probably began in or around 2000 B.C. in Egypt, where hieroglyphics were used to decorate the tombs of deceased rulers and kings”. Based on that, we can safely assume that the need to protect information, such as intellectual property, financial data and medical records, will remain in the near future. Therefore, the position of information security expert will continue to exist as well, to make sure information remains confidential, accurate and available.
The skill set of the information security expert will have to evolve with the information itself and with the methods used to store and access it. For example, where previously information was captured on printed material and its storage required physical security, today information security experts deal mainly with electronic data. In addition, the methods used to access the information, both legitimate ones and those used by malicious users, will have an impact on the role of the information security expert. For example, the number of attacks conducted through web applications has increased significantly since 2000. This is further confirmed by the Cenzic (2008) report stating that “the percentage of Web application vulnerabilities went up to a staggering 80 percent”. The same could be said about the training required: it will have to evolve to provide information security experts with the required skill set.
Automation of information security will have a major influence on the role of the information security expert. Where previously network based scans and attacks were conducted manually, today numerous tools such as Nessus, nmap, nCircle and SAINT automate the task. The same trend is happening with web application security: security tools are catching up with the industry to provide automated means to identify (and exploit) web application security vulnerabilities. Naturally, automated tools will have their limitations, and that is where the information security expert will have to fill the gap. As of today, assessments such as the analysis of logical application flow cannot be done by a computer, due to the need to understand the application, until it (the computer) can pass a Turing test.
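As a small illustration of that automation, a port and service scan can be scripted rather than run by hand; the sketch below assumes nmap is installed, uses a placeholder host name, and leaves the interpretation of the resulting report to the analyst:

import subprocess

# Placeholder target; substitute a host you are authorized to scan.
TARGET = "scanme.example.org"

# -sV probes open ports for service and version information;
# -p 1-1024 limits the scan to the well-known port range.
result = subprocess.run(
    ["nmap", "-sV", "-p", "1-1024", TARGET],
    capture_output=True,
    text=True,
)

print(result.stdout)   # raw report; judging its relevance remains a human task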

Bibliography