Thursday, August 16, 2012

Client/Server DBMS


There are virtually limitless factors to take into consideration when evaluating client/server architecture. These factors range from hardware requirements, deployment topology, communication protocols, middleware requirements, integration of management interfaces, security aspects, scalability (horizontal and vertical) of the solution, to application development interfaces. Similarly, a client/server database has a number of characteristics which can impact the overall solution. Peter Rob et al. (2009) note that these include interoperability and remote query execution.
One of the most important client/server database management system characteristics is interoperability, which manifests itself in the ability to “provide transparent data access to multiple and heterogeneous clients” (Peter Rob, Carlos Coronel and Steven Morris, 2009, Appendix F, p. 158). Numerous clients written in Java, .NET, C/C++ or Perl can interact with the database management server to retrieve or update information through standard protocols such as ODBC and JDBC.
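As a minimal illustration of this interoperability, the following Java sketch queries a relational server over JDBC; the connection URL, credentials and table are hypothetical placeholders, and a .NET, C/C++ or Perl client could run the same query against the same server through ODBC:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical URL and credentials; any JDBC-compliant DBMS is accessed the same way.
        String url = "jdbc:postgresql://db.example.com:5432/sales";
        try (Connection conn = DriverManager.getConnection(url, "report_user", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT id, total FROM orders")) {
            while (rs.next()) {
                System.out.println(rs.getInt("id") + " -> " + rs.getDouble("total"));
            }
        }
    }
}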
In addition, client/server DBMS architecture allows end-users to manipulate the information stored in the server DBMS in a number of forms. For example, one design can utilize resources (often distributed) allocated to the DBMS server to process the information (thin client), while another design can distribute the information processing, relying on the computing resources of the clients (fat client). Peter Rob et al. (2009) note that in many cases the client/server architecture is closely related to distributed database management systems (DDBMS), where the processing of a single query can take place on a number of remote servers.
The client/server database architecture can take a number of forms. Peter Rob et al. (2009) mention 2-tier and 3-tier architectures, the latter of which adds middleware between the client and server tiers. The middleware, as the name suggests, has a number of functions that facilitate connectivity to the database management server, ranging from exposing ODBC or MQ APIs and converting query formats to managing security aspects (authentication and authorization). Mohammad Ghulam Ali (2009) suggests a 4-tier database architecture which adds a Global Database Management System component to decompose a query “into a set of sub queries to be executed at various remote heterogeneous local component database servers” (Ali, M 2009).
As with any application, distribution of business logic and/or presentation increases the number of attack vectors, thus requiring a thorough risk assessment and the implementation of appropriate compensating controls to secure the overall solution (Cherry, D 2011). For example, distributing information from a single location to multiple clients can have a security impact on information considered private or confidential. Don Kiely (2011) notes that additional security controls should be implemented to compensate for the remote storage (on the client side) and transportation of the information. Although this can often be addressed through a straightforward implementation of the SSL/TLS protocol between the database server and clients, the solution designer has to consider all applicable risks (both technical and operational) before deciding on the optimal compensating control(s).
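As an illustration of one such compensating control, the sketch below forces a JDBC client to encrypt the channel and verify the server certificate. It assumes a PostgreSQL server with TLS enabled; the property names follow the PostgreSQL JDBC driver, and other drivers expose equivalent settings:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class TlsJdbcClient {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "report_user");    // hypothetical credentials
        props.setProperty("password", "secret");
        props.setProperty("ssl", "true");            // request an encrypted channel
        props.setProperty("sslmode", "verify-full"); // reject servers whose certificate fails validation
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://db.example.com:5432/sales", props)) {
            System.out.println("TLS-protected connection established: " + !conn.isClosed());
        }
    }
}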

Bibliography

  • Ali, M 2009, 'A Multidatabase System as 4-Tiered Client-Server Distributed Heterogeneous Database System', arXiv, EBSCOhost, viewed 16 August 2012.
  • Cherry, D 2011, Securing SQL Server [Electronic Book] : Protecting Your Database From Attackers / Denny Cherry, n.p.: Burlington, MA : Syngress, 2011., University of Liverpool Catalogue, EBSCOhost, viewed 16 August 2012.
  • Kiely, D 2011, 'Key Ways to Secure ASP.NET Applications with a SQL Server Back End', SQL Server Magazine, 13, 12, pp. 27-31, Computers & Applied Sciences Complete, EBSCOhost, viewed 16 August 2012.
  • Peter Rob, Carlos Coronel and Steven Morris, 2009. “Database Systems: Design, Implementation and Management”. 9th Edition. Course Technology.

Saturday, April 21, 2012

(Security) Open Source vs. Non-Open Source

When discussing two seemingly unrelated topics such as security and “open source versus non-open source”, the discussion usually boils down to the quality of the product rather than the architecture or the implementation. Steve, M (2008) writes that open source projects and commercial software vendors use similar software development practices, methodologies and tools: “bug trackers like Bugzilla, source code revision management tools like SVN and automatic build tools such as ant” (Steve, M 2008). Moreover, Gary McGraw points out that “Software security relates entirely and completely to quality. You must think about security, reliability, availability, dependability — at the beginning, in the design, architecture, test and coding phases, all through the software life cycle” (Mark Willoughby, 2005). It is therefore imperative to analyze the factors impacting software quality in both the open source and non-open source worlds.
While Ross J. Anderson (2008) notes that commercial deadlines can impact the quality of the source code produced even by skilled software developers, the argument is countered by Craig Mundie, Microsoft's CTO, who claims that in the current market software vendors are under pressure to develop quality software as “more and more customers view security as a key decision factor” (Berni Dwan, 2004). However, statistical information available in the National Vulnerability Database (2012) tends to support Ross J. Anderson, showing steady growth in the number of vulnerabilities discovered in Microsoft Windows: 17 vulnerabilities discovered in 2007, 34 in 2008, 47 in 2009, 166 in 2010 and 197 in 2011 (National Vulnerability Database, 2012).
Schryen, G (2011) uses two additional factors, Mean Time Between Vulnerability Disclosures (MTBVD) and (un)patching behavior, to compare the security of open source and non-open source system components. According to the collected data, in most cases vulnerabilities discovered in open source products are fixed considerably quicker than in their non-open source counterparts. Moreover, the research demonstrates that, again in most cases, open source products aim to close the majority of identified vulnerabilities, while non-open source vendors adopt a more prioritized approach whereby “there is a strong bias toward patching severe vulnerabilities” (Schryen, G, 2011). As a result, 66.22% of Microsoft Internet Explorer 7 vulnerabilities remain unpatched compared to 20.36% for Mozilla Firefox 2. The same pattern holds for e-mail clients, where Microsoft Outlook Express 6 has 65.22% unpatched vulnerabilities compared to 5.45% for Mozilla Thunderbird 1.
Ross J. Anderson (2008) also points out the differences between the target markets of open source and non-open source products: “the users of open products such as GNU/Linux and Apache are more motivated to report system problems effectively, and it may be easier to do so, compared with Windows users who respond to a crash by rebooting and would not know how to report a bug if they wanted to” (Ross J. Anderson, 2008). This, in turn, can further skew the published vulnerability statistics.
While the numbers may suggest that open source solutions are more secure, “open and closed approaches to security are pretty much equivalent, making source code publicly available helps attackers and defenders alike” (Berni Dwan, 2004), allowing each party to evaluate the most effective and sophisticated attack methods.

Bibliography

  • Berni Dwan 2004, 'Open source vs closed', Network Security, Volume 2004, Issue 5, May 2004, Pages 11-13, EBSCOhost, viewed 21 April 2012.
  • Mark Willoughby, 2005. “Q&A: Quality software means more secure software” [online]. Computer World. Available from: http://www.computerworld.com/s/article/91316/Q_A_Quality_software_means_more_secure_software (accessed. April 21, 2012).
  • National Vulnerability Database (NVD), 2012. “Statistics Results Page” [online]. Available from: http://goo.gl/RGRy9 (accessed: April 21, 2012)
  • Schryen, G 2011, 'Is Open Source Security a Myth?', Communications Of The ACM, 54, 5, pp. 130-140, Business Source Premier, EBSCOhost, viewed 21 April 2012.
  • Steve, M 2008, 'Open source: does transparency lead to security?', Computer Fraud & Security, 2008, pp. 11-13, ScienceDirect, EBSCOhost, viewed 21 April 2012.
  • Ross J. Anderson, 2008. “Security Engineering: A Guide to Building Dependable Distributed Systems”. 2nd Edition. Wiley Publishing.

Saturday, April 14, 2012

E-Money

E-money is defined by the European Commission as “electronically, including magnetically, stored monetary value as represented by a claim on the issuer which is issued on receipt of funds for the purpose of making payment transactions” (European Commission, n.d.). As such, E-money provides the same advantages as cash: real-time transactions and anonymity. Ken Griffin et al. (n.d.) note that there are a number of categories of E-money, including E-cash, digital checks, digital bank checks, smart cards, and electronic coupons and tokens. There are a number of E-money issuers, such as the well-known PayPal and the less known Pecunix, Ukash and Bitcoin. Moreover, a virtual world such as SecondLife has its own E-money currency (the Linden dollar, L$) which can be earned, spent and exchanged for US dollars (SecondLife, n.d.). As stated previously, the main advantages of E-money are real-time transactions, low transaction fees and anonymity similar to real cash transactions. There are, however, concerns about security and fraud, as well as questions about the financial backing of the virtual currency. For example, criminals are targeting Bitcoin digital wallets or are using botnets to harness collective computing resources to mint virtual currency which can be exchanged for real money (Peter Coogan, 2011).

Credit cards, on the other hand, are backed by international organizations responsible for issuing cards and acquiring credit card transactions. Often, the transaction fees (a flat fee or a percentage of the transaction) are charged to the merchant. From an end-user standpoint, credit cards are an existing and trusted technology whereby the security standard (e.g. the Payment Card Industry Data Security Standard) is verified by independent Qualified Security Assessors (QSAs). Additional benefits such as cash back, air miles and membership points increase the adoption rate among consumers. The drawback (which is considered by some consumers an advantage) is the fact that all purchases are made on credit and, if not paid in full, are subject to comparatively high interest rates.

It is difficult to compare the security risk between E-money and credit card technologies, as both have suffered high-profile data thefts, such as the theft of 25,000 Bitcoins from 478 accounts (Jason Mick, 2011) and a “massive” security breach at a credit card processor that put 10 million accounts at risk (Brandon Hill, 2012). On both fronts there are efforts to tighten security, as evident in the Payment Card Industry Data Security Standard (PCI SSC, 2010) and the MintChip Challenge by the Royal Canadian Mint to create a secure alternative to cash backed by the Canadian Government (Emily Jackson, 2012).

With the variety of methods available to exchange funds (i.e. payments and transfers) electronically, such as PayPal, MintChip, Bitcoin, and credit and debit cards, it is not surprising that Sweden is moving towards a cashless economy (CBCNews, 2012), where different digital payment methods are used in parallel, serving different purposes (e.g. micropayments), rather than competing with each other.

Bibliography

  • Brandon Hill, 2012. “Global Payments Inc. Hit By Security Breach; 10M Visa, MasterCard Accounts at Risk” [online]. DailyTech. Available from: http://www.dailytech.com/Massive+Security+Breach+Hits+MasterCard+Visa+10M+Accounts+at+Risk/article24355.htm (accessed: April 14, 2012).
  • CBCNews, 2012. “Sweden moving towards cashless economy” [online]. Available from: http://www.cbsnews.com/8301-202_162-57399610/sweden-moving-towards-cashless-economy/ (accessed: April 14, 2012).
  • Emily Jackson, 2012. “Royal Canadian Mint to create digital currency” [online]. The Star. Available from: http://www.thestar.com/business/article/1159513--royal-canadian-mint-to-create-digital-currency (accessed: April 14, 2012).
  • European Commission, n.d. “e-Money” [online]. Available from: http://ec.europa.eu/internal_market/payments/emoney/index_en.htm (accessed: April 14, 2012).
  • Jason Mick, 2011. “Inside the Mega-Hack of Bitcoin: the Full Story” [online]. DailyTech. Available from: http://www.dailytech.com/Inside+the+MegaHack+of+Bitcoin+the+Full+Story/article21942.htm (accessed: April 14, 2012).
  • Ken Griffin, Phillip Balsmeier, Bobi Doncevski, n.d. “Electronic Money as A Competitive Advantage” [online]. Available from: http://journals.cluteonline.com/index.php/RBIS/article/download/5458/5543&ei=uVKFT-2kBoXh4QSvr4izBQ&usg=AFQjCNFHe4HkbgG7hbdsQzlWnXg1LR7MNA (accessed: April 14, 2012).
  • Payment Card Industry Security Standards Council, 2010. “Payment Card Industry (PCI) Data Security Standard” [online]. Available from: https://www.pcisecuritystandards.org/documents/pci_dss_v2.pdf (accessed: April 12, 2012).
  • Ross J. Anderson, 2008. “Security Engineering: A Guide to Building Dependable Distributed Systems”. 2nd Edition. Wiley Publishing.
  • SecondLife, n.d. “Buy L$ ” [online]. Available from: https://secondlife.com/my/lindex/buy.php?lang=en_US (accessed: April 14, 2012).
  • Peter Coogan, 2011. “Bitcoin Botnet Mining” [online]. Symantec. Available from: http://www.symantec.com/connect/blogs/bitcoin-botnet-mining (accessed: April 14, 2012).

Saturday, April 7, 2012

Network Security Architecture

While information security can be seen as a “business enabler”, allowing the organization to deploy innovative solutions and thus gain competitive advantage without exposure to additional risk, incorrect design and implementation can have the opposite effect by placing unnecessary burden on Information Technology staff and organizational employees. Therefore, the implementation of security controls to protect network boundaries should be consistent with business needs, budget, and the resources these controls are designed to protect. As a result, the security design should be based on a security assessment (threat risk assessment) of the current environment to identify corporate assets and resources, and the security risks to which they are exposed. George Farah (2004) suggests a five-phase approach to network security architecture, with a threat risk assessment as the initial step. It is followed by the formulation of a network security architecture design to mitigate the identified risks. In the third phase, the organization develops a security policy and procedures to govern the deployment and maintenance of the proposed architecture. The fourth phase comprises the deployment of the architecture, while in the last, fifth phase the organization implements the security policy through management processes such as patch management, configuration management and vulnerability management.
Typical security components in the network infrastructure include a firewall, a Virtual Private Network (VPN) concentrator and a proxy server. Additional devices could include an Intrusion Detection System (IDS) or Intrusion Prevention System (IPS), a Web Application Firewall, anti-spam and email security software, and Data Leakage Prevention (DLP). The logical location of these and other devices depends on the corporate assets requiring protection, the business processes, and the existing (or proposed) network architecture. For example, a Virtual Private Network is only necessary when employees are required to work from a remote location such as home or a remote office. As stated earlier, incorrect architecture or deployment of unnecessary system components could mean an additional burden on IT staff, which will impact the service provided by the organization.

The edge firewall is often regarded as “a workhorse” in securing the internal network from unauthorized access via the Internet (Patrick W. Luce, 2004). It typically provides a number of security services such as access control, stateful inspection and Network Address Translation (NAT). In many cases, firewall devices have built-in VPN and basic IPS/IDS capabilities, such as the Cisco PIX 500 series (Cisco, n.d.).

Modern firewall devices offered by security vendors such as Palo Alto, Check Point and Cisco allow organizations to extend the security services provided by the device using a “plug and play” architecture. For example, the Check Point Software Blade Architecture allows network administrators to extend the firewall functionality “without additional hardware, firmware or driver” (Check Point, 2012). The additional blades include services such as IPsec VPN, IPS, DLP, Web Security, Antivirus & Anti-Malware, Anti-Spam & Email Security and Voice over IP (VoIP).

When weighing “expandable” security devices against a “best of breed” architecture, security experts are divided into two camps. On one hand, unified management of the security architecture allows network administrators to correlate events and incidents, and better manage the security posture of the organization. On the other hand, having security devices from a number of software and hardware vendors (best of breed) adds an additional layer of defense an attacker needs to overcome when trying to exploit an identified vulnerability.

To conclude, the organizational security solution should meet targeted business security needs rather than follow trends and security “fashion”. Unnecessarily complex solutions increase the pressure on IT staff, which can, in fact, reduce the overall security level of the entire organization.

Bibliography

Friday, March 30, 2012

Software Tamper Resistance


One of the methods of providing tamper resistance capabilities to software is code obfuscation: a process designed to transform the software so that it is more difficult to reverse engineer while remaining semantically equivalent to the original program. The technique is used both by “white hat” security specialists to protect intellectual property and to “deter the cracking of licensing and DRM schemes” (Victor, D 2008), and by “black hat” actors as a protection technique to avoid signature-based detection by anti-virus engines. Victor D. (2008) lists a number of techniques used to obfuscate code, including just-in-time decryption, polymorphic encryption, timing checks, layered anti-debugging logic and binary code morphing. Moreover, Bai Zhongying and Qin Jiancheng (2009) successfully applied obfuscation principles in a web environment, creating prototypes of JavaScript and HTML obfuscation tools. An additional advanced technique, self-modifying code, was proposed by Nikos Mavrogiannopoulos et al. (n.d.), whereby the software mutates its own code in order to “make attacks [on the code] more expensive” (Nikos Mavrogiannopoulos, Nessim, K, & Bart, P n.d.).
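As a toy illustration of the string encryption and just-in-time decryption ideas mentioned above (a sketch of the general technique, not code from any of the cited papers), the literal “secret” below never appears in the class file; only its XOR-encoded bytes do, and they are decoded at first use:

public class ObfuscatedString {
    private static final byte KEY = 0x5A; // hypothetical XOR key chosen for the example

    // Bytes of "secret" XOR-ed with KEY; a real obfuscator would emit these at build time.
    private static final byte[] ENCODED = {0x29, 0x3F, 0x39, 0x28, 0x3F, 0x2E};

    static String decode() {
        byte[] b = ENCODED.clone();
        for (int i = 0; i < b.length; i++) {
            b[i] ^= KEY; // just-in-time decryption at the point of use
        }
        return new String(b, java.nio.charset.StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        System.out.println(decode()); // prints "secret"
    }
}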
While the usage of obfuscation techniques has become widely accepted, Preda, M, & Giacobazzi, R (2009) raise the question of the effectiveness of code obfuscation techniques: “it is hard to compare different obfuscating transformations with respect to their resilience to attacks and this makes it difficult to understand which technique is better to use in a given scenario” (Preda, M, & Giacobazzi, R. 2009), due to the absence of theoretical research formalizing metrics for code obfuscation.
ProGuard “is a free Java class file shrinker, optimizer, obfuscator, and preverifier” (Eric Lafortune, 2011). While the advantages of the tool are its easy integration into commonly used Integrated Development Environments (IDEs) and Ant tasks, along with additional functionality such as the optimizer and code shrinker, its obfuscation capabilities are limited to code morphing. More advanced techniques, or a combination of several obfuscation techniques such as flow obfuscation and string encryption, could potentially (see the previous paragraph discussing the lack of metrics to measure the effectiveness of code obfuscation) exponentially increase the effort required to reverse engineer the code.
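For reference, an obfuscate-only ProGuard configuration for a hypothetical application jar (the paths and class names are placeholders) might look like the following:

# Rename classes and members, but skip shrinking and optimization.
-injars  app.jar
-outjars app-obfuscated.jar
-libraryjars <java.home>/lib/rt.jar
-dontshrink
-dontoptimize
# Keep the entry point so the application still starts, and record the
# renaming map so stack traces can be de-obfuscated later.
-keep public class com.example.Main {
    public static void main(java.lang.String[]);
}
-printmapping mapping.txt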

Bibliography

  • Bai Zhongying; Qin Jiancheng, 2009, "Webpage Encryption Based on Polymorphic Javascript Algorithm," Information Assurance and Security, 2009. IAS '09. Fifth International Conference on, vol. 1, pp. 327-330, 18-20 Aug. 2009
    doi: 10.1109/IAS.2009.39
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5284075&isnumber=5282964
  • Eric Lafortune, 2011. “ProGuard” [online]. Available from: http://proguard.sourceforge.net/ (accessed: March 30, 2012).
  • Nikos Mavrogiannopoulos, Nessim, K, & Bart, P n.d., 'A taxonomy of self-modifying code for obfuscation', Computers & Security, ScienceDirect, EBSCOhost, viewed 29 March 2012.
  • Preda, M, & Giacobazzi, R 2009, 'Semantics-based code obfuscation by abstract interpretation', Journal Of Computer Security, 17, 6, pp. 855-908, Academic Search Complete, EBSCOhost, viewed 29 March 2012.
  • Ross J. Anderson, 2008. “Security Engineering: A Guide to Building Dependable Distributed Systems”. 2nd Edition. Wiley Publishing.
  • Victor, D 2008, 'Obfuscation: Obfuscation – how to do it and how to crack it', Network Security, 2008, pp. 4-7, ScienceDirect, EBSCOhost, viewed 29 March 2012.

Saturday, March 17, 2012

Cracking AES/3-DES


In 2002, a distributed computing network (distributed.net) successfully recovered a DES encryption key within 2.25 days. In order to estimate whether 3-DES or AES keys can be recovered using a brute-force attack, this paper calculates the number of encryption operations required and the (potentially) available processing power.
One of the largest distributed computing projects, folding@home, estimates that by utilizing modern hardware such as the Graphics Processing Unit (GPU) it is possible to achieve an acceleration of up to forty times (x40) over the CPU, thanks to the GPU's ability to perform “an enormous number of Floating Point OPerations (FLOPs)” (Vijay Pande, 2010). Therefore, with 200,000 actively processing computers it is possible to surpass the 10 petaflop level. As such, it is safe to assume that on average each participating machine contributes:

10×10^15 ÷ 200,000 = 50,000,000,000 = 50×10^9

or 50 billion calculations per second.
To amass the computing power required to brute-force 3-DES or AES encryption, a bot network could be used to “harvest” idle CPU/GPU cycles. One of the most advanced malware strains today, TDL-4, controlled over 4.5 million infected computers in 2011 (Sergey Golovanov and Igor Soumenkov, 2011). Therefore, using the previous assumption that each zombie (infected computer) is capable of processing 50 billion calculations per second, the total computing power of a botnet such as TDL-4 is:

50×10^9 × 4.5×10^6 = 2.25×10^17 = 225×10^15

or 225 quadrillion (short scale) operations per second.
S. Kelly (2006) notes that because the 3DES encryption scheme chains its keys as C = E_k3(D_k2(E_k1(p))), a brute-force attack on 3DES requires a total of 2^168 cryptographic operations. Assuming that a single 3DES decryption takes a microsecond (10^-6 seconds), it will take:

2^168 × 10^-6 ÷ (225×10^15) ≈ 1.66286409×10^27

seconds, or 5.26941088×10^19 years. This is far longer than the universe has existed (4.339×10^17 seconds). The reader should note that this figure is far smaller than the one estimated by S. Kelly (2006), owing to the increased computing power of modern CPU and GPU devices. Regardless, it is safe to assume that 3DES can withstand a brute-force attack.
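Since arithmetic at these magnitudes is easy to get wrong, the following minimal Java sketch (not from any cited source; the constants simply mirror the assumptions made above) reproduces the calculation with arbitrary-precision numbers:

import java.math.BigDecimal;
import java.math.BigInteger;
import java.math.MathContext;

public class BruteForceEstimate {
    public static void main(String[] args) {
        // 3DES key space with three independent 56-bit keys: 2^168 candidate keys.
        BigDecimal keySpace = new BigDecimal(BigInteger.ONE.shiftLeft(168));
        // Assumptions from this post: one decryption takes 10^-6 seconds and the
        // botnet sustains 225×10^15 calculations per second.
        BigDecimal secondsPerDecryption = new BigDecimal("1e-6");
        BigDecimal botnetCalcsPerSecond = new BigDecimal("225e15");
        BigDecimal seconds = keySpace.multiply(secondsPerDecryption)
                                     .divide(botnetCalcsPerSecond, MathContext.DECIMAL64);
        BigDecimal years = seconds.divide(new BigDecimal("3.15576e7"), // seconds in a year
                                          MathContext.DECIMAL64);
        System.out.println("Seconds: " + seconds); // ≈ 1.66×10^27
        System.out.println("Years:   " + years);   // ≈ 5.27×10^19
    }
}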

Bibliography

  • Kaur, G, & Kumar, D 2010, 'Performance and Analysis of AES, DES and Triple DES against Brute Force Attack to protect MPLS Network', International Journal Of Advanced Research In Computer Science, 1, 4, p. 420, EDS Foundation Index, EBSCOhost, viewed 17 March 2012.
  • Ross J. Anderson 2008, “Security Engineering: A Guide to Building Dependable Distributed Systems”. 2nd Edition. Wiley.
  • Sergey Golovanov, Igor Soumenkov 2011, “TDL4 – Top Bot” [online]. Kaspersky Lab ZAO. Available from: http://www.securelist.com/en/analysis/204792180/TDL4_Top_Bot?print_mode=1 (accessed: March 17, 2012).
  • S. Kelly, 2006, Security Implications of Using the Data Encryption Standard (DES) [online]. Network Working Group. Available from: http://www.ietf.org/rfc/rfc4772.txt (accessed: March 17, 2012).
  • Vijay Pande, 2010. “Folding@home high performance client FAQ” [online]. Available from: http://folding.stanford.edu/English/FAQ-highperformance (accessed: March 17, 2012).