Saturday, March 15, 2014

SAQ A v2.0 vs. SAQ A v3.0 Eligibility Criteria

Now that the PCI SSC has released the updated version of the Self-Assessment Questionnaires (SAQs), I would like to share my opinion on SAQ A v3.0 and SAQ A-EP v3.0.

The initial impression was that SAQ A v3.0 and SAQ A-EP v3.0 would have a major impact on the Payment Card Industry, as they introduce more stringent eligibility criteria (SAQ A v3.0) and far more applicable PCI DSS requirements (SAQ A-EP v3.0). In fact, SAQ A-EP covers almost the same requirements as SAQ C. However, after careful review of the eligibility criteria of SAQ A v2.0, SAQ A v3.0 and SAQ A-EP v3.0, and of the actual impact SAQ A-EP v3.0 will have on merchants, the change is not that drastic.

Let's take a look at the eligibility criteria for SAQ A v3.0 and compare them to SAQ A v2.0. Aside from implying that third-party providers have to be PCI DSS validated (“compliant” is no longer acceptable), the additional criteria include:

  • All payment acceptance and processing are entirely outsourced to PCI DSS validated third-party service providers;
  • Your company has no direct control of the manner in which cardholder data is captured, processed, transmitted, or stored;

Additionally, for e-commerce channels:
  • The entirety of all payment pages delivered to the consumer’s browser originates directly from a third-party PCI DSS validated service provider(s).

For merchants accepting payment information via mail/telephone order only, the impact is minimal. For the e-commerce channel, where SAQ A-EP v3.0 comes into play as well, the difference can be summarized in one word: hosting, that is, who hosts the web site (and the payment pages). Moreover, I can see some merchants successfully arguing that the “traditional” redirection to a hosted payment provider, exactly the scenario SAQ A-EP v3.0 was written for, is still eligible for compliance validation using SAQ A v3.0.

For the sake of argument, let's consider a typical scenario whereby a merchant self-hosts an e-commerce web server which redirects to a third-party payment provider. Is all payment acceptance and processing outsourced to PCI DSS validated third-party service providers? Yes. Does the merchant have no direct control of the manner in which cardholder data is captured, processed, transmitted, or stored? Correct, it is all taken care of by the third-party payment processor. Do all payment pages delivered to the consumer's browser originate directly from a third-party PCI DSS validated service provider(s)? Here, if we define payment pages as web pages where cardholder data is entered by the consumer, then the answer is still yes.

We can also define “all payment pages” as the entire e-commerce data flow (i.e. the entire e-commerce web site), which potentially disqualifies the merchant from validating compliance using SAQ A v3.0. However, suppose we slightly change the scenario so that the web server is hosted by a PCI DSS validated service provider (e.g. Amazon AWS). Now both the e-commerce pages and the payment pages originate directly from a third-party PCI DSS validated service provider(s).

So what is the intent behind the choice of words and the additional requirements of SAQ A v3.0? Is it to force merchants to use validated third-party service providers and to migrate to hosted solutions (again, hosted by validated third-party service providers)? The alternative is, arguably, to comply with the comprehensive SAQ A-EP v3.0.

Tuesday, May 14, 2013

Linux/Unix Secure Password Generator

Linux provides built-in functionality to generate secure passwords with a few (piped) commands:
cat /dev/urandom | tr -dc '[:print:]' | head -c 8
A special file /dev/urandom provides an interface to a Linux kernel random number generator which gathers environmental noise from device drivers and other sources into an entropy pool.

Notes:
  1. The -c 8 parameter controls the length of the password.
  2. '[:print:]' (complexity) can be substituted with '[:alpha:][:digit:]' for a less complex password. Note that the character class is quoted to prevent the shell from glob-expanding it.
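
For scripting contexts, the same idea can be expressed in Python. Below is a minimal sketch using the standard-library secrets module (available since Python 3.6), which also draws from the operating system's CSPRNG; the alphabet roughly matches tr's [:print:] class, minus the space character:

import secrets
import string

# Roughly the [:print:] character class, without the space character
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length=8):
    """Generate a random password using the OS CSPRNG."""
    return ''.join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password(8))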

Monday, April 1, 2013

SECURE Python Django Database Settings

I have started to use the Python Django web framework (“The Web framework for perfectionists with deadlines”) and was shocked that still, in 2013, the official documentation (https://docs.djangoproject.com/en/dev/ref/settings/) and the "experts" on Stack Overflow (http://stackoverflow.com/questions/3540339/is-it-okay-that-database-credentials-are-stored-in-plain-text) recommend storing database connection credentials in clear text. It is not as if the database stores confidential information (e.g. Intellectual Property), private or sensitive information, etc.

Python has a keyring library (https://pypi.python.org/pypi/keyring) that provides an easy way to access the system keyring service from Python and can be used in an application to safely store passwords.

To install it on Ubuntu, first make sure you have an up-to-date pip Python package:
sudo apt-get install python-pip
sudo pip install pip -U

Then, using pip, install the keyring library:
sudo pip install keyring
Finally, update the settings.py with the following code to securely store authentication credentials:

import keyring
import getpass
database_name = 'schema_name'
username = 'administrator'
password = keyring.get_password(database_name, username)

while password is None:
    password = getpass.getpass(database_name + " Password:\n")
    # store the password in the system keyring for subsequent runs
    keyring.set_password(database_name, username, password)

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': database_name,        # Or path to database file if using sqlite3.
        'USER': username,             # Not used with sqlite3.
        'PASSWORD': password,         # Not used with sqlite3.
        'HOST': 'db.inteliident.com', # Set to empty string for localhost. Not used with sqlite3.
        'PORT': '3306',               # Set to empty string for default. Not used with sqlite3.
    }
}
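
If prompting on the first run is undesirable (for example, in a non-interactive deployment), the credential can be pre-seeded using the same keyring API from a one-off script. A sketch, with a placeholder password to replace:

import keyring

# One-time: store the database password in the system keyring,
# so settings.py never has to prompt for it
keyring.set_password('schema_name', 'administrator', 'REPLACE_ME')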

Simple, isn't it?

Monday, January 28, 2013

Exception Handling for Input Validation? Not My Cup of Tea!

We all know the importance of input validation, and we all have different views on the implementation aspects: white-listing vs. black-listing, sanitization vs. canonicalization, etc. etc. etc.

Lately, I was involved in a heated discussion about using the Java and .NET exception handling mechanisms to perform tasks such as input validation, as opposed to the if-then-else control statement (which I was advocating for).

In theory, both mechanisms can be used to perform validation and handling of an invalid input, but here are two arguments why if-then-else control statement is a better fit:
  • From a conceptual standpoint, an exception is defined by Oracle as "an event, which occurs during the execution of a program, that disrupts the normal flow of the program's instructions" (Oracle, n.d.). Should invalid user input be considered an exceptional event? Hardly, as it is quite likely for the end-user to make a mistake (intentional and malicious, or unintentional), and the software is therefore expected to handle it. Rico Mariani (2003) writes: "If you think it will be at all normal for anyone to want to catch your exception, then probably it shouldn't be an exception at all".
  • From a performance standpoint (here, I will sound like an ageing C/C++ software developer), the exception handling mechanism is (really) expensive!
To examine the last statement, that the Java exception handling mechanism is significantly more expensive than a simple if-then-else flow control statement, I have created the following two Java classes:

public class ExceptionMethod {
    public void throwException() throws Exception {
        throw new Exception();
    }

    public Object returnNull() {
        return null;
    }
}


and

public class Main {
    public static void main(String[] args) {
        ExceptionMethod em = new ExceptionMethod();

        do {
            /**
             * Profile 1: generate and handle an exception
             */
            try {
                em.throwException();
            } catch (Exception ex) {
                // do nothing
            }
            /**
             * Profile 2: handle null using if-then-else
             */
            if (em.returnNull() == null) {
                // do nothing
            }

            // yield to prevent locking the JVM
            Thread.yield();
        } while (true);
    }
}


In order to monitor the resource consumption, I have used NetBeans Profiler.

The do-while loop was designed to force the application into an infinite loop, allowing the resource allocation to be monitored over an extended period of time (generating a large statistical sample).
Initially, the code in the Profile 2 section was commented out and the application was executed. The NetBeans Profiler generated the following graph:


Then, the Profile 1 section was commented out and Profile 2 was uncommented. The application was executed again, with the following graph generated by the NetBeans Profiler:

Please note the following observations:
  • Memory consumption of the application utilizing the exception handling mechanism is close to 512 MB, the maximum set for the JVM, while the application utilizing the if-then-else control statement barely uses 5 MB.
  • Surviving Generations, the number of instances that are alive in the heap (Jiri Sedlacek, n.d.), grows linearly (up to the point when the Garbage Collector deallocates heap memory, then rises again) in the application relying on exception handling, while remaining flat in the application relying on the if-then-else control statement, indicating no redundant object allocation and deallocation.
Based on the research conducted, I expected the exception handling mechanism to require more resources than a simple control statement, but the results (a hundredfold difference in memory consumption, and a linearly growing vs. static number of objects in the heap) emphasise the importance of understanding the "under the hood" mechanisms employed by the software.

Moreover, the experiment demonstrates how a security mechanism (exception handling) can be transformed into a security vulnerability (e.g. Denial of Service) if not used correctly.

In a real application, a combination of both should be used to provide an adequate layer of security: the if-then-else control statement to handle expected (and, to a certain degree, unexpected) data, while the exception handling mechanism allows the software to either recover from an unexpected error or fail gracefully.
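
The effect is not unique to the JVM. As a cross-runtime illustration, here is a minimal Python sketch (the function names and iteration count are arbitrary) comparing input validation via exception handling against an explicit check, using the standard timeit module; the exception-raising path should come out noticeably slower:

import timeit

def validate_with_exception(value):
    # Treat invalid input as an exceptional event
    try:
        return int(value)
    except ValueError:
        return None

def validate_with_check(value):
    # Treat invalid input as an expected, ordinary case
    # (for simplicity, only non-negative integers are accepted)
    return int(value) if value.isdigit() else None

bad_input = "not-a-number"
print(timeit.timeit(lambda: validate_with_exception(bad_input), number=1000000))
print(timeit.timeit(lambda: validate_with_check(bad_input), number=1000000))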

References

  • Oracle, n.d. "What Is an Exception?" [online]. Available from: http://docs.oracle.com/javase/tutorial/essential/exceptions/definition.html (accessed: January 28, 2013)
  • Rico Mariani, 2003. "Exception Cost: When to throw and when not to" [online]. MSDN. Available from: http://blogs.msdn.com/b/ricom/archive/2003/12/19/44697.aspx (accessed: January 28, 2013)

Thursday, August 16, 2012

Client/Server DBMS


There are virtually limitless factors that need to be taken into consideration when evaluating a client/server architecture. These factors range from hardware requirements, deployment topology, communication protocols, middleware requirements, integration of management interfaces, security aspects, and scalability (horizontal and vertical) of the solution, to application development interfaces. Similarly, a client/server database has a number of characteristics which can impact the overall solution. Peter Rob et al. (2009) note that these include interoperability and remote query execution.
One of the most important client/server database management system characteristics is interoperability, which manifests itself in the ability to “provide transparent data access to multiple and heterogeneous clients” (Peter Rob, Carlos Coronel and Steven Morris, 2009. Appendix F, page 158). Numerous clients written in Java, .NET, C/C++ or Perl have the capacity to interact with the database management server to retrieve or update information through standard interfaces such as ODBC and JDBC, as the short sketch below illustrates.
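
As a concrete sketch of this interface-level interoperability, here is a minimal Python client using the pyodbc package; the DSN name, credentials and table are hypothetical, and the same query could equally be issued by a Java client over JDBC:

import pyodbc

# Connect through an ODBC data source name (DSN) configured on the client;
# the DBMS behind the DSN is transparent to this code
conn = pyodbc.connect('DSN=SalesDB;UID=report_user;PWD=s3cret')
cursor = conn.cursor()

cursor.execute("SELECT customer_id, balance FROM accounts")
for customer_id, balance in cursor.fetchall():
    print(customer_id, balance)

conn.close()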
In addition, the client/server DBMS architecture allows the end-user to manipulate the information stored in the server DBMS in a number of forms. For example, one design can utilize resources (often distributed) allocated to the DBMS server to process the information (thin client), while another design can distribute the information processing, relying on the computing resources of the clients (fat client). Peter Rob et al. (2009) note that in many cases the client/server architecture is closely related to distributed database management systems (DDBMS), where the processing of a single query can take place on a number of remote servers.
The client/server database architecture can take a number of forms. Peter Rob et al. (2009) mention 2-tier and 3-tier architectures, the latter adding middleware between the client and the server tiers. The middleware, as the name suggests, has a number of functions to facilitate connectivity to the database management server, ranging from exposing ODBC or MQ APIs, to converting query formats and managing security aspects (authentication and authorization). Mohammad Ghulam Ali (2009) suggests a 4-tier database architecture which adds a Global Database Management System component to decompose a query “into a set of sub queries to be executed at various remote heterogeneous local component database servers” (Ali, M 2009).
As with any application, distribution of business logic and/or presentation increases the number of attack vectors, thus requiring a thorough risk assessment and implementation of appropriate compensating controls to secure the overall solution (Cherry, D 2011). For example, distribution of information from a single location to multiple clients can have a security impact on information considered private or confidential. Don Kiely (2011) notes that additional security controls should be implemented to compensate for the remote storage (on the client side) and transportation of the information. Although this can be addressed through a straightforward implementation of the SSL/TLS protocol between the database server and clients, the solution designer has to consider all applicable risks (both technical and operational) before deciding on the optimal compensating control(s).

Bibliography

  • Ali, M 2009, 'A Multidatabase System as 4-Tiered Client-Server Distributed Heterogeneous Database System', arXiv, EBSCOhost, viewed 16 August 2012.
  • Cherry, D 2011, Securing SQL Server [Electronic Book] : Protecting Your Database From Attackers / Denny Cherry, n.p.: Burlington, MA : Syngress, 2011., University of Liverpool Catalogue, EBSCOhost, viewed 16 August 2012.
  • Kiely, D 2011, 'Key Ways to Secure ASP.NET Applications with a SQL Server Back End', SQL Server Magazine, 13, 12, pp. 27-31, Computers & Applied Sciences Complete, EBSCOhost, viewed 16 August 2012.
  • Peter Rob, Carlos Coronel and Steven Morris, 2009. “Database Systems: Design, Implementation and Management”. 9th Edition. Course Technology.

Saturday, April 21, 2012

(Security) Open Source vs. Non-Open Source

When discussing two seemingly unrelated topics such as security and “open source versus non-open source”, the discussion usually boils down to the quality of the product rather than the architecture or the implementation. Steve, M (2008) writes that open source projects as well as commercial software vendors use similar software development practices, methodologies and tools: “bug trackers like Bugzilla, source code revision management tools like SVN and automatic build tools such as ant” (Steve, M 2008). Moreover, Gary McGraw points out that “Software security relates entirely and completely to quality. You must think about security, reliability, availability, dependability — at the beginning, in the design, architecture, test and coding phases, all through the software life cycle” (Mark Willoughby, 2005); it is therefore imperative to analyze the factors impacting software quality in both the open source and non-open source worlds.
While Ross J. Anderson (2008) notes that commercial deadlines can impact the quality of the code produced even by skilled software developers, the argument is countered by Craig Mundie, CTO of Microsoft, who claims that in the current market software vendors are under pressure to develop quality software as “more and more customers view security as a key decision factor” (Berni Dwan, 2004). But statistical information available in the National Vulnerability Database (2012) tends to agree with Ross J. Anderson, showing a steady growth in vulnerabilities discovered in Microsoft Windows. According to the National Vulnerability Database (2012), Microsoft Windows had 17 vulnerabilities discovered in 2007, 34 in 2008, 47 in 2009, 166 in 2010 and 197 in 2011.
Schryen, G (2011) uses two additional factors, Mean Time Between Vulnerability Disclosures (MTBVD) and (un)patching behavior, to compare the security of open source versus non-open source system components. According to the collected data, in most cases vulnerabilities discovered within open source products are fixed markedly quicker than in their non-open source counterparts. Moreover, the research demonstrates that, again in most cases, open source products aim to close the majority of identified vulnerabilities, while non-open source vendors adopt a more prioritized approach whereby “there is a strong bias toward patching severe vulnerabilities” (Schryen, G, 2011). As a result, 66.22% of Microsoft Internet Explorer 7 vulnerabilities remain unpatched compared to 20.36% for Mozilla Firefox 2. The same pattern is reflected in e-mail clients, whereby Microsoft Outlook Express 6 has 65.22% unpatched vulnerabilities compared to 5.45% for Mozilla Thunderbird 1.
Ross J. Anderson (2008) also points out the differences between the target markets of open source and non-open source products: “the users of open products such as GNU/Linux and Apache are more motivated to report system problems effectively, and it may be easier to do so, compared with Windows users who respond to a crash by rebooting and would not know how to report a bug if they wanted to” (Ross J. Anderson, 2008). This, in turn, can further skew the published vulnerability statistics.
While the numbers may suggest that open source solutions are more secure, “open and closed approaches to security are pretty much equivalent, making source code publicly available helps attackers and defenders alike” (Berni Dwan, 2004), allowing each party to evaluate the most effective and sophisticated attack methods.

Bibliography

  • Berni Dwan 2004, 'Open source vs closed', Network Security, Volume 2004, Issue 5, May 2004, Pages 11-13, EBSCOhost, viewed 21 April 2012.
  • Mark Willoughby, 2005. “Q&A: Quality software means more secure software” [online]. Computer World. Available from: http://www.computerworld.com/s/article/91316/Q_A_Quality_software_means_more_secure_software (accessed. April 21, 2012).
  • National Vulnerability Database (NVD), 2012. “Statistics Results Page” [online]. Available from: http://goo.gl/RGRy9 (accessed: April 21, 2012)
  • Schryen, G 2011, 'Is Open Source Security a Myth?', Communications Of The ACM, 54, 5, pp. 130-140, Business Source Premier, EBSCOhost, viewed 21 April 2012.
  • Steve, M 2008, 'Open Source: Open source: does transparency lead to security?', Computer Fraud & Security, 2008, pp. 11-13, ScienceDirect, EBSCOhost, viewed 21 April 2012.
  • Ross J. Anderson, 2008. “A Guide to Building Dependable Distributed Systems”. 2nd Edition. Wiley Publishing.

Saturday, April 14, 2012

E-Money

E-money is defined by the European Commission as “electronically, including magnetically, stored monetary value as represented by a claim on the issuer which is issued on receipt of funds for the purpose of making payment transactions” (European Commission, n.d.). As such, E-money provides the same advantages as cash: real-time transactions and anonymity. Ken Griffin et al. (n.d.) note that there are a number of categories of E-money, including E-cash, digital checks, digital bank checks, smart cards and electronic coupons and tokens. There are a number of E-money issuers, such as the well-known PayPal and the lesser-known Pecunix, Ukash and Bitcoin. Moreover, a virtual world such as SecondLife has its own E-money currency (the L$) which can be earned, spent and exchanged for US dollars (SecondLife, n.d.). As stated previously, the main advantages of E-money are real-time transactions, low transaction fees and anonymity similar to real cash transactions. There are, however, concerns about security and fraud, as well as questions about the financial backing of the virtual currency. For example, criminals are targeting Bitcoin digital wallets, or are using the collective resources of botnet networks to mint virtual currency which can be exchanged for real money (Peter Coogan, 2011).

Credit cards, on the other hand, are backed by international organizations responsible for issuing and acquiring credit card transactions. Often, the transaction fees (a flat fee or a percentage of a transaction) are charged to the merchant. From an end-user standpoint, credit cards are an existing and trusted technology whereby the security standard (e.g. the Payment Card Industry Data Security Standard) is verified by independent Qualified Security Assessors (QSAs). Additional benefits such as cash back, air miles and membership points increase the adoption rate among consumers. The drawback (which is considered by some consumers an advantage) is the fact that all purchases are done on credit and, if not paid in full, are subject to comparatively high interest rates.

It is difficult to compare the security risk between the E-money and credit card technologies as both have had high-profile data thefts, such as the theft of 25,000 Bitcoins from 478 accounts (Jason Mick, 2011) and “a massive security breach at a credit card processor [that] has put 10 million accounts at risk” (Brandon Hill, 2012). On both fronts there are efforts to tighten up security, as evident with the Payment Card Industry Data Security Standard (PCI SSC, 2010) and the MintChip Challenge by the Royal Canadian Mint to create a secure alternative to E-cash backed by the Canadian Government (Emily Jackson, 2012).

With the variety of methods to exchange funds (i.e. payments and transfers) electronically, such as PayPal, MintChip, Bitcoin, and credit and debit cards, it is not surprising that Sweden is moving towards a cashless economy (CBCNews, 2012) where different digital payment methods are used in parallel, serving different purposes (e.g. micro-payments), rather than competing with each other.

Bibliography

  • Brandon Hill, 2012. “Global Payments Inc. Hit By Security Breach; 10M Visa, MasterCard Accounts at Risk” [online]. DailyTech. Available from: http://www.dailytech.com/Massive+Security+Breach+Hits+MasterCard+Visa+10M+Accounts+at+Risk/article24355.htm (accessed: April 14, 2012).
  • CBCNews, 2012. “Sweden moving towards cashless economy” [online]. Available from: http://www.cbsnews.com/8301-202_162-57399610/sweden-moving-towards-cashless-economy/ (accessed: April 14, 2012).
  • Emily Jackson, 2012. “Royal Canadian Mint to create digital currency” [online]. The Star. Available from: http://www.thestar.com/business/article/1159513--royal-canadian-mint-to-create-digital-currency (accessed: April 14, 2012).
  • European Commission, n.d. “e-Money” [online]. Available from: http://ec.europa.eu/internal_market/payments/emoney/index_en.htm (accessed: April 14, 2012).
  • Jason Mick, 2011. “Inside the Mega-Hack of Bitcoin: the Full Story” [online]. DailyTech. Available from: http://www.dailytech.com/Inside+the+MegaHack+of+Bitcoin+the+Full+Story/article21942.htm (accessed: April 14, 2012).
  • Ken Griffin, Phillip Balsmeier, Bobi Doncevski, n.d. “Electronic Money as A Competitive Advantage” [online]. Available from: http://journals.cluteonline.com/index.php/RBIS/article/download/5458/5543&ei=uVKFT-2kBoXh4QSvr4izBQ&usg=AFQjCNFHe4HkbgG7hbdsQzlWnXg1LR7MNA (accessed: April 14, 2012).
  • Payment Card Industry Security Standard Council, 2010. “Payment Card Industry (PCI) Data Security Standard” [online]. Available from: https://www.pcisecuritystandards.org/documents/pci_dss_v2.pdf (accessed: April 12, 2012).
  • Ross J. Anderson, 2008. “A Guide to Building Dependable Distributed Systems”. 2nd Edition. Wiley Publishing.
  • SecondLife, n.d. “Buy L$ ” [online]. Available from: https://secondlife.com/my/lindex/buy.php?lang=en_US (accessed: April 14, 2012).
  • Peter Coogan, 2011. “Bitcoin Botnet Mining” [online]. Symantec. Available from: http://www.symantec.com/connect/blogs/bitcoin-botnet-mining (accessed: April 14, 2012).

Saturday, April 7, 2012

Network Security Architecture

While information security can be seen as a “business enabler”, allowing the organization to deploy innovative solutions, and thus gain competitive advantage, without exposing itself to additional risk, incorrect design and implementation can have the opposite effect by placing unnecessary burden on the Information Technology department staff and the organization's employees. Therefore, implementation of security controls to protect network boundaries should be consistent with business needs, budget and the resources these controls are designed to protect. As a result, the security design should be based on a security assessment (threat risk assessment) of the current environment to identify corporate assets and resources, and the security risks they are exposed to. George Farah (2004) suggests a five-phase approach to network security architecture, with the threat risk assessment as the initial step. It is followed by the formulation of a network security architecture design to mitigate the identified risks. In the third phase, the organization develops a security policy and procedures to govern the deployment and maintenance of the proposed architecture. The fourth phase includes the deployment of the architecture, while in the last, fifth phase the organization implements the security policy through management processes such as patch management, configuration management, vulnerability management, etc.
Typical security components in the network infrastructure include a firewall, a Virtual Private Network (VPN) concentrator and a proxy server. Additional devices could include an Intrusion Detection System (IDS) or Intrusion Prevention System (IPS), a Web Application Firewall, anti-spam and email security software, and Data Leakage Prevention (DLP). The logical location of these and other devices depends on the corporate assets requiring protection, the business processes, and the existing (or proposed) network architecture. For example, a Virtual Private Network is only necessary when employees are required to work from a remote location such as home or a remote office. As stated earlier, incorrect architecture or deployment of unnecessary system components can mean additional burden on the IT staff, which will impact the service provided by the organization.

The edge firewall is often regarded as “a workhorse” in securing the internal network from unauthorized access via the Internet (Patrick W. Luce, 2004). It often provides a number of security services such as access control, stateful inspection and Network Address Translation (NAT). In many cases, firewall devices have built-in VPN and basic IPS/IDS capabilities, such as the Cisco PIX 500 series (Cisco, n.d.).

Modern firewall devices offered by security vendors such as Palo Alto, Check Point and Cisco allow organizations to extend the security services provided by the device using a “plug and play” architecture. For example, the Check Point Software Blade Architecture allows network administrators to extend the firewall functionality “without additional hardware, firmware or driver” (Checkpoint, 2012). The additional blades include services such as IPsec VPN, IPS, DLP, Web Security, Antivirus & Anti-Malware, Anti-Spam & Email Security and Voice over IP (VoIP).

When considering the “expandable” security devices as opposed to a “best of breed” architecture, security experts are divided into two camps. On one hand, unified management of the security architecture allows network administrators to correlate events and incidents, and better manage the security posture of the organization. On the other hand, having security devices from a number of software and hardware vendors (best of breed) adds an additional layer of defense an attacker needs to overcome when trying to exploit an identified vulnerability.

To conclude, the organizational security solution should meet targeted business security needs rather than follow trends and security “fashion”. Unnecessarily complex solutions increase the pressure on the IT staff, which can, in fact, reduce the overall security level of the entire organization.

Friday, March 30, 2012

Software Tamper Resistance


One of the methods to provide tamper resistance capabilities to software is code obfuscation. It is a process designed to change the software in order to make it more difficult to reverse engineer while remaining semantically equivalent to the original program. The technique is used by “white hat” security specialists to protect Intellectual Property and to “deter the cracking of licensing and DRM schemes” (Victor, D 2008), as well as by “black hats” as a protection technique to avoid (signature-based) detection by anti-virus engines. Victor, D (2008) lists a number of techniques used to obfuscate code, including just-in-time decryption, polymorphic encryption, timing checks, layered anti-debugging logic and binary code morphing. Moreover, Bai Zhongying and Qin Jiancheng (2009) successfully applied obfuscation principles in the web environment, creating prototypes of JavaScript and HTML obfuscation tools. An additional advanced technique, self-modifying code, is described by Nikos Mavrogiannopoulos et al. (n.d.), whereby the software mutates its own code in order to “make attacks [on the code] more expensive” (Nikos Mavrogiannopoulos, Nessim, K, & Bart, P n.d.).
While the usage of obfuscation techniques has become widely accepted, Preda, M, & Giacobazzi, R (2009) raise the question of the effectiveness of code obfuscation techniques: “it is hard to compare different obfuscating transformations with respect to their resilience to attacks and this makes it difficult to understand which technique is better to use in a given scenario” (Preda, M, & Giacobazzi, R 2009), due to the absence of theoretical research formalizing metrics for code obfuscation.
ProGuard “is a free Java class file shrinker, optimizer, obfuscator, and preverifier” (Eric Lafortune, 2011). While the advantages of the tool are its easy integration into commonly used Integrated Development Environments (IDEs) as well as ant tasks, and additional functionality such as the optimizer and code shrinker, its obfuscation capabilities are limited to code morphing. More advanced techniques, or a combination of several obfuscation techniques such as flow obfuscation and string encryption, could potentially (see the previous paragraph discussing the lack of a metric to measure the effectiveness of code obfuscation) exponentially increase the effort required to reverse engineer the code, as the toy sketch below illustrates.
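
As a toy illustration of the string encryption idea (purely illustrative: real obfuscators transform compiled code and use far stronger schemes, and the key and host name below are made up), string literals can be stored XOR-encoded and decoded only at run time, so they never appear verbatim in the distributed artifact:

KEY = 0x5A  # illustrative single-byte key

def encode(plain):
    """XOR-encode a string (run offline, at build time)."""
    return bytes(b ^ KEY for b in plain.encode())

def decode(blob):
    """Recover the original string at run time."""
    return bytes(b ^ KEY for b in blob).decode()

# In a real build, only the encoded bytes would ship with the program
OBFUSCATED_HOST = encode("license-server.example.com")

print(decode(OBFUSCATED_HOST))  # license-server.example.com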

Bibliography

  • Bai Zhongying; Qin Jiancheng; 2009 , "Webpage Encryption Based on Polymorphic Javascript Algorithm," Information Assurance and Security, 2009. IAS '09. Fifth International Conference on , vol.1, no., pp.327-330, 18-20 Aug. 2009
    doi: 10.1109/IAS.2009.39
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=5284075&isnumber=5282964
  • Eric Lafortune, 2011. “ProGuard” [online]. Available from: http://proguard.sourceforge.net/ (accessed: March 30, 2012).
  • Nikos Mavrogiannopoulos, Nessim, K, & Bart, P n.d., 'A taxonomy of self-modifying code for obfuscation', Computers & Security, ScienceDirect, EBSCOhost, viewed 29 March 2012.
  • Preda, M, & Giacobazzi, R 2009, 'Semantics-based code obfuscation by abstract interpretation', Journal Of Computer Security, 17, 6, pp. 855-908, Academic Search Complete, EBSCOhost, viewed 29 March 2012.
  • Ross J. Anderson, 2008. “A Guide to Building Dependable Distributed Systems”. 2nd Edition. Wiley Publishing.
  • Victor, D 2008, 'Obfuscation: Obfuscation – how to do it and how to crack it', Network Security, 2008, pp. 4-7, ScienceDirect, EBSCOhost, viewed 29 March 2012.

Saturday, March 17, 2012

Cracking AES/3-DES


In 2002, a distributed network (distributed.net) successfully recovered a DES encryption key within 2.25 days. In order to estimate whether 3-DES or AES keys can be recovered using a brute-force attack, this paper calculates the number of encryption operations required and the (potentially) available processing power.
One of the largest distributed computing projects, folding@home, estimates that with utilization of modern hardware such as Graphics Processing Units (GPUs) it is possible to achieve an acceleration of up to forty times (x40) over a CPU due to their ability to perform “an enormous number of Floating Point OPerations (FLOPs)” (Vijay Pande, 2010). Therefore, with 200,000 actively processing computers it is possible to surpass the 10 petaflop level. As such, it is safe to assume that on average each participating machine contributes:

10 × 10^15 ÷ 200,000 = 50 × 10^9

or 50 billion calculations per second.
To amass the computing power required to brute-force 3-DES or AES encryption, a bot network could be used to “harvest” idle CPU/GPU cycles. One of the most advanced malware families today, TDL-4, controlled over 4.5 million infected computers in 2011 (Sergey Golovanov and Igor Soumenkov, 2011). Therefore, using the previous assumption that a zombie (infected computer) is capable of processing 50 billion calculations per second, the total computing power of a botnet such as TDL-4 is:

50 × 10^9 × 4.5 × 10^6 = 2.25 × 10^17 = 225 × 10^15

or 225 quadrillion (short scale) operations per second.
S. Kelly (2006) notes that because the 3DES encryption scheme chains the keys as C = E_k3(D_k2(E_k1(P))), a brute-force attack on 3DES requires a total of 2^168 cryptographic operations. Assuming that a single 3DES decryption takes a microsecond (10^-6 seconds), it will take:

2^168 × 10^-6 ÷ (225 × 10^15) ≈ 1.66286409 × 10^27

seconds, or 5.26941088 × 10^19 years. This is far longer than the universe has existed (4.339 × 10^17 seconds). The reader should note that this figure is far smaller than what was estimated by S. Kelly (2006), due to the increased computing power of modern CPU and GPU devices. Regardless, it is safe to assume that 3DES can withstand a brute-force attack.
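
The arithmetic above is easy to sanity-check. A short Python sketch, using the per-operation time and botnet rate assumed earlier:

# Back-of-the-envelope check of the brute-force estimate above
ops = 2 ** 168              # 3DES keyspace (worst case)
seconds_per_op = 1e-6       # assumed time for a single 3DES decryption
botnet_rate = 225e15        # assumed botnet calculations per second

seconds = ops * seconds_per_op / botnet_rate
years = seconds / (365.25 * 24 * 3600)
print("%.8e seconds, %.8e years" % (seconds, years))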

Bibliography

  • Kaur, G, & Kumar, D 2010, 'Performance and Analysis of AES, DES and Triple DES against Brute Force Attack to protect MPLS Network', International Journal Of Advanced Research In Computer Science, 1, 4, p. 420, EDS Foundation Index, EBSCOhost, viewed 17 March 2012.
  • Ross J. Anderson 2008, “Security Engineering: A Guide to Building Dependable Distributed Systems”. 2nd Edition. Wiley.
  • Sergey Golovanov, Igor Soumenkov 2011, “TDL4 – Top Bot” [online]. Kaspersky Lab ZAO. Available from: http://www.securelist.com/en/analysis/204792180/TDL4_Top_Bot?print_mode=1 (accessed: March 17, 2012).
  • S. Kelly, 2006, Security Implications of Using the Data Encryption Standard (DES) [online]. Network Working Group. Available from: http://www.ietf.org/rfc/rfc4772.txt (accessed: March 17, 2012).
  • Vijay Pande, 2010. “Folding@home high performance client FAQ” [online]. Available from: http://folding.stanford.edu/English/FAQ-highperformance (accessed: March 17, 2012).