Wednesday, November 21, 2007

Technological Means to Prevent Data Leakage

Before we dive into “Geeky Stuff”, let me tell you what this article is not about. This article will not detail data leakage prevention policies and procedures, nor will it detail specific vendor solutions.
This article will, however, provide a brief overview of the ways data can leak and detail the technological means of protecting yourself from data leakage.

Why is Technology Important?

Let’s start with a question: what do the following individuals have in common - an underpaid developer in an off-shore software house who sold millions of lines of code to competitors, a well-paid employee who stored client files on his later-stolen laptop, and an extremely well-paid manager in a governmental organization who decided to send through the post two CDs containing data about millions of British citizens (including dates of birth, addresses and bank accounts)?
The aforementioned cases did not lack a data leakage prevention policy; the organizations involved had well-defined policies and procedures. What they lacked were the technological means to enforce those policies.

Technology without Policy

Don’t get me wrong, policies are part of my bread and butter. Without well-defined, management-approved policies there is no legitimate reason to have technological tools in the first place. Policies define which digital streams are considered sensitive information and who is (and is not) allowed to access and handle this information.
The Ways to Leak Data are Limitless
I will assume that our internal network and data storage are as secure as possible. In this case, there are only two ways to remove data from our stronghold:
  • Physical removal (such as hardcopy, or softcopy on media such as CD/DVD, USB, etc.)
  • Electronic means (dial-up, Internet, FTP, SCP and every other existing way to transfer information).
From here, there are endless variations of each method.

How to Protect Yourself
There are many tools and solutions out there, but as I mentioned before this article will not go into detail on the solutions. There are a number of locations where we can detect the leakage and prevent it:

1. The Inner Circle
This is the best place to start. It is very tricky to prevent the physical removal of information, but we can make it really hard to try. So, why not limit access to the data in the first place?
Many solutions exist to tackle this. In my opinion, Role Based Access Control (RBAC) is the best approach: it allows legitimate use of the sensitive information for business purposes while denying access to everyone else.
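To make this concrete, here is a minimal sketch of an RBAC check in Python; the roles and actions are hypothetical, and a real deployment would load them from a directory service rather than a hard-coded table:

    # Role -> set of actions permitted on the sensitive data store.
    ROLE_PERMISSIONS = {
        "clerk":   {"read"},
        "manager": {"read", "write"},
        "auditor": {"read", "export"},
    }

    def is_allowed(role, action):
        """Deny by default: an action is permitted only if the role explicitly grants it."""
        return action in ROLE_PERMISSIONS.get(role, set())

    assert is_allowed("manager", "write")
    assert not is_allowed("clerk", "export")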

2. Everything in Between

Here is where we need to think about how to prevent data leakage. As I mentioned before, it is no simple task to prevent the intentional or accidental leakage of sensitive data.
First, we need to make sure that all hardcopies are destroyed before leaving the premises. For that, shredders must be available and easily accessible.
Second, discontinue the use of writeable CD/DVD drives on your client systems; there is normally no reason for the average user to burn CDs or DVDs. Furthermore, disable USB/FireWire storage and removable media. You might want to set up a small number of stations where writing CDs/DVDs is possible, but limit access to managers or other privileged staff so that the data leaving the organization is filtered.

3. The Outer Perimeter

Firewalls are our outer perimeter protectors. Usually, they are configured to block incoming traffic while allowing most outgoing communication. This is also the best location to place a proxy server that will inspect all the outgoing traffic. Based on predefined rules (this is where the policies come in), a proxy can decide which data can leave the organizational network and what should stay in.
But even in organizations where a proxy is used to scrutinize outbound traffic, there are many additional services permitted through the firewall which are not inspected by the proxy. We can use Nmap (nmap -sT -p 0-65535 <target>) to map all the ports authorized through the firewall and get an idea of the potential avenues for data leakage.
The most common way of bypassing proxy inspection is to use an SSH port, which is usually open for administrative tasks. Users can easily tunnel anything they like over SSH. Also, just because the default port for SSH is 22, it doesn’t mean SSH cannot run on any other port (such as 80 or 443).
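Because an SSH server gives itself away in the first bytes it sends, one rough way to check whether a permitted port is really carrying SSH is a simple banner grab. A minimal sketch in Python (the address is the documentation placeholder 192.0.2.10, and the check is only a heuristic):

    import socket

    def looks_like_ssh(host, port, timeout=3.0):
        """SSH servers announce themselves with a banner such as 'SSH-2.0-OpenSSH_4.7'."""
        try:
            with socket.create_connection((host, port), timeout=timeout) as s:
                banner = s.recv(32)
        except OSError:
            return False
        return banner.startswith(b"SSH-")

    # Is the service answering on 443 really SSH in disguise?
    print(looks_like_ssh("192.0.2.10", 443))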
Additional methods include DNS or ICMP tunneling. For example, tools such as OzymanDNS, NSTX and DNScat allow tunneling SSH traffic (and other protocols) over DNS. In many cases it is impossible to block DNS altogether, but you can, for example, use an internal DNS server and allow DNS traffic through the firewall only from that specific server.
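DNS tunnelling also leaves a fingerprint you can hunt for: tunnelling tools pack payload data into unusually long query names. A rough detection heuristic in Python (the length thresholds are assumptions to tune against your own traffic):

    def suspicious_dns_query(qname, max_label=40, max_total=120):
        """Flag names that look like encoded payloads rather than hostnames."""
        labels = qname.rstrip(".").split(".")
        return len(qname) > max_total or any(len(label) > max_label for label in labels)

    print(suspicious_dns_query("www.example.com"))                             # False
    print(suspicious_dns_query("aGVsbG8gd29ybGQ" * 5 + ".t.example.com"))      # True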

4. The Wide Wild West

In those rare cases when we do need our sensitive data to leave the protective walls of our organization, we need to make sure that the data is sufficiently protected. Believe me, a ZIP file with the password ‘1234567890’ (and that is a long password) is not considered protection at all. As an alternative, you could use PGP (or GnuPG) to encrypt your files with the recipient’s key(s) and then either write the data to CD/DVD or send it by electronic means, such as email or BitTorrent. Even if the CD/DVD is lost, you can sleep soundly, because breaking a 1024-bit RSA key is currently infeasible (this is the minimum standard for today’s encryption).
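As an illustration, GnuPG can be driven from a script; this sketch assumes gpg is installed and that the public key of the hypothetical recipient alice@example.com has already been imported:

    import subprocess

    def encrypt_for(path, recipient="alice@example.com"):
        """Encrypt a file with the recipient's public key; only the matching
        private key can decrypt the resulting .gpg file."""
        subprocess.run(
            ["gpg", "--encrypt", "--recipient", recipient,
             "--output", path + ".gpg", path],
            check=True,
        )

    encrypt_for("client-records.csv")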

Conclusion
Introducing a good set of organizational Data Leakage Prevention policies is not enough. On the other hand, technological solutions are meaningless without a good set of policies, since you need to define what is allowed and what is not.
Technology is a necessary ‘tool’ to back up and enforce policies and to limit data leakage in an organization (it would be unwise to believe that data leakage can be prevented altogether). A good combination of both technology and policies is needed.

Published under Comsec Consulting UK

Wednesday, August 8, 2007

Security in a “Virtual” Reality

Today, every business tries to cut costs in order to be competitive in tomorrow’s market. Sometimes this is done by outsourcing IT, HR or other non-core services. Other organizations achieve cost reduction by consolidating their IT environment using virtualization.
By creating multiple virtual machines which share resources such as CPU, memory, hard drive, network devices, etc., organizations are able to reduce the cost of server operation. This includes the hardware, maintenance and human resources needed to manage, operate and administer these servers on a daily basis.
Furthermore, with regard to Disaster Recovery Planning (DRP), virtualization can also enhance the security level by enabling faster, more flexible and more reliable disaster recovery at a lower cost. It also significantly reduces planned and unplanned downtime.
This demand has attracted many software vendors who would be happy to sell you their product. Examples include OpenVZ, Xen, VirtualBox, Virtual Iron, Virtual PC, VMware, QEMU, Adeos, Mac-on-Linux, Win4BSD, Win4Lin Pro, z/VM, and more.
However, failure to configure and harden your virtual server might have very unpleasant results, especially when it is implemented without security considerations. In a number of cases we have witnessed a hacker bring down an entire virtual infrastructure through a memory leak flaw on one of the servers: by exploiting the flaw, the hacker was able to consume all the available memory to the point where the entire system crashed.
The saying “Start Secure – Stay Secure” is a simple slogan used by Comsec to emphasize that security has to begin from the very first stages of design and integration, and should be integrated into every following step.
So, how do we secure a virtual environment?

Design a Secure Virtual Environment

Each solution has its own approach, which does not always suit the organizational needs. Some of these solutions completely separate each OS, while others create separate “zones” with a shared kernel. Organizations need to determine security requirements that correlate with the organizational security policy. A security architect should be involved at this stage in order to define parameters such as access control to the server console, the design of the virtual network architecture, the design of the virtual machines, communication protocols, etc.

Firewall

When implementing a virtual environment, some of the communication runs over an internal, virtual network. For example, when a virtual web server communicates with a virtual database, the packets traverse a virtual network only. A traditional firewall will not be able to filter this communication if needed.
There are a number of possible solutions. One is to use a firewall integrated into the virtual server application. The second option is to route all the communication through an external firewall by connecting virtual machines to separate physical network cards. A third option is to use a “virtual” firewall, which usually comes in the form of a virtual appliance. These appliances function like “traditional” firewall devices and can perform functions such as deep packet inspection, session-based rules, filtering, etc.

Hardening Virtual “Guest” Servers

In most cases we can assume that each virtual machine is fully isolated from the other virtual machines running on the same server. These servers need to be hardened and tested periodically, just like their “physical” counterparts. A security vulnerability on one of the virtual machines could allow an attacker to hop from that machine to others on the network.

Hardening Virtual “Host” Server

“Host” servers are responsible for allocating memory and CPU to “Guest” servers, as well as providing access to storage and network devices. By gaining access to the file systems on the “Host” server, an attacker gains access to the files stored on the virtual machines. It is also possible to shut down the “Host” server, resulting in a denial of service for every hosted “virtual” machine. These are only a few of the possible scenarios.
The “Host” server should be hardened and subject to very strict access control. Only administrators and dedicated operators should have access to the console and the virtual server management interfaces. The server should be updated with the latest security patches and configured in a secure fashion.

(Security) Policies… Policies… Policies…

Every security decision should be backed up by an existing and approved policy. These policies should include:
  • Password Policy (expiration, password length, etc.)
  • Authentication Policy (token, LDAP, local /etc/passwd, etc.)
  • Access Control Policy
  • Network Connectivity Policy (such as separation to VLANs, firewall rules, etc.)
It is important to review these policies at least annually to make sure that they are updated with new standards and best practices.
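Policies become enforceable only when they are expressed as checks. As a small illustration of the first item, a password policy check in Python (the length and age limits are assumptions; take the real values from the approved policy):

    from datetime import date, timedelta

    MIN_LENGTH = 8                 # assumed policy value
    MAX_AGE = timedelta(days=90)   # assumed expiration period

    def password_compliant(password, last_changed):
        """True only if the password meets the length rule and has not expired."""
        return len(password) >= MIN_LENGTH and date.today() - last_changed <= MAX_AGE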

Security Auditing

Security auditing should be performed on a periodic basis. Auditing should include:
  • Security testing of “Guest” servers’ operating system and services
  • Security testing of “Host” servers’ operating system
  • Security testing of “Host” server virtualization application
  • Relevant network equipment (switches, firewalls, routers, etc.)
This auditing is important in order to discover security issues such as new vulnerabilities, redundant services, outdated firewall rules and routing tables, etc.

Conclusions
Virtualization technology offers many operational and financial advantages. It even provides some security benefits. Nevertheless, this concept introduces several security weaknesses. The associated vulnerabilities include potential denial of service, data leakage and others.
With a proper and professional security approach towards the virtualization concept, one can achieve a secure and reliable environment.
This approach should include proper secure design, secure configuration of the environment, secure maintenance, and proper periodical security audits.

Published under Comsec Consulting UK

Tuesday, April 10, 2007

Security Considerations for Data Centres

Communications in data centres today are most often based on networks running the IP protocol suite. Data centres contain a set of routers and switches that transport traffic between the servers and to the outside world. Redundancy is sometimes provided by getting the network connections from multiple vendors.
So, what are the other considerations we need to take into account when designing a data centre?

Network Security
Most people take it for granted, but network security plays an important role in securing our data. Every packet, encrypted or not, traverses the network and is affected by its state.
Usually, data centres must have crypto-capable routers and switches with comprehensive ACL rules; firewalls capable of handling the different protocols required by your business (such as VoIP and VPN) and of performing application data inspection; role-based access control for managing the network; and other security features (such as anti-virus, anti-spam, etc.).

Business Compartmentalization
Since the business information stored on servers is the core of the business, we need to make sure that this information is not accessible to third parties. It is considered good practice to separate each enterprise into its own VLAN and, if possible, to separate each business application into a different compartment. This way, a virus outbreak or DoS attack that affects one compartment will not influence the flow of business information in the others.

Administrative Traffic
Sniffing administrative traffic can be very helpful when you are trying to break into a “digital fortress”. This traffic may contain access passwords, IP addresses, configurations, etc. Data centres need to make sure that this information is inaccessible and not mixed up with production data. To do so, create a separate VLAN segment for administrative traffic and make sure that this traffic is encrypted. A separate network segment not only increases security, as an intruder will have to break through another layer of defence, but also improves the performance of the production segment.

Logging and Monitoring
High-quality event logging and monitoring is the lifeblood of incident response operations. Many organizations have implemented pretty good event logging at the network and operating system level, but very rarely at the application level. To the incident response analyst, each layer of logging brings its own perspective on a security event, and a full complement of those perspectives is necessary to really understand what took place.
For example, when trying to forensically determine how a site was compromised, the network logs show the date, time, protocol, source, etc., of the attack. The operating system logs show what the intruder did and accessed on the host's operating system. The application logs provide insight into what data the intruder accessed, modified, deleted, etc., within the compromised application. Without that “big picture” view, it is exceedingly difficult to provide company executives with an accurate damage assessment so they can make the appropriate business decisions on how to proceed.
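The application layer is the one most often missing, and it is cheap to add. A minimal sketch of application-level audit logging in Python (the logger name, log file and fields are illustrative):

    import logging

    logging.basicConfig(
        filename="app-audit.log",
        format="%(asctime)s %(name)s %(message)s",
        level=logging.INFO,
    )
    audit = logging.getLogger("app.audit")

    def load_record(record_id):
        return {"id": record_id}   # stand-in for the real data access layer

    def fetch_record(user, record_id):
        # Record who touched which business record and how; this is the
        # perspective that network and OS logs cannot provide.
        audit.info("user=%s action=read record=%s", user, record_id)
        return load_record(record_id)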

Regulations Compliance
Sometimes it is very important that your data centre provider complies with various regulations and standards, as this may affect your own organization's compliance. Relevant examples include BS7799 / ISO17799 (Information Security Management), Basel II and the Basel Capital Accord and the Sarbanes-Oxley Act of 2002, which provide guidance for financial institutions, and ISO14000 (Environmental Management Systems).

DRP (Disaster Recovery Plan)
Not only is a DRP a compliance requirement under regulations such as Sarbanes-Oxley and HIPAA, it is essential to business continuity. A disaster recovery plan gives you the ability to respond to an interruption in services by restoring an organization's critical business functions, and the core of the business is the data stored in our data centres. It is important to design, implement, test and update the DRP to ensure regulatory compliance and, more importantly, the continuity of the business.

And, Physical Security
Some will argue that physical security has nothing to do with information security. I don't believe so. The core values of information security are the confidentiality, integrity and availability of the data we are trying to protect, and all three are affected by physical factors, so we have to take those factors into account when protecting our data.

Water

Data centres have to be located as far as possible from flood-prone locations and should keep humidity between 35% and 85%. Water or excess humidity can damage our servers, and with them the integrity of our data and the availability of business services.
With too much humidity, water may begin to condense on internal components; with too little, static electricity may damage components.

Fire
Data centres must have elaborate fire prevention and fire extinguishing systems. The best practice is to have zoned fire prevention and detection systems, high-quality fire doors and other physical fire-breaks, so that if a fire does break out it can be contained and extinguished within a small part of the facility. Fire detection systems must include highly sensitive heat sensors that detect even the smallest heat rise or spark, so the situation can be dealt with before it becomes a full-scale fire incident.

Electricity
Backup power must be catered for via one or more uninterruptible power supplies and/or diesel generators. To prevent single points of failure, all elements of the electrical systems, including the backup systems, have to be fully duplicated, and critical servers connected to both the "A-side" and "B-side" power feeds.

Access Control
Perhaps the most important factor in data centre security is access control. If a server can be damaged, the data will not be available. In another scenario, if the data is encrypted but the server is stolen, not only is our data unavailable, which can hurt the business, the server can also be taken to an external location where the sensitive information can be decrypted at leisure.
Physical access to the site must be restricted to authorised personnel only. Organisations should consider using access cards (with smart chips), biometric systems and double doors with separate access tokens. In many cases, surveillance cameras and guards are used to increase security.

Published under Comsec Consulting UK

Thursday, April 5, 2007

Five Most Important Security Considerations for VoIP

It is understandable why so many organizations are moving to a Voice over IP infrastructure. VoIP is one of the fastest growing technologies in telecommunications today, thanks to its low cost and great flexibility. However, due to VoIP's particular security vulnerabilities, assimilating VoIP systems into an enterprise involves major security risks and requires deep organizational thought and examination of the ideal VoIP architecture. Enterprises are under the mistaken assumption that the existing network architecture can still be used "as is" following the addition of a VoIP infrastructure. However, this addition can damage the quality of service in the enterprise. Moreover, it can also cause financial damage and damage to the organization's reputation.
Enterprises that decide to change their voice communication infrastructure to VoIP face a great challenge. This change requires careful thought and raises essential questions regarding issues that affect the architecture of the network and the interface between VoIP and the data network.
The following section will outline the risks that financial organizations have to consider when implementing a VoIP infrastructure:

Risk to Existing Data Network
The deployment of some VoIP systems can damage an enterprise's information security posture, including the quality of the services these systems provide. As VoIP services run on the organization’s existing platforms, they are exposed to the same information security breaches. Networks that are not sufficiently secured can damage VoIP and other environments in the enterprise, so they must be designed and secured in the most appropriate way. Since a financial organisation relies on the existing data network for business-critical applications, e-Banking infrastructure and transactions, damage to it could lead to huge financial losses as well as loss of trust.

Opening VoIP to the Internet
Privacy and security regulations dictate that financial institutions are ultimately responsible for the privacy of their clients and partners. Opening VoIP components to external communication, beyond inter-organizational communications, increases the exposure of the internal network to security risks. In addition, VoIP applications are exposed to data theft, eavesdropping, impersonation and denial of service; vulnerabilities which can affect the data network if not configured correctly. The leakage of unpublished financial reports or clients’ confidential information can damage the organization’s reputation as well as lead to financial losses. Organizations need to ensure that VoIP deployment does not weaken the enterprise's information security or quality of service, and must examine the risks to their information.

Data Stealing and Eavesdropping
Like other data, VoIP is exposed to attacks and attempts to exploit software breaches. VoIP eavesdropping is even easier to carry out than eavesdropping on PSTN calls. Organizations need to inspect access control lists and policy enforcement, make sure the equipment is configured so that only permitted individuals can use VoIP, and harden the network against VoIP-oriented attacks. All of these can contribute to making the financial institution's infrastructure more secure against external and internal threats.

Assuring Business Continuity
The availability of e-Banking applications, financial databases and other business IT assets is critical to a financial organisation. A single power outage can cause financial and reputational damage to the enterprise and its services due to the inability to use VoIP. Organizations will have to evaluate the options, costs and efficiency of maintaining business continuity in the event of a power outage that prevents the use of VoIP, and will have to examine and evaluate various business continuity plans to overcome this obstacle.

Endpoint Security Issues
Integration of some types of endpoints into VoIP systems can damage the security level of the network. VoIP systems use a wide variety of endpoints, ranging from traditional telephone handsets to conferencing units, mobile units and soft-phones. However, malicious code and various other vulnerabilities are very common on PCs connected to the Internet, and these must be checked in the integrated network to ensure its security. The organization will also have to check the quality of Wi-Fi protection and of soft-phones, if used.

Conclusions
Financial enterprises will have to ensure the optimal security before, during and after VoIP deployment in the enterprise. Inadequate security may cause financial damage and damage to the organization's reputation.

Published under Comsec Consulting UK

Tuesday, March 6, 2007

KVM: Kernel-based Virtual Machine for Linux

OK, I'm impressed...
I have tried a lot of virtualization "solutions" but this time I'm really impressed.
KVM (an abbreviation of Kernel-based Virtual Machine) works fast, installed in less than 5 minutes and is really easy to manage. And the best part is that every virtual machine is just a process on the host. You can monitor it (top) and/or kill it in a second...
While still in the early stages of development, KVM shows real potential. I can even say that I have enjoyed "playing" with it.

http://kvm.sourceforge.net/

Friday, February 16, 2007

The Future Of “Signature Based” Security

In today's digital world, one cannot afford to be unprotected. It does not matter whether you are a multi-national enterprise or a home user; there is someone out there who will want to use your PC to collect sensitive data or to infiltrate your network.
We use computers for everything – from banking and investing to shopping and communicating with others through email or chat programs. Although you may not consider your communications "top secret," you probably do not want strangers reading your email, using your computer to attack other systems, sending forged email from your computer, or examining personal information stored on your computer (such as financial statements).
Most of our information security devices use malware “signatures” to identify different types of malware and thus protect our assets. These signatures can be in the form of firmware for our switches and routers, configuration and patches for IPS/IDS, firewalls and other servers, and virus/malware signatures for our anti-virus servers. We, as computer users, need to update these signatures on a daily basis in order to stay protected.
But, is it enough?

Is it Enough?

Apparently not.
According to CERT, 8,064 vulnerabilities were detected in 2006 alone.
But that is not all. The amount of time it takes for a virus to spread varies, though typically the fiercer attacks also spread more rapidly: 'Low Intensity' attacks take approximately 7 hours to 2 days; 'Significant' attacks take 1 hour to 1 day; and 'Medium' to 'Massive' attacks are distributed in as little as 3 to 5 hours.
This means that vendors will have to update their firmware and release patches on a daily basis, while we will have to dedicate most of our time to patching our devices and servers.
But even that may not be enough. Some organizations have a very strict patch release control process which can take more than a week, and in others the Information Technology (IT) estate is so large that patching is not a task for one man; you may need to hire additional personnel for this one task.
Failure to keep up will expose the organization to various threats. Furthermore, there is a chance that you might be attacked before a patch is applied.

What is the Alternative?

The alternative, as I see it, is to do what the financial sector (the banks) did and still does for clients' on-line transactions.
Most banks today have fraud detection systems. And if they don't, they should. These systems analyse a client's behaviour patterns over a period of time and then detect any deviation from that behaviour. Depending on the vendor and configuration, they may then pop up an authentication box or block the transaction.
Why not take this approach into the IT world?
Instead of analysing client behaviour, we will analyse the normal behaviour of our software and then monitor it for any abnormal activity. For example, Microsoft Word will never try to rename word.exe or manipulate .DLL files; in general, programs do not try to rename themselves.
We can take two different approaches. In the first, we analyse “good” programs and allow them to function only according to their established patterns of behaviour. The other approach is to analyse “bad” software and block every program that behaves the same way.
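A minimal sketch of the first (allowlist) approach in Python; the baseline table and the alerting hook are invented for illustration, and a real product would build the baseline by observing the software over time:

    # Operations observed for each program during a "known good" period.
    BASELINE = {
        "winword.exe": {"open_document", "save_document", "print"},
    }

    def check_event(process, operation):
        """Alert on any operation outside the program's recorded behaviour."""
        allowed = BASELINE.get(process, set())
        if operation not in allowed:
            alert(process, operation)

    def alert(process, operation):
        print(f"ALERT: {process} attempted unexpected operation: {operation}")

    check_event("winword.exe", "rename_executable")   # triggers an alert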

Why is it Good?

This alternative approach gives us the ability to react, and react fast, to any abnormal behaviour in our IT infrastructure, be it a registry key change or TCP packet manipulation. We will not have to deal with situations where we have already lost the information and now have to close the gap; with good and appropriate behaviour analysis, the gaps simply won't be there.
Since the behaviour of malware does not change often, we won't have to spend whole days patching our servers or disconnecting them to handle a virus outbreak.
For an attack to be successful, each attacker will have to come up with a totally different intrusion scenario, and I don't mean a buffer overflow through a different DLL file.

Published under Comsec Consulting UK

Monday, February 12, 2007

AJAX - The Evil Within

What is Ajax?
Ajax is shorthand for Asynchronous JavaScript and XML, and it is a web development technique for creating interactive web applications. Ajax isn’t a single technology. It’s really several technologies, each flourishing in its own right, coming together in powerful new ways. Ajax incorporates:
  • Standards-based presentation using XHTML and CSS
  • Dynamic display and interaction using the Document Object Model (DOM)
  • Data interchange and manipulation using XML and XSLT
  • Asynchronous data retrieval using XMLHttpRequest
  • JavaScript that binds everything together
The main reasons for using Ajax are to reduce bandwidth usage and to increase interactivity and improve the user experience. By using the XMLHttpRequest object to request and return data without a reload, a programmer bypasses the full-page round trip and makes the loosely coupled web page behave much like a tightly coupled application.

The “Traditional” Web Applications
In “traditional” web applications, the default architecture used back-end servers to store and manipulate the data, while the front end (the HTML web page) was mostly used to style and display the information.
A lot of information security best practices were written to support the growing demand on businesses to go on-line. Users started to become aware of the security implications and of rules of on-line behaviour like “don't press the Submit button twice”, “wait a bit longer, it's just processing”, or “don't press the Back button after you've submitted the form”.
But you can throw that basic knowledge out of the window now that Ajax is here.

“Ajax” Web Applications
The data still resides in the back-end data servers, but Ajax extends the program across both the client device and the server. Instead of presenting a blank browser window and an hourglass icon, Ajax doesn't “steal” the user's interaction with the application.
Every user action that would normally generate an HTTP request to the server now takes the form of a JavaScript call to the Ajax engine instead. Any response to a user action that doesn’t require a trip back to the server, such as simple data validation, editing data in memory, and even some navigation, the Ajax engine handles on its own. If the engine needs something from the server in order to respond, it makes those requests asynchronously, usually using XML.

Threats in Ajax
As we have seen, Ajax extends programs across both the client device and the server, creating far more opportunities for hackers to deliver malware onto sites.

Multiple Transmissions

Where before a form's data was submitted to the server only once, when the user had filled it in, an Ajax-enabled form which automatically relays the data from each field as it is entered launches multiple transmissions that virus writers can latch onto.

Screen Capturing Attacks
Before Ajax, the user was able to control (with more or less granularity) the information a website could access, and could review that information a couple of times before hitting the “Submit” button. Now, so-called screen-scraping attacks and web session hijacking attempts, both of which also seek to steal users' data, can be performed more easily by taking advantage of Ajax. With Ajax, a user's actions can be constantly and meticulously monitored. Because it can be done, it will be done, and that will lead to a headache bigger than just wasted bandwidth, gigabytes of useless information, and slower page load times.

Cross Site Scripting
Ajax introduces a lot more JavaScript code into web pages. Before, most of the business logic was hidden safely on our servers and the client was presented with mostly processed data and very few lines of script. Ajax moves a fair amount of business logic to the front end, which exposes a bigger “surface” to attack.

DoS (Denial of Service)
Though this problem is not new, the Ajax approach may increase this vulnerability. Just imagine the following scenario: a web server serves 1,000 users looking up ten-digit serial numbers. A poorly implemented Ajax web page will perform a lookup for every digit the user types, increasing the load the server needs to handle tenfold.
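The server side can defend itself by budgeting requests per client rather than trusting the page to behave. A minimal rate-limiting sketch in Python (the window size and budget are assumptions):

    import time

    WINDOW = 1.0       # seconds
    MAX_CALLS = 3      # assumed per-client budget per window
    _recent = {}       # client id -> timestamps of recent requests

    def allow_request(client_id):
        """Reject lookups arriving faster than the budget,
        e.g. a page firing a query on every keystroke."""
        now = time.time()
        fresh = [t for t in _recent.get(client_id, []) if now - t < WINDOW]
        if len(fresh) >= MAX_CALLS:
            return False
        fresh.append(now)
        _recent[client_id] = fresh
        return True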

Source Code Hiding
Since Ajax allows any action to be performed in the background, it also allows a malicious web page to change its JavaScript code, for example on a right-button click. In other words, it is possible to add or modify JavaScript functions and code in the background even after the page has loaded!
So even if you inspect the page source for code that might be sending keystrokes or mouse movements back to the web server, you can't be certain that the code you see is the only code that's executing.

Intellectual Property
Commercial application vendors regard their source code as their intellectual property and are likely to want to obfuscate client-side code to protect it. Ajax implementations introduce a large amount of client-side code, so we are likely to see a lot of commercial applications implement JavaScript obfuscation. This may be an understandable approach from a legal perspective, but it breaks apart when the intention is to obfuscate security controls.
Even an average developer with malicious intentions and enough time could reverse-engineer such an application and get an “insider view” of its security controls.

Eavesdropping
Eavesdropping is one of the simplest and oldest techniques for data theft: an unauthorized party listens to conversations between systems and collects their data. Eavesdropping could be done simply to get access to data like credit card account information, or it could be used to hijack identity credentials and mount an authentication attack.

Data Integrity and Replay Attacks
Eavesdropping enables both data integrity attacks and replay attacks. Sending data obtained by eavesdropping on its way again with slight modifications (recall that XML is a text-based, human-readable format) is known as a data integrity attack; the most basic version is a simple message forgery.
Replay attacks occur when a hacker takes a stolen message and continually replays that same valid (it may even pass schema validation) message, potentially at high rates. Obviously, that type of attack can have serious repercussions, such as server overload.
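The usual countermeasure is to make every message single-use: attach a nonce and a timestamp, and reject anything stale or already seen. A minimal sketch in Python (the freshness window is an assumption, and a real deployment would keep seen nonces in a shared store and prune them):

    import time

    MAX_SKEW = 300     # seconds a message is considered fresh (assumed)
    _seen = {}         # nonce -> time first accepted

    def accept_message(nonce, timestamp):
        """Reject replays (repeated nonce) and stale or post-dated messages."""
        now = time.time()
        if abs(now - timestamp) > MAX_SKEW or nonce in _seen:
            return False
        _seen[nonce] = now
        return True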

Knowledge and Best Practices
This might be the biggest problem with Ajax today: the lack of a knowledge base and best practices. Over the last 10 years, security experts and developers have gathered a huge knowledge base, expertise and best practices for securing and mitigating risks in standard web applications. Nothing of this kind exists for Ajax. Ajax applications have a huge attack surface, much larger than that of traditional applications, and the buzz around Ajax is creating immense security implications, as the knowledge bases and resources available to developers are poor.
Inexperienced developers fail to properly protect their work, and attackers learn to use the benefits of Ajax to their advantage. The common practice of reusing widely available Ajax code without properly understanding it will create even more problems.

XML Security
Since the other endpoint of an Ajax call is usually a web service that understands XML, all the security threats that exist for XML are relevant to Ajax as well (a defensive parsing sketch follows the list below). The XML threats are:
  • Schema Poisoning - Manipulating the XML Schema to alter processing information.
  • XML Parameter Tampering - Injection of malicious scripts or content into request parameters.
  • WSDL Scanning - Scanning the WSDL interface can reveal sensitive information about invocation patterns, underlying technology implementations and associated vulnerabilities.
  • Oversized Payload - Sending oversized messages to create an XDoS attack.
  • Recursive Payload - Sending mass amounts of nested data to create an XDoS attack against the XML parser.
  • SQL Injection - SQL Injection allows commands to be executed directly against the database for unauthorized disclosure and modification of data.
  • External Entity Attack - An attack on an application that parses XML input from untrusted sources.
  • Malicious Code Injection - Scripts embedded within a SOAP message can be delivered directly to applications and databases, as can traditional binary executables and viruses attached to SOAP payloads.
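Several of these threats (external entities, recursive payloads) can be blunted at the parser itself. A minimal sketch using Python's defusedxml package, which rejects the entity expansion and external references that the standard parsers would happily process:

    from defusedxml import ElementTree

    def parse_untrusted(xml_text):
        """Parse XML from an untrusted source, rejecting entity tricks."""
        try:
            return ElementTree.fromstring(xml_text)
        except Exception:
            return None   # refuse anything the hardened parser will not accept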

How to protect your application
Actually, there is nothing new here. We still need to validate the input received from the users, perform output encoding and enforce proper access control. Ajax in itself does not introduce these vulnerabilities; poorly written web applications were susceptible to these problems long before Ajax.
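Validation should be allowlist-based: define what each field may look like and reject everything else. A minimal sketch in Python (the field patterns are hypothetical):

    import re

    # Hypothetical allowlist patterns for the fields this page accepts.
    PATTERNS = {
        "serial": re.compile(r"\d{10}"),                  # exactly ten digits
        "email":  re.compile(r"[^@\s]+@[^@\s]+\.[^@\s]+"),
    }

    def validate(field, value):
        """Accept only input whose entire value matches the expected shape."""
        pattern = PATTERNS.get(field)
        return bool(pattern and pattern.fullmatch(value))

    assert validate("serial", "0123456789")
    assert not validate("serial", "0123456789; DROP TABLE users")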

Sprajax

Sprajax (sometimes called the “Atlas” Ajax security scanner, since it can only detect the “Atlas” framework and the SOAP web services that framework uses) is the first web security scanner developed specifically to scan Ajax web applications for security vulnerabilities.

XML Firewall
XML firewalls are a different breed of application firewall. They can examine SOAP headers and XML tags and, based on what they find, distinguish legitimate from unauthorized content.
In addition, XML firewalls can look into the body of the message itself and examine it down to the tag level. They can tell whether a message is authorized or comes from an authorized requester. They also understand metadata about the web service's requester as well as about the web service operation itself; for example, they can gather information such as what role the requester plays in the current web service request. XML firewalls can also provide authentication, decryption, and real-time monitoring and reporting.

Same Old, Same Old
Maybe Ajax is the new kid in town, but it is just a combined usage of existing technologies.
A number of tips to secure your application:
  • Secure architecture – Design your application to be secure. Start secure and stay secure by including security as a component in each stage of the software development life cycle.
  • Trusted software libraries - From encryption to session management, it’s best to use components that are trusted, reliable, tried and thoroughly tested. No need to reinvent the wheel and repeat the mistakes of others.
  • Validate user's input - Web applications must NEVER trust the client (web browser).
  • Secure principles - Every component of the website should be configured with separation of duties, least privilege, unused features disabled, short session lifetimes and error messages suppressed.
  • Threat Management – Continuous vulnerability assessments are the best way to prevent attackers from accessing corporate and customer data. By performing periodic penetration tests and gap analyses and maintaining a threat mitigation plan, it is possible to stay ahead of malicious users and protect our assets.

The future
Despite the security threats mentioned above, the business drive to attract more clients by improving websites and web applications will create ever increasing usage of Ajax. Customers love dynamic and interactive websites, everything that Ajax represents, so there is a lot of demand. The problem is that most Ajax developers know little about security.
As I see it, the threat of improper usage of Ajax won't stop companies from massive adoption of a technology which can give them an advantage over their competitors. It will take a number of attacks on popular web sites and large businesses before they realise the security impact on their business.
Ajax in itself is not “evil”. When it was created it had only the best intentions in mind, but just as all technologies can be used by malicious users, so Ajax can be a deadly threat to the security of your organization if those threats are not mapped, analysed and mitigated.

Published under Comsec Consulting UK