First of all, in my personal opinion Flash/Flex (or ActionScript) = HTML5 + JavaScript + CSS3. Why? Because both are rooted in ECMAScript, so I view ActionScript as JavaScript on steroids (with additional libraries). For example, Adobe's ActionScript has a built-in API to draw graphics (i.e. the GraphicsPath object) while JavaScript relies on the HTML5 canvas element (see http://www.williammalone.com/articles/flash-vs-html5-canvas-drawing/). Moreover, a typical Flex application architecture resembles an AJAX-based web application (i.e. the jQuery framework). An AJAX engine is capable of making SOAP and RESTful calls, exactly like an Adobe Flex application, handling data as simple text, XML or JSON (see http://www.adobe.com/devnet/flex/articles/flex_java_architecture.html).
Naturally, Flex has its advantages, such as no cross-browser implementation issues, and disadvantages, such as the requirement to have the Adobe Flash plug-in installed to run the application. In addition, Adobe's Flex has a mature and well-developed software development platform while AJAX development still relies on editors such as GNU Emacs which, when considering enterprise web applications, is a "biggie".
But back to the security assessment of Adobe Flex applications...
Because an Adobe Flex application is basically packaged ActionScript which runs on the client side, a lot can be gleaned from the source code itself. A number of SWF decompilers, such as SourceTec Software's SWF Decompiler (http://www.sothink.com/product/flashdecompiler/), allow an assessor to break down Flash into components such as shapes, images, sounds, video, text, ActionScript, etc. and examine them to identify "leaked" intellectual property (IP), copyrighted material, comments in the source code and other security related goodies.
The next step (or in parallel) would be to review the interaction between the Adobe Flex application and the back-end server(s) using tools such as network sniffers and analyzers (i.e. Wireshark) and application proxies (i.e. Paros, Fiddler2 or Burp). Again, the communication could reveal sensitive information such as user ids, passwords and maybe even credit card numbers. Moreover, the communication could be intercepted and tampered with to attack the back-end web server and the server application. Here a few buzzwords come to mind, such as XSS, CSRF and SQL injection.
Finally, the back-end server deserves some attention as well - it is not for nothing that it runs 8 dual-core CPUs with 16GB RAM. Here, the rules of the game are similar to a standard web application assessment (if it can be called "standard"). First, a quick scan to identify what is running and how secure it is - basic misconfiguration can leave gaping holes. Then automated and manual security assessment to exploit the identified weaknesses, which could range from weak authentication of the administration module to bad coding practices such as lack of input validation or exposure of the internal database schema.
Imagination and creativity are an assessor's best weapons!
Tuesday, November 22, 2011
The Future of Web Services
In the late 1990s and 2000s the Internet evolved from static content web pages into dynamically generated websites with a database back-end. The era gave birth to technologies such as ASP and PHP which dominate more than 52 percent of the market (BuiltWith Trends, 2011). Today, as grid computing, distributed computing and cloud computing are rapidly becoming the de facto choice for data storage and access (Divakarla, U, & Kumari, G 2010), web applications need to evolve and adopt the emerging data access technologies. In addition, many businesses rely on Business to Business (B2B) information which is exposed through web services technologies to provide an additional layer of security (access authentication and authorization) as opposed to exposing a direct connection to the back-end database.
Information such as geographical location (MaxMind, Inc. 2011), credit rating (Experian Information Solutions, Inc. 2011), employment and income verification (Equifax, Inc. 2011), and address lookup and readdressing information (Canada Post, 2011) is available to merchants and service providers through standard (SOAP and RESTful) web services. As such, instead of maintaining its own database of geoip information or postal codes, a web application can simply invoke an exposed web service to get access to the up-to-date data maintained by an “expert” service provider. Moreover, “Amazon S3 provides a simple web services interface that can be used to store and retrieve any amount of data” (Amazon Web Services LLC, 2011) which allows web software developers to create a database driven application without having a traditional database back-end, relying completely on standard web services protocols such as SOAP and REST.
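To illustrate the idea, here is a minimal sketch of invoking such a RESTful service from the command line; the endpoint URL, parameters and API key are hypothetical placeholders, not an actual provider interface:

$ curl "https://geoip.example.com/lookup?ip=203.0.113.7&key=YOUR_API_KEY"
{"country":"CA","region":"ON","city":"Toronto"}
# the returned XML or JSON is then consumed directly by the web application

The point is that the application never touches a database of its own; the "expert" provider maintains the data behind the service.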
The main obstacle in the adoption of distributed information storage such as Amazon Web Services is the security aspect of it. While vendors state that the storage “is secure by default” (Amazon Web Services LLC, 2011), there are certain aspects of security, such as physical security, which can not be controlled by the data originator. As such, merchants and service providers wishing to utilize a “cloud” storage option need to evaluate and implement compensating controls such as adoption of the HTTPS protocol to transfer the data and encryption of the data before it is stored in the “cloud”. Ideally, an organization wishing to join the "cloud" should assess the risks by conducting a Threat Risk Assessment (TRA) and make sure there are security controls in place to mitigate the identified risks.
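As a rough sketch of such a compensating control, the data can be encrypted locally before it ever leaves the organization; the file names, key handling and upload target below are assumptions for illustration only, not a prescribed procedure:

# encrypt the file with a symmetric key kept in-house, before upload
$ openssl enc -aes-256-cbc -salt -in customers.csv -out customers.csv.enc -pass file:./secret.key
# ship only the ciphertext, over HTTPS; the provider never sees the plaintext
$ curl --upload-file customers.csv.enc https://s3.example.com/bucket/customers.csv.enc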
Bibliography
- Amazon Web Services LLC, 2011. “Amazon Simple Storage Service” [online]. Available from: http://aws.amazon.com/s3/ (accessed: November 19, 2011).
- BuiltWith Trends, 2011. “Top in Frameworks” [online]. Available from: http://trends.builtwith.com/framework/top (accessed: November 19, 2011).
- Canada Post, 2011. “Postal Code Data Products” [online]. Available from: http://www.canadapost.ca/cpo/mc/business/productsservices/mailing/pcdp.jsf (accessed: November 19, 2011).
- Divakarla, U, & Kumari, G 2010, 'AN OVERVIEW OF CLOUD COMPUTING IN DISTRIBUTED SYSTEMS', AIP Conference Proceedings, 1324, 1, pp. 184-186, Academic Search Complete, EBSCOhost, viewed 19 November 2011.
- Equifax, Inc. 2011. “The Decision 360” [online]. Available from: http://www.equifax.com/consumer/risk/en_us (accessed: November 19, 2011).
Labels: Assessment, Cloud, Information, Risk, Security, Service, Threat, Web
Tuesday, November 8, 2011
Internal vs. External Risk
Recently, I had a very interesting conversation with a CISO about the need (or the lack thereof) for a security assessment of an application which had been “up and running for quite some time” on the Intranet. The business driver behind the initiative was to expose the same application, which (of course) relies on authentication, to business partners and clients to access marketing data (statistics, geographical and demographical distribution of users, etc.) over the Internet.
It is quite obvious that external exposure carries inherently higher risk than the same resource (document, application, database, etc.) exposed to the internal environment. But have you ever tried to quantify the risk?
According to the U.S. Census Bureau (2007), there are 120,604,265 employees in 29,413,039 establishments in the US, which means that the average company size in the US is 4.1 employees (internal exposure). Whereas the total world population (external exposure) is estimated at 6,973,530,974 (U.S. Census Bureau, 2011). Using the simple formula

6973530974 ÷ (120604265 ÷ 29413039)

we can calculate that the external exposure is 1,700,708,830 times higher.
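For the skeptical reader, the arithmetic checks out with a bc one-liner:

$ echo "6973530974 / (120604265 / 29413039)" | bc -l
# prints roughly 1700708830, matching the figure above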
Naturally, this does not translate directly into risk, as the average US company with 4.1 employees does not have Intellectual Property, and not every human on earth has the means (technical equipment, skills, time, motivation, etc.) to identify and exploit a security vulnerability. Regardless, even if the number is reduced by a factor of a million (1,000,000), we are still talking about 1,700 times more exposure ≈ risk.
These numbers are quite impressive...
References
- U.S. Census Bureau, 2007. "Statistics about Business Size (including Small Business)" [online]. Available from: http://www.census.gov/econ/smallbus.html (accessed: November 7, 2011).
- U.S. Census Bureau, 2011. "International Data Base World Population Summary" [online]. Available from: http://www.census.gov/population/international/data/idb/worldpopinfo.php (accessed: November 7, 2011).
Saturday, November 5, 2011
Data Warehousing
The concept of data warehousing was introduced in the 1980s as a non-volatile repository of historical data mainly used for organizational decision making (Reddy, G, Srinivasu, R, Rao, M, & Rikkula, S 2010). While the data warehouse consists of information gathered from diverse sources, it maintains its own database, separated from operational databases, as it is structured for analytical processes rather than transactional processes (Chang-Tseh, H, & Binshan, L 2002).
Traditionally, data warehouses were used by medium and large organizations to “perform analysis on their data in order to more effectively understand their businesses” (Minsoo, L, Yoon-kyung, L, Hyejung, Y, Soo-kyung, S, & Sujeong, C 2007) and were designed as a centralized database used to store, retrieve and analyze information. Those systems were expensive, difficult to build and maintain, and in many cases made internal business processes more complicated.
With the wide adoption of the Web (the Internet) as a successful distributed environment, data warehouse architecture evolved into a distributed collection of data marts and metadata servers which describe the data stored in each individual repository (Chang-Tseh, H, & Binshan, L 2002). Moreover, the use of web browsers made deployment of and access to data warehouses less complicated and more affordable for businesses.
As a further matter, according to Pérez, J et al. (2008) the Web is “the largest body of information accessible to any individual in the history of humanity where most data is unstructured, consisting of text (essentially HTML) and images” (Pérez, J, Berlanga, R, Aramburu, M, & Pedersen, T 2008). With the standardization of XML as a flexible semistructured data format to exchange data on the Internet (i.e. XHTML, SVG, etc.), it became possible to “extract from source systems, clean (e.g. to detect and correct errors), transform (e.g. put into subject groups or summarized) and store” (Reddy, G, Srinivasu, R, Rao, M, & Rikkula, S 2010) the data in the data warehouse.
On the other hand, it is important to consider the “deep web”, which accounts for close to 80% of the web (Chang-Tseh, H, & Binshan, L 2002); its data access, retrieval, cleaning and transformation could present further obstacles to overcome. In addition, as the information stored in data warehouses becomes more accessible through Internet browsers (as compared to corporate fat clients), so grows the risk of data theft (through malicious attacks) and leakage. Chang-Tseh et al. (2002) further note that the security of the warehouse depends primarily on the quality and the enforcement of the organizational security policy.
Bibliography
- Chang-Tseh, H, & Binshan, L 2002, 'WEB-BASED DATA WAREHOUSING: CURRENT STATUS AND PERSPECTIVE', Journal Of Computer Information Systems, 43, 2, p. 1, Business Source Premier, EBSCOhost, viewed 5 November 2011.
- H.M. Deitel, P.J. Deitel and A.B. Goldberg, 2004. “Internet & World Wide Web How to Program”. 3rd Edition. Pearson Education Inc. Upper Saddle River, New Jersey.
- Minsoo, L, Yoon-kyung, L, Hyejung, Y, Soo-kyung, S, & Sujeong, C 2007, 'Issues and Architecture for Supporting Data Warehouse Queries in Web Portals', International Journal Of Computer Science & Engineering, 1, 2, pp. 133-138, Computers & Applied Sciences Complete, EBSCOhost, viewed 5 November 2011.
- Pérez, J, Berlanga, R, Aramburu, M, & Pedersen, T 2008, 'Integrating Data Warehouses with Web Data: A Survey', IEEE Transactions On Knowledge & Data Engineering, 20, 7, pp. 940-955, Business Source Premier, EBSCOhost, viewed 5 November 2011.
- Reddy, G, Srinivasu, R, Rao, M, & Rikkula, S 2010, 'DATA WAREHOUSING, DATA MINING, OLAP AND OLTP TECHNOLOGIES ARE ESSENTIAL ELEMENTS TO SUPPORT DECISION-MAKING PROCESS IN INDUSTRIES', International Journal On Computer Science & Engineering, 2, 9, pp. 2865-2873, Academic Search Complete, EBSCOhost, viewed 5 November 2011.
Friday, November 4, 2011
PHP in Secure Environments
While PHP is by far the most popular framework for web development, according to BuiltWith Trends (2011) its popularity is actually in decline – the graph posted on the PHP.net website is from 2007. Newer technologies such as ASP.NET Ajax, Ruby on Rails, Adobe Flex and Microsoft Silverlight are gaining larger market share (BuiltWith Trends, 2011). On the other hand, the PHP framework is still being actively developed and supported, therefore its popularity does not play a major role when discussing the security of the environment.
When discussing a secure e-commerce environment, in many cases the choice of the development language itself is not the major influencing factor in the overall security stance. In many cases, hackers target misconfigured or outdated services rather than trying to exploit a vulnerability such as a buffer overflow in the language interpreter (Verizon RISK Team, 2011). Moreover, the Open Web Application Security Project (OWASP) Top 10 web application security risks highlight the fact that the majority of exploitable vulnerabilities are related to the web server, such as misconfiguration and insufficient transport layer protection, and to the security awareness of the software developers, such as injection, cross site scripting and insecure direct object reference (OWASP, 2010).
Security awareness of software developers is considered by many security experts as the main factor impacting the risk exposure of a web application (Dafydd Stuttard and Marcus Pinto, 2011). Let's consider SQL injection as an example; while the SQL injection vulnerability was first documented in 1998 (rain.forest.puppy, 1998) and ranked as the number one security risk by the Open Web Application Security Project (OWASP, 2010), code such as the following (potentially vulnerable to SQL injection):

"Select * from products where productCode='" . $prodcode . "'"

still appears in university lecture notes (Laureate Online Education, 2007).
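To see why such string concatenation is dangerous, consider a quick demonstration against a throwaway SQLite database; the database, table and input below are hypothetical, but the mechanics are exactly those of the snippet above:

# a crafted "product code" closes the string literal and injects its own logic
$ PRODCODE="x' OR '1'='1"
$ sqlite3 shop.db "SELECT * FROM products WHERE productCode='$PRODCODE'"
# the query becomes ... WHERE productCode='x' OR '1'='1' and returns every row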
Organizations such as the PHP Group and the PHP Security Consortium provide guides on securing PHP deployments and on secure code development using PHP. For example, the guide (PHP Security Consortium, 2005) covers topics such as input validation, databases and SQL injection, session management and issues related to shared hosts.
Bibliography
- BuiltWith Trends, 2011. “Frameworks Distribution” [online]. Available from: http://trends.builtwith.com/framework (accessed: November 4, 2011).
- Dafydd Stuttard and Marcus Pinto, 2011. "The Web Application Hacker's Handbook: Discovering and Exploiting Security Flaws". 2nd Edition. Wiley.
- H.M. Deitel, P.J. Deitel and A.B. Goldberg, 2004. “Internet & World Wide Web How to Program”. 3rd Edition. Pearson Education Inc. Upper Saddle River, New Jersey.
- Laureate Online Education, 2007. “MSC IN: Programming the Internet Seminar Five – PHP / Database Connectivity”. Laureate Online Education B.V.
- OWASP, 2010. “OWASP Top 10 for 2010” [online]. Available from: https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project (accessed: November 4, 2011).
- PHP Security Consortium, 2005. "PHP Security Guide" [online]. Available from: http://phpsec.org/projects/guide/ (accessed: November 4, 2011).
- rain.forest.puppy, 1998. "NT Web Technology Vulnerabilities" [online]. Phrack Magazine Volume 8, Issue 54 Dec 25th, 1998, article 08 of 12. Available from: http://www.phrack.org/issues.html?issue=54&id=8#article (accessed: November 4, 2011).
- Verizon RISK Team, 2011. “2011 Data Breach Investigations Report” [online]. Verizon Business. Available from: http://www.verizonbusiness.com/resources/reports/rp_data-breach-investigations-report-2011_en_xg.pdf (accessed: November 4, 2011).
Labels: e-commerce, Environment, Injection, PHP, Security, SQL
Saturday, October 29, 2011
Protecting Code
As the world is shifting from compiled languages such as C, C++ and Pascal to scripting languages such as Python, Perl, PHP and Javascript, the exposure of intellectual property (the source code) grows. While previous-generation “fat clients”, usually written in C and C++, were compiled machine code executables, more modern applications written in .NET and Java consist of bytecode which “is the intermediate representation of Java programs” (Peter Haggar, 2001). The same is applicable to .NET applications, which can be disassembled using tools shipped with the .NET Framework SDK (such as ILDASM) and decompiled back into source code (Gabriel Torok and Bill Leach, 2003). With web technologies such as HTML, Javascript and Cascading Style Sheets (CSS), where the source has to be downloaded to the client side in order to be executed by the web browser, the end user has unrestricted access to the entire source code.
The ability to access source code can be used with both legitimate and malicious intent. For example, a security tool that decompiles Java applets and Flash “performs static analysis to understand their behaviours” (Telecomworldwire, 2009). Moreover, the ability to disassemble the source code can be used by software developers for debugging. On the other hand, it can also be used to reverse engineer the source code, which directly impacts the ability to protect the intellectual property.
One obvious way to try to protect the source code, and thus the intellectual property it carries, is to use obfuscation (Gabriel Torok and Bill Leach, 2003) (Peter Haggar, 2001) (Tony Patton, 2008). Regardless of the language used to develop the application, obfuscation usually means:
- replacement of variable names with non-meaningful character streams
- replacement of constants with expressions
- replacement of decimal values with hexadecimal, octal and binary representations
- addition of dummy functions and loops
- removal of comments
- concatenation of all lines in the source code
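As a toy illustration of the first, fifth and sixth techniques (the fragment is invented for this post; shell is used only because it is compact):

# readable: meaningful names, a comment, one statement per line
tax_rate=0.13   # Ontario HST
total=$(echo "$price * (1 + $tax_rate)" | bc -l)
# obfuscated equivalent: renamed variables, comment stripped, lines concatenated
_a=0.13;_b=$(echo "$_c * (1 + $_a)" | bc -l)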
In a way, the process of obfuscation changes the source code to make it difficult for the “reader” to understand the logic behind it. Obfuscation could be seen as “your kid sister encryption” - “cryptography that will stop your kid sister from reading your files” (Bruce Schneier, 1996). Of course, a persistent “reader” can invest enough time and resources to reproduce the source code (deobfuscate it) by applying the obfuscation principles in reverse.
Bibliography
- Telecomworldwire, 2009. 'HP unveils HP SWFScan free web security tool', Telecomworldwire (M2), Regional Business News, EBSCOhost, viewed 28 October 2011.
- Bruce Schneier, 1996. “Applied Cryptography”. Wiley; 2nd Edition. Preface.
- Gabriel Torok and Bill Leach, 2003. “Thwart Reverse Engineering of Your Visual Basic .NET or C# Code” [online]. Microsoft. Available from: http://msdn.microsoft.com/en-us/magazine/cc164058.aspx (accessed: October 28, 2011).
- H.M. Deitel, P.J. Deitel and A.B. Goldberg, 2004. “Internet & World Wide Web How to Program”. 3rd Edition. Pearson Education Inc. Upper Saddle River, New Jersey.
- Peter Haggar, 2001. “Java bytecode: Understanding bytecode makes you a better programmer” [online]. IBM. Available from: http://www.ibm.com/developerworks/ibm/library/it-haggar_bytecode/ (accessed: October 28, 2011).
- Tony Patton, 2008. “Protect your JavaScript with obfuscation” [online]. TechRepublic. Available from: http://www.techrepublic.com/blog/programming-and-development/protect-your-javascript-with-obfuscation/762 (accessed: October 28, 2011).
Labels: Engineering, Intellectual, Obfuscation, Property, Reverse, Source Code
Saturday, October 22, 2011
Adaptive Web Site Design
Paul De Bra (1999) identifies a number of issues related to adaptive web site design, including “the separation of a conceptual representation of an application domain from the content of the actual Web-site, the separation of content from adaptation issues, the structure and granularity of user models, the role of a user and application context” (Paul De Bra, 1999). This essay will discuss the separation of conceptual representation and the role of the user in the application context more than ten years after publication of the original article.
Modern web application development frameworks such as .NET, Spring Framework, JavaServer Faces, Apache Orchestra, Grails and Struts offer clear separation between application representation and content. The separation is achieved by implementation of the Model-View-Controller (MVC) architecture, where the “Model” layer is responsible for storing and managing access to relevant pieces of data, the “View” layer is responsible for rendering and layout of the data, and the “Controller” layer is responsible for interaction with the end user (i.e. the Internet browser). No longer does the entire content have to be “stored” statically in the HTML page; it can be generated dynamically based on input received from the user. Moreover, the HTML5 Web Storage API greatly increases the storage capacity (compared to HTML session cookies), which allows a web application to store structured data on the client side (WHATWG, 2011). This could further facilitate user centric web site design such as storage of user preferences, data cache, etc.
On the other hand, when discussing “the role of a user and application context” (Paul De Bra, 1999), the methodology and the technology are not as mature. Qiuyuan Jimmy Li ties the issue to the organization of the web application structure and notes that the majority of web sites do not adapt the content to the individual user. Instead, the web server “provides the same content that has been created beforehand to everyone who visits the site” (Qiuyuan Jimmy Li, 2007). He suggests a framework which accounts for users' cognitive styles and adapts information content for each individual user. Justin Brickell et al. (2006) take a slightly different approach and instead suggest mining site access logs to identify access patterns and user behavior such as scrolling, time spent on each page, etc. The collected information could be used for shortcutting - the “process of providing links to users’ eventual goals while skipping over the in-between pages” (Brickell et al., 2006).
In addition, it is important to highlight the security and privacy issues when discussing adaptive web-site design. In order for a web application to provide customized content, it (the web application) needs to acquire or collect personal data about the individual user and the user's behavior patterns. For example, Google Gmail uses automated scanning and filtering technology to “show relevant ads” (Google, 2011). This could be considered by some individuals as an intrusion into privacy, especially if the processed message contains sensitive information such as health records or financial information.
Bibliography
- Google, 2011. “FAQ about Gmail, Security & Privacy” [online]. Available from: http://mail.google.com/support/bin/answer.py?hl=en&answer=1304609 (accessed: October 22, 2011).
- H.M. Deitel, P.J. Deitel and A.B. Goldberg, 2004. “Internet & World Wide Web How to Program”. 3rd Edition. Pearson Education Inc. Upper Saddle River, New Jersey.
- Justin Brickell, Inderjit S. Dhillon and Dharmendra S. Modha, 2006. “Adaptive Website Design using Caching Algorithms” [online]. Available from: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.155.5537&rep=rep1&type=pdf (accessed: October 22, 2011).
- Paul De Bra, 1999. “Design Issues in Adaptive Web-Site Development” [online]. Available from: http://wwwis.win.tue.nl/~debra//asum99/debra/debra.html (accessed: October 22, 2011).
- Qiuyuan Jimmy Li, 2007. “Design and Implementation of a User-Adaptive Website with Information Pallets” [online]. Available from: http://dspace.mit.edu/bitstream/handle/1721.1/45636/367589980.pdf?sequence=1 (accessed: October 22, 2011).
- WHATWG, 2011. “HTML – Web Storage” [online]. Available from: http://www.whatwg.org/specs/web-apps/current-work/multipage/webstorage.html#webstorage (accessed: October 22, 2011).
Friday, July 29, 2011
Forensic Software Analysis
Linux/GNU provides a wealth of tools which can be used to analyze binaries, such as file, strings, md5sum, hexdump, ldd, strace and gdb. Moreover, profiling tools such as AppArmor could be useful when analyzing the behaviour of an unknown binary.
For the purpose of demonstrating the forensic software analysis process and recoverable artifacts, a number of Linux/GNU tools will be used to investigate the Skype application. Conclusions of the investigation are presented at the end of the document.
file
The file command helps identify the file type and displays general information about the suspected binary.
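For example (the path and output are illustrative of a typical dynamically linked Linux binary, not a verbatim capture):

$ file /usr/bin/skype
/usr/bin/skype: ELF 32-bit LSB executable, Intel 80386, dynamically linked, stripped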
ldd
The ldd command can be used to identify all shared libraries used by the suspicious software.
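A sketch of typical usage (the library list is shortened and illustrative):

$ ldd /usr/bin/skype
    libQtGui.so.4 => /usr/lib/libQtGui.so.4 (0x00110000)
    libpulse.so.0 => /usr/lib/libpulse.so.0 (0x00a3c000)
    ...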
gdb
gdb is the GNU debugger, useful for debugging executables: “see what is going on 'inside' another program while it executes—or what another program was doing at the moment it crashed” (Free Software Foundation, Inc. 2002). gdb allows an investigator to run a suspicious program step by step, view the value of a specific expression or print a stack trace using the bt command.
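A minimal interactive session might look like this (the commands are standard gdb; the target binary is the one under investigation):

$ gdb /usr/bin/skype
(gdb) run                 # execute the binary under the debugger
(gdb) bt                  # print a stack trace once it stops or crashes
(gdb) print $eax          # inspect a register or expression value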
strace
strace can be used to display system calls and signals, including access to local and remote resources such as /etc/passwd. The strace command can be used with the -o parameter to output the content to a specified file. The information includes:
- a name of a system call
- arguments; and
- return values
The output file could then be parsed with grep and an appropriate regular expression to identify accessed and/or modified system resources.
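For instance (the trace file name is an arbitrary choice, and the matched line is illustrative):

# record all system calls to a file, then pull out every successful file open
$ strace -o skype.trace skype
$ grep -E '^open\(' skype.trace | grep -v ENOENT
open("/etc/passwd", O_RDONLY) = 7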
strings
strings prints all printable character sequences in a specified file. It can be useful for identifying system calls, resources, URLs, IP addresses, names, etc. when analyzing suspected software.
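For example, to hunt for embedded URLs (the pattern is a rough illustration):

$ strings /usr/bin/skype | grep -E 'https?://'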
AppArmor
AppArmor is a “Linux application security system” (AppArmor Security Project, 2011) using a whitelist approach to protect the operating system and its users. It does so by enforcing “good” behaviour, including open ports, allowed resources, etc., to protect against known and unknown application flaws. AppArmor includes “a combination of advanced static analysis and learning-based tools” (AppArmor Security Project, 2011) which could be helpful when investigating malicious software behaviour. For the purpose of a forensic investigation, the aa-genprof command can be used to record all software activities, which can then be analyzed at a later stage.
When using aa-genprof to analyze potential malware behaviour, the investigator has to invoke all possible functionality to force the software to access all local and remote resources. In many cases, it is necessary to let the software run for a few days, as some malware such as infected bots communicate with the bot controller only periodically.
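A sketch of the workflow, using two terminals (aa-genprof is part of the AppArmor user-space tools):

# terminal 1: start profiling the target binary
$ sudo aa-genprof skype
# terminal 2: exercise the application (place a call, send a message, etc.)
$ skype
# back in terminal 1, scan the logs and review each resource the program touched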
[Screenshots in the original post: aa-genprof profiling a Skype session — calling and messaging the echo123 test contact — and flagging Skype's access to Pulse resources, system font configuration, local chat history, Firefox bookmarks and Firefox extensions.]
Conclusions
The Skype application accesses resources required to display a graphical user interface (GUI) using the Qt library, interact with audio devices through the Pulse framework and use the network, all of which could be considered legitimate behaviour for a VoIP application. On the other hand, Skype's access to resources such as Firefox bookmarks and extensions such as LastPass (an advanced password manager), and to system resources such as /etc/passwd, raises suspicion as it resembles typical malware behaviour.
Bibliography
- Free Software Foundation, Inc. (2002), “GNU Tools Manual”.
- AppArmor Security Project (2011), “Wiki Main Page” [online]. Available from: http://wiki.apparmor.net/index.php/Main_Page (accessed: July 27, 2011).
Tuesday, July 19, 2011
Criminal Activity On Peer-To-Peer (P2P) Networks
Criminal activity on peer-to-peer (P2P) networks is usually associated with sharing of illegal content such as copyrighted or offensive material (music, movies, snuff films or pornography). There are a number of cases where law enforcement agencies successfully took down such operations, as in the case of the Elite Torrents group (Charles Montaldo, 2005). But recently, peer-to-peer protocols such as BitTorrent and Kad are being used to command and control armies of digital zombies (botnets). A botnet, controlled by a botmaster, can be used for attacks such as spam and denial of service.
Bots are getting more and more sophisticated, allowing the controller to capture keystrokes, take screen shots, send spam and participate in denial of service attacks, and much harder to detect due to the inclusion of rootkit capabilities; “the most significant feature, however, is the inclusion of peer-to-peer technology in the latest version of the botnet's code” (Peter Bright, 2011). Moreover, some bots allow controllers to “sublet”, for a price, an IP address to be used as an anonymous proxy.
Peer-to-peer technology allows hackers to eliminate a “single point of failure” - a single (sometimes multiple) Internet Relay Chat (IRC) server or a Really Simple Syndication (RSS) feed used to command the botnet. Over the years, there were a number of attempts by botnet developers to build the next generation utilizing a peer-to-peer control mechanism: “Slapper, Sinit, Phatbot and Nugache have implemented different kinds of P2P control architectures” (Ping Wang, Sherri Sparks, Cliff C. Zou, 2007), each with its weaknesses. For example, the Sinit bot used random probing techniques to discover other Sinit-infected machines, which resulted in easily detected network traffic. An insecure implementation of the authentication mechanism made Slapper easy to hijack, whereas Nugache contained a list of static IP addresses used as the initial seed (David Dittrich, Sven Dietrich 2008) (David Dittrich, Sven Dietrich 2009).
Modern bot implementations utilize a peer-to-peer protocol in combination with encryption (based on TLS/SSL) of the network traffic, a public-key based authentication mechanism, and randomly chosen ports with protocol mimicking, to avoid anomaly detection on the network level and to prevent hijacking of the botnet network by competing botmasters and law enforcement agencies. TDL4 (or Alureon) has been dubbed “the ‘indestructible’ botnet” and was running on over 4.5 million infected computers at the time of writing (Sergey Golovanov, Igor Soumenkov 2011).
To make a botnet more resilient, a hierarchical structure is used in which each servant (a hybrid of bot and server) communicates with a small subset of bots, and each bot contains a small list of other peers (in case a servant is not available). The servants themselves are rotated (dynamically) and updated periodically to prevent capturing and disrupting the botnet network. Locally, the malware uses rootkit functionality to avoid detection by antivirus software. For example, the Alureon botnet “infects the system's master boot record (MBR), part of a hard disk that contains critical code used to boot the operating system” (Peter Bright 2011), meaning that the rootkit is loaded before the operating system and any antivirus software.
Forensic investigation of a crime involving an advanced peer-to-peer botnet requires a combination of reverse engineering, operating system and network forensics. For example, TDL4 infects the victim's MBR which, upon investigation, immediately identifies the presence of the rootkit. Moreover, the presence of certain files (recoverable from an offline forensic image) such as cfg.ini and ktzerules in certain locations could indicate infection. On the network level, upon infection the malware downloads and “installs nearly 30 additional malicious programs, including fake antivirus programs, adware, and the Pushdo spambot” (Sergey Golovanov, Igor Soumenkov 2011), making it possible to monitor and detect the botnet activity.
References
- Charles Montaldo (2005), “FBI Cracks Down on BitTorrent Peer-To-Peer Network” [online]. Available from: http://crime.about.com/b/2005/05/31/fbi-cracks-down-on-bittorrent-peer-to-peer-network.htm (accessed: July 18, 2011).
- David Dittrich, Sven Dietrich (2008), "P2P as botnet command and control: a deeper insight" [online]. Available from: http://staff.washington.edu/dittrich/misc/malware08-dd-final.pdf (accessed: July 18, 2011).
- David Dittrich, Sven Dietrich (2009), "Discovery techniques for P2P botnets" [online]. Available from: http://www.cs.stevens.edu/~spock/pubs/dd2008tr4.pdf (accessed: July 18, 2011).
- Laureate Online Education B.V. 2009, “Computer Forensics Seminar for Week 7: Network Forensics II”, Laureate Online Education B.V.
- Peter Bright (2011), "4 million strong Alureon P2P botnet "practically indestructible"" [online]. Available from: http://arstechnica.com/security/news/2011/07/4-million-strong-alureon-botnet-practically-indestructable.ars (accessed: July 18, 2011).
- Ping Wang, Sherri Sparks, Cliff C. Zou (2007), "An Advanced Hybrid Peer-to-Peer Botnet" [online]. School of Electrical Engineering and Computer Science, University of Central Florida. Available from: http://www.usenix.org/event/hotbots07/tech/full_papers/wang/wang.pdf (accessed: July 18, 2011).
- Sergey Golovanov, Igor Soumenkov 2011, “TDL4 – Top Bot” [online]. Kaspersky Lab ZAO. Available from: http://www.securelist.com/en/analysis/204792180/TDL4_Top_Bot?print_mode=1 (accessed: July 18, 2011).
Friday, July 8, 2011
Legal Aspect of Remote Monitoring
Regardless of the device owner's awareness, remote monitoring of a computer or mobile device can be done by an agent deployed on the device, or by analyzing the traffic generated by the device. Each of these approaches has its own pros and cons that will be discussed below.
Remote monitoring of a computer utilizing a locally deployed agent (such as an event log monitor or key logger) can provide a wealth of information such as currently running processes, existing and active users, access to installed applications, etc. Legitimate deployment of such agents is usually done by a system administrator installing the software on a workstation or laptop, either with or without the user's knowledge, while tools such as key loggers used by malicious users or criminals are usually deployed using existing vulnerabilities in the operating system, web browser or other installed applications. It is interesting to note that many legitimate monitoring software packages are using technology and methods previously used by malware. For example, many employee monitoring packages have capabilities such as keystroke monitoring, logging of sent and received Email messages, website activity, accessed documents, etc. (TopTenReviews, 2011).
On the other hand, monitoring computer activities by analyzing the generated network traffic does not require the installation of an agent (malware), which means it leaves no traces on the computer itself that could be uncovered by a digital forensic investigator. The disadvantage, of course, is that the information can be deduced only from services and applications generating network traffic. For example, laptops connected to a domain will try to communicate with a domain controller, and Java JRE and Adobe Reader periodically check for available updates, therefore providing the intruder with a list of potential targets (services and applications). In some cases, when devices communicate using insecure protocols, it is possible to gather information such as user names and passwords. Moreover, there are attack vectors which can subvert the traffic, such as DNS poisoning, ARP poisoning and Man In The Middle (MiTM) proxying, to servers/devices controlled by the intruder.
From a legal point of view, the technical aspects of data acquisition can fall into different categories. For example, in the US data collected while in transit, such as an Email message, falls under the Wiretap Act and therefore requires special permission. On the other hand, “dropping” a key-logger and collecting data as it is being drafted does not violate the Wiretap Act. Similarly, “at the recipient’s end, the U.S. District Court of New Hampshire in Basil W. Thompson v. Anne M. Thompson, et al., ruled that accessing email stored on a hard drive was not an "interception" under the Wiretap Act” (Ryan, DJ. & Shpantzer, G. 2005). Moreover, the age of the acquired data impacts the applicable legal requirements; recent data, less than 180 days old, which would include network log files, event logs, etc., “requires a warrant issued under the Federal Rules of Criminal Procedure or equivalent State warrant, while older communications can be accessed without notice to the subscriber or customer” (Ryan, DJ. & Shpantzer, G. 2005).
Finally, the network environment introduces unique challenges to the digital forensic process, such as the inability to take a snapshot, distributed geographic locations with different legal requirements, and the sheer amount of available data, all of which require some adaptation of the AAA principles (Laureate Online Education B.V., 2009). In order to be admissible in a court of law, the handling of network traffic as digital forensic evidence has to be in accordance with the Daubert guidelines, which “assess the forensic process in four categories: error rate, publication, acceptance and testing” (John Markh 2011). Moreover, due to the high volatility of the artifacts, investigators are required to pay additional attention to the chain of custody.
Bibliography
- Laureate Online Education B.V. 2009, “Computer Forensics Seminar for Week 6: Network Forensics I”, Laureate Online Education B.V.
- Markh J. 2011, “Week 5 Discussion Question - UNIX Forensic Tools”. Laureate Online Education B.V.
- Ryan, DJ. & Shpantzer, G. 2005. “Legal Aspects of Digital Forensics” [online]. Available from: http://euro.ecom.cmu.edu/program/law/08-732/Evidence/RyanShpantzer.pdf (accessed: July 07, 2011).
- TopTenReviews 2011, “2011 Monitoring Software Review Product Comparisons” [online], TechMediaNetwork.com. Available from: http://monitoring-software-review.toptenreviews.com/ (accessed: July 7, 2011).
Thursday, July 7, 2011
Criminal Profiling in Digital Forensic
Criminal profiling has been used by crime investigators for centuries. It gained worldwide attention after being used in England in the Jack the Ripper case. Damon A. Muller (2000) describes criminal profiling as a process “designed to generate information on a perpetrator of a crime, usually a serial offender, through an analysis of the crime scene left by the perpetrator”, allowing law enforcement agencies to better utilize limited resources. Criminal profiling has two distinct approaches: inductive and deductive analysis (Rogers M. 2003). The inductive approach relies on statistical analysis of behaviour patterns of previously convicted offenders, while the deductive approach focuses on the case-specific evidence. Examples of criminal profiling methodologies are “diagnostic evaluation (DE), crime scene analysis (CSA), and investigative psychology (IP)” (Damon A. Muller, 2000).
There are two contradicting points of view on criminal profiling; some claim it is an art while others claim it is a science similar to criminology and psychology. Moreover, as opposed to criminology or psychology, human lives may depend on the accuracy of criminal profiling: “if a profile of an offender is wrong or even slightly inadequate police maybe misled allowing the offender to escape detection for a little while longer—and innocent people may be dead as a result.” (Damon A. Muller, 2000). As a result, many law enforcement agencies are still evaluating the adoption of criminal profiling.
Since a digital forensic investigation is in essence a crime investigation, with similar investigation phases (acquisition of evidence, authentication, analysis and reporting/presentation), criminal profiling can be used here as well to predict offender behaviour. Just like in a traditional crime investigation, “digital” offenders have motives, different skill levels and tools. Regardless of the profiling methodology (inductive or deductive), the results of criminal profiling can greatly aid a digital forensic investigation.
“The network evidence acquisition process often results in a large amount of data” (Laureate Online Education B.V. 2009), and the results of criminal profiling can help the investigator conduct a more specific keyword search, focus on specific areas (i.e. allocated and unallocated space) and geographical locations (IP addresses). Moreover, the profiling information can pinpoint supporting or corroborating evidence such as IRC chat channels, FTP sites, underground forums and newsgroups (Rogers, M 2003).
Just like traditional criminals, “digital” offenders have weaknesses that could be used when interviewing/interrogating suspects or witnesses. Although the interview process itself could be completely different from what we traditionally understand as an “interview” (i.e. IRC chat rooms, forums, mailing lists, etc.), Rogers M. notes that “individuals who engage in deviant computer behaviour share some common personality traits, and given the proper encouragement, show a willingness to discuss and brag about their exploits” (Rogers, M 2003).
Bibliography
- Damon A. Muller 2000, “Criminal Profiling: Real Science or Just Wishful Thinking?” [online], Homicide Studies, Vol. 4 No. 3, August 2000, 234-264. Sage Publications, Inc. Available from: http://www.uwinnipeg.ca/academic/ddl/viol_cr/files/readings/reading22.pdf (accessed: July 7, 2011).
- Laureate Online Education B.V. 2009, “Computer Forensics Seminar for Week 6: Network Forensics I”, Laureate Online Education B.V.
- Rogers, M 2003, 'The role of criminal profiling in the computer forensics process', Computers & Security, May, Business Source Premier, EBSCOhost, viewed 7 July 2011.
Saturday, July 2, 2011
Forensic Investigation of Cellular and Mobile Phones
“In general, the same forensic principles that apply to any computing device also apply to mobile devices in order to enable others to authenticate acquired digital evidence.” (Casey E. et al. 2011), therefore a forensic investigator should follow the same forensic process as with any computing device. When acquired digital evidence involves a recovered phone call, the investigation process usually includes accessing data collected by the cellular network provider. A number of countries have enacted laws to expedite the access of law enforcement agencies to client information, such as The Regulation of Investigatory Powers Act of 2000 (RIPA) in the UK, the USA Patriot Act, The Surveillance Devices Bill 2004 in Australia and The Search and Surveillance Powers Bill 2008 in New Zealand. These laws require (telephone and internet) service providers to maintain a log of all communication such as calls, Email messages, SMS (text messages), MMS (multimedia messages), established Internet connections, etc.
With appropriate legal documents (as required), the investigator can obtain information such as customer name, billing name, geographic locations (based on the Base Transceiver Station), list of calls, etc., which could be helpful to the investigation process. Moreover, while it is generally believed that prepaid cellular phones are cheap enough and difficult to trace (Casey E. et al. 2011), the device can still contain useful information. In addition, the service provider could maintain information such as “credit card numbers used for purchases of additional time or an email address registered online for receipt of notifications” (Jansen W. and Ayers R. 2007).
Due to the diversity in the functionality and capabilities of mobile devices (cellular phones, smart phones, etc.) there is no one single investigation methodology for the cellular phone. In general, the process involves manual review of the information available through the menu such as the address book, last calls, text messages, etc. Specialized tools are used only when extraction of deleted information or access to “hidden” data (such as the Apple iPhone cell towers and Wi-Fi hotspots database) is required (Laureate Online Education B.V. 2009). The potential evidence related to the mobile device includes:
- handset identifier - International Mobile Equipment Identity (IMEI)
- subscriber identifier (SIM)
- call register
- address book
- calendar
- photographs
- videos
- voice mail
- passwords such as Internet Mail accounts, desktop (for synchronization), etc.
- installed applications
- attached peripheral devices and special modifications
- accessed Wifi hotspots
- cell towers
Bibliography
- Apple 2011, “Apple Q&A on Location Data” [online]. Available from: http://www.apple.com/pr/library/2011/04/27Apple-Q-A-on-Location-Data.html (accessed: June 2, 2011).
- Ayers R., Jansen W., Cilleros N., Daniellou R. 2005, “Cell Phone Forensic Tools: An Overview and Analysis” [online]. National Institute of Standards and Technology. Available from: http://csrc.nist.gov/publications/nistir/nistir-7250.pdf (accessed: July 1, 2011).
- Casey E., Turnbull B. 2011, “Digital Evidence and Computer Crime 3rd Edition” [online]. Elsevier Inc. Available from: http://www.elsevierdirect.com/companions/9780123742681/Chapter_20_Final.pdf (accessed: July 1, 2011).
- CBC News 2009, “Internet surveillance laws in Canada and around the world” [online]. Available from: http://www.cbc.ca/news/canada/story/2009/06/19/f-internet-cellphone-wiretap-surveillance-law.html (accessed: July 2, 2011).
- Jansen W., Ayers R. 2007, “Special Publication 800-101: Guidelines on Cell Phone Forensics” [online]. National Institute of Standards and Technology. Available from: http://csrc.nist.gov/publications/nistpubs/800-101/SP800-101.pdf (accessed: July 1, 2011).
- Laureate Online Education B.V. 2009. “Seminar 5: Investigating UNIX, Macintosh, and Handheld Devices”.
Friday, June 24, 2011
Vishing and VoIP Forensics
The Royal Canadian Mounted Police (2006) defines Vishing (or Voice Phishing) as “the act of leveraging a new technology called Voice over Internet Protocol (VoIP) in using the telephone system to falsely claim to be a legitimate enterprise in an attempt to scam users into disclosing personal information”. Vishing could be viewed as a natural evolution of Phishing, in which con artists use Email messages to glean private information such as credit card numbers, social insurance numbers and PINs. While the general public is getting more and more familiar with this type of con, and Email software vendors include functionality to prevent Phishing attacks, the fraudsters are moving on to a technology still trusted by the users – telephony.
Traditionally, in the world of the public switched telephone network (PSTN), although possible (Art of Hacking, 2000), it was much harder to spoof Caller ID (CID) as “each circuit on either end of the call is assigned a phone number by the phone company.” (Reardon M. 2009). Today, with the move to SIP trunks and VoIP technology, spoofing caller ID is fairly trivial. Moreover, there are legitimate ways to acquire a telephone number in any region of the world, such as a Skype Online Number. According to Adam Boone (2011), “telecom security researchers over the past two years have reported a very sharp rise in attacks against unsecured VoIP systems”. As a result, phishers have access to infrastructure which could be used to launch vishing attacks, as demonstrated in scams targeting Motorola Employees Credit Union, Qwest customers and Bank of the Cascades (Krebs B. 2008).
In most cases, a vishing attack involves calling victims using either a war dialler or a legitimate voice messaging company. When the call is answered, an automated message informs the victim that either their credit card or their bank account has shown suspicious activity, and asks them to call a predefined number to verify their account by entering their credit card number.
Digital forensic investigation of a vishing suspect is not a trivial matter. Since the attack is usually initiated by calling or texting (SMS) a large number of phone numbers, an investigator could look for such an unusual behaviour pattern. A number of forensic tools can parse Skype artifacts, either in memory (RAM) or on an acquired image, such as Skypeex, Nir Sofer's Skype Log Viewer and Belkasoft Skype Analyzer. For other software, such as Asterisk, a manual review of the log files is required. Moreover, a forensic investigator can utilize the foremost command to look for .wav or .mp3 files which could have been used as the recorded message. Finally, the SIP trunk service provider which was used by the fraudsters could provide information such as a user-id. This information could be used in a string search (srch_strings command) in acquired memory or non-volatile storage images to identify suspected hardware.
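A rough sketch of those last two steps (the image paths and the SIP user-id are hypothetical; foremost carves files by signature, srch_strings is the Sleuth Kit string extractor):

# carve audio files that could hold the recorded vishing message
$ foremost -t wav -i /evidence/disk.dd -o /evidence/carved
# search a memory image for a user-id obtained from the SIP trunk provider
$ srch_strings -a /evidence/memory.dd | grep -i "sip:user1234"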
Bibliography
- 'Beware of phishing--and vishing' 2006, Nursing, 36, 12, p. 66, Academic Search Complete, EBSCOhost, viewed 24 June 2011.
- Art of Hacking (2000), “Beating Caller ID” [online]. Available from: http://artofhacking.com/files/beatcid.htm (accessed: June 24, 2011).
- Boone, A 2011, 'Return of the Phone Phreakers: Business Communications Security in the Age of IP', Security: Solutions for Enterprise Security Leaders, 48, 4, pp. 50-52, Business Source Premier, EBSCOhost, viewed 24 June 2011.
- Chow, S, Gustave, C, & Vinokurov, D 2009, 'Authenticating displayed names in telephony', Bell Labs Technical Journal, 14, 1, pp. 267-282, Business Source Premier, EBSCOhost, viewed 24 June 2011.
- Krebs B. 2008, “The Anatomy of a Vishing Scam” [online]. Available from: http://blog.washingtonpost.com/securityfix/2008/03/the_anatomy_of_a_vishing_scam_1.html (accessed: June 24, 2011).
- Swarm, J 2007, 'A Closer Look at Phishing and Vishing', Community Banker, 16, 7, p. 56, Business Source Premier, EBSCOhost, viewed 24 June 2011.
- Reardon M. 2009. “Protect yourself from vishing attacks” [online]. CNET News. Available from: http://www.zdnet.com/news/protect-yourself-from-vishing-attacks/303175 (accessed: June 24, 2011).
- Royal Canadian Mounted Police (2006), “Vishing or Voice Phishing” [online]. Available from: http://www.rcmp-grc.gc.ca/scams-fraudes/vish-hame-eng.htm (accessed: June 24, 2011).
Thursday, June 23, 2011
Firefox 3 Forensic Analysis
Accessing information on the Internet leaves a variety of footprints such as visited websites, viewed content, downloaded documents, etc. The forensic information can be found in single files, directories, local databases and the Windows registry. Moreover, the Windows operating system maintains in the registry a log of all local and wireless network connections (including the MAC address of the switch/router) which can further help a forensic investigation to identify the physical location of the suspect (Laureate Online Education B.V., 2009) (Jonathan Risto, 2010).
According to W3Schools (2011), the five most used web browsers are Firefox (42%) followed by Chrome (25%) and Internet Explorer (25%), then Safari (4%) and Opera (2.4%). As such, a digital forensic investigator should be knowledgeable in all of them and geared up to perform extraction and analysis of the data collected by these Internet browsers. In most cases, Internet browsers use a local cache to store information to increase access time, a history of visited web sites, favourites, etc. In some cases (Firefox), the stored information indicates whether the suspect typed the Uniform Resource Locator (URL), showing intent of criminal or illegal activity. Furthermore, autocomplete history and cookies can provide the forensic investigator with information typed into websites or stored locally. In addition, the increasing use of web chats such as Yahoo! Chat and Gmail Chat provides potential access to additional information.
While Internet Explorer and Firefox traditionally stored this information in files, since Firefox version 3 the information is stored in SQLite databases. For example, bookmarks and browsing history are stored in places.sqlite, passwords in key3.db and signons.sqlite, autocomplete history in formhistory.sqlite and cookies in cookies.sqlite (Mozilla.org, n.d.). Numerous tools are available to perform forensic analysis of the information captured by Firefox, including f3e and the simple SQLite command line utility.
To locate a SQLite 3 database, an investigator can utilize a signature based search (i.e. the foremost command) and look for the following hex value: 53 51 4C 69 74 65 20 66 6F 72 6D 61 74 20 33. To make sure that the identified SQLite database file is indeed a file used by Firefox, the following signature could be used to validate the file: 43 52 45 41 54 45 20 54 41 42 4C 45 20 6D 6F 7A 5F 62 6F 6F 6B 6D 61 72 6B 73.
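Since both signatures decode to printable ASCII ("SQLite format 3" and "CREATE TABLE moz_bookmarks"), a quick check can also be run with grep directly against a raw image (the image path is hypothetical):

# -a treats the binary image as text, -b prints the byte offset of each hit
$ grep -abo "SQLite format 3" /evidence/disk.dd
$ grep -abo "CREATE TABLE moz_bookmarks" /evidence/disk.dd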
Since SQLlite does not require authentication to work with the database, SQL statements could be used to retrieve relevant information (case specific). For example, the following query will retrieve 20 most visited websites:
Regardless, although the content of the records is wiped, Pereira, M (n.d.) has demonstrated that “when searching all disk, record vestiges was found in unallocated space” either due to reallocated data by the underlying OS or due to the “rollback” journal used by the SQLite engine.
According to W3School (2011), the five most used web browsers are Firefox (42%) followed by Chrome (25%) and Internet Explorer (25%), then Safari (4%) and Opera (2.4%). As such, digital forensic investigator should be knowledgeable in all four and geared up to perform extraction and analysis of the data collected by these Internet Browsers. In most cases, Internet browsers use local cache to store information to increase access time, history of visited web sites, favourites, etc. In some cases (Firefox), the stored information indicates if the suspect typed the Uniform Resource Locator (URL) showing intent of criminal or illegal activity. Furthermore, autocomplete history and cookies can provide the forensic investigator on information typed entered to the websites, or stored locally. In addition to that, the increasing use of web chats such as Yahoo! Chat and Gmail Chat allow provides potential access to additional information.
While Internet Explorer and Firefox traditionally stored this information in flat files, since Firefox version 3 the information is stored in SQLite databases. For example, bookmarks and browsing history are stored in places.sqlite, passwords in key3.db and signons.sqlite, autocomplete history in formhistory.sqlite and cookies in cookies.sqlite (Mozilla.org, n.d.). Numerous tools are available to perform forensic analysis of the information captured by Firefox, including f3e and the simple SQLite command-line utility.
To locate a SQLite 3 database, an investigator can utilize a signature-based search (e.g. the foremost command) and look for the following hex value: 53 51 4C 69 74 65 20 66 6F 72 6D 61 74 20 33 (the ASCII string “SQLite format 3”). To make sure that the identified SQLite database file is indeed a file used by Firefox, the following signature could be used to validate the file: 43 52 45 41 54 45 20 54 41 42 4C 45 20 6D 6F 7A 5F 62 6F 6F 6B 6D 61 72 6B 73 (“CREATE TABLE moz_bookmarks”).
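As a rough sketch of such a signature-based search, the following Python scans a raw image for the “SQLite format 3” magic value and then checks the surrounding bytes for the Firefox-specific marker; the image file name and the 64 KB window are assumptions made for the example:

SQLITE_MAGIC = b"SQLite format 3"             # 53 51 4C 69 74 65 ...
FIREFOX_MARK = b"CREATE TABLE moz_bookmarks"  # 43 52 45 41 54 45 ...

with open("disk.img", "rb") as img:           # hypothetical image name
    data = img.read()                         # naive: reads the whole image into memory

offset = data.find(SQLITE_MAGIC)
while offset != -1:
    # Look for the Firefox marker in a 64 KB window after the header;
    # the window size is arbitrary, not a property of the file format.
    if FIREFOX_MARK in data[offset:offset + 65536]:
        print("Possible Firefox SQLite database at offset", offset)
    offset = data.find(SQLITE_MAGIC, offset + 1)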
After carving the SQLite 3 database file (using the dd or foremost commands), it can be accessed simply by using the sqlite3 command. All Firefox SQLite 3 files are, in essence, databases with multiple tables. For example, places.sqlite contains the following tables: moz_anno_attributes, moz_favicons, moz_keywords, moz_annos, moz_historyvisits, moz_places, moz_bookmarks, moz_inputhistory, moz_bookmarks_roots and moz_items_annos.
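A minimal Python sketch, assuming the carve succeeded and produced a readable places.sqlite, can confirm this table layout with the standard sqlite3 module:

import sqlite3

con = sqlite3.connect("places.sqlite")   # path assumes a carved copy in the working directory
for (name,) in con.execute("SELECT name FROM sqlite_master WHERE type = 'table'"):
    print(name)                          # moz_places, moz_historyvisits, moz_bookmarks, ...
con.close()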
Since SQLite does not require authentication to work with the database, SQL statements can be used to retrieve relevant (case-specific) information. For example, the following query retrieves the 20 most visited websites:
sqlite> SELECT rev_host FROM moz_places ORDER BY visit_count DESC LIMIT 20;

To retrieve all places associated with the word “drugs”:

sqlite> SELECT * FROM moz_places WHERE url LIKE "%drugs%";

Or, to display all completed downloads (firefoxforensics.com, 2008):

sqlite> SELECT url, visit_date
FROM moz_places, moz_historyvisits
WHERE moz_places.id = moz_historyvisits.place_id AND visit_type = 7
ORDER BY visit_date;
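One wrinkle not shown above is that visit_date is stored in microseconds since the Unix epoch (Mozilla's PRTime), so the raw values are not human-readable. A minimal Python sketch of the same download query with timestamp conversion, again assuming a carved places.sqlite, might look like this:

import sqlite3
from datetime import datetime, timezone

con = sqlite3.connect("places.sqlite")
rows = con.execute(
    "SELECT url, visit_date FROM moz_places, moz_historyvisits "
    "WHERE moz_places.id = moz_historyvisits.place_id AND visit_type = 7 "
    "ORDER BY visit_date")
for url, visit_date in rows:
    # PRTime: microseconds since 1970-01-01 UTC
    ts = datetime.fromtimestamp(visit_date / 1_000_000, tz=timezone.utc)
    print(ts.isoformat(), url)
con.close()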
Firefox Anti-forensic Features
Firefox includes a number of anti-forensic features which can be invoked either by the suspect or automatically by Firefox itself, such as the removal of history records older than 90 days. Moreover, a suspect could use the “Private Browsing” functionality or manually invoke “Clear Recent History”. In these cases, Firefox fills the space of each record with zeros, effectively wiping the data. Regardless, although the content of the records is wiped, Pereira, M (n.d.) has demonstrated that “when searching all disk, record vestiges was found in unallocated space”, either due to data reallocated by the underlying OS or due to the “rollback” journal used by the SQLite engine.
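A naive Python sketch of such a vestige search simply scans a raw unallocated-space blob for URL-like ASCII strings that may be remnants of wiped moz_places records; the input file name is an assumption:

import re

URL_RE = re.compile(rb"https?://[\x21-\x7e]{4,200}")   # printable ASCII after the scheme

with open("unallocated.bin", "rb") as blob:            # hypothetical extracted blob
    data = blob.read()

for match in URL_RE.finditer(data):
    print(match.start(), match.group().decode("ascii", "replace"))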
Bibliography
- Jonathan Risto (2010), “Wireless Networks and the
Windows Registry - Just where has your computer been?” [online].
SANS Institute. Available from:
http://www.sans.org/reading_room/whitepapers/auditing/wireless-networks-windows-registry-computer-been_33659
(accessed: June 23, 2011).
- Firefoxforensics.com (2008), “Firefox Research”
[online]. Available from:
http://www.firefoxforensics.com/research/index.shtml
(accessed: June 23, 2011).
- Laureate Online Education B.V. (2009). “Seminar for Week
4: Investigating Windows Systems”.
- Mozilla.org (n.d.), “Profiles” [online]. Available
from: http://support.mozilla.com/en-US/kb/Profiles
(accessed: June 23, 2011).
- Pereira, M (n.d.), 'Forensic analysis of the Firefox 3
Internet history and recovery of deleted SQLite records',
DIGITAL INVESTIGATION, 5,
3-4, pp. 93-103, EBSCOhost (accessed:
23 June 2011).
- W3Schools.com (2011), “Browser Statistics”
[online]. Available from:
http://www.w3schools.com/browsers/browsers_stats.asp
(accessed: June 23, 2011).
Friday, June 17, 2011
Destruction of Sensitive Information
Destruction of sensitive information has been on the agenda of many organizations and governments. As a result, numerous standards have been developed, such as U.S. Department of Defense (DoD) 5220.22-M, National Institute of Standards and Technology (NIST) 800-88 and Canada's Communications Security Establishment (CSE) ITSG-06, to provide guidance to IT administrators and data owners on protecting against information retrieval when recycling or disposing of storage media.
NIST lists four types of media sanitization: disposal, clearing, purging and destroying. In most cases, disposal of the storage media is not considered a secure method of discarding media containing sensitive information. The rest of this paper reviews the standards defined for data clearing.
Clearing refers to a method of removing sensitive information that protects the data “against a robust keyboard attack” (Richard Kissel et al., 2006). Simple deletion of files is not sufficient for clearing, as operating systems merely mark the appropriate entries in the File Allocation Table (FAT), or its equivalent in other file systems, as deleted, leaving the Data Region unchanged. As a result, the data could potentially be recovered using forensic tools. Up until 2001, the standard method of securely clearing sensitive information was overwriting the data with zeros, ones, random data or predefined patterns such as the “Gutmann Method” (Peter Gutmann, 1996). For example, the Communications Security Establishment (2006) defines the overwrite process as follows: the “process itself must include a minimum of three passes including 1s, 0s, and a pseudo-random pattern over the entire accessible area of the magnetic tape or disk, followed by verification of results by the human operator.”
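As a loose illustration of that three-pass requirement, the following Python sketch overwrites a single file with 1s, 0s and then pseudo-random data. It is a toy example: a real sanitization tool must target the raw device, handle remapped sectors and verify the result, and the file path and chunk size here are assumptions:

import os

def three_pass_overwrite(path, chunk=1024 * 1024):
    # Passes in the spirit of CSE ITSG-06: all ones, all zeros, pseudo-random.
    size = os.path.getsize(path)
    for pattern in (b"\xff", b"\x00", None):
        with open(path, "r+b") as f:
            written = 0
            while written < size:
                n = min(chunk, size - written)
                f.write(pattern * n if pattern else os.urandom(n))
                written += n
            f.flush()
            os.fsync(f.fileno())           # push each pass out to the media

three_pass_overwrite("secret.dat")         # hypothetical target file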
The intent of the overwriting process is to defeat the track-edge phenomenon, which allows recovery of magnetic pattern residue from track boundaries using a magnetic force microscope. Using the microscope, researchers examine the relative peaks of magnetic transitions to recover the binary data. Although attacks on track edges have been documented in a laboratory environment, “it requires a very well equipped research laboratory with costly microscopy equipment and highly trained researchers with a great deal of time and patience” (Communications Security Establishment, 2006). Moreover, data written to magnetic media has become more and more dense: according to a Seagate press release (2011), areal density has reached “625 Gigabits per square inch”, 310 million times the density of the first hard drive. As a result, the effort required to recover overwritten data makes such recovery virtually impossible. Richard Kissel et al. (2006) write that “studies have shown that most of today’s media can be effectively cleared by one overwrite.”
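The multiplier is easy to sanity-check, assuming, as published IBM histories state, that the first hard drive (the 1956 IBM RAMAC) stored roughly 2,000 bits per square inch:

# Rough arithmetic behind the "310 million times" figure;
# the RAMAC density is an assumption taken from published IBM histories.
ramac_density = 2_000        # bits per square inch, IBM RAMAC (1956)
modern_density = 625e9       # bits per square inch (Seagate, 2011)
print(modern_density / ramac_density)   # about 312.5 million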
Furthermore, since about 2001, all ATA IDE, SATA and SCSI hard drive manufacturers have included support for the “Secure Erase” or “Secure Initiate” commands, which write binary zeros using internal fault detection hardware. Although the method does not precisely follow the DoD 5220.22 “three writes plus verification” specification, the University of California's Center for Magnetic Recording Research (2008) “showed that the erasure security is at the level of DoD 5220, because drives having the command also randomize user bits before storing on magnetic media”. Moreover, NIST Special Publication 800-88 classifies the “Secure Erase” command as an acceptable method of purging, equivalent to media degaussing.
Bibliography
- Communications Security Establishment (2006), “Clearing
and Declassifying Electronic Data Storage Devices” [online],
Available from:
http://www.cse-cst.gc.ca/its-sti/publications/itsg-csti/itsg06-eng.html
(accessed: June 17, 2011).
- Gutmann P. (1996), “Secure
Deletion of Data from Magnetic and Solid-State Memory”
[online]. Department of Computer Science, University of Auckland.
Available from:
http://www.cs.auckland.ac.nz/~pgut001/pubs/secure_del.html
(accessed: June 17, 2011).
- Gordon F. Hughes (2008) “CMRR Protocols for Disk Drive
Secure Erase” [online]. University of California San Diego,
Center for Magnetic Recording Research. Available from:
http://cmrr.ucsd.edu/people/Hughes/CmrrSecureEraseProtocols.pdf
(accessed: June 17, 2011).
- Hughes, G.F., Coughlin, T., Commins, D.M. (2009), “Disposal of Disk and Tape Data by Secure Sanitization”, IEEE Security & Privacy, volume 9, issue 3, pp. 29-34.
- Richard Kissel, Matthew Scholl, Steven Skolochenko, Xing Li
(2006), “NIST Special Publication 800-88: Guidelines for Media
Sanitization” [online]. National Institute of Standards and
Technology. Available from:
http://csrc.nist.gov/publications/nistpubs/800-88/NISTSP800-88_rev1.pdf
(accessed: June 17, 2011).
- Seagate Technology LLC (2011a), “Media Sanitization
Practices During Product Return Process Best Practices Statement”
[online]. Available from:
http://www.seagate.com/staticfiles/support/docs/warranty/SeagateMediaSanitizationPractices19-Mar-2011.pdf
(accessed: June 17, 2011).
- Seagate Technology LLC (2011b), “Seagate Breaks Areal
Density Barrier: Unveils The World's First Hard Drive Featuring 1
Terabyte Per Platter” [online]. Available from:
http://www.seagate.com/ww/v/index.jsp?locale=en-US&name=unveils-1-terabyte-platter-seagate-pr&vgnextoid=6fbdb5ebf32bf210VgnVCM1000001a48090aRCRD
(accessed: June 17, 2011).
Saturday, June 11, 2011
Disclosure of Evidence
When an expert witness is required to disclose evidence that can damage his or her client's case, the conflict of interest can be examined from two standpoints: ethical obligations and legal obligations.
In Ontario, Canada, under the Private Security and Investigative Services Act (PSISA) 2005, all individuals who conduct “investigations to provide information on the character, actions, business, occupation, or whereabouts of a person”, including digital forensic experts, require a Private Investigation license. The act, which came into force in 2007, is a way to professionalize the industry and ensure that all practitioners are qualified to act as private investigators (PI). The investigators are required to be familiar with criminal and civil legislation, privacy acts, and (hearing) procedural requirements. Moreover, the individuals must accept the PSISA Code of Conduct (Government of Ontario, 2005b), which states that every individual licensee must act with honesty and integrity and comply with all federal, provincial and municipal laws.
In addition, private investigators should have the ability (skills and knowledge) to present evidence in a court of law. In Ontario, Canada, the act that governs hearing procedures is the Statutory Powers Procedure Act (SPPA). Section 5.4 of the SPPA states that the “tribunal may, at any stage of the proceeding before all hearings are complete, make orders for, (a) the exchange of documents; (b) the oral or written examination of a party; (c) the exchange of witness statements and reports of expert witnesses; (d) the provision of particulars; (e) any other form of disclosure” (Government of Ontario, 2009). The intent is to ensure fair hearing procedures and to prevent conflicts of interest among expert witnesses hired by either the defence or the prosecution.
From the ethical standpoint, according to the PSISA Code of Conduct, an expert is required to act with “honesty and integrity” (Government of Ontario, 2005b); therefore, a licensed private investigator is expected to provide a full and truthful disclosure of the discovered evidence. Moreover, under SPPA section 5.4, the tribunal may order a full exchange of expert witness statements and reports. As a result, failure to provide a truthful and full disclosure may result in the revocation of the Private Investigation license and a criminal case against the expert witness.
Bibliography
- Government of Ontario (2009), “Statutory Powers
Procedure Act” [online]. Available from:
http://www.e-laws.gov.on.ca/html/statutes/english/elaws_statutes_90s22_e.htm
(accessed: June 11, 2011)
- Government of Ontario (2005a), “Private Security and
Investigative Services Act” [online], Available from:
http://www.e-laws.gov.on.ca/html/statutes/english/elaws_statutes_05p34_e.htm
(accessed: June 11, 2011).
- Government of Ontario (2005b), “Private Security and
Investigative Services Act – Code of Conduct” [online],
Available from:
http://www.e-laws.gov.on.ca/html/regs/english/elaws_regs_070363_e.htm
(accessed: June 11, 2011).