Page 1: Security Acts

The Magazine for IT Security

February 2010

issue 2 | www.securityacts.com | free digital version | made in Germany

© jufo – Fotolia.com

Page 2: Security Acts

Security incidents respect neither geographical, time-zone, nor administrative boundaries. The annual FIRST Conference focuses exclusively on the field of computer security incident handling and response; it addresses the global spread of computer networks and the common threats and problems faced by everyone involved.

Join us in Miami for this unique gathering of security professionals and learn first-hand the latest in incident prevention, detection and response best practices.

Forge alliances and become a part of a globally trusted forum.

REGISTER TODAY! HTTP://WWW.FIRST.ORG

HTTP://CONFERENCE.FIRST.ORG

Page 3: Security Acts


Dear Readers,

The first issue of “Security Acts” was very well received in the IT security community. We have received a lot of feedback from readers congratulating us, and we thank you for this. A great deal of effort goes on behind the scenes to create a successful issue, and without the hard work and support of our colleagues we would not have achieved this. I am looking forward to introducing new authors and some great new articles in the next issue of “Security Acts”.

It is interesting to see that “Security Acts” is reaching a worldwide community, in some countries more than in others, but we will increase our marketing activities and make a point of reaching everyone. I would really appreciate it if you would pass on information about the magazine to all your interested colleagues and contacts, and I would ask you to consider advertising your company in “Security Acts”. We reach numerous readers worldwide who are specifically in the IT security market.

In the past few weeks we have noticed that Google, Microsoft and the Chinese government have generated a lot of “marketing”, in particular on the topic of IT security. I would appreciate it if one of you could write an article about this topic; I think it is important to provide the public with all the information.

Thank you for all your support, and I wish you a happy and successful 2010.

Yours sincerely

José Manuel Díaz Delgado, Editor

Editorial

Page 4: Security Acts


Contents

Editorial /3

Reader's Opinion /6 by Catalin Bobe

Web Vulnerability Scanners: Tools or Toys? /7 by Dave van Stein

What if I lose all my data? /11 by Mauro Stefano

Windows Identity Foundation and Windows Identity and Access Platform /14 by Manu Cohen-Yashar

Security Testing: Taking the path less travelled /17 by Ashish Khandelwal, Gunankar Tyagi, Anjan Kumar Nayak

Security Testing: Automated or Manual? /20 by Christian Navarrete

File Fuzzing – Employing File Content Corruption to Test Software Security /23 by Rahul Verma

Column: IT Security Micro Governance – A Practical Alternative /28 by Sachar Paulus

Avoiding loss of sensitive information – as simple as 1-2-3 /30 by Peter Davin

IT Security Micro Governance – A Practical Alternative /33 by Ron Lepofsky

The CSO’s Myopia /37 by Jordan M. Bonagura

Security@University – Talking about ICT security with two CTOs /52 by Diego Pérez Martínez and Francisco J. Sampalo Lainz

Masthead /42

Index Of Advertisers /42

Page 5: Security Acts


PLEASE CONTACT:

[email protected]

Secure software engineering has become an increasingly important part of software quality, particularly due to the development of the Internet. While IT security measures can offer basic protection for the main areas of your IT systems, secure software is also critical for establishing a completely secure business environment.

Become an ISSECO Certified Professional for Secure Software Engineering to produce secure software throughout the entire development cycle. The qualification and certification standard includes:

· requirements engineering

· trust & threat modelling

· secure design

· secure coding

· secure testing

· secure deployment

· security response

· security metrics

· code and resource protection.

BE SAFE! START SECURE SOFTWARE ENGINEERING

ISSECO SECURE SOFTWARE ENGINEERING®

WWW.ISSECO.ORG

Foto: © schiffner-photocase.com

Page 6: Security Acts


I wouldn't even say it was a "people problem", because this is where my point of view differs greatly from Mike Murray's (and that of most other infosec professionals). If we say "it's a user problem", all it means is that we blame the users for this state of affairs. I think that's wrong. The users do THEIR jobs, as much as we should do ours. But because we don't do ours properly, we "transfer" the blame to them.

We, the infosec professionals, need to do a better job at imparting our knowledge to everyone else. That's why I am a great believer in awareness (as a continuous process) and less in (online) training (as a one-time event). To me, the next level of (information) security will happen when we motivate and persuade users (end users, IT staff, executives and senior management) to integrate whatever is needed for their jobs and wellbeing into their day-to-day life and habits.

I bet you carry the keys to your home, your car, maybe your office. Every day, no? Those are security devices. You lock the door every time you leave your home, or your car after you park it. Why? Why do you carry the keys? They are heavy. They need to be taken care of (you can't leave them on a table in a bar).

So what's happening here? Well, the way I see it, we, the infosec professionals, need to come down from our ivory towers and speak the same language as our audience. Just look at the awareness materials available on the Internet. Most are trivial, stupid, and talk down to people. The first thought I have when looking at them is "Do you think I'm stupid?".

Yes, there is social engineering, and there are social networks. Hackers and organised crime are getting better and better. I can see it in the quality of the spam emails I get. I can see it in the social networks I surf occasionally. We can still put firewalls, IDS/IPSes and filtering mechanisms in place to stop people getting on Twitter, Facebook or LinkedIn. But that's not what the business wants, is it?

The solution? More and better awareness. Day in, day out. A continuous process.

Unfortunately, awareness takes time (you can't report on it tomorrow if you started it this morning). It is more difficult to measure (human behaviour is a difficult thing to measure, to start with). And it doesn't offer a predictable return on investment: if you install a firewall, that firewall will stay in the company until the end of its days and you control it, whereas an employee can leave at any time, taking any knowledge and time you invested in him with him.

Catalin Bobe, CISSP, CISM, CISA
SecureBase Consulting, Canada

(Issue 1, October 2009) "The Human Face of Security - #1" by Mike Murray

Reader's Opinion

You'd like to comment on an article? Please feel free to contact: [email protected]

Page 7: Security Acts


Web Vulnerability Scanners: Tools or Toys?

by Dave van Stein

© iStockphoto.com/JordiDelgado

Executing a web application vulnerability scan can be a difficult and exhausting process when performed manually. Automating this process is therefore very welcome from a tester's point of view, hence the availability of many commercial and open-source tools nowadays.

Open-source tools are often created specifically to aid in manual testing and perform one task very well, or they combine several tasks with a GUI and reporting functions (e.g. W3AF), whereas commercial web vulnerability scanners, such as IBM Rational AppScan, HP WebInspect, Cenzic Hailstorm, and Acunetix Web Vulnerability Scanner, are all-in-one test automation tools designed to save time and improve coverage.

Web application vulnerability scanners basically combine a spidering engine for mapping a web application, a communication protocol scanner, a scanner for user input fields, a database of attack vectors, and a heuristic response scanner, together with a professional GUI and advanced reporting functions. Commercial vendors also provide frequent updates to the scanning engines and the attack vector database.

Evaluating web vulnerability scanners

Over the past years, many vulnerability scanner comparisons have been performed [1, 2, 3], and the most common conclusion is that the results are not consistent.

This great diversity in results can partially be explained by the lack of common testing criteria. Vulnerability scanners are typically used by testers with various backgrounds, such as functional testers, network security testers, pen-testers, and developers, each of them having a different view on how to review these tools and therefore interpreting the results differently. Sometimes this leads to comparing web vulnerability scanners with other security-related products, which is like comparing apples and oranges [1].

Another explanation is the diversity in the test basis. The vast number of technologies, and of ways to implement them in web applications, makes it difficult to define a common test strategy. Each application requires a different approach, which is not always easy to achieve, making it hard to compare results.

Finally, like testers, vendors also have different views on how to achieve their goals. Although vulnerability scanners might look the same on the outside, the different underlying technologies can make interpretation and comparison of results more difficult than they appear to be.

The Web Application Security Consortium (WASC) started a project in 2007 to construct “a set of guidelines to evaluate web application security scanners on their identification of web application vulnerabilities and its completeness” [5]. Unfortunately, this project has not reached its final stage yet, although a draft version has recently been presented [6].

This article focuses on the difficulties of reviewing and using vulnerability scanners. It does not name the best vulnerability scanner available, but discusses some of the strengths and weaknesses of these tools and gives insight into how to use them in a vulnerability analysis.

Using a web vulnerability scanner

Vulnerability scanners are like drills. Although the former are designed to find holes and the latter to create them, their usage is similar.

Using a drill out of the box will possibly yield some results, but most likely not the desired ones. Without doing some research into the machine's configuration options, the possible drill heads, and the material you are drilling into, you are more than likely to fail to drill a good hole and may come across some surprises. Likewise, running vulnerability scanners out of the box will probably show some results, but without reviewing the many configuration options and the structure of the test object, the results will not be optimal. Also, the ‘optimal configuration’ differs in each situation. Before using a vulnerability scanner efficiently,

Page 8: Security Acts


it is necessary to understand how scanners operate and what can be tested.

In essence, scanners work in the following three steps:

1. identify a possible vulnerability
2. try to exploit the vulnerability
3. search for evidence of a successful exploit

For each of these steps to be executed efficiently, the scanner needs to be configured for the specific situation. Failing to configure any one of these steps properly will cause the scanner to report incomplete and untrustworthy results, regardless of the success rate of the other two steps.
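To make the three steps concrete, here is a minimal Python sketch of a single reflected-XSS style check; the target URL, the parameter name and the payload are hypothetical, and a real scanner would configure each of the three steps per application.

```python
import requests

TARGET = "http://test.example/search"       # hypothetical target URL
PARAM = "q"                                  # hypothetical parameter to test
PAYLOAD = "<script>alert(1337)</script>"     # marker payload

def scan_parameter(url, param, payload):
    # Step 1: identify a possible vulnerability - is the parameter echoed at all?
    probe = requests.get(url, params={param: "probe123"}, timeout=10)
    if "probe123" not in probe.text:
        return "not reflected - probably not vulnerable to reflected XSS"

    # Step 2: try to exploit the vulnerability by sending an attack vector
    attack = requests.get(url, params={param: payload}, timeout=10)

    # Step 3: search for evidence of a successful exploit - the payload coming
    # back unencoded suggests missing output encoding
    if payload in attack.text:
        return "possible reflected XSS"
    return "payload filtered or encoded - needs manual review"

if __name__ == "__main__":
    print(scan_parameter(TARGET, PARAM, PAYLOAD))
```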

Knowing what to test

Before a vulnerability scanner can start looking for potential problems, it first has to know what to test. Mapping a website is essential to being able to scan efficiently for vulnerabilities. A scanner has to be able to log into the application, stay authenticated, discover the technologies in use, and find all the pages, scripts, and other elements required for the functionality or security of the application.

Most vulnerability scanners provide several options for logging in and website spidering that work for standard web applications, but when a combination of (custom) technologies is used, additional parameterization is needed.

Another parameter is the ability to choose or modify the user agent the spider uses. When a web application provides different functionality for different browsers or contains a mobile version, the spider has to be able to detect this. A scanner should also be able to detect when a website requires a certain browser for its functionality to work properly.
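As an illustration of the user-agent aspect, the following sketch (hypothetical URL, example User-Agent strings) fetches the same page as a desktop browser and as a mobile browser and compares the responses; a large difference hints at browser- or device-specific behaviour the spider configuration has to account for.

```python
import requests

URL = "http://test.example/"  # hypothetical application under test

# Example desktop and mobile User-Agent strings (illustrative only)
AGENTS = {
    "desktop": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36",
    "mobile": "Mozilla/5.0 (iPhone; CPU iPhone OS 3_0 like Mac OS X)",
}

def compare_user_agents(url):
    # Fetch the same page with different User-Agent headers and compare sizes;
    # a real scanner would diff the content and adjust its spidering profile.
    bodies = {}
    for name, agent in AGENTS.items():
        resp = requests.get(url, headers={"User-Agent": agent}, timeout=10)
        bodies[name] = resp.text
    delta = abs(len(bodies["desktop"]) - len(bodies["mobile"]))
    print(f"desktop: {len(bodies['desktop'])} bytes, "
          f"mobile: {len(bodies['mobile'])} bytes, difference: {delta}")

if __name__ == "__main__":
    compare_user_agents(URL)
```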

After a successful login, a scanner has to be able to stay authenticated and to keep track of the state of the website. While this is no problem when standard mechanisms (e.g. cookies) are used, custom mechanisms in the URI can easily cause problems. Although most scanners are able to identify the (possible) existence of these problem-causing techniques, an automatic solution is rarely provided. When a tester does not know or understand the application, the techniques used, and the possible existence of such problems, the coverage of the test can be severely limited.

The Good

Vulnerability scanners are able to identify a wide range of vulnerabilities, each requiring a different approach. These vulnerabilities can roughly be divided into four aspects:

• information disclosure
• user input independent
• user input dependent
• logic errors

Information disclosure covers all errors that provide sensitive information about the system under test or about the owner and user(s) of the application. Error pages that reveal too much information can lead to identification of the technologies used and of insecure configuration settings. Standard install pages can help an attacker to attack a web application successfully, whereas information like e-mail addresses, user names, and phone numbers can help a social engineering attack. Some commercial vendors also check for entries in the Google Hacking Database [7]. Its use, however, is limited in the development or acceptance testing stage.

User-independent vulnerabilities cover insecure communications (e.g. sending passwords in clear text), storing passwords in an unencrypted or weakly encrypted cookie, predictable session identifiers, hidden fields, and debugging options left enabled in the web server.

Checking manually for both information disclosure problems and user-independent vulnerabilities can be very time-consuming and strenuous. Vulnerability scanners identify these types of errors efficiently almost by default.

The Bad

Bigger problems arise when testing for user-dependent vulnerabilities. These problems occur due to insecure processing of user input. The best known vulnerabilities of this kind are cross-site scripting (XSS) [8, 9], the closely related cross-site request forgery (CSRF) [10, 11], and SQL injection [12, 13].

The challenge for automated scanning tools, when testing for these vulnerabilities, lies in detecting a potential vulnerability, exploiting the vulnerability, and detecting the results of a successful exploit.

SQL injection

SQL injection is probably the best known vulnerability at the moment. This attack has already caused many website defacements and hacked databases. Although the simplest attack vectors are no longer a problem for most web applications, the more sophisticated variants can still pose a threat. Even when an application does not reveal any error messages or feedback on the attack, it can still be vulnerable to so-called blind SQL injections. Although some blind injections can be detected by vulnerability scanners, they cannot be used for complete coverage, mainly for performance reasons. Blind SQL injections typically take a long time to complete, especially when every field in an application is tested for these vulnerabilities. Most vendors acknowledge this limitation and provide a separate blind SQL injection tool to test a specific location in an application.
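The sketch below illustrates why blind SQL injection testing is slow: a time-based check (hypothetical URL and parameter, MySQL-style SLEEP payload) needs a multi-second delay per request to produce evidence, which quickly adds up when every field is tested with many payload variants.

```python
import time
import requests

URL = "http://test.example/item"   # hypothetical page taking an 'id' parameter
PARAM = "id"
# A time-delay payload for one specific backend (MySQL syntax shown);
# real tools carry variants for each database engine and injection context.
DELAY_PAYLOAD = "1 AND SLEEP(5)"

def response_time(value):
    start = time.time()
    requests.get(URL, params={PARAM: value}, timeout=30)
    return time.time() - start

def check_time_based_blind_sqli():
    baseline = response_time("1")
    delayed = response_time(DELAY_PAYLOAD)
    # If the injected SLEEP measurably delays the response, the parameter is
    # probably concatenated into a SQL statement without proper escaping.
    if delayed - baseline > 4:
        print("parameter looks injectable (time-based blind SQLi)")
    else:
        print("no measurable delay - not conclusive")

if __name__ == "__main__":
    check_time_based_blind_sqli()
```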

XSS and CSRF attacks

Cross-site scripting (and cross-site request forgery) attacks are probably the most underestimated vulnerabilities at this moment. The consequences of these errors might look relatively harmless or localized, but more sophisticated uses are discovered each day, such as hijacking VPN connections, firewall

Page 9: Security Acts


bypassing, and gaining complete control over a victim’s machine.

The main problem with XSS is that the possible attack vectors run into millions (if not more). For example, an XSS thread on sla.ckers.org [14] has been running since September 2007, contains close to 22,000 posts so far, and new vectors are posted almost daily.

There are several causes that contribute to the vast number of possible attack vectors:

• It is possible to exploit almost anything a browser can interpret, so not only the traditional SCRIPT and HTML tags, but also CSS templates, iframes, embedded objects, etc.
• It is possible to use tags recursively (e.g. <SCR<SCRIPT>IPT>) against applications that are known to filter out statements.
• It is possible to use all sorts of encoding (e.g. Unicode [15]) in attack vectors.
• It is possible to combine two or more of these vectors, creating a new vector that is possibly not properly handled by an application or filtering mechanism.

With AJAX, the possibilities increase exponentially. Obviously it is impossible to test all these combinations in one lifetime, not least because of the performance drop this would cause. Vulnerability scanners therefore provide a subset of the most common attack vectors, sometimes combined with fuzzing technologies. This list is, however, insufficient by nature, so additional attack vectors should be added or tested manually.
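The following sketch hints at why the vector space explodes: starting from two well-known base vectors, it derives recursive-tag and URL-encoded variants and their combinations; real scanners and fuzzers apply many more transformations than shown here.

```python
import urllib.parse

BASE_VECTORS = [
    "<script>alert(1)</script>",
    "<img src=x onerror=alert(1)>",
]

def recursive_tag(vector):
    # Split the SCRIPT tag so naive filters that strip "<script>" once
    # reassemble a working tag, e.g. <scr<script>ipt>
    return vector.replace("<script>", "<scr<script>ipt>")

def encoded(vector):
    # URL-encode the vector; other encodings (HTML entities, Unicode escapes,
    # double encoding) multiply the list further
    return urllib.parse.quote(vector)

def generate_variants(vectors):
    variants = set(vectors)
    for v in vectors:
        variants.add(recursive_tag(v))
        variants.add(encoded(v))
        variants.add(encoded(recursive_tag(v)))  # combined variants
    return variants

if __name__ == "__main__":
    for v in sorted(generate_variants(BASE_VECTORS)):
        print(v)
```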

Another problem is the diversity of XSS attacks; the two best known types are reflective and stored attacks. With reflective attacks, the result of the attack is transferred immediately back to the client, making analysis relatively simple. Stored or persistent attacks, on the other hand, are stored somewhere and are not immediately visible. The result might not even be visible to the logged-on user, and detecting it may require logging in as another user and understanding the application logic.

Stored or persistent user input vulnerabilities can basically be checked in two ways:

• Exploit all user input fields in a web application and afterwards scan the application completely again for indications of successful exploits

• Check what is stored on the server after filtering and sanitizing

Most scanners opt for an implementation of the first method, but detecting all successful exploits is a difficult task. Especially when an application has different user roles or an extensive data flow, successful exploits can be hard to detect without understanding and taking into account the application logic.
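A minimal sketch of the first method, assuming a hypothetical form URL, field names and pages to rescan: each field is seeded with a unique marker, and the application is then crawled again to see where a marker comes back unencoded.

```python
import uuid
import requests

SUBMIT_URL = "http://test.example/comment"   # hypothetical form that stores input
PAGES_TO_RESCAN = [                           # hypothetical pages that render stored data
    "http://test.example/comments",
    "http://test.example/admin/moderation",
]

def inject_markers():
    # Phase 1: submit a unique marker per field so any hit can be traced back
    # (a real scanner would submit the whole form at once, not field by field)
    markers = {}
    for field in ("author", "body"):
        marker = f"xss-{uuid.uuid4().hex[:8]}"
        payload = f'"><script>{marker}</script>'
        requests.post(SUBMIT_URL, data={field: payload}, timeout=10)
        markers[marker] = field
    return markers

def rescan(markers):
    # Phase 2: crawl the application again and look for markers that come back
    # unencoded - evidence of a stored (persistent) XSS sink
    for page in PAGES_TO_RESCAN:
        body = requests.get(page, timeout=10).text
        for marker, field in markers.items():
            if f"<script>{marker}</script>" in body:
                print(f"stored XSS: field '{field}' rendered unencoded on {page}")

if __name__ == "__main__":
    rescan(inject_markers())
```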

Acunetix uses an implementation of the second method in a technology called AcuSensor [16]. Although this technology shows good results in detecting, for example, stored XSS and blind SQL injection attacks, the biggest drawback is that it has to be installed on the web server in order to be used. This might not be problematic in a development or even an acceptance environment, but in a production environment this is often not an option, or not even allowed.

The Ugly

The most difficult errors to find in a web application are application and business logic errors. Although these errors are usually a combination of other vulnerabilities, they also contain a functional element contributing to the problem. Examples of logic errors are password resets without proper authentication, or the possibility to order items in a webshop while bypassing the payment page. Since logic errors are a combination of security problems and flaws in the functional design of an application, practically all vulnerability scanners have problems detecting them. Commercial vendors like IBM and Cenzic do have a module for defining application logic attacks, but these are very basic modules and require extensive parameterization.

When testing for logic errors, manual testing is still necessary, although vulnerability scanners can be used for the repetitive or strenuous parts of the test. Practically all commercial vendors, but also e.g. Burp Suite Pro, offer an option to use the vulnerability scanner as a browser. In this way, the tester chooses the route through the application, while the scanner performs automatic checks in the background.

Conclusions

Vulnerability scanners can be very useful tools for improving the security and quality of web applications. However, as with any other testing tool, being aware of the limitations is essential for proper use. With their efficient scanning for communication problems and bad practices, they can save time and improve quality and security early in the development of web applications. When used for testing user input filtering and sanitizing, they can save time by rapidly injecting various attacks. However, manual review of the results is essential and, due to the limited set of attack vectors, additional manual testing remains necessary.

Fully automated testing of business and application logic is not possible with vulnerability scanners. Here vulnerability scanners have the same limitations as other test automation tools. However, when used by experienced security testers to automate the most strenuous parts of the security testing process, they can save time and improve test coverage. □

[1] http://www.networkcomputing.com/rollingreviews/Web-Applications-Scanners/
[2] http://ha.ckers.org/blog/20071014/web-application-scanning-depth-statistics/
[3] http://anantasec.blogspot.com/2009/01/web-vulnerability-scanners-comparison.html
[4] http://en.hakin9.org/attachments/consumers_test.pdf
[5] http://www.webappsec.org/projects/wassec/
[6] http://sites.google.com/site/wassec/final-draft
[7] http://johnny.ihackstuff.com/ghdb/
[8] http://en.wikipedia.org/wiki/Cross-site_scripting
[9] http://www.owasp.org/index.php/Cross-site_Scripting_(XSS)
[10] http://en.wikipedia.org/wiki/Csrf
[11] http://www.owasp.org/index.php/Cross-Site_Request_Forgery
[12] http://en.wikipedia.org/wiki/SQL_injection
[13] http://www.owasp.org/index.php/SQL_injection
[14] http://sla.ckers.org/forum/read.php?2,15812
[15] http://en.wikipedia.org/wiki/Unicode_and_HTML
[16] http://www.acunetix.com/websitesecurity/rightwvs.htm

Page 10: Security Acts


Dave van Stein is a senior test consultant at ps_testware. He has close to 8 years of experience in software and acceptance testing and started specializing in web application security at the beginning of 2008. Over the years, Dave has gained experience with many open-source and commercial testing tools and has developed a special interest in the more technical testing areas and in virtualization techniques. Dave is active in the Dutch OWASP chapter, and he is both ISEB/ISTQB certified and an EC-Council ‘Certified Ethical Hacker’.

> About the author

Your Ad [email protected]

Subscribe at:

www.securityacts.com

Page 11: Security Acts


What if I lose all my data? by Mauro Stefano

© AlexPin – Fotolia.com

This article describes solutions for saving data. These solutions are a possible first step towards a Disaster Recovery project, quickly feasible and at limited cost. The proposed solution fits times of economic hardship, or situations in which you still consider the loss of data a less relevant problem. The off-site storage solution is not an alternative to Disaster Recovery; it is only a first step towards a solution for the full recovery of ICT services.

When we think of an information system, we immediately imagine a computer, or rather one or more computer rooms with many servers; in fact, we are talking about a DC (Data Center).

We know very well that the DC with its computers holds all our data. Servers are only the instruments to access and process that data. The data are the computer representation of our company, of our projects, and of our knowledge; they are the assets of the Data Center. Without the data, much of the vital business information is lost.

We are well aware that many copies of small subsets of our data are present on the personal computers of our users; we know very well that other subsets are spread in multiple copies among our customers as well as among our suppliers. There are still other subsets on hard copy in our various offices. Finally, should it be of any use, there are the employees’ memories.

However, is this the way to reconstruct the data in the event of accidental loss? Obviously, a structured, well-organized and periodically tested system makes it far more likely that we will be successful.

If you are involved in Disaster Recovery, and especially in the management of business continuity in the event of loss of the DC, you will realize as we discuss these issues that you are working on a very complex project with a very low chance of ever being invoked.

Fortunately, the likelihood of actually having to use the Disaster Recovery systems, or to use alternative business continuity processes in order to continue our business when the production information systems are unavailable, is really negligible. Most ICT operators are prepared to deal with disasters that will never really need to be managed, aware that if they were not in a position to react, the risk would be to stop the business completely.

If we focus on Disaster Recovery more closely and split it into simple elements, we will see that it is achievable through the following elements:

1. Suitable rooms to accommodate an alternative DC
2. Computers and alternative disk(s), compatible with those normally used in production
3. A network connection between the recovery DC and the users’ offices
4. Data and system configurations

If we imagine ourselves in the moment immediately after a disaster for which we were not prepared, and look at the above four elements one by one, we will see that the first three can be designed from scratch, whilst the fourth, the data, cannot be rebuilt from scratch. Data can only be restored. So we must have a back-up copy.

It is no problem for us to hire a properly equipped DC, we can buy new computers compatible with the ones that have been destroyed, and we can ask our network provider to set up a new link to the new DC, but we cannot reconstruct our data from the various and incomplete copies described above. Again, data can only be restored.

If I find myself in a period of economic constraints, and if my Data Center risk assessment allows me to accept a long outage period, I can temporarily dispense with a complex plan for Disaster Recovery, but at the very least I must take steps to prepare a regular remote copy (daily or weekly) of most of my data.

Achieving this can be easier than you may imagine. In the following, we will look at some of the possible alternatives.

Page 12: Security Acts


Copying the cartridges

We have always been accustomed to dealing with what we call “component failure”, i.e. the breaking of a component in one of the systems. In some cases, this may lead to the loss of a subset of the data. We then restore the lost data from the regular copies (often daily) made for this purpose. At the same time, regular copies of data allow us to recover from possible application data corruption.

The daily back-up is usually a complete copy of the data, which is normally held in the same rooms in which the primary data are stored. The obvious problem: in the event of a DC disaster, we would lose both copies. For data recovery purposes, the simplest solution would be to make a copy and take the tapes, to which the daily and weekly data have been saved, to a remote location.

This solution is viable, but could be difficult and costly in large-scale environments, where the cartridges are engaged almost continuously. In these cases, we also need to enlarge all the cartridge libraries, since those in production are nearly always already committed.

This solution has another disadvantage in that it requires significant human activity, because we need people to remove the cartridges from the tape library, put them in a box, move them to a safe place far away, and then manage the reverse cycle. The data flow relating to this solution is represented in blue in the diagram.

Copy data on a deduplication system.

Analyzing a data store in detail, we see that it contains many small subsets of data repeated many times over and only a few subsets that are really unique.

This is easier to understand in an e-mail system, where we have at least two copies of the same message, one for the sender and one for the recipient. If there is more than one recipient, the number of copies increases. Whenever a mail is forwarded, the new one still contains the original one, so the number of copies of the original mail continues to increase.

In the same way, all documents that have the company logo in the header or footer contain many copies of the company logo.

By using a system to search and delete duplicated copies of data, you can reduce occupied disk space.

Through deduplication (duplication reduction), it is possible to define a single file no longer as a string of bytes, but as a string of pointers to well-known distinct blocks. If data is not repeated, then, depending on the implementation, it is defined either as new blocks or only as changes to existing blocks.
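A minimal sketch of the idea, assuming fixed-size blocks (real products typically use variable-size chunking and persistent storage): each file is stored as a list of hashes pointing into a shared block store, so identical blocks consume space only once. Two near-identical backup generations then share almost all pointers, which is also what makes remote replication of only the changed blocks cheap.

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks for simplicity; real systems often chunk by content

block_store = {}   # SHA-256 digest -> block contents (the unique blocks)

def dedup_store(path):
    """Store a file as a list of pointers (hashes) to unique blocks."""
    pointers = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            # Only blocks never seen before consume new space
            block_store.setdefault(digest, block)
            pointers.append(digest)
    return pointers

def restore(pointers):
    """Rebuild the original byte string from the pointer list."""
    return b"".join(block_store[d] for d in pointers)
```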

Today these systems still do not always have adequate performance to be used for live data, but they can be used to reduce the disk space used by saved copies that have been generated but remain unused. In back-up systems, there are normally multiple versions of the same, more or less unchanged, files. In this case, a system that detects and removes duplication will lead to an even more significant reduction of the disk space occupied by the back-up, since the multiple copies of the same files are only pointers to the same single original objects.

In the case of small changes, depending on the individual implementation, either a new object is created which points to the original with an indication of the changes, or a new object is created that is composed of pointers to the blocks that remained unchanged and pointers to new chunks for each block that contains a change.

Page 13: Security Acts


Mauro Stefano is Engagement Manager in a large ICT company, preparing proposal solutions for different ICT items. In the past, he was ICT Security Manager in the IT department of a large Italian automotive company. Presently, he focuses his activities on Security and End-User Support services, even though he has experience in all IT services, based on 24 years of international activity in the ICT sector. In the past, Mauro has published articles in a specialized Italian ICT magazine and participated in several conferences and congresses as chairman or speaker.

Of the projects he has managed, the most relevant are the introduction and implementation of ICT Security at his first company, and the design of a large Disaster Recovery project proposal for one of the main customers of his present employer. He has had the opportunity to manage multiple innovative projects, such as the introduction of mobile computers in ’93 and the use of TCP/IP on mainframes in the nineties.

Mauro holds multiple certifications: CGEIT, CISA, CISM and ITIL.

> About the author


We can therefore easily understand how a back-up system based on reducing duplication can significantly reduce the disk space required. Another advantage of back-up systems with deduplication is the capability to make a remote copy.

In practice, it is possible to duplicate the data of the primary back-up at a second remote site on other disks. This system, as well as being automated to reduce human operations, also needs only low-speed network connections, because it carries only the small amount of changed data and the pointers to blocks that represent new files.

In practice, it becomes possible to have a remote copy of back-up data by implementing Virtual Tape Libraries with deduplication features and a remote replication option in place of the tape libraries. The data flow relating to this solution is represented in green in the diagram.

Copy data on remote disk

A third solution, which allows an almost continuous data alignment, is based on disk-to-disk replication. In these solutions, data are synchronously or asynchronously mirrored between the local disk system and a remote disk system, which are connected through a high-speed link. This solution requires not only additional high-performance disk space, but also a high-speed network link.

Obviously, this solution is adequate in cases where you intend to achieve a DR solution with little data loss and a short recovery time. The data flow relating to this solution is represented in yellow in the diagram. □

Subscribe at:

www.securityacts.com

Page 14: Security Acts


Identity management is a complex problem, yet almost every application has to address it.

The world of identity management is being revolutionized by the introduction of the WS* standards. Federation, single sign-on and claims-based authorization are common requirements. The question that remains open is: how should these be implemented?

Every framework has to address the identity problem. In this article, I would like to introduce the .NET solution called Windows Identity Foundation, previously known as the Geneva framework.

Windows Identity Foundation (WIF) enables .NET developers to externalize identity logic from their applications, improving developer productivity, enhancing application security, and enabling interoperability with applications written on other platforms. WIF can be used for on-premises software as well as cloud services. WIF, which is part of the new Identity and Access products wave, gives applications a much richer and more flexible way to deal with identities by relying on the claims-based identity concept I described in a previous article.

Using WIF, it is easy to implement a claims-based authorization system based on industry-standard protocols. WIF simplifies the creation of a security token service (STS), which is the center of every claims-based system, as well as the interaction with other existing STSs and resources.

The Windows Identity and Access platform includes several releases:

• Active Directory Federation Services 2.0
• Windows Identity Foundation
• Windows CardSpace 2.0

ADFS 2.0

ADFS 2.0 is the next generation of Active Directory Federation Services.

At the core of ADFS 2.0 is a security token service (STS) that uses Active Directory as its identity store. The STS in ADFS 2.0 can issue security tokens to the caller using various protocols, including WS-Trust, WS-Federation and Security Assertion Markup Language (SAML) 2.0. SAML is the base standard for claims-based tokens, while the WS* standards are all about the communication and negotiation. To support old versions, the ADFS 2.0 STS supports both the SAML 1.1 and SAML 2.0 token formats and all WS* versions. ADFS 2.0 is designed with a clean separation between wire protocols and the internal token issuance mechanism. Different wire protocols are transformed into a standardized object model at the entrance of the system, while internally ADFS 2.0

© Luminis - Fotolia.com

Windows Identity Foundation and Windows Identity and Access Platform by Manu Cohen-Yashar

Page 15: Security Acts


uses the same object model for every protocol. This separation enables ADFS 2.0 to offer a clean extensibility model, independent of the intricacies of the different wire protocols.

Windows Identity Foundation - System.IdentityModel

WIF is a framework for implementing claims-based identity in your applications. It can be used in any web application or web service, in the cloud or on-site.

The goal was to make the interaction with claims easy. It is designed to unify and simplify claims-based applications. It builds on top of WCF’s plumbing, handles all the cryptography required, and implements all the related WS and SAML standards. WIF also introduces an HttpModule, called the WS-Federation Authentication Module (FAM), that makes it trivial to implement WS-Federation in a browser-based application.

Using WIF it is possible to create your custom STS or connect to another identity provider with only a few lines of code.

For example, when you build with WIF, you are shielded from all of the cryptographic heavy lifting. WIF decrypts the security token passed from the client, validates its signature, validates any proof keys, shreds the token into a set of claims, and presents them to you via an easy-to-consume object model.

CardSpace 2.0

CardSpace is an identity selector. To a user, a card in CardSpace represents his identity in a simple and friendly manner. It is very much like the ID card in your wallet or a personal card you hand out to your colleagues. The card is installed on the user’s computer. The information contained in the card is not the user’s identity itself; the card contains the information needed to fetch the identity information from the identity provider.

CardSpace is not a new technology. It was released with .NET Framework 3.0 back in 2005. CardSpace was not a huge success, because it was not easy to use. WIF will change that.

WIF introduces all the plumbing needed to use CardSpace on the client and the infrastructure to build or use an identity provider on the server. CardSpace 2.0 contains many performance improvements to ensure that its use will be easy and comfortable.

The identity problem is complex and the challenge is huge, but on the other hand it must be easy to create applications with advanced identity capabilities. WIF, together with Microsoft’s Identity and Access Platform, allows exactly that. Security and identity are a global issue, and thus interoperability between all platforms is a necessity. Microsoft’s Identity and Access Platform is based on well-known industry-standard protocols to make sure it fully complies with the interoperability requirement.

The world of identity is going through a revolution with claims-based authorization systems. If you want to be up to date, I recommend taking a close look at WIF and at Microsoft’s Identity and Access Platform. □

Manu Cohen-Yashar is an international expert in application security and distributed systems, currently consulting to various enterprises worldwide, including German banks, and architecting SOA-based, secure, reliable and scalable solutions. As an experienced and acknowledged architect, Mr Cohen-Yashar was hosted by Ron Jacobs in an ARCast show and spoke about interoperability issues in the WCF age. Mr Cohen-Yashar is a Microsoft Certified Trainer and has trained thousands of IT professionals worldwide.

Mr Cohen-Yashar is the founder of an interoperability group, set up in cooperation with Microsoft Israel, in which distributed system experts meet and discuss interoperability issues (http://www.interopmatters.com/). A sought-after speaker at international conventions, Mr Cohen-Yashar lectures on distributed system technologies, with a specialization in WCF, in which he is considered one of the top experts in Israel. Manu won the best presentation award at CONQUEST 2007. Mr Cohen-Yashar currently spends much of his time bringing application security into the development cycle of major software companies (Amdocs, Comverse, Elbit, IEI, the Israeli Defense System…).

Mr Cohen-Yashar provides consulting services on security technologies and methodologies (SDL, etc.). He is known as one of the top distributed system architects in Israel. As such, he offers lectures and workshops for architects who want to specialize in SOA, and leads the architecture process of many distributed projects.

> About the author

Page 16: Security Acts


Application Security

© iStockphoto.com/abu

Co-Founder of ISSECO (International Secure Software Engineering Council)

www.diazhilterscheid.com

How high do you value your data and your customers’ data? Do your applications reflect this value accordingly? Accidental or deliberate manipulation of data is something you can be protected against.

Talk to us about securing your systems. We will assist you in incorporating security into your IT development, starting with your system goals, the processes in your firm, or professional training for your staff.

[email protected]

Page 17: Security Acts


Security Testing: Taking the path less travelled, by Ashish Khandelwal, Gunankar Tyagi, Anjan Kumar Nayak

© HannesTietz – Fotolia.com

Have you ever thought about how easy it would be to write simple test cases that could reveal the security loopholes in your product? This is easier said than done. I have seen in my experience how often a security testing initiative gets stopped during implementation. There follows a whole series of justifications: why we need it, do we really need it, what will we achieve, and so on. After all this comes the myriad effort of bringing the security testing framework alive. Even though the age-old “Threat Modelling” approach remains the best tool available, it still remains a distant dream for many to embrace. In our honest opinion, it would be a disgrace for readers to still be asking questions such as “What is security testing?”, “Why is security testing required?”, “What is the importance of security testing?”, and questions along this line.

Defining security testing is no longer a challenge; however, the test techniques vary from application testing to web testing. What still remains a distant dream is the acceptance of security testing in the SDLC. With the recent security threats posed in various circles and the emerging risks relating to data theft, it has become tougher to address security testing needs. Added to this is the distinct lack of business requirements specifying the security state of a product.

• Why do security testing? Although it’s a disgraceful question to ask, it’s always good to ask it to get your footing right.

• How to do security testing? This is where most of the initiatives get killed off, since no one has a clear picture of how to do it, and since the whole procedure is so undefined and unclear. Is threat modelling the only source available? Is there no other way to get started with it as a small venture?

• Who will do security testing? Do I need a seasoned security professional, or can I do this with my regular black-box testers? Can I get leverage by adding some security certification to my existing resources?

Trinity of Constraints

The most favoured “Threat Model” approach has the following three constraints, as shown in the diagram:

• Complexity: What’s complex about the “Threat Model”? First of all, it’s time-consuming, because you need to understand the product architecture in every detail. It involves first listing and then finding the possible interaction levels of each and every asset of the product.

• Connectivity: What are my chances of getting connected to the underlying product architecture? I have come across the very common complaint about the lack of product architecture documentation. Even if there are documents, who has the time to explain them to you? The other possibility is to walk through the million lines of the code base. But is that a wise option to follow?

• Changeability: Evolving development models such as Agile add more pain to the problem. The product features and their underlying architecture are subject to change, often frequently.

Page 18: Security Acts


Creating an adaptive testing approach that bridges the gap between the pains of “Threat Modelling” and starting your own security testing project is the best solution.

Adaptive Security Testing Approach

“Adaptive” security testing focuses on finding security defects at all, rather than on finding them early in the cycle, while the expertise level is still being built.

This concept originates from the view that security testing is a part of functional testing, rather than something performed in isolation under specialist supervision. Look at the security testing ladder as it progresses.

As we go up the ladder, the expertise of the security tester increases, and this results in defects that can be functional as well as security-related. Each step up the ladder will optimize your results, but in turn requires consistent upgrading of your security testing skills and product knowledge. From this ladder, we have derived a two-way security testing approach termed the “Adaptive Security Testing Approach”.

In its initial stage, the “Adaptive” model centres more on stimulating the security testing approach by focusing on finding early security defects rather than on studying the technology and product deeply in order to attack them. This induces enthusiasm and adaptability in such a zigzagged security arena. As we move along with this approach, we start adapting to our knowledge and constraints to maximize our potential and results.

In short, we break the complete effort into two categories:

a) Peripheral Security Testing: an entry-point-based, black-box security testing approach with the aim of quickly finding surface-level but critical security issues.
   Use case scenario: fuzzing a GUI window.

b) Adversarial Security Testing: an expertise-based, hostile security testing approach, carried out inside-out to reveal loopholes in the product.
   Abuse case scenario: exploiting Windows ACLs.

Adversarial testing is the successor of peripheral testing. As we go up the ladder, our methodology changes, and hence our testing priorities change. Each testing type is classified in terms of inputs, activities and outputs, as seen in Figure 3. Inputs are the prerequisites required in order to follow the particular type of testing. Activities describe the flow of performing the particular type of testing. Outputs cover the results and the analysis.

Here, the tester needs to look at software risk analysis at a component-by-component, tier-by-tier, environment-by-environment level, and needs to apply the principles of measuring threats, risks, vulnerabilities, and impacts at all of these levels.

The entire security effort and its progress should be recorded in the following way:

Figure 3

Page 19: Security Acts


1. Identify: Identify the security testing technique or the technology that will be tested, and also the component of the product/application that will be or has been subjected to the test. E.g., it is easy to start with techniques like buffer overflow, privilege escalation, file/folder tampering, DoS, etc.

2. Explain: Give a detailed explanation of the concept and intent of the technique or technology in the context of your product/application. Against each of the identified techniques, evaluate your product’s security rating, i.e. how vulnerable the product is to the identified technique. Once this is done, create the test scenario.

3. Execute and Record: Execute tests in the above context and list the scenarios that have been tested.

4. Report: Report the issues that have been found during the course of the testing and the status of those issues in the product/application. Most important of all is to document the test results, even when tests pass; this provides a test coverage view in the future. (A minimal record structure along these lines is sketched below.)
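One possible way to keep such records, sketched here as a small Python structure; the field names and the example values are illustrative only, not a prescribed format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class SecurityTestRecord:
    technique: str                 # 1. Identify: technique under test
    component: str                 #    and the component it is applied to
    explanation: str               # 2. Explain: concept and intent in product context
    vulnerability_rating: str      #    judged exposure to this technique
    scenarios_executed: list = field(default_factory=list)  # 3. Execute and Record
    issues_found: list = field(default_factory=list)        # 4. Report (also record passes)
    executed_on: date = field(default_factory=date.today)

# Hypothetical example entry
record = SecurityTestRecord(
    technique="privilege escalation",
    component="installer service",
    explanation="service runs as SYSTEM and reads a world-writable config file",
    vulnerability_rating="high",
    scenarios_executed=["replace config with symlink", "tamper ACL on config folder"],
    issues_found=[],  # passing tests are documented too, for coverage
)
print(record)
```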

Conclusion

No doubt, in the absence of guided test scenarios, a testing approach and requirements, gaining product management support for security testing becomes a tough task. It comes with a myriad of motivational issues, as success is not expected to come overnight (not even over months).

To make security testing part of functional black-box testing, one first needs to create the necessary skill set, which can be built up through technical certifications, user group studies, and a basic understanding of security concepts and network/OS elements.

Treating security testing like any other testing type, rather than continuing to give it specialized treatment, would be a good start. Looking beyond network traffic, network security testing would also provide a good starting point. As mentioned in the beginning, in current times data/information loss is also a security breach.

So the thinking now needs to be on how this can be covered in security testing. □

Ashish Khandelwal has more than 5.5 years of software testing experience. He works as a Senior QA Engineer with the McAfee Host DLP product solution group. Being a CEH, he is interested in the latest security testing trends and continuously seeks to improve software security. Ashish is working towards becoming a technology solution consultant/architect by providing technical insight to different verticals in testing solutions.

Gunankar Tyagi, a Computer Science graduate, has around 2 years of experience solely in black-box testing. Gunankar has earned many accolades for his out-of-the-box testing skills and has proven himself a respected tester in a very short time. His areas of interest delve more into security testing.

Anjan Kumar Nayak has close to 8 years of software testing experience, the last 3 years at McAfee. As the Sr Project Lead – QA for the McAfee Host DLP product solution group, Anjan manages end-to-end quality assurance and interacts continuously with customers to understand and cater to their needs. A certified PMP, Anjan has presented at many international conferences and has been working continuously towards enriching the testing community’s knowledge base for the past 5 years. His areas of interest mainly involve test process improvement with statistical test management, end-point solution performance testing, and security testing.

> About the authors

Subscribe at:

www.securityacts.com

Page 20: Security Acts


Security Testing: Automated or Manual?

by Christian Navarrete

© cosma - Fotolia.com

One of the hottest and most discussed topics among people involved in the security testing field is this: Should security testing be based on automatic or manual methods? What is the truth about using these tools to detect vulnerabilities in systems, networks or applications? Can these tools help an organization obtain good security results; can they identify weaknesses in order to put in place the required measures/defences to prevent real attacks by potential intruders? Moreover, are these automated efforts enough to accomplish the objective of detecting real vulnerabilities? In this article, we will cover some aspects of web application security testing, and we will see how manual testing can be an essential element, working alongside automated testing procedures to reduce false positives and to better define real vulnerabilities, which ultimately has a significant impact on the overall security of the organization in question.

It is common practice within the industry to adopt a “run & report” approach to security testing, where the execution of an automated vulnerability assessment (AVA) tool is a common approach. The reports generated by such tools are often considered sufficient not just by mainstream companies but also by security specialists and contractors, and we question whether this approach leads to a false sense of security, since these results are often not double-checked manually. For some readers this is an experience they know all too well; for example, when an AVA monthly report delivered by an outsourcer includes ‘high vulnerability’ findings for an IIS web server even though the company only deploys Apache-based servers. This example demonstrates that small errors can lead to large gaps in a company’s security, and an equivalently large exposure to the subsequent potential threats.

Automated vs Manual Security Testing

Various automated security tools are on the market, many of them open-source, like Nessus (used for infrastructure vulnerability assessment) or Nikto (for web application vulnerability scanning). More sophisticated commercial tools exist containing advanced features like HTTP sniffing, fuzzers, session recording, and manual requesters. While both of these classes of tools have some useful features, this is just one piece of the puzzle. These tools are good at detecting infrastructure-based and application-based vulnerabilities, even in many home-made applications; however, they crucially fail to address manual and business logic testing. Guidance exists for both tool types (automated & manual) to assess the vulnerabilities: for automated tools, templates are often used (e.g. some vulnerability scanners are based on the OWASP and SANS Top 10 vulnerabilities) or they have their own predefined profile/template tests (which in turn look for infrastructure vulnerabilities, application vulnerabilities, weak passwords, high-risk alerts, etc.). For manual security testing, various standards exist which help to complete this process in the best possible manner, for example OWASP (Testing Guide Project – http://www.owasp.org/index.php/Category:OWASP_Testing_Project) and PCI DSS ref. 11.2, 11.3.1, 11.3.2 for web applications, and ISSAF (Information Systems Security Assessment Framework – http://www.oissg.org/issaf) for infrastructure testing, created as guidance for staff implementing such tests and initially reviewed globally by security professionals.

Manual and Business Logic Testing

What is manual and business logic testing? As indicated earlier, automated testing focuses on “technical” testing: the tool will process several (test) templates, which aim to detect application vulnerabilities like XSS, SQLi, generic injection attacks (HTML, LDAP, etc.), and CSRF, among others. For example, when assessing infrastructure, these tools will attempt to detect whether the web server is running a vulnerable version. What these script-based tools fail to address is illustrated by the case of an in-house developed solution (in this case a web server) which has been deployed; here the goal of testing should be to ensure that no vulnerabilities exist which would allow a client’s account to be compromised. It is easy to understand which event has the greater

Page 21: Security Acts


security impact: the XSS issue identified after authentication, or a parameter which does not validate the data provided by the user and is therefore a potential window for a fraud event. This is where manual (and business logic) testing becomes relevant.

So far, we have described the tools and how they work and detect well-known vulnerabilities, but what about vulnerabilities that exist but are not exposed to the internet, vulnerabilities which do not exist in a vulnerability database or which do not have a vulnerability identifier? What about undisclosed or undetected vulnerabilities in an internal application? Simple: manual testing.

Manual testing should be reinforced by business logic testing, which means that the testing should focus on the business perspective at the same time as on the technical side. Performing manual testing ensures that the tester does not just cover “technical issues”, such as infrastructure or injections, but also covers other complex vulnerabilities that are harder to detect with normal automated security scanning, no matter how regularly it is executed. The formula for accomplishing the task is to think in a business manner and create synergies with the technical aspects of security that the business involves. In the end this comes down to the web servers, application servers and all the elements that contribute to making the business secure in a technical way.

The Tools to do the Job

The toolset for manual testing basically consists of MITM (man-in-the-middle) tools such as Paros Proxy (http://www.parosproxy.org) or Burp Suite (http://portswigger.net/suite/), both free to download. These tools act as an intermediary between the tester’s browser and the target application. This way, the tester is able to tamper with or modify the HTTP request BEFORE it is sent to the application. Here is a quick example: what about an HTML form which has JavaScript-based validation? By using this type of tool, you have total control over the application.

If in our case you do not want to be restricted by this control, it can easily be bypassed by deleting the tags which load the validation control into the user's browser. Or another example: what if your logon application expects a simple username and the "user" sends a malicious SQL statement that executes operating system commands and opens a reverse shell? Interesting, right?
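To illustrate the point outside a proxy, the same bypass can be demonstrated by scripting the HTTP request directly, so that the page's JavaScript validation never runs. The following is a minimal Python sketch; the URL and field names are hypothetical, and the payload is only an illustration of the kind of input a client-side validator would normally reject - probe only applications you are authorized to assess.

import requests

TARGET = "http://testsite.example/login"  # hypothetical target URL

# A value that client-side JavaScript validation would normally reject
payload = {"username": "admin' OR '1'='1", "password": "anything"}

# The request is sent straight to the server; the browser-side validation
# never gets a chance to run
response = requests.post(TARGET, data=payload, timeout=10)

print(response.status_code)
print(len(response.text), "bytes returned")

Whether the server mishandles such input is exactly what the manual tester then inspects in the response.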

Then, Automated or Manual?

BOTH. By combining a fully featured, up-to-date automated tool running in parallel, armed with well-designed security test cases, with a MITM tool for manual work, the security testing team is well prepared to perform deep security analysis covering all the required aspects. This establishes a good security posture and helps to prevent internal and external attacks in the future. □

Christian Navarrete
Christian Navarrete has 10 years of experience in providing professional computer, network and internet security implementations in almost all states of México, the United States, Argentina, Perú, Ecuador, Chile and Venezuela, in various sectors such as finance, education, private industry and government. His core experience is focused on forensic analysis and penetration testing. He has also given presentations at several universities (including Instituto Politecnico Nacional). He recently participated at "b:Secure CSI-Monterrey", where he gave a presentation on the topic "Protection Strategies - Hacker Minded". He was also an exhibitor at the local hacker con BugCon, where he presented the progress of his new pentest framework "I-Ninja Pentex v1.0". Currently, Christian works as an Information Security Consultant performing web security testing for a well-known banking corporation. In his free time, he acts as forum administrator of the popular SecTester.Net forum [http://sectester.net] and is one of the official translators of the OWASP Spanish newsletter team.

> About the author



File Fuzzing – Employing File Content Corruption to Test Software Security

by Rahul Verma


Introduction

Fuzzing is about finding possible security issues with software through data corruption. The software in discussion might be a desktop application, a network daemon, an API or anything you could think of. Fuzzing is extensively used by security researchers and large-scale product development companies. It has become an essential part of the Security Development Life Cycle in many organizations, and is known to find a high percentage of security issues as compared to other techniques.

This paper focuses on file fuzzing, which is a special class of fuzzing dedicated to corrupting file formats. It is an easy-to-employ form of security testing and can be quickly put to work. Most software uses some sort of input in the form of files. The paper discusses the general uses and formats of such files and the data corruption strategies that can be employed. It starts with a brief introduction of fuzzing and related concepts and then digs deeper into the area of file fuzzing.

Issues and Challenges

Making testers aware of this technique is the first challenge. Understanding and implementing it is the next one!

Wikipedia defines fuzzing as:

“Fuzz Testing or Fuzzing is a software testing technique that pro-vides random data ("fuzz") to the inputs of a program. If the pro-gram fails (for example, by crashing, or by failing built-in code assertions), the defects can be noted.”

Let’s try to understand fuzzing a little further:

• Fuzzing as a security testing technique: As indicated by its definition, fuzzing is all about sending malformed data as input to an application to locate bugs. Such bugs typically result in crashes, which after analysis can lead to finding a vulnerability that makes the software exploitable in a certain way. A commonly discussed example of this sort is a buffer overflow vulnerability, which can allow an attacker to inject shellcode into the application at run time and make it execute malicious code, e.g. launch a remote shell.

• Fuzzers are anti-parsers: As they say in the security world – "All input is malicious". In terms of fuzzing, we try to generate all sorts of malformed/malicious data. The software employs a lot of parsing routines to interpret the input and take decisions, e.g. buffer allocation, making calculations, type conversions, action execution etc. Fuzzing is all about breaking false assumptions or faulty code in such parsers. When malformed data is passed, it can trigger misallocation of memory, or unintended interpretation of unsigned data in a signed context, causing buffer overflows and crashes. In code reviews, a problem may or may not map to a user input. In fuzzing, because such malformed data is directly tied to user input or variables that can be manipulated by a user, any such issues can be directly exploited.

• Fuzzing is essentially an automated testing technique: As the number of test cases executed can quickly become very large, it is an art to carry out fuzzing with a focus on areas with the highest likelihood of locating vulnerabilities. It involves prioritization of tests based on analysis of the application, related protocol(s) and past vulnerabilities in similar applications.

• Fuzzing employed for the McAfee Anti-virus Engine: The McAfee anti-virus engine is subjected to file fuzzing using an in-house built, engine-specific tool developed by Tony Bartram in C++. Fuzzing is also carried out targeting specific file formats in a protocol-aware fashion. For this purpose, custom scripts are developed for sample generation using Python or Perl. Dedicated hardware rigs are used to carry out "data corruption" tests running round the clock for weekly builds.


Getting Started

Fuzzing has been considered to fit in the category of grey-box testing because of the nature of analysis and automation involved, and it can be executed as black-box testing as well. The irony is that despite this fact, the term and the related implementation are mostly unknown to software testers.

There is a good chunk of fuzzing work that can be taken up by a software tester, against the general view of this being suitable only for security researchers. When testers locate a bug as part of their usual job, they are rarely responsible for analyzing which piece of code is actually responsible for the bug. Testers usually log the defect with test case details and their preliminary thoughts and analysis from outside the box. Fuzzing is no different, the only difference being the nature of the data that is submitted.

Fuzzing makes a software tester think beyond BVA (boundary value analysis) and ECP (equivalence class partitioning), it makes him redefine his view of input to the application, and it extends the traditional approach to testing by bringing in a lot of possible test areas.

Tools of the Trade

There are a lot of tools available for carrying out fuzzing of different kinds, namely file fuzzing, browser fuzzing, command line fuzzing, environment variable fuzzing, web application fuzzing, ActiveX fuzzing and so on. As the focus of the paper is file fuzzing, readers can specifically look at:

• FileFuzz, SpikeFile, NotSpikeFile (http://labs.idefense.com/software/fuzzing.php)

• General-purpose frameworks like Peach (http://peachfuzzer.com/) and Sulley (http://www.fuzzing.org/fuzzing-software), and

• Last but not least, an upcoming framework for the purpose – PyRAFT (http://pyraft.sourceforge.net), which is being actively developed by the author of this article.

Knowledge about what already exists in the area of fuzzing helps to understand practical implementations of different types of fuzzing. It helps in using or extending existing open-source tools, or in coming up with altogether new tools and frameworks by analyzing the code and execution methodology of existing ones.

If one looks specifically at file fuzzing and also at a specific kind of file, one can look for tools rather than frameworks. Even from a development perspective, writing a tool for a specific purpose is a lot easier than writing a general-purpose framework because of all the design considerations involved.

Pre-Requisites in Terms of Knowledge

Listed below are some of the concepts/technologies that one should be aware of before stepping into fuzzing:

• The essence of security testing and common input-based attacks
• Using a hex editor
• A programming language of choice. Python is common in the world of fuzzing now. Older fuzzers were developed in C, but one can find some implementations in C#, Java and Perl as well.
• How architecture (little endian/big endian) impacts binary packing of data (see the short sketch after this list)
• Programmatically dealing with binary files (reading and writing data) as per defined data types
• Knowledge of concepts and modules related to hashing and compression
• Patience…a lot of it. When developing samples or understanding file formats, it is all about hex data and not about fancy GUI-based testing. One has to be very patient during the file format analysis phase of file fuzzing.
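As a small illustration of the endianness point above, the following sketch uses Python's standard struct module to show how the same 32-bit value is packed differently depending on byte order - something a file fuzzer has to get right when writing binary samples.

import struct

value = 0x12345678

little = struct.pack("<I", value)  # little-endian unsigned 32-bit integer
big = struct.pack(">I", value)     # big-endian unsigned 32-bit integer

print(little.hex())  # 78563412
print(big.hex())     # 12345678

# Reading the data back with the same byte order recovers the original value
print(hex(struct.unpack("<I", little)[0]))  # 0x12345678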

Purpose of Input Files in Software

Software uses input files for various reasons. Some of the most common uses that can be thought of are the following:

• An office productivity suite like MS Office or OpenOffice is all about creating and publishing files, e.g. documents, spreadsheets, presentations etc.
• A media player uses media files of different formats to play audio/video
• A browser uses HTML/XML/CSS files to show web content
• Anti-virus software uses virus definition files to detect malware
• License files are used to determine validity/expiry of software
• Configuration files are used by web servers
• Temporary files are written to disk by software to be read at a later stage

Understanding File Formats

At a high level we can classify file formats into two categories: text and binary.

• Text Formats: These can take two common forms. One form can typically be seen in configuration files, in which a plain text file is used where each line corresponds to a configuration setting and has a key-value pair separated by "::" or "=" or "=>" etc. This format can also be seen in log readers, where each line in the log is comma-separated content of various parameters. These days, XML is more popular for defining configuration files. It gives the freedom of creating a much more complex structure, e.g. nested definitions. Other text formats are HTML files, which are again mark-up language based files with tags and attributes.

• Binary Formats: These formats are more complex to analyze and are not human-readable. They can simply consist of binary packed data as per a set protocol, or they can be compiled data as well, using proprietary compilers. One common format that binary files follow is a TLV format – Type-Length-Value, where type is based on identifiers recognized by the software, length gives the length of the data that follows, and then comes the value, i.e. the data itself. Typically, the type and length fields have a fixed number of bytes allocated to them, and the data part is flexible and dependent on the length field. Such records are put in sequence, one after another (a small TLV sketch follows below).

A very complex format of this nature is the SWF format, which has tag identifiers based on a tag record header that takes 2 bytes. Instead of consuming the full 2 bytes, the format takes the first 10 bits as the tag identifier and the next 6 bits as the tag length. Amazing, isn't it? This is true for short tag formats; for long formats the approach is changed.

Until we know the format of a file, it is like a black box, and the data corruption is also black-box corruption. As soon as you start to understand the file format and start data corruption by taking the dependencies of the file format into consideration, it becomes grey-box fuzzing. You do not need to know the code that deals with it; the high-level logic of how the data is interpreted will suffice.
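The following is a minimal Python sketch of the TLV idea discussed above. The exact layout (a 2-byte type and a 4-byte little-endian length, followed by the raw data) is an assumption made for illustration, not a specific real-world format; it simply shows how a record is built and how a single mutation - lying about the length - already produces an interesting test case for a parser that trusts that field.

import struct

def build_tlv(rec_type: int, data: bytes) -> bytes:
    # 2-byte type, 4-byte length, then the data itself (little-endian header)
    return struct.pack("<HI", rec_type, len(data)) + data

good = build_tlv(0x0001, b"hello world")

# Dependency-breaking mutation: keep the data but claim a huge length,
# a classic way to trip up parsers that allocate buffers from this field
broken = struct.pack("<HI", 0x0001, 0xFFFFFFFF) + b"hello world"

print(good.hex())
print(broken.hex())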

File Fuzzing – Putting TIGEMA on the Job

Fuzzing is easier to understand if we split the process into steps. The fuzzing process can be remembered with the TIGEMA mnemonic, which stands for:

• T – Target(s)
• I – Input Vectors
• G – Generate
• E – Execute
• M – Monitor
• A – Analyze

Each of the above describes a distinct step in the process of fuzzing. One or more of them might work in conjunction or in parallel with each other. Figure 1 gives a visual snapshot of these steps in conjunction with each other. The following sections discuss the steps in detail with a view to file fuzzing (see Figure 1).

• Identify Targets: This step can be approached in two ways in file fuzzing:
○ Identify the software to be fuzzed. Identify all input files that it takes. Shortlist the formats to be fuzzed. Fuzz them.
○ Identify the file format to be fuzzed. Identify all software that supports the identified file format. Shortlist the software to be fuzzed. Fuzz it.
The first approach is typically employed when testing the security of the product one is working on. Security researchers who find vulnerabilities in third-party software employ both of the above approaches based on the context.

• Identify Input Vectors (Files): In the case of file fuzzing, the type of input vector is a file, but there can be multiple files that one wants to fuzz, e.g. an anti-virus would scan almost all existing known formats. So, when fuzzing anti-virus software, one would typically fuzz a mixed set of formats. There are situations when you fuzz different file formats for the same software, but each of them has a separate purpose, e.g. a configuration file, a license file, the primary file format (document/media file) etc.

• Generate Fuzz Data: This is the step where actual fuzzer development comes into the picture. Based on the inputs chosen, you make decisions about the kind of fuzzing you want to employ. This governs the quality and quantity of fuzzed data you will receive for the inputs you have identified.

• Execute: At this step you send (publish) the fuzzed data (for an input vector) to the target application. This might be a post-generation process, or it might run along with the generation process. (A minimal sketch of this execute-and-monitor loop follows the list below.)
In the former, you first generate all the fuzzed data and write it to an output file, and later send this data one by one to the application. You might require a lot of disk space in this case, depending on the kind of fuzzing. If you are fuzzing file formats and the size of the file is large, you might end up consuming a lot of disk space (or at worst running out of disk space). In some cases, this option might not be feasible at all.
So, in the case of file fuzzing, the latter approach is followed. You generate fuzz data and send it to the application. If the application crashes, a copy of the data is retained and the next fuzz iteration gets executed; otherwise the data is ignored (or deleted if on disk) and the fuzzing process is continued. This way, only the fuzz data which is problematic is retained on the disk (in the form of files/database entries etc.).

• Monitor: This is done while you are sending the data to the application. This typically involves a debugger being attached to the application right from the beginning of the test. It might also involve monitoring the resource utilization on the box. If there is a crash, the fuzzer should be able to know about it. The debugger takes a dump of the application in the event of a crash for later analysis. The fuzzer then launches the application again, attaches the debugger and proceeds to the next fuzzing step.
The fuzzer should have a component which puts a cut-off limit on the running time of the application (called a time threshold) and monitors the related process. Time thresholds help in killing an application and proceeding with the next test case as a part of the normal fuzzing process.


• Analyze: The crash dump and the fuzz data that caused it are taken for analysis at this stage. This is typically taken up by a security researcher and/or a development team with knowledge of vulnerability analysis.
The software tester's job at this stage is providing the required data to the mentioned team. Based on interest, a tester can learn basic crash dump analysis and be of further help.
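The sketch below ties the Execute and Monitor steps together in a few lines of Python: mutate a known-good baseline file, run the target against it with a time threshold, and keep only the samples that caused a crash. The target path and baseline file are hypothetical placeholders, and the crash check (a negative return code, i.e. the process was killed by a signal on POSIX systems) is deliberately simplistic compared to attaching a real debugger.

import random
import shutil
import subprocess

TARGET = "/path/to/target_app"   # hypothetical application under test
BASELINE = "baseline.bin"        # known-good input file
TIME_THRESHOLD = 5               # seconds before the process is killed
ITERATIONS = 100

with open(BASELINE, "rb") as f:
    good = bytearray(f.read())

for i in range(ITERATIONS):
    sample = bytearray(good)
    # Blind mutation: flip a handful of random bytes
    for _ in range(random.randint(1, 8)):
        sample[random.randrange(len(sample))] = random.randrange(256)

    with open("current.bin", "wb") as f:
        f.write(sample)

    try:
        result = subprocess.run([TARGET, "current.bin"], timeout=TIME_THRESHOLD)
        crashed = result.returncode < 0  # killed by a signal (POSIX)
    except subprocess.TimeoutExpired:
        crashed = False                  # a hang, not a crash; move on

    if crashed:
        shutil.copy("current.bin", "crash_%04d.bin" % i)
        print("iteration %d: crashing sample retained" % i)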

Approaches for File Fuzzing

There are many factors which govern the way file fuzzing will be implemented. Some of the key factors one needs to consider are:

• Specific File Fuzzing versus a General File Fuzzer: Fuzzing a specific file format can be done using quick scripting with no time spent on designing reusable components, but when looking at fuzzing more than one file format using the same tool, framework design has to be considered and has to be split into classes/functions that can be employed when executing file fuzzing of various sorts.

• OS Platform: The type of OS platform has a large impact on the way the tool is designed, because the fuzzer needs to understand how the OS handles the process, what debugging options are available, how resources can be monitored etc.

• Data Corruption Method – Generation versus Mutation: At a broad level, a fuzzer can produce fuzz data in two ways – generation and mutation. In generation, the complete protocol is generated from scratch based on the knowledge of the protocol built into the fuzzer. This requires a lot of groundwork to be done by reading relevant manuals and analysis. If no such published data is available, you will have to resort to reverse engineering skills, which most of the time is quite a complex task.
The advantage is that you get complete control over the protocol and can get good code coverage.
Mutation is about capturing good data and then fuzzing various sections of the data. For this you use a baselined "good file" for mutating. The advantage is that you can get started with fuzzing efforts quickly, but one must take care of internal dependencies of the fields and optimum code coverage.

• File Format – Ignoring versus Handling Internal Dependencies: In many protocols there are fields that are dependent on other fields, e.g. they might include lengths, checksums etc. If you choose to abide by these conditions, the fuzzing process gets a little trickier than otherwise. A suggested way is to build these checks into the fuzzer you build and carry out tests in both ways – breaking the dependencies and abiding by them (a small sketch of recomputing such dependent fields follows this list). This helps to unearth any false assumptions and also makes sure that the correct parsers are hit (of course by increasing the number of test cases executed significantly).

• Blind Fuzzing versus Format-aware Fuzzing: A blind fuzzer has no knowledge of the underlying protocol. It is assigned the task of blindly corrupting or generating a data packet and sending it to the application. This is very easy to build, but results in wastage of CPU cycles and time by generating and testing data that is rejected outright, sometimes long before it actually reaches the target. A protocol-aware fuzzer is complex to build, but is more reliable and result-oriented.
Blind fuzzers are usually tied to the mutation approach, and protocol-aware fuzzers are tied to the generation approach (or to the mutation approach while abiding by the dependencies of the fields).
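As a small illustration of the dependency question, the sketch below (referenced from the list above) recomputes a length and a CRC32 checksum after corrupting the payload. The wrapping format - a 4-byte little-endian length, a 4-byte CRC32, then the payload - is assumed purely for illustration; the point is the difference between a dependency-breaking sample and a dependency-abiding one that lets the fuzz data reach deeper parsing code.

import random
import struct
import zlib

def wrap(payload: bytes) -> bytes:
    # Assumed illustrative header: 4-byte length + 4-byte CRC32, little-endian
    return struct.pack("<II", len(payload), zlib.crc32(payload)) + payload

original = b"some well-formed payload"

mutated = bytearray(original)
mutated[random.randrange(len(mutated))] ^= 0xFF  # corrupt a single byte

# Dependency-breaking: old header, new payload (the checksum is now wrong)
breaking = wrap(original)[:8] + bytes(mutated)

# Dependency-abiding: the header is recomputed, so the parser goes deeper
abiding = wrap(bytes(mutated))

print(breaking.hex())
print(abiding.hex())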

Conclusion

All in all, file fuzzing (or fuzzing in general) is a good and easy way to test software for security issues. A software tester can further contribute in the area by brushing up skills on threat modeling for analyzing various input vectors and associated threats, code coverage to check the effectiveness of the fuzzing tool, core dump analysis to understand the cause of the crashes captured, and vulnerability analysis to associate crashes with a possible vulnerability that could be exploited.

Fuzzing should not be thought of as a replacement for other forms of testing. It should be a new form of testing added to the existing tests being conducted.□

Figure 1: Steps in Fuzzing


Definitions, Abbreviations and Acronyms

Term – Description

Fuzzing – Fuzz Testing or Fuzzing is a software testing technique that provides random data ("fuzz") to the inputs of a program. If the program fails (for example, by crashing, or by failing built-in code assertions), the defects can be noted.

Black Box Testing – Testing an application with little or no knowledge of the underlying implementation.

Grey Box Testing – Testing an application with knowledge of logic/high-level implementation, but not of the exact code.

Threat Modeling – A method of assessing and documenting the security risks of a software application.

Vulnerability – A security exposure in an operating system or other system software or application software component.

Threat – The possibility that a vulnerability may be exploited to cause harm to a system, environment, or personnel.

References

• Fuzzing: Brute Force Vulnerability Assessment – A book dedicated to the art of fuzzing by Sutton Michael, Greene Adam, Amini Pedram. Published by Addison-Wesley Professional.
• Building a Fuzzing Framework: A Primer for Software Testers – Paper written by Rahul Verma (author of this paper) dealing with building a fuzzing framework. Selected for the TEST2008 Conference.
• Wikipedia: Fuzz Testing – http://en.wikipedia.org/wiki/Fuzz_testing
• Fuzzing.org: Fuzzing Software – http://www.fuzzing.org/fuzzing-software

Rahul Verma
With more than 7 years of experience in the industry, Rahul has explored the areas of security testing, large-scale performance engineering and database migration projects. He currently leads the Anti-Malware Core QA team (MIC Labs) at McAfee India as a Senior Technical Lead. He is a core member of the McAfee Global Performance Testing Team and a Python trainer in the McAfee Automation Club. Rahul has presented at several conferences and organizations including CONQUEST 2009 (Germany), STeP-IN, ISQT, TEST2008, Yahoo! India, McAfee, Applabs and STIG. His recent presentations were on the subjects of fuzzing, performance engineering COE, web application security, and user behavior and performance perception analysis (UBPPA). He received the Testing Thought Leadership Award at the TEST2008 conference for his Testing Perspective website (www.testingperspective.com), along with the Best Innovative Paper Award for his paper on the design of fuzzing frameworks. Rahul is a member of the Indian Testing Board and is associated as author/reviewer for Foundation and Advanced Level Certifications by ISTQB. Rahul holds a B.Tech degree from REC Jalandhar (India). He has been associated with professional theatre, music, poetry and stage anchoring for more than 12 years.

> About the author



Column

IT Security Micro Governance – A Practical Alternative
Prof. Dr. Sachar Paulus, Professor for Corporate Security & Risk Management

Just a few days ago, Microsoft had to admit serious security issues in almost all of its web-enabled products, not only in the browser, but also in e-mail and other productivity applications. The recommendation of the German Federal Office for Information Security (BSI) was not to use products that use the browsing engine of Microsoft's Internet Explorer, including the browser itself in versions 6, 7 and 8.

Now, this is obviously a real threat to internet technology. Not so much the existence of the flaw itself - as most of you surely know, there is no such thing as 100% secure software - but the fact that the internet-enabling of more and more applications adds additional risk. Let me explain this: of course, you can use another browser and other e-mail software, but would you really consider replacing the most standardized office productivity suite? Let alone that when switching applications most of the formatting will be gone? So, the answer is probably no - and you will live with the risk of being attacked until the vendor has supplied patches solving the problem.

There is a risk in using different tools for different purposes, simply because there might be more flaws and attack vectors that have to be controlled. But using the same engine in different products is also risky, because patching probably won't happen at the same time. As long as there exists an overview of where these components are used, the risk - while higher - can still be controlled. But as soon as one loses control over the usage of the components, not only the risk, but also the probability of a communication crisis increases substantially.

By the way, note that Microsoft did an excellent job in managing the discovery of the flaw. It was communicated to Microsoft using "responsible disclosure", which is the best way to address security flaws (the researcher spoke directly to Microsoft and did not publish the flaw directly, in order not to give potential attackers too much time to develop attack software). However, this does not help if one needs a number of weeks to identify in which products the code is actually used - and consequently where the flaw is present.

So the lessons learned from this issue are:

1. Keep track of where code is re-used.
2. Implement a responsible disclosure strategy with your researcher community.
3. Be able to develop and install patches addressing the same issue for multiple products simultaneously.

Obviously, developing secure software is more than just performing input encoding and avoiding buffer overruns...

Sachar Paulus is Professor for Corporate Security and Risk Management in the department for Business Administration at Brandenburg University of Applied Sciences. Sachar Paulus has a Ph.D. in number theory and several publications on cryptography. He has been in the business for more than 13 years, 8 of which with SAP, the world's largest business software manufacturer, where he held various positions related to security, among others Senior Vice President Product Security and Chief Security Officer. He was a member of the RISEPTIS advisory board of the EC, a member of ENISA's permanent stakeholder group, and is one of the authors of the Draft Report of the Task Force on Interdisciplinary Research Activities applicable to the Future Internet of the EC. He is also President of ISSECO, a not-for-profit organization aiming at driving secure software development and standardizing qualification around secure software engineering.

> About the author


Avoiding loss of sensitive information – as simple as 1-2-3
by Peter Davin

Information loss protection is often considered to be a costly pain, and such projects are often given low priority. Even if security measures are required by regional legislation, it's often difficult for companies to know where to start.

What’s the problem?

IT security is about risk management: the cost of taking the risk of a major data breach versus the cost of avoiding the risk by implementing a data leak prevention solution. It is quite similar to getting insurance or taking the chance. A breach of confidential information is, however, more than just fines and penalties, as it reflects poorly upon the company's credibility and brings negative media exposure, which equates to reducing the overall corporate value of the organization. But what happens when a company loses confidential information about individuals, be it employees or clients? Even though strict laws and regulations are in place, many companies have still not implemented any processes to prevent information from being lost or stolen. If the consequences of losing sensitive information are not clear, companies may take on the risk by taking no action until a serious incident has occurred.

What can be done?

Solutions addressing loss of sensitive information usually go under the name of Data Leak Prevention (DLP).

DLP simply means making sure that sensitive data does not leave the organization's network unsecured, and that only the "right" people have access to the right information. In the last few years, companies and organizations have spent huge sums of money with a view to keeping the bad guys out of their networks by investing in firewalls and other filter technologies to protect against hackers, viruses, spam and spyware. A Ponemon study from 2009 shows that employees leaving a company are considered to be one of the largest security threats for organizations. So now it is time to look inward, and to monitor the workflow processes of information within the network and the protection methods when critical information is stored and/or sent outside the enterprise network.

What to do?

Today, IT directors and security professionals focus their attention on stopping information from leaking out of the network. And that challenge is much greater compared to inbound protection issues.

This challenge cannot be solved with technology solutions alone. Constantly informing and educating employees regarding the importance of handling information in a secure way will be necessary. Integrated coaching mechanisms, where employees are notified that some actions might expose security risks, will become more common. An example is where a user sends an email containing confidential information such as bank details or insurance numbers. Content control mechanisms can detect that the content is likely to be sensitive, and suggest or even enforce encryption of the email. Another example might be where a user inserts a USB memory stick into the computer, and the system recognizes that the memory stick needs to be encrypted before letting the user store information on it. This again gives the user the ability to choose to encrypt or not to use the memory stick. In other words, the user is presented with a solution instead of a problem. These types of personalized messages, which inform and even educate users, would quickly become ineffective if they remained entirely static. They require continuous updating of the content in such a way that the user absorbs the information every time and does not just instinctively click past it.
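As a rough illustration of such a content control mechanism, the sketch below scans an outgoing message for patterns that look like sensitive data and flags it for encryption. The two regular expressions are deliberately naive placeholders; real DLP products use far richer detection (dictionaries, document fingerprints, statistical classifiers) than anything shown here.

import re

PATTERNS = {
    "IBAN-like number": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "16-digit card-like number": re.compile(r"\b(?:\d[ -]?){16}\b"),
}

def sensitive_hits(message: str) -> list:
    """Return the names of all sensitive-looking patterns found in the message."""
    return [name for name, rx in PATTERNS.items() if rx.search(message)]

outgoing = "Hi, my account is DE44500105175407324931, please transfer the fee."
hits = sensitive_hits(outgoing)
if hits:
    print("Suggest encryption before sending; detected:", ", ".join(hits))
else:
    print("No sensitive patterns detected.")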

Where do we start?

Several of the leading suppliers of DLP solutions have developed security platforms for companies and organizations so they can easily find and implement the right product for their needs. Some of the security providers are focused on developing modular platforms that allow companies and organizations to begin implementing DLP solutions where they are needed the most. The user/administrator can gradually let the solution include a growing number of user groups and security modules. With the help of such a security platform, the company can manage its DLP solution centrally. A modular security platform is one of the most popular platform solutions today, since the initial cost is low and it can easily grow as needs increase.

Conclusion

The threats are real, and they can be very costly. However, a simple, cost-effective solution is available. It's just a matter of getting started, and with SEP, getting started has never been easier. So, just like you make sure that you have adequate insurance for your organization to avoid risks, make sure you also have an "insurance" against data leak risks! □

Peter Davin is CEO of the Swedish company Cryptzone AB. He has worked in the software and communications industries in Scandinavia and the US for many years and is considered one of the veterans within the field of Data Leak Prevention/Information Protection.

Peter has considerable experience and knowledge of developing and leading medium-sized businesses and has helped to launch over 20 companies during his career. In 2001 he started Secured eMail with the intention of creating a company which would provide an affordable, easy-to-use, yet technologically advanced solution for securing email communications. This idea quickly evolved into many more products and solutions. The company eventually developed into Cryptzone, which is now a public company listed on the stock exchange in Stockholm, Sweden.

In late 2009, Cryptzone acquired the IT security company AppGate Network Security and Peter is currently working as CEO of both companies.

A graduate in engineering, Peter has an MBA from Penn State University, USA, and a BA from the University of Gothenburg, Sweden. He is also a board member of several other companies in Europe.

> About the author



IT Security Micro Governance – A Practical Alternative
by Ron Lepofsky

Executive Summary

For most organizations, particularly medium and small institutions, IT Governance is difficult to initiate and maintain, as it is an ongoing process. There are many subject experts, vendors, and consultants that cater to implementation, but the inherent difficulties and complexities make implementing it an elusive goal for many.

Since Governance is, by definition, strategic and focused on long timeframes, it is not designed to deal with unexpected and potentially costly IT security threats - threats which can evolve into costly security events. A distraught client once described how a serious access breach within his organization could have been prevented if senior management had evaluated and acted upon his impromptu but appropriate recommendations to harden access controls.

The author proposes a modified process to mitigate threats that require funds exceeding the annual IT security budget, and calls this "micro Governance".

Definitions of IT Governance

IT Governance is a subset discipline of Corporate Governance focused on information technology (IT) systems and their performance and risk management. Various bodies of authority on the subject publish similar definitions of IT Governance, each with its own emphasis of intent. Four prominent authorities define IT governance on their web sites as follows:

1. ISACA: …provide the leadership, organizational structures and processes that ensure that the enterprise’s IT sustains and extends the enterprise’s strategies and objectives.

2. ITGI: … an effective IT governance framework that addresses strategic alignment, performance measurement, risk management, value delivery and resource management.

3. Forrester: … The act of establishing IT decision structures, processes, and communication mechanisms in support of the business objectives and tracking progress against fulfilling business obligations efficiently and consistently.

4. MIT Sloan School of Management: IT governance is the process by which firms align actions with their performance goals and assign accountability for those actions and their outcomes.

The three predominant frameworks for implementing IT Governance are provided by ISACA, ITIL and ISO. In a more granular view, the ISO 38500:2008 guiding principles are organized into three prime sections, specifically Scope, Framework and Guidance. The framework comprises definitions, principles and a model. It sets out six principles for good corporate governance of IT:

• Responsibility
• Strategy
• Acquisition
• Performance
• Conformance
• Human behaviour

Significance of IT Security Governance for Compliance

Compliance violations may attract all manner of liability directly affecting a governance committee, such as fines and confinement for SOX, revocation of interconnection agreements with electrical utilities for NERC CIP, and violation notices from third-party auditors for COBIT.

Examples of well known regulatory frameworks and compliance standards are as follows:

• Financial – SOX, Bill 109, Basel II, PCI, SAS 70
• Electrical infrastructure for North America – NERC CIP
• Privacy – PIPEDA, Red Flag, GLB
• Industry best practices – COBIT, ITIL


IT Security Micro Governance – A Practical Alternative

This covers the problems caused by insufficient Governance and the root causes of this problem.

Insufficient IT Governance Impedes the Security Team

In dynamic network environments, security issues can quickly appear where insufficient funds are planned to mitigate new security risks. An active IT Governance process is invaluable to deal with such issues.

Insufficient IT Governance:

• Slows decision making.
• Inhibits communication of risk and associated potential financial loss between the IT security team and executive management.
• Inhibits attaining unplanned, sufficient IT security funding.

Barriers to implementing IT Governance

Well known barriers to attaining IT governance are:

• The all-encompassing scope of any Governance is a daunting challenge to face.
• Expensive.
• Time consuming.
• IT security risk can be very difficult to quantify.
• The executives may find it difficult to request additional funds, particularly where the IT security team has done an excellent job and there are no expensive security vulnerabilities.
• A false sense of security makes cost justifying security budgets difficult.
• A Governance committee may get bogged down over confusion arising between identifying the content of compliance frameworks and compliance objectives.
• Turf wars over accepting / relegating ownership of responsibilities for various aspects of IT compliance.
• Maintaining longevity of the IT Governance process.

IT Security Micro Governance as a Practical Alternative

A simplified alternative that avoids the barriers mentioned above is to create a bite-sized micro process, which will provide the following value to a corporate entity:

• Minimizes the liability of executives with respect to their fiduciary responsibilities for IT Governance.
• Facilitates communications between the Governance body and the IT security team regarding cost justification of unplanned or insufficient budget.
• Provides a regular opportunity for the security team to convey top priorities with requests for expedited executive authorization.
• Provides a regular opportunity for executives to convey business priorities that affect IT-related risks directly to those responsible for physically managing those risks.
• Minimizes decision time and frustration levels by identifying bite-sized issues.

Steps to Implement IT Micro Governance

1. IT security should identify the top priority IT security risk(s) that require immediate decisions / funding by the executive team.
2. Estimate the ROI or potential cost avoidance by mitigating the risk(s).
3. Formally create a micro-Governance process to address the risk(s).
4. Engage a third party advisor to expedite the process.
5. Create a virtual (temporary) team to manage each risk management process.
6. Assign other management and employees as appropriate to the virtual team.
7. Identify a timeline to complete the project.
8. Identify a mechanism to test the degree of success of the mitigation.
9. Identify a timeline to report the degree of success back to the IT Governance Committee.
10. Assess whether ROI or cost avoidance goals were sufficiently met.*
11. Mandate longevity for the micro-Governance process by directing the virtual team to continue monitoring the process and reporting to the Governance Committee.
12. Integrate the process into the IT security operations / administration processes and disband the virtual team.

* It is difficult to obtain data that captures the prevention of a security threat based on a specific action taken. One empirical yet evidence-based method is to compare the frequency of similar threats before and after mitigation steps are implemented.

To assist with calculating IT security-related risk, ROI / cost avoidance, and residual risk, Governance Committees (and IT security professionals) can contract third party expertise in these matters.

Example Situation

The Problem Statement

1. A CIO of a fictitious company identifies weak identity management as a significant risk to the privacy and integrity of corporate information as well as to SOX compliance.

2. The problem has recently arisen due to several factors:

• The external corporate auditors introduced new IT audit control points for monitoring unauthorized and attempted unauthorized accesses to critical servers and critical applications.
• Corporate cost cutting has caused a reduction in the staff levels of the security administration group.
• A cost-cutting reorganization has dramatically changed employees' roles and needs to access various servers and applications.

• The group of recently terminated employees, which includes IT security administrators, has raised the potential threat of malicious activity from ex-employees, plus a diminished capacity for the corporation to adequately administer access privileges.

3. There are insufficient funds for a comprehensive upgrade to the identity management infrastructure to ensure reasonable compliance for SOX.

4. The problem is further obfuscated as the lack of any major security breach makes it appear to senior executives that there are no security threats.

5. Nonexistent IT Governance means decision making about the new risk will be delayed until the next year's budget cycle.

IT Micro-Governance Solution

1. If the corporation does in fact have an IT Governance committee that is amenable to reacting quickly with micro-Governance decisions, then the CIO can identify to the Governance committee the business risks relating to weak identity management.
2. The Governance committee works with the CIO to estimate the cost to the corporation in the event of a security event at $5,000,000 per incident.
3. They build a business case modeled upon the chance of a security event occurring once per year.
a. The CIO estimates the first year annual cost to technically mitigate the risk at $100,000, and $50,000 annually thereafter.
b. The first year mitigation cost / annual loss expectation is $100,000 / $5,000,000 or 2%, and 1% thereafter (see the short calculation after this list).
c. The Governance committee decides the return is acceptable.
4. The IT Governance committee formally creates a specific task force and IT micro-Governance process to mitigate the identity management risk.
5. They engage a third party advisor to expedite the process, so that an aggressive date for fully tested implementation is 6 months.
6. They appoint virtual team leaders to manage each risk management process. The team leaders are comprised of two members of the IT Governance committee, the CIO, three members of the IT security team, 6 business line managers, a member of HR and a member of the CFO's team. They also have external security consultants and auditors to assist with testing and evaluating the effectiveness of the new process.
7. The virtual team leaders assign other employees to implement the project and to create an ongoing process to monitor, manage, and report on the proposed identity management process.
8. The team creates a detailed project plan to complete the project.
9. The third party consultants and auditors work with the team right from the beginning to design processes and mechanisms to test and report on the degree of success of the new identity management process.
10. The virtual team and IT Governance committee create a schedule for reporting / feedback / direction meetings as oversight for the new process, including:
a. Evaluating the degree of success of the initial implementation.
b. A subset of the virtual team continues to monitor and report to the Governance committee.
c. A third party with expertise in calculating IT security risk is assigned the task of re-evaluating the initial ROI or cost avoidance business model in terms of:
i. Was risk correctly estimated?
ii. Is there an ongoing evaluation of the degree of risk reduction?
iii. Can the new process and its budget be integrated into IT security operations / administration? Can the virtual team be disbanded?
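For completeness, the ratio used in step 3.b can be reproduced in a couple of lines; the figures below are simply the illustrative numbers from the example above.

annual_loss_expectancy = 5000000  # estimated cost of one incident per year
first_year_mitigation = 100000
ongoing_mitigation = 50000

print("Year 1: %.0f%%" % (100.0 * first_year_mitigation / annual_loss_expectancy))  # 2%
print("Later:  %.0f%%" % (100.0 * ongoing_mitigation / annual_loss_expectancy))     # 1%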

Conclusion

Keep it simple. □

Ron Lepofsky, B.A.Sc. (Mech Eng), CISSP
Owner, ERE Information Security and Compliance Auditors. Founder and President of an information security audit and compliance company since 2000. The company is called ERE Information Security and Compliance Auditors.

Previously founder and President of a data telecommunications services and product sales company called PTI Telecommunications, founded in 1989. Graduated in Mechanical Engineering, University of Toronto. Sales representative for high-tech companies until 1989, for: Digital Equipment of Canada Ltd., Timeplex Canada Limited, and Data General Canada Ltd.

I contribute articles to publishers of information security, legal, and electrical utility periodicals, and frequently speak at similarly related conferences.

Specialties: IT security audits, forensics, server hardening, pen tests, external vulnerability assessments, gap analysis, network architecture security, policy, web sites, wireless, USB, employee internet abuse, laptops, firewalls, VPNs. Risk analysis. Compliance audits: SOX security, Bill 198 security, ISO 17799, CobiT, COSO, ITIL. Privacy audits: PIPEDA, HIPAA. Perpetual audit / monitoring of network security and compliance. Writing security policy and procedures.

> About the author

Sources of Information - Governance Authorities

• ISACA (Information Systems Audit and Control Association) – www.isaca.org
• ITGI (IT Governance Institute) – www.itgi.org
• Gartner Group – www.gartner.com
• IBM – www-935.ibm.com/services/us/index.wss/offering/its/a1031003
• SANS (SysAdmin, Audit, Network, Security Institute) – www.sans.org/reading_room/whitepapers/casestudies/corporate_governance_and_information_security_1382
• The IT Metrics and Productivity Institute – http://www.itmpi.org/default.aspx?pageid=198
• MIT Sloan School of Management – http://web.mit.edu/cisr/working%20papers/cisrwp349.pdf


The CSO's Myopia
by Jordan M. Bonagura


Before reading this article, imagine what it would be like to be able to manage your own company without your customers’ data, or, imagine what it would be like if your competitors got hold of these data…

Well, it has long been established that data are extremely valuable for companies. Your customers' databases and the experience they have acquired through the years are fundamental, and they represent a great competitive advantage in this new corporate era. With this in mind, we can see the importance of implementing specific policies in order to build a base which will guarantee that these data are safe.

There has been a recent increase in incidents related to security issues, IT management has become more and more complex, and as a result the need for a new kind of professional, the CSO, has emerged.

The CSO has become the person responsible for risk areas and data security, and also for the definition and implementation of the security strategies and policies that the company will adopt.

Such policies are developed to reduce risks and their impacts, and limit exposure to liability in all areas.

Figure 1 shows the direct relation between security enhancement and risk reduction: the higher the security, the lower the risks.

However, the major questions it raises are not about the need for good security professionals or the development of good policies. Every company must go through these steps when it decides to implement or organize such policies.

The "in-box" vision, commonly used at the time of creating these policies, is not enough to encompass the company's whole existing range of vulnerabilities. When we analyze the graphic published by Breach Security Labs in August 2009 in "The Web Hacking Incidents Database 2009", which shows the vulnerabilities that were the hackers' favorites during the first half of 2009, we obviously and automatically realize that a high percentage of incidents come from one particular breach, SQL Injection (19%) – an opening for data theft. I say obviously because, as previously mentioned, data is one of the company's most valuable assets.

What vulnerabilities do hackers use? (Source: Breach Security Labs)

Figure 1: the relation between security enhancement and risk reduction


Such analyses are extremely relevant for a CSO, since they make it possible to enhance and update the logical control mechanisms (firewalls, anti-virus, IDS/IPS, etc.) and thus reduce the risks relating to the well-known breaches that are addressed by the company's established policies. Furthermore, it becomes possible to take into account new ways of exploiting these breaches.

One specific risk I would like to briefly mention in this context is that there are people in charge of the administration, and people also make mistakes. Some might argue that policies exist for this purpose and that they are there to be carried out precisely by the employees, yet it is worth emphasizing that policies require continuous review, just as physical and logical mechanisms require updating. Also, competent professionals involved in security matters require constant training.

Everything sounds perfect now, doesn’t it?

Unfortunately not! Let’s refer to the Bible where we find the line that says “the foolish man who built his castle on the sand ...”

The major problem is that every security policy is developed with an "in box" vision, although a large range of well-known breaches are available "outside the box". In other words, the ones experiencing the problems are the ones who can't see them.

If the CSO simply relies on his own policy, he will not be able to see what it does not cover, and he will be deceived by his pseudo-security. That is what I call "CSO myopia". By believing in his defined policy, he thinks he can control the whole thing, when actually he is only controlling his whole policy.

I mean: “Sometimes we hide the key under the doormat and forget to lock the door…”

One of the main problems with this "myopia" is when we treat, for example, the risks concerning configuration and administration errors (Configuration/Admin Error (8%)) as in the graphic below.

What vulnerabilities do hackers use? (Source: Adapted from Breach Security Labs)

This sort of error, besides being considered a breach in itself, may ease the identification and consequent exploitation of other breaches. A practical example is the directory listing of a web server exposing database configuration files.
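A quick check for this particular misconfiguration can be scripted in a few lines. The sketch below, with a hypothetical URL, simply requests a directory and looks for the tell-tale markers of an index page; it is only an illustration, and such probes should obviously be run only against systems you are authorized to test.

import urllib.request

url = "http://testsite.example/config/"  # hypothetical path to check

with urllib.request.urlopen(url, timeout=10) as resp:
    body = resp.read().decode("utf-8", errors="replace").lower()

# Common markers produced by web servers with directory listing enabled
if "index of /" in body or "directory listing for" in body:
    print("Directory listing appears to be enabled at", url)
else:
    print("No obvious directory listing at", url)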

Calm down! Not all is lost…

It is often difficult to keep the "out box" view 100% of the time when you are dedicated to the "in box" and, above all, to the idea that everything is under control. A very important recommendation, in my opinion, is to resort to specialized consulting professionals (pentesting), who are experts at analyzing breaches which are not yet familiar to the company, and the different methods of exploiting the ones already considered by your present policy. Attitudes like this might contribute to reducing the problems coming from managerial myopia.

Keep alert, keep safe! □


Jordan M. Bonagura
Jordan M. Bonagura is a computer scientist, postgraduated in Business Strategic Management, Innovation and Teaching (teaching methodology and research). He works as a business consultant and researcher in information security with an emphasis on new breaches.

He is a lecturer in the area of information technology at various institutions, among them the Brazilian Institute of Advanced Technology (Veris/IBTA).

As a university professor he has conducted "in company" training at several nationally recognized organizations, among them the National Institute for Space Research (INPE).

> About the author


Security@University – Talking about ICT security with two CTOs


Universities are very interesting entities in the ICT area. For example, the number of users of their information systems is large: a “medium size” university has between five thousand and thirty thousand users. The services provided to these users are also numerous: web applications, e-learning, email, network storage, VPN access, wireless access, VoIP services, mobility services, single sign-on, inter-university services.

SG6 asked two Chief Technology Officers (CTOs) of two Spanish universities about their vision of information security and the role it plays in the deployment of their ICT services.

Profile: Name, Career, Position, Company, …

Diego Pérez Martínez, Bachelor of Engineering in Computer Science, Chief Technology Officer (CTO) at the Information and Communication Technologies Service, University of Almería

Francisco J. Sampalo Lainz, Bachelor of Engineering in Computer Science, Chief Technology Officer (CTO) at the Information and Communication Technologies Service, Technical University of Cartagena

University Profile: Name, Web, Students, Professors, Service Staff and ICT Staff

University of Almería: www.ual.es, 12500 students, 1000 professors, 500 service staff and 60 ICT staff

Technical University of Cartagena: www.upct.es, 6500 students, 550 professors, 375 service staff and 22 ICT staff

1. What are the main problems in the ICT area in an institution like yours?

The University of Almería, like all Spanish universities, is faced with a growing demand for services from its users. And for these services the users demand ever more availability, security, quality …

Moreover, the technologies and systems which support these services are becoming more complicated. In short: the work multiplies while the team remains basically the same.

Despite the economic crisis we find ourselves in, ICT funding for the University of Almería is growing thanks to the awareness of the Andalusian Government and the management team of the University. Money is therefore not a problem at this time.

In my opinion, there are three main problems with which the ICT area is struggling at the moment.

First. Alignment with the goals and priorities of the University. It is essential to reach an agreement that allows the efforts and resources devoted to the ICT area to be targeted in the same direction as the strategic objectives of the University, and to show managers the competitive advantages that ICT can bring.

Second. Interaction and communication with users. The “customer service” side, sometimes in the management of incidents, other times in requests for new developments, is probably the aspect we must spend more time on. It is therefore important to improve communication and information towards our users: training them in the use of computers and showing them the ICT services that are available to them. These actions will improve both the quality of service and the possibilities that users will discover.

Third. Quality and security in service deployment. Many times, and for various reasons, rapid and/or inexpensive deployments are performed. In such cases, the risk associated with offering an insecure or poorly developed service is not assessed. It is essential to devote sufficient time to the phases of analysis and design, ensuring quality and security in all phases of deployment. It is also necessary to explain to managers that this additional cost is ultimately profitable.

Besides these three problem areas, there are also other issues of a more internal or technical nature: for example, the lack of technical staff to support the demands of the University, the internal organization of the ICT staff, staff training, the short life cycle of current software systems, ...

Note that I am not talking about money, which is a common theme for all areas of any organization ;-)

2. What are the main challenges?

Spanish universities are facing what is probably the biggest challenge in recent decades: adapting the university, and particularly its information systems, to the requirements of the European Higher Education Area. The contents and the lecturers are no longer the center of the educational process; the focus of attention is shifting towards the student.

Moreover, students are very different from those of a few years ago. Most of them have a computer, they have considerable ICT skills, they participate in social networks, etc. The university has to adapt to this new kind of student; it needs to know how to approach them through Web 2.0 tools.

The challenge will be to find good solutions (I am not saying the best solutions) to the three problems raised above.

Also, I think another major challenge is to seek partnerships with other ICT staff, whether from universities or not, to develop common solutions to common problems.

3. Is information security one of the problems, or is it perhaps more of a challenge?

I do not know if it is the right view or not, but information security is perceived basically as a problem.

In my opinion, security is inherent in the development of any service; logically, depending on the criticality of the service or of the data being handled, we must give it a higher or lower security level.

Besides, I think by now we all know that security is not just a technical problem; we must also take into account the role of the users, and we have to ask that effort of them as well.

4. If you do not mind my asking, what are your goals in information security?

Guaranteeing the availability, integrity and confidentiality of information is a fundamental obligation of the ICT department of a university. I do not perceive security itself as a goal, but as a tool. The goal is to support existing systems (and implement new ones) which provide the functionality they should in an efficient, effective and, of course, secure way.

In line with what I said before, we could state the following goals: include security considerations in the design and specification phases of services, provide training for users, and cooperate/coordinate with other agencies (for example, IRIS-CERT).

5. What direct experiences have you had with information security?

As I mentioned before, security is taken into account in the design and implementation of all new information systems.

Moreover, we are aware that security requires specialization, which makes it difficult to have genuinely trained and up-to-date specialists on your own team. Furthermore, the team that designed and developed a system is not, in my opinion, the most appropriate team to audit its security. Even if they try to be totally impartial, a “contamination” still prevails which is impossible to avoid. For this reason the Information and Communication Technologies Service decided last year to carry out two security audits through external companies.

The ICT service of the University of Almería cooperates with external entities like CICA or RedIRIS in the resolution of incidents in which systems of our University might be involved.

I suppose that this question refers to actions aimed at managing security (and not to security incidents). In this case we can state the following:

Coordination with other institutions and participation in forums (RedIRIS).

Implementation and integration of open-source security tools (Snort, Nagios, etc.) on top of the OSSIM platform, using it as the management platform for security incidents.

Security audits of some of our public services, carried out by SG6 during 2008.

6. How do you rate these experiences? Which of them was the most positive? Is there a negative side?

The experience with the company SG6, which carried out the audits of our network and virtual campus, was absolutely satisfactory. In my experience, everything is positive: security holes are discovered, anticipating potential problems; in an indirect way the technicians get trained; and a security culture is formed which permeates the ICT Service.

I can’t mention anything negative. It is an experience which we will repeat.

All experiences have been positive; perhaps the only negative thing we could find, in connection with the implementation of OSSIM, was the limited documentation and experience available for some open-source platforms, and the consequent difficulty of integrating the different products.

7. If you had to give advice to a person who has never had any experience with information security, what would it be?

Humility. They should be aware that even if you are working very well there will always be a bug or a mistake through which the system can be attacked, and that an external audit carried out by good professionals is an opportunity, not a threat. They should open their minds and accept that nobody is perfect.

To work in cooperation, either with a company in the same industry or with trusted technology partners.

8. What do you associate with each of the following concepts?

Penetration test? A need.
LOPD (Spanish Data Protection Law)? A danger for companies, even if they act in good faith.
ISO 27001? Something desirable.

Penetration test? An information tool about the security status of the services; it must be used with a lot of caution and with guarantees.
LOPD (Spanish Data Protection Law)? I cannot say much: compulsory legislation that at least once a year makes us revise our infrastructure, our practices, our organization and the persons in charge of the data.
ISO 27001? A security standard for information systems; to be honest, even if it is wrong to say it, I have not read it.

9. Do you think that an organization like yours runs the risk of suffering an intrusion?

Yes. Universities, by their very nature, are very heterogeneous organizations with thousands of users, where one thing is of prime importance: availability.

Every one of us has read in the newspapers about security incidents in which universities are involved: in some cases as direct victims, when their information systems were attacked; in other cases as indirect victims, when their good name appears in connection with attacks launched from the university.

Of course, who does not?

10. Finally, looking ahead, what plans does your institution have in relation to information security for 2010?

First, finish implementing all the improvement actions arising from the past audits.

Second, continue carrying out partial security audits of our information systems.

Third, carry out the external audit that the Data Protection Law (LOPD) requires of us.

Fourth, study the possibility of obtaining ISO 27001 certification.

Besides covering the challenges we have mentioned above, we would like to secure the electronic administration services we are going to provide, as well as review the matters relating to compliance with the LOPD and adaptation to the new regulation (RD 1720/2007). □



Masthead

EDITOR

Díaz & Hilterscheid

Unternehmensberatung GmbH

Kurfürstendamm 179

10707 Berlin, Germany

Phone: +49 (0)30 74 76 28-0
Fax: +49 (0)30 74 76 28-99
E-Mail: [email protected]

Díaz & Hilterscheid is a member of “Verband der Zeitschriftenverleger Berlin-Brandenburg e.V.”

EDITORIAL

José Díaz

LAYOUT & DESIGN

Frenkelson Werbeagentur

WEBSITE

www.securityacts.com

ARTICLES & AUTHORS

[email protected]

ADVERTISEMENTS

[email protected]

PRICE

online version: free of charge

ISSN

ISSN 1869-4977

In all publications Díaz & Hilterscheid Unternehmensberatung GmbH makes every effort to respect the copyright of graphics and texts used, to make use of its own graphics and texts and to utilise public domain graphics and texts.

All brands and trademarks mentioned, where applicable, registered by third parties are subject without restriction to the provisions of ruling labelling legislation and the rights of ownership of the registered owners. The mere mention of a trademark in no way allows the conclusion to be drawn that it is not protected by the rights of third parties.

The copyright for published material created by Díaz & Hilterscheid Unternehmensberatung GmbH remains the author’s property. The duplication or use of such graphics or texts in other electronic or printed media is not permitted without the express consent of Díaz & Hilterscheid Unternehmensberatung GmbH.

The opinions expressed within the articles and contents herein do not necessarily express those of the publisher. Only the authors are responsible for the content of their articles.

No material in this publication may be reproduced in any form without permission. Reprints of individual articles available.

Index Of Advertisers

Díaz & Hilterscheid GmbH 16, 23, 36, 44
Cabildo de Gran Canaria 43
iSQI 5
SELA 32
Kanzlei Hilterscheid 29


Training with a View

also onsite training worldwide in German, English, Spanish, French at

http://training.diazhilterscheid.com/ [email protected]

“A casual lecture style by Mr. Lieblang, and dry, incisive comments in-between. My attention was correspondingly high. With this preparation the exam was easy.”

Mirko Gossler, T-Systems Multimedia Solutions GmbH

“Thanks for the entertaining introduction to a complex topic and the thorough preparation for the certification. Who would have thought that ravens and cockroaches can be so important in software testing”

Gerlinde Suling, Siemens AG

Kurfürstendamm, Berlin © Katrin Schülke

08.02.10-10.02.10 Certified Tester Foundation Level - Kompaktkurs Berlin
15.02.10-19.02.10 Certified Tester - TECHNICAL TEST ANALYST Berlin
22.02.10-25.02.10 Certified Tester Foundation Level Frankfurt am Main
22.02.10-26.02.10 Certified Tester Advanced Level - TESTMANAGER Düsseldorf/Ratingen
24.02.10-26.02.10 Certified Professional for Requirements Engineering - Foundation Level Berlin
01.03.10-03.03.10 ISSECO® - Certified Professional for Secure Software Engineering Berlin
08.03.10-10.03.10 Certified Tester Foundation Level - Kompaktkurs München
15.03.10-17.03.10 Certified Tester Foundation Level - Kompaktkurs Berlin
15.03.10-19.03.10 Certified Tester Advanced Level - TEST ANALYST Düsseldorf
22.03.10-26.03.10 Certified Tester Advanced Level - TESTMANAGER Berlin
12.04.10-15.04.10 Certified Tester Foundation Level Berlin
19.04.10-21.04.10 Certified Tester Foundation Level - Kompaktkurs Hamburg
21.04.10-23.04.10 Certified Professional for Requirements Engineering - Foundation Level Berlin
28.04.10-30.04.10 Certified Tester Foundation Level - Kompaktkurs Düsseldorf
03.05.10-07.05.10 Certified Tester Advanced Level - TESTMANAGER Frankfurt am Main
03.05.10-07.05.10 Certified Tester - TECHNICAL TEST ANALYST Berlin
10.05.10-12.05.10 Certified Tester Foundation Level - Kompaktkurs Berlin
17.05.10-21.05.10 Certified Tester Advanced Level - TEST ANALYST Berlin
07.06.10-09.06.10 Certified Tester Foundation Level - Kompaktkurs Hannover
09.06.10-11.06.10 Certified Professional for Requirements Engineering - Foundation Level Berlin
14.06.10-18.06.10 Certified Tester Advanced Level - TESTMANAGER Berlin
21.06.10-24.06.10 Certified Tester Foundation Level Dresden

- subject to modifications -


