
Requirements Engineering

Security Requirements: The Strange Relationship Between Application and Security Requirements

by Mike Libassi

Abstract

Security has taken a back seat to functional requirements, and the costs of this have been high. The standard methodology of functional and non-functional requirements is not helping to solve this. When security is addressed in a functional requirement, there is an assumption that the development team has the correct level of security expertise, and things are missed. At times security verification gets done at a later stage in development, or even after release, causing costly fixes.

Ways to move all aspects of security into the functional spotlight are the focus of much work, and there has been valuable research in this area. Some of that research is explored here, along with an idea that builds upon it: an "anti-user" process that may be able to help close the divide between functional and security requirements.


1. Introduction

At times security and business requirements run on different tracks and, worse, run different races with dissimilar goals. On occasion the security of a system's functions or users' data is listed upfront in the functional requirements. Other times security is a "back room" function that imposes restrictions or is listed as a non-functional requirement.

This inconsistency leaves a blurred boundary between functional and security requirements, a boundary that is crossed either by:

• An application's requirements have security needs that are pushed down into the infrastructure, or

• The infrastructure has security policies that are pushed up into an application's functionality.

This has affected how and when security is verified in the development lifecycle and makes impacts and efforts difficult to measure. This was also recognized by Cheng and Atlee (2007, p. 9): "To further add to the challenge, there is no consensus on the degree to which security requirements should be realized at the requirements level." There is a loss of security with this requirements gap; so how can this gap be closed or minimized? This paper is an evaluation of the issues and current research in this area, with some considerations of an "anti-user" process that could help minimize the gap.

1.1 Importance

We could keep the status quo and have security remain part of the non-functional requirements. However, we do encounter security holes by keeping security as a non-functional part of the process. For example, a functional requirement might state that the user must log in with a password, but give no guidelines on password strength. The details of password strength end up coming from a security department, or a policy, and a conflict occurs that forces requirements to change. The security policy could state that the password must be twelve or more characters while the page element only has room for ten, or only allows exactly twelve (missing the "or more" part). These missed changes can cause effort resizing or, worse, they are noticed during development and cause code rework that is more expensive than reworking requirements.

Does making security a non-functional requirement end up making security un-functional? It seems security gets attention only after an incident happens or toward the end of development. Only some security testing is aimed at a functional level, e.g., if a user enters an incorrect password, they get a bad-password message. Deeper security testing, like penetration testing, comes after deployment. Discoveries at this late stage produce costly repairs that could have been caught, and fixed, in requirements.

These fictitious examples are very close to reality: "roughly 50 percent of security problems are the result of design flaws" (Verdon & McGraw, 2004, p. 6). It is cheaper to correct mistakes early than after release.

1.2 Overview of major considerations

Security as an afterthought.

In the twenty years of requirements engineering, security, along with other aspects of a system such as performance, has been bound to the non-functional section of the process. A non-functional requirement is loosely defined as a system requirement that is not part of the behavior of the system. Security, however, is deemed a critical part of a system, even more so today, yet it is classified as not being system behavior. Unfortunately, non-functional requirements become an afterthought, and so does security.

Security is dropped onto requirements teams that have little to no security background or training.

There is an expectation that development teams contain security expertise. This is not completely false, especially where business rules uphold security or must comply with regulations. Some development teams may have great security knowledge and expertise. However, development direction is on functionality, so most process and documentation is tied to it. Testing also follows the functional road with functional and regression tests. Testing only the functional requirements leaves security holes to be tested after the system is built or, worse, after it is released.

Post-release security.

Pushing a complete system through with little to no security testing, only to evaluate it later with penetration testing, seems to be the norm. It is like building automotive safety in after the car is built, then crash-testing it to see where safety is lacking. Are we running development as crash-test dummies? Research that discusses this is reviewed further in the background section of this paper.

1.3 Overview of the paper

This paper is an exploration of these issues and of background research in security requirements, and it offers an exploratory view of possible improvements and changes in this area. The idea and architecture of the "anti-user requirement" is explored as both a process improvement and an automated system.

The paper covers the following:

• Section 2 reviews current work in the areas of security metrics and research into integrating security into the requirements process.

• Section 3 briefly covers areas of similar process and research that should be considered for further review.

• Section 4 covers considerations of the security gap and its effects, and presents a summary architecture for an "anti-user" process that leverages the works covered in Section 2.

• Section 5 presents a summary and recommendations.

2. Background

The gap between functional and security requirements is not a new topic, and existing research has looked for ways to alleviate the problems. Some of the approaches, like the Building Security In Maturity Model (BSIMM), offer guidelines to incorporate security higher in the development process. The following research has also looked at ways to improve security by examining the gap and offering new methods to close it.

Research by Bishop (2003) asks what security entails. Is it access controls, entitlement verification, data encryption standards, or password strength? Does a line need to be drawn between operational security goals and the security requirements of a project? A goal is to make security less of a mystery to all facets of development. Working toward this goal may help blur the line between back-room security and functional security requirements. These questions are important as we examine alternative security requirement methods.

The Security Quality Requirements Engineering (SQUARE) methodology by Mead, Viswanathan, and Zhan (n.d.) is an attempt to bring security requirements into mainstream requirements engineering by fitting it into the Rational Unified Process (RUP). The paper explores the nine steps of the SQUARE process and the problem it attempts to solve. The SQUARE process is worth examining, as it attempts to inject security into the requirements process by moving security requirements out of the non-functional area. This is in line with the goals discussed in this paper.

(Cheng & Atlee, 2007) researches into strategies that extend security into requirements

engineering in section 5.2. Section 2 talks about how requirements start ill-defined and gradually

become detailed. I’d add with no security specialists at the start the security requirements, along

with other non-functional requirements, can stay vague. Section 5.2 reenforces the problem of

missing security higher up the development process (Cheng & Atlee, 2007 p. 8), “Although

strategic, the threat-based approach to security requirements engineering is reactive and focuses

on low-level security requirements; there is no notion of a general security policy.” The section

goes on to support that there is no consensus on how much security should be realized at re-

quirements level; and concludes with open questions that are applicable here.

Ways of extracting security requirements early in the process are examined by Hallberg and Hallberg (2006) with the USeR method. The USeR process extracts security requirements from the text of functional requirements. The process is designed to run alongside normal requirements work and is used to help find security areas that may have been overlooked. This concept is part of the detailed considerations discussed in this paper.

Islam and Falcarin (2011) investigate measurements of security requirements based on risk analysis and goal/question metrics. This work examines the metrics side of the problem and supports the need for security metrics (Islam & Falcarin, 2011, p. 70): "Without a systematically defined approach for security measurement and without good metrics, it is hard to answer, how secure a software product is." The paper reviews the Goal Question Metric (GQM) framework. GQM is not an all-encompassing security measure, as no such thing exists; however, parts of its metrics could be used in future processes. This possibility is further examined in Section 4, where it might be leveraged in some manner in the "anti-user" process.

The process proposed by McDermott and Fox (1999) to capture and analyze object-oriented security requirements generates "abuse cases" to help verify security. The paper states that software practitioners with little security experience are being driven to implement security features; because of this, we see lacking security in delivered software. In addition, security is a non-functional requirement, so its priority gets pushed toward the end of the development process.

Lin et al. also cover the "anti" concept with anti-requirements (ARs) that are incorporated into abuse frames (Lin, Nuseibeh, Ince, Jackson, & Moffett, 2003). The abuse frames are used for early detection of current threats. The anti-requirement strives to push the collection of security issues earlier in the process. The work goes on to cover an abstract model of the abuse frames. The concept of pushing security requirements higher into the process is also reflected in other works and later in this paper.

Crook, Ince, Lin, and Nuseibeh (2002) put forth a vision where security becomes part of the requirements engineering community, along with a paradox (Crook et al., 2002, p. 203): "What is paradoxical is that there does not seem to be a wholehearted commitment by both academics and industry to treat this topic systematically at the top level of requirements engineering." The paper covers six current problems and calls out the anti-requirement, defined as (Crook et al., 2002, p. 203) "a requirement of a malicious user that subverts an existing requirement." This is a great concept, though not a new one; previous academic research has presented security policy models that unfortunately go ignored. The paper does a great exploration of why this is still happening. The anti-requirement is a large influence on the anti-user process discussed in this paper.

From examining what a security requirement entails, to moving it higher into the development process, to ways of measuring it: even with this research the area is still young, and future work remains. Other areas may also yield research that could benefit here.

3. Related Topics

There are several areas of research in security testing, security mechanisms, and security metrics. A few related areas that could possibly be leveraged include the Building Security In Maturity Model (BSIMM), the Common Vulnerability Scoring System (CVSS), the CERT vulnerability database, and the SANS vulnerability analysis scale.


The BSIMM describes 111 activities organized into twelve practice areas. In studies of 51 software security groups (SSGs), none had exactly the same structure, suggesting there is no single set way to structure a security group. However, some commonalities were observed that are worth studying when security is examined for improvements. The BSIMM is a source of ideas and general guidance, not a rigid framework or a silver bullet for security.

Scoring and metrics systems for security, such as CVSS, CERT, and SANS, exist and should be part of the development process in some fashion. A software security group may use these resources to understand the current threats that may affect the security of the software.

These related resources are not fully covered here, as each could easily fill an entire review. However, all of them influence, directly or indirectly, the ideas under consideration in this paper.

4. Detailed Considerations

An examination of the security gap and its impacts is presented here, along with an architecture summary for an "anti-user" process that may help fill this gap. As stated in the BSIMM examination, this is not a silver bullet for security and requirements. However, it may be leveraged in some manner to spark further innovation in the field.

4.1 The gap between user and system security


Having security as an afterthought creates a "security divide" that has become costly to cross or to ignore. It is not implied here that development teams do not care about security; rather, they are not afforded the experience or the time to address it. Nor should they be expected to be experts when other groups can be used to handle security. There are times when a development team addresses security requirements upfront, but only when security is part of the functional design. This makes the security gap not a binary one but a grey area that differs from team to team and is as unique as the software each team is developing. This is reflected in much of the current research, and well stated by Lin:

"However, what the security community has identified as important, but still lacking, is a precise notion of security requirements, a means of analyzing them, and a systematic approach to defining suitable problem boundaries in order to provide a focus for early security threat analysis." (Lin et al., 2003, p. 1)

When we look at where the question "is it secure?" is asked during development versus the cost to repair (Figure 1), it is clear that not addressing security further up in requirements is becoming costly.


Figure 1. Cost to Fix Over Time

Why does security take a backseat in development? One could speculate many reasons, such as there being no tangible benefit to paying for security upfront. Stakeholders pay for the minimum security, as it is difficult to justify the costs when the numbers roll up. The reasons go deeper than just costs; as stated by McDermott and Fox (1999, p. 1), "market forces are driving software practitioners who are not security specialists to develop software that requires security features." There is an assumption that when a security requirement is under development, the team is staffed with enough expertise to correctly develop and test the security. What further clouds this is that some teams have a varied level of security experience and understanding of possible issues. Development teams are not sitting around making security up as they go along; there is some level of security assessment. Yet the job of the development team is developing the software to meet the functional requirements, so partial security knowledge gets part-time attention.

Partial attention may also be due to the divide between functional and non-functional requirements. As set forth in requirements templates for years, security and other items, like performance and reliability, are bound to the non-functional section of the document. The tone this sets is clear, and though proposals to change requirements templates are not discussed here, that is a viable step to explore. Agile development breaks this pattern in some ways by allowing security stories to be added to the backlog alongside functional stories. However, without a security specialist on the team, the security stories may not come to be until after an incident happens on a live system.

This lack of a security specialist further widens the gap, and even security contracting firms may not completely help. When external security resources are used, it is typically later in development (as seen in Figure 1). The feedback loop from external security analysis can be partial, if present at all, leaving the development teams without a security "sage" to help guide them. This may be an extreme example, yet it is not as uncommon in the development industry as one might think. Impacts from this security gap have been documented.

4.2 Impacts

An example is an occurrence at a bank in Hastings, England. The system did not include an audit trail for address changes, and a clerk took advantage of this flaw to withdraw a large amount of money from a customer's account. After profiling a customer, the clerk changed the customer's address, generated an ATM card and PIN that were sent to the new address, used the card to withdraw funds, then changed the address back to the original (Anderson, 2001). The customer was left with no money, and the bank had no record of the address change to help resolve the issue. Could a dedicated security engineer assigned during development have caught this? In this case it looks as if a functional requirement and/or business rule was missed, and having a security representative higher in the development process may have caught it. There have been enough security issues that projects have arisen to track the common flaws; OWASP is one of them.

The Open Web Application Security Project (OWASP) 2010 list places the injection attack as the number one security risk. As stated by OWASP (2010), "The best way to find out if an application is vulnerable to injection is to verify that all use of interpreters clearly separates untrusted data from the command or query." Verification of this type does not seem to fit into functional testing and, given this flaw's number one position, is not properly imposed during development. As seen in the cost-to-complexity model in Figure 1, it is cheaper to fix a security issue early than to have it found in a penetration test after release or, worse, after a security incident occurs.
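A minimal sketch of the separation OWASP describes, using Python's standard sqlite3 module (the table and data are purely illustrative): the vulnerable query splices untrusted input into the command text, while the parameterized version keeps it as data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

untrusted = "alice' OR '1'='1"  # attacker-controlled input

# Vulnerable: the input becomes part of the SQL text, so the OR clause runs.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = '%s'" % untrusted
).fetchall()
print(rows)  # leaks every row

# Safe: a parameterized query keeps the input as data, never as SQL.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (untrusted,)
).fetchall()
print(rows)  # empty: no user is literally named "alice' OR '1'='1"
```

Note how the safe variant is exactly what OWASP's verification step looks for: the interpreter receives the command and the untrusted data through separate channels.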

Analysis of the Diebold AccuVote-TS 4.3.1 system found that (Kohno, Stubblefield, Rubin, & Wallach, 2004) "a voter can also perform actions that normally require administrative privileges, including viewing partial results and terminating the election early." Similar integrity issues may be overlooked even when a development team does address current security threats. A core design flaw that can be exploited by an insider threat may be outside the normal bounds of a development team's perspective, one that a dedicated security analyst may catch during design.


There are also areas that development depends on: wider use of open-source and public libraries has led to security issues such as watering-hole attacks. A development team may not think to verify or monitor code sources for changes and malicious software. This is an area where an internal security team may help monitor.

It is impossible to determine whether added security analysis would have eliminated the issues reviewed here, and it is important to note that even with added security analysis, no system is 100% secure. However, if proper security analysis is applied to development, it may help lessen the impacts of security incidents.

4.3 Improving requirements gathering to include security

As seen in previous research, it is possible to push security higher in the requirements process. Even with that work and those processes, the gap still remains. Why it remains is a mystery; however, whatever method is used should not add too much overhead to the existing development teams. It must also have a greater effect than a contractor running an external penetration test would. This is achievable if we build upon methods like SQUARE (Mead et al., n.d.), the USeR process (N. Hallberg & Hallberg, 2006), abuse cases (McDermott & Fox, 1999), anti-requirements (Crook et al., 2002), and abuse frames (Lin et al., 2003). All of these offer great frameworks that can be used by an internal software security team working in parallel with development.

The internal security team is involved upfront and generates sets of anti-user requirements (AURs), during functional requirements gathering and during development, that outline possible exploits. An AUR defines an action and possible inputs that outline malicious behavior meant to break a part of a system. The AUR will typically mirror a functional requirement, like the example in Figure 2 below. Using the same traceability that links functional tests back to functional requirements, the AUR has security test cases that can be traced back to it. Figure 2 is an example of a functional requirement (FR) with points during development where AURs, and later security test cases, are generated. This specific example identifies test points for user login and for password storage and handling.

Figure 2. Anti-User Requirements
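To make the idea concrete, below is a hypothetical sketch of how an AUR record might mirror a functional requirement like the login example in Figure 2 and carry traceability to its security test cases. All identifiers, field names, and the schema itself are illustrative assumptions, not part of any established standard.

```python
from dataclasses import dataclass, field

@dataclass
class AntiUserRequirement:
    """One anti-user requirement, traced to the FR it mirrors and its tests."""
    aur_id: str                # e.g. "AUR-12" (illustrative identifier)
    mirrors_fr: str            # the functional requirement it mirrors, e.g. "FR-7"
    malicious_action: str      # what the "anti-user" attempts
    example_inputs: list[str] = field(default_factory=list)
    test_case_ids: list[str] = field(default_factory=list)  # traced security tests

# FR-7 (hypothetical): "The user must log in with a username and password."
aur = AntiUserRequirement(
    aur_id="AUR-12",
    mirrors_fr="FR-7",
    malicious_action="Bypass authentication via SQL injection on the login form",
    example_inputs=["' OR '1'='1' --"],
    test_case_ids=["SEC-TC-31", "SEC-TC-32"],
)
print(aur.mirrors_fr, "->", aur.test_case_ids)  # FR-7 -> ['SEC-TC-31', 'SEC-TC-32']
```

The point of the structure is the two traceability links: from the AUR back to the functional requirement it mirrors, and forward to the security test cases derived from it.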


Possible areas of security risk are identified, starting from functional requirements and carried down into certification testing. An important part of the process is a feedback loop to development teams on possible areas of risk that will be tested and should be addressed. This may be a person or, possibly, a system that uses the text of the requirements and a current threat database, like Metasploit, to generate the possible exploits. Another important aspect of this process is that it is not intended to be held off until the end of development; in contrast, it is meant to test early and often, with continuous feedback. What will also be needed are the tests for the AURs and the metrics generated from this process.
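As a toy illustration of the text-scanning idea, the sketch below flags functional-requirement text that touches security-sensitive areas so a specialist can draft AURs from the hits. The keyword map is invented for illustration; it is neither the USeR method nor a real threat database.

```python
import re

# Illustrative keyword map: pattern -> security area an AUR might cover.
SECURITY_TRIGGERS = {
    r"\b(login|log in|password|credential)\b": "authentication / credential handling",
    r"\b(query|search|form|input)\b": "injection via untrusted input",
    r"\b(address|account|profile) change\b": "audit trail for sensitive changes",
}

def flag_aur_candidates(requirement_text: str) -> list[str]:
    """Return the security areas a requirement's text appears to touch."""
    hits = []
    for pattern, area in SECURITY_TRIGGERS.items():
        if re.search(pattern, requirement_text, re.IGNORECASE):
            hits.append(area)
    return hits

fr = "FR-7: The user must log in with a username and password."
print(flag_aur_candidates(fr))  # ['authentication / credential handling']
```

A production version would replace the keyword map with profiles built from the security database, which is where the knowledge base discussed in Section 4.4 comes in.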

4.4 Improving security testing and metrics

Different processes and measures are added during AUR development. For example, using the SQL injection exploit AUR from Section 4.3, security testing will need to build and execute tests and ensure they do NOT pass (thus ensuring the exploits do not work). Figure 3 extends the AURs generated in Figure 2 and shows possible tests to be generated and executed, such as SQL injection, authentication web service vulnerabilities, decryption tests, and software patch levels.


Figure 3. AUR Tests and Metrics

These tests may also utilize existing security tools, like the Metasploit example shown, to strengthen the test parameters. The AURs, tests, and results are fed back into a security database to be leveraged in future analysis. Metrics collected after a system is released into live production are additionally used to help improve future anti-user requirements.
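As a sketch of what one such test might look like, the following pytest-style example encodes a SQL injection AUR and passes only when the exploit fails, breaking the build if the exploit ever succeeds. The login helper, schema, and AUR identifier are all hypothetical.

```python
import sqlite3
import pytest

def login(conn: sqlite3.Connection, username: str, password: str) -> bool:
    """Parameterized query: the fix the AUR feedback loop would demand."""
    row = conn.execute(
        "SELECT 1 FROM users WHERE name = ? AND password = ?",
        (username, password),
    ).fetchone()
    return row is not None

@pytest.fixture
def conn():
    c = sqlite3.connect(":memory:")
    c.execute("CREATE TABLE users (name TEXT, password TEXT)")
    c.execute("INSERT INTO users VALUES ('alice', 's3cret')")
    return c

def test_aur_12_sql_injection_must_fail(conn):
    # The anti-user's input: if this ever authenticates, the exploit works
    # and this test fails, surfacing the problem early.
    assert not login(conn, "alice' OR '1'='1' --", "anything")
```

The inversion is the key design choice: the AUR test asserts the absence of the exploit, so its result is a direct security metric for the build.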

The process of scanning functional requirements and deriving possible security tests is completed manually by a security team member and may later become automated. As profiles for parts of a system are built, a knowledge base develops that may be leveraged with some form of scanning AI. This may extend from scanning text to running security audits of development, integration, and final test environments.


The end of the development process is not the end of the security metrics. Feedback from live production events is also funneled back into the security database to further bolster future efforts. For example, live attack patterns may help profile current threats that are happening to live systems and that may also happen to systems under development.
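A minimal sketch of this feedback loop, with an entirely illustrative schema (an in-memory database stands in for a persistent store): a live incident is recorded in the security database and linked to the AUR it should refine.

```python
import sqlite3

db = sqlite3.connect(":memory:")  # a real deployment would persist this
db.execute("""CREATE TABLE incidents (
    observed_at TEXT, attack_pattern TEXT, refines_aur TEXT)""")

# A production event profiles a current threat...
db.execute(
    "INSERT INTO incidents VALUES (?, ?, ?)",
    ("2013-01-10T04:12:00Z", "credential stuffing against /login", "AUR-12"),
)
db.commit()

# ...and future AUR creation queries those profiles.
for (pattern,) in db.execute(
    "SELECT attack_pattern FROM incidents WHERE refines_aur = 'AUR-12'"
):
    print("Refine AUR-12 using live pattern:", pattern)
```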


5. Summary and Recommendations

Security has taken a "non-functional" seat on the functional train. This is not a surprise, as most development teams are not security experts, nor should they be expected to be. However, this has left a gap between security and functionality where security is a "back-room" function or something a contract firm does after release. Worse, when security does bubble up into functional requirements, most teams are not at an expertise level to generate proper tests for it.

Several incidents have been documented where this gap may have been the cause. Even with current research, the problem remains; the gap is not fully addressed. For the same reasons we add a UI designer or a performance engineer to the development team, because they are experts in their areas, we need to add a security engineer. The security engineer can generate anti-user requirements (AURs), and AUR tests, starting at functional requirements and continuing down into final acceptance testing. Metrics from AURs, test designs, and results are captured and used to build a security database. This may also utilize current exploit knowledge bases and profile data from live attacks in production.

The result is security tests that are run early and often and, moreover, a system that provides feedback to development during the process. Later, parts of this process may be automated for efficiency, easing the burden of security on functional developers while maintaining security in the development process.


Recommendations

• Addition of an internal software security team: internal security specialists become part of the development team. This brings a security specialist into the team to help assess security and to act as a subject-matter expert who grows security awareness.

• Anti-user requirement (AUR) development: starting at requirements, the software security team generates possible security exploits of a system. These are shared with development so that design fixes are made early in the process.

• AUR test case development: test cases generated from the AURs allow for verification early and often.

• Anti-user metrics: learning from the process is important for continuous improvement. Collecting AURs and test cases, along with test results, helps with this improvement.

• Security database: along with keeping the metrics from AURs, a database of external threats, like Metasploit, is maintained to allow traceability of AURs back to known exploits.

• Live security metrics: security incidents in live production offer valuable profiles of known attacks that have happened. This data is also kept to allow for refined AUR creation.


References

Anderson, R. (2001). Security engineering: A guide to building dependable distributed systems. Wiley.

Bishop, M. (2003). What is computer security? IEEE Security & Privacy, 1(1), 67–69. doi:10.1109/MSECP.2003.1176998

Cheng, B. H. C., & Atlee, J. M. (2007). Research directions in requirements engineering. Future of Software Engineering (FOSE '07), 285–303. doi:10.1109/FOSE.2007.17

Crook, R., Ince, D., Lin, L., & Nuseibeh, B. (2002). Security requirements engineering: When anti-requirements hit the fan. Proceedings of the IEEE Joint International Conference on Requirements Engineering, 203–205.

Hallberg, N., & Hallberg, J. (2006). The Usage-Centric Security Requirements Engineering (USeR) method. Proceedings of the 2006 IEEE Workshop on Information Assurance, 34–41.

Islam, S., & Falcarin, P. (2011). Measuring security requirements for software security. 2011 IEEE 10th International Conference on Cybernetic Intelligent Systems (CIS), 70–75.

Kohno, T., Stubblefield, A., Rubin, A. D., & Wallach, D. S. (2004). Analysis of an electronic voting system. Proceedings of the 2004 IEEE Symposium on Security and Privacy, 27–40. doi:10.1109/SECPRI.2004.1301313

Lin, L., Nuseibeh, B., Ince, D., Jackson, M., & Moffett, J. (2003). Introducing abuse frames for analysing security requirements. Proceedings of the 11th IEEE International Requirements Engineering Conference, 371–372.

McDermott, J., & Fox, C. (1999). Using abuse case models for security requirements analysis. Proceedings of the 15th Annual Computer Security Applications Conference (ACSAC '99), 55–64. doi:10.1109/CSAC.1999.816013

Mead, N. R., Viswanathan, V., & Zhan, J. (n.d.). Incorporating security requirements engineering into the Rational Unified Process. 2008 International Conference on Information Security and Assurance (ISA), 537–542. doi:10.1109/ISA.2008.19

Open Web Application Security Project (OWASP). (2010, October 16). Retrieved January 13, 2013, from http://www.owasp.org

Verdon, D., & McGraw, G. (2004). Risk analysis in software design. IEEE Security & Privacy, 2(4), 79–84.
