ELECTRICAL ENGINEERING & COMPUTER SCIENCE NEWSLETTER
Volume 1 | 2014

Securing Online Transactions: The National Strategy for Trusted Identities in Cyberspace
Authors: Brad A. McGoran, P.E., CSCIP/G, GIAC, and Alexander D. Naiman, Ph.D.
Reviewer: John R. Fessler, Ph.D., P.E., CSCIP, GIAC

Overview
In the world of cyberspace, our online credentials and personas are under constant attack. Individuals, corporations, and government entities suffer a daily onslaught of hacking attempts and attack vectors, including spear phishing, malware, trojans, misdirection, man-in-the-middle, and a host of other exploitation attempts. A majority of these malicious activities have a common purpose: stealing one’s online credentials and other authentication mechanisms to facilitate fraudulent transactions. The prevalence of commonplace and easily guessed usernames and passwords makes these attacks highly successful and profitable not only for the expert cyber hacker, but for even the most junior of miscreants and emerging “black hats.” Moreover, advances in computing power and a wealth of readily available online information have further empowered the hacker community to break most 4- to 6-character passwords within minutes using downloadable tools. To mitigate this situation, websites and corporations have resorted to asking individuals to create increasingly long and complex passwords, complete with random characters and symbols. The result? Users, unable to remember a complex password such as Yell0w$tonE79322@!, resort to reusing one password over and over, or worse yet, write down passwords on sticky notes conveniently stuck to the computing device itself. The consequence is an ever-increasing incidence of identity theft, misappropriation of credentials, fraudulent transactions, and the corresponding compounding of losses in the areas of finance, intellectual property, national security, electronic health records, and other areas of corporate and individual privacy, security, and assets.
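The scale of the brute-force problem described above can be sketched with a little arithmetic. The guessing rate below is a hypothetical figure for illustration, not a benchmark of any particular cracking tool:

```python
import math

def keyspace_bits(length: int, charset_size: int) -> float:
    """Entropy in bits of an exhaustive search over charset_size**length passwords."""
    return length * math.log2(charset_size)

def crack_time_seconds(length: int, charset_size: int, guesses_per_second: float) -> float:
    """Worst-case time to exhaust the keyspace at a given guessing rate."""
    return charset_size ** length / guesses_per_second

# A 6-character lowercase password vs. an 18-character password drawn from the
# full printable set, assuming (hypothetically) 10 billion guesses per second.
rate = 1e10
short = crack_time_seconds(6, 26, rate)    # falls in a fraction of a second
long_ = crack_time_seconds(18, 94, rate)   # astronomically long
print(f"6-char lowercase: {short:.4f} s, {keyspace_bits(6, 26):.1f} bits")
print(f"18-char mixed:    {long_:.2e} s, {keyspace_bits(18, 94):.1f} bits")
```

The point of the sketch is the exponent: each added character multiplies the search space by the charset size, which is why length helps far more than sprinkling in a single symbol.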

To combat this exponentially growing problem, in April 2011, the Obama Administration announced a White House initiative called the National Strategy for Trusted Identities in Cyberspace (NSTIC). The strategy serves as a key constituent of the global cybersecurity initiative to safeguard the online world. In particular, the strategy outlines the following vision:

• Individuals and organizations utilize secure, efficient, easy-to-use, and interoperable identity solutions to access online services in a manner that promotes confidence, privacy, choice, and innovation.

The President’s vision should not be confused with a national identification and credentialing program. On the contrary, the focus of the strategy is for the government to facilitate the launch of a private-industry-led effort that will enhance the security and privacy of all online transactions, the majority of which have no government involvement whatever. For example, imagine obtaining an identity from a company X (Google, in this example) that is vetted by an independent third party such as the credit agency Experian, and then using that identity to purchase goods on an e-commerce website Y (Amazon, in this example). The government is not involved in such a transaction in any way, but it may be able to facilitate the development of an ecosystem that enables relying parties (website Y) to trust the identity provided by an unrelated identity provider (company X). To lead this effort, a National Program Office (NPO) was established at the National Institute of Standards and Technology (NIST) to coordinate activities intended to implement an identity ecosystem that would fulfill the NSTIC vision. The government further understood that an injection of initial financial capital would be required to kick-start industry participation. In fulfillment of this plan, the NPO has, to date, awarded more than $18 million in grants, split over several funding years, to a total of 12 awardees. Each awardee is to use this capital to fund pilot projects that will form the foundation of an emerging online Identity Ecosystem.

The remainder of this article discusses the NSTIC vision for the Identity Ecosystem, and entities that are working to bring this vision to life.

The NSTIC Guiding Principles and Vision
NSTIC specifies four guiding principles in defining the vision for the Identity Ecosystem:

• Identity solutions will be privacy enhancing and voluntary

Individual choice in how identity information is created, transferred, stored, and used forms the basis of the first NSTIC guiding principle. Both individuals and organizations will voluntarily choose whether or not to participate in the Identity Ecosystem. Participating individuals will also be able to choose what information they divulge, decide which entities have access to that information, and obtain full knowledge of how that information is being used.

Along with this freedom of choice within the NSTIC vision, protection of personally identifiable information (PII) is equally paramount. If Jane wants to buy a bottle of wine for a nice dinner on her way home one day, why should she have to hand over information that reveals her height, weight, address, eye color, and hair color, as contained on a driver’s license? Only her age is relevant and necessary for this physical transaction. In the same way, users routinely give up excessive, irrelevant, and potentially harmful information in their online transactions. To minimize this data leakage, the NSTIC vision seeks a “holistic implementation” of the Fair Information Practice Principles (FIPPs) that will preserve and improve privacy in the Identity Ecosystem. As one example, in the NSTIC vision, the Identity Ecosystem will preserve the ability of individuals to act anonymously online when desired. In fact, the vision is for the Identity Ecosystem to support a range of levels of identity validation, from complete anonymity to unique identification, depending on the information required by a particular transaction.
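The minimal-disclosure idea in Jane's example can be sketched as a toy attribute check. Everything here (the credential fields, the function name) is a hypothetical illustration of the principle, not an NSTIC interface:

```python
from datetime import date

# A hypothetical identity credential holding several verified attributes.
credential = {
    "name": "Jane Doe",
    "birth_date": date(1985, 3, 14),
    "address": "123 Main St",
    "eye_color": "brown",
}

def assert_over_21(cred: dict, today: date) -> dict:
    """Release only the single claim the transaction needs: an over-21 boolean.
    No other attribute leaves the credential."""
    bd = cred["birth_date"]
    age = today.year - bd.year - ((today.month, today.day) < (bd.month, bd.day))
    return {"over_21": age >= 21}

claim = assert_over_21(credential, date(2014, 6, 1))
print(claim)  # the merchant learns only {'over_21': True}
```

The relying party (the wine merchant) receives a yes/no answer backed by the identity provider's verification, without ever seeing Jane's address or birth date.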

• Identity solutions will be secure and resilient

The NSTIC vision seeks to provide all participants in the Identity Ecosystem confidence that the identity solutions they choose are secure and resilient. Security means that an identity credential must be resistant to attack and misuse, issued in a secure manner by trusted and verifiable identity providers, and issued using a level of identity verification commensurate with the intended use of the credential and the corresponding risk of the transaction. Resiliency, in turn, necessitates that credentials must also be reliable; recoverable in case of loss, compromise, or theft; and able to be revoked or suspended in the case of misuse. Furthermore, diversity within the Identity Ecosystem credentials, systems, and processes enhances solution resiliency by removing problems associated with single points of failure.

• Identity solutions will be interoperable

Interoperability within the Identity Ecosystem is a key to developing useful identity solutions. Ideally, standardized credentials will become widespread in both the government and private sectors, and a variety of entities will be capable of accepting and verifying these credentials to facilitate transactions, thus eliminating the current problem of different login information for every website visited. NSTIC encourages the use of existing, non-proprietary standards to ensure interoperability wherever possible. NSTIC further strongly supports the development of new standards and specifications, provided they remain open and continue to encourage a wide range of solutions and solution providers, and thereby encourage fair competition within the global online marketplace.

• Identity solutions will be cost effective and easy to use

The NSTIC vision recognizes that identity solutions must be attractive to participants in the Identity Ecosystem in order for them to be adopted widely. To this end, the fourth NSTIC guiding principle promotes a well-developed Identity Ecosystem as lowering costs for all participants by reducing fraud, eliminating redundancies, and replacing paper-based processes. Intuitive solutions that are easy to use are more likely to be adopted by individuals, and this principle encourages the use of existing technology such as smart cards, mobile phones, and other prevalent devices that can provide secure platforms for implementing such solutions.

Getting Involved
In addition to the funded pilot program, and to further promote participation in the Identity Ecosystem, the NPO funded the creation of a new, industry-led Identity Ecosystem Steering Group. The steering group acts much like other well-known standardization bodies, such as the American National Standards Institute (ANSI). Individual participants and company representatives convene several times per year to try to reach consensus on best practices and processes to safeguard our online identities and transactions. All are encouraged to participate in the creation and guidance of this new initiative. Currently, hundreds of companies and many more individuals are actively involved.

Ongoing projects cover a plethora of online experiences, ranging from the safeguarding of children to securing electronic medical records, to protecting sensitive corporate assets, to enabling retiree access to health benefits, and beyond.

Final Thoughts
The number of online transactions is growing at an exponential pace, and for convenience, more and more people are turning to the Internet and cyberspace. But while greatly facilitating our lives, the back-and-forth flows of data across servers, terminals, and mobile devices represent a treasure trove of information that attracts interest from those with harmful intent. The anchor for securing these transactions is trust. Trust in the identity of parties to a transaction serves to legitimize that transaction. As more and more usernames and passwords are compromised daily, a new paradigm is required. The National Strategy for Trusted Identities in Cyberspace has been established to meet this need.

Cybersecurity: Common-Sense Tools and Techniques, and their Limitations
Author: Srinivasan Jagannathan, Ph.D., M.B.A.
Reviewer: Adam Rowell, Ph.D.

Broadly speaking, the term “cyber” denotes anything related to computers, computer networks, and/or the Internet at large, and the term “cybersecurity” refers to security in the “cyber” world. The Merriam-Webster dictionary defines the term “cybersecurity” as “measures taken to protect a computer or computer system (as on the Internet) against unauthorized access or attack”[1]. Unauthorized access and attacks on computer systems and networks have been splashed in the media with not-so-surprising regularity. Be it statements from firms that consumer credit card information has been compromised, or reports of hacker groups attempting to embarrass one organization or another, such headlines emphasize the need for cybersecurity. Recent studies, such as the 2012 Norton Cybercrime Report, also highlight the massive scale and sizeable footprint of cyberattacks[2]. This report, based on an online survey of more than 13,000 online adults aged 16–24, spread across 24 countries, estimates that there are 556 million victims per year, and that nearly two-thirds of online adults have been victims of cybercrime in their lifetime.

Given such grim statistics, what is an Internet user supposed to do? Is a cybersecurity toolbox available that is accessible, easy to use, and effective? As with many things in the real world, the solution relies on understanding how one interacts with the cyberworld, analyzing the vulnerabilities of one's particular profile of interaction, and finding tools and techniques that mitigate those vulnerabilities. No solution is without risks, and given the nature of evolving technology, no solution is permanent. Even regularly updated anti-virus software does not protect users against new threats that are continually being developed, at least until solutions to those threats are found. It is wise to understand the limitations of any solution, and to prepare for it to be breached.

In this article, we analyze some of the common modes of cyberworld interaction and the broad categories of threats associated with them. We identify a number of common-sense tools and techniques that mitigate, but not necessarily eliminate, the threats. We also explore the inherent limitations of these tools.

Modes of Interaction and Venues for Attack
Our interactions with cyberspace can be broadly categorized as interactions with: (1) computers (desktops and laptops), (2) devices (mobile devices, USB devices, etc.), and (3) the network (computers and devices that are remotely located, including web sites, servers, and various network equipment). Each of these modes of interaction is vulnerable to attack, and can serve as the means for attacking others. For example, desktop computers are vulnerable to viruses and other malicious software. USB devices are vulnerable to theft, and can serve as a means for propagating malware. Web sites and servers are vulnerable to attack, and can be a source of malware downloaded to the computers used to access them. Likewise, network equipment that is accessed during network communications is vulnerable to attack, and can be a means for propagating an attack.

It is useful to examine what is really meant by an attack. At its core, a successful attack causes some damage, for example in the form of: (1) disclosure of information that is subject to misuse, (2) loss of information that serves some meaningful purpose and cannot be recovered after the attack, or (3) loss of productivity or equipment that must be replaced or recovered. Examined from this perspective, the high-level objectives of cybersecurity risk management are: (1) if possible, prevent disclosure or loss of information to unauthorized parties; (2) recover from loss or unauthorized disclosure of information; and (3) recover as efficiently as possible from an attack. Of course, the risk management objectives can vary, depending on the specifics of each situation.

Protecting the Computer
The personal computer, and more recently, the mobile device, is our gateway to cyberspace. Modern computers typically include a number of software programs that run in the background without the active knowledge of the user. Added to this, users frequently add new software and often click on installation dialogs that install bundled software. Although not all downloaded programs are malicious, malicious programs such as adware often hitch a ride as part of bundled software[3]. It is advisable to download software judiciously, and to avoid unknown and unfamiliar web sites. If in doubt, searching for related keywords on a search engine will usually, though not always, reveal any relevant negative feedback. It is also useful to actually scan through the installation dialog boxes and uncheck any bundled software that one does not want to download. For mobile apps, attention should be paid to the type of information the app will access, which is usually displayed to the user when seeking permission.

It is also useful to install and regularly update computer protection software from a reputable vendor. Security software is available for personal computers and mobile devices. Often, such software includes an anti-malware component and, in the case of personal computers, a firewall to protect from network-based attacks. Additionally, software vendors often discover new security vulnerabilities as time goes on and provide patches to fix them. It is good practice to apply the patches regularly, to maintain protection against vulnerabilities as they become known to a wider audience.[4]

Protecting Information: Encryption, Authentication, and Integrity
Two of the most common complementary technical solutions for protecting information are: (1) encrypt the information so that unauthorized entities cannot make sense of it without undoing the encryption (i.e., decrypting), and (2) authenticate users to allow access to the tools for decryption. A third and related technical solution for protecting information is to authenticate the information itself, to confirm it has not been modified for malicious purposes. We often see these technical solutions deployed in our day-to-day interactions with the cyberworld. For instance, users’ credit card data and other personal information are stored in encrypted form by vendors (in compliance with standards such as the Payment Card Industry Data Security Standard), and users are provided access to this information when they authenticate themselves by logging into the website.[5] Additionally, the messages that users exchange with a web site may be subjected to further authentication to confirm their integrity.
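As a small illustration of authenticating the information itself, Python's standard library can compute and verify an HMAC tag over a message. This is a generic sketch of the technique, not any particular vendor's implementation; the key and message are made up for the example:

```python
import hmac
import hashlib

def tag(key: bytes, message: bytes) -> bytes:
    """Compute an HMAC-SHA256 authentication tag over the message."""
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, message: bytes, received_tag: bytes) -> bool:
    """Recompute the tag and compare in constant time (guards against
    timing attacks on the comparison itself)."""
    return hmac.compare_digest(tag(key, message), received_tag)

key = b"shared-secret-key"
msg = b"transfer $100 to account 42"
t = tag(key, msg)

print(verify(key, msg, t))                              # True: message intact
print(verify(key, b"transfer $9999 to account 13", t))  # False: tampered
```

Any modification to the message invalidates the tag, so a recipient holding the shared key can detect tampering in transit.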

These techniques can also be applied in our day-to-day computer use. We can store data on our disk drives and USB drives in encrypted form using software such as TrueCrypt and BitLocker. Encrypted information is inaccessible until unlocked with the correct password. Using such an approach protects the information from unauthorized use if the storage device is lost or stolen. However, this solution does not protect the user if the password is guessed, or against the loss of the protected data if the user cannot produce the password.
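Tools like these generally derive an encryption key from the user's password. The sketch below shows the general password-based key derivation approach (PBKDF2) using Python's standard library; the parameters and password are illustrative, not those of any specific product:

```python
import hashlib
import os

def derive_key(password: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Stretch a password into a 256-bit key. The iteration count deliberately
    slows down each guess, raising the cost of brute-force attacks."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)  # stored alongside the ciphertext; need not be secret
k1 = derive_key("Yell0w$tonE79322@!", salt)
k2 = derive_key("Yell0w$tonE79322@!", salt)
k3 = derive_key("wrong-password", salt)

print(k1 == k2)  # True: same password and salt reproduce the same key
print(k1 == k3)  # False: a wrong password yields a different key
```

This also makes concrete the limitation noted above: the derivation is deterministic, so whoever produces the password gets the key, and without the password the key (and the data) is gone.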

Designing Complex Passwords that are Easy to Remember
Password verification is used as a gatekeeper for access to protected information in numerous settings, including the TrueCrypt example above. Using complex passwords is important to prevent others (including computers) from “guessing” the password and thereby accessing the protected information. Worse, if a hacker who has guessed a password then changes it using authorized procedures, the original user can be locked out of the information. Such attacks can seriously compromise an individual’s personal and financial well-being: imagine being locked out of your e-mail account or losing access to your online bank account.

Ideally, passwords should comprise a suitably long string of characters that include lower-case and upper-case letters, numbers, and special characters such as !, @, #, and $. It is easy to construct such passwords, but it can be very difficult to remember them. One solution is to construct passwords based on life events and relationships. Consider, for example, a theme for passwords based on your sister’s age ten years ago. Suppose your sister’s name is “Jane” and that she was 43 years old in 2003 (10 years ago). Your password could be “Jane43in2003$#.” The ‘$’ character corresponds to the number “4” on the keyboard, and the ‘#’ character corresponds to the number “3.” This password is relatively difficult to crack using a software program, unless specific personal details about you are known to the hacker. However, this password is still relatively easy for you (and no one else) to remember.
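The scheme above can be sketched mechanically. The shifted-symbol mapping assumes a US keyboard layout, and the helper name is purely illustrative:

```python
# Map each digit to the symbol typed when holding Shift on a US keyboard.
SHIFT = {"1": "!", "2": "@", "3": "#", "4": "$", "5": "%",
         "6": "^", "7": "&", "8": "*", "9": "(", "0": ")"}

def themed_password(name: str, age: int, year: int) -> str:
    """Build a 'Jane43in2003$#'-style password: name, age, year, then the
    age's digits re-typed as their shifted symbols."""
    suffix = "".join(SHIFT[d] for d in str(age))
    return f"{name}{age}in{year}{suffix}"

print(themed_password("Jane", 43, 2003))  # Jane43in2003$#
```

The memorability comes from the personal theme; the complexity comes from mixing cases, digits, and symbols in a string long enough to resist exhaustive search.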

It is also good practice to use different passwords for different accounts and applications, and to change the passwords at regular intervals. In the password scheme above, you could designate each bank account with a specific sibling or family member, and remember to update their age each New Year (or birthday).

The limitation of the complex but easy-to-remember password scheme is that it is susceptible to malware, such as keystroke loggers that can record all your keystrokes and transmit them to the hacker. If you are trying to access a remote location, you might be susceptible to network-based attacks, such as a network packet sniffer that examines each network message being transmitted from your computer. Network sniffers are one reason that passwords are often (but not always) transmitted in encrypted form.[6] Of course, if you access a remote server that has been compromised, your data stored at that server could be appropriated without your password ever being compromised.[7]

Common-sense Security for Network Access

It is important not to assume that a network is secure because it is advertised as “secure” or comes with a login screen. It is very easy and convenient to get WiFi access at a coffee shop, but what if a hacker also provides a WiFi zone with a familiar-sounding name at the same coffee shop? It is easy for malicious users to sniff unencrypted network packets on public networks. For instance, network tools (e.g., FireSheep) can provide even unsophisticated hackers with access to unencrypted cookies that are used by web sites to indicate that a user has been authenticated. Think of these cookies as a ticket that says the holder is who he says he is, and that can be used to unlock various functions available on the web site. Anyone who can see the contents of that ticket can use a copy of it to hijack web sessions. The onus is on the web site issuing the cookie to do so only via encrypted channels, but users can also take action by using the HTTPS protocol instead of HTTP, or by using a Virtual Private Network (VPN).
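As a small illustration, a cookie's attributes can be inspected to see whether a browser would ever send it over an unencrypted channel, where a sniffer could copy it. The header strings below are made up for the example:

```python
from http.cookies import SimpleCookie

def insecure_cookies(set_cookie_header: str) -> list:
    """Return names of cookies that lack the Secure attribute, i.e. cookies a
    browser would also transmit over plain HTTP."""
    jar = SimpleCookie()
    jar.load(set_cookie_header)
    return [name for name, morsel in jar.items() if not morsel["secure"]]

print(insecure_cookies("session=abc123; Path=/"))                    # ['session']
print(insecure_cookies("session=abc123; Path=/; Secure; HttpOnly"))  # []
```

A session cookie issued with the Secure attribute (and ideally HttpOnly) travels only over encrypted connections, which is exactly the mitigation described above.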

HTTPS is HTTP over an encrypted session, and is supported by popular web browsers. To use HTTPS, simply substitute the “http://” in a URL with “https://.” However, there are some limitations of HTTPS. First, the remote web site must support HTTPS. Additionally, HTTPS is susceptible to certain kinds of attacks, including a man-in-the-middle attack, where a malicious server inserts itself into a communication between a server and a client.[8] One of the ways in which such a man-in-the-middle attack is made possible is that HTTPS relies on a model of “trust.” Remote servers provide a digital certificate that is used to confirm with a “trusted” source that the server is authentic. When there is a concern about the digital certificate, the browser software will typically provide a user dialog to report that a server with which the user is trying to communicate is not trustworthy, and will ask the user to confirm the intent to communicate. Users generally click through such dialogs without paying attention, and in doing so, they potentially communicate with untrustworthy servers, including a server mounting a man-in-the-middle attack. The “trust” system is also subject to abuse, because each browser uses its own set of rules on whom to trust (called certificate authorities), and there is no foolproof mechanism to validate the trust relationships. In effect, the user trusts the browser to identify whom to trust, and the chain of trust assumes that the trusted parties are not engaging in foul play.
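For scripted clients, the same trust decisions are explicit rather than hidden behind a dialog. In Python, for example, the standard library's recommended TLS client context enforces certificate and hostname verification by default, so an untrusted or mismatched certificate aborts the handshake instead of presenting a click-through warning:

```python
import ssl

# create_default_context() is the stdlib's recommended client-side setup:
# certificate validation against the system's trusted CAs and hostname
# checking are both on by default. A server presenting an untrusted or
# mismatched certificate causes the handshake to raise an error rather
# than silently connecting.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```

Disabling either setting re-creates, in code, the act of clicking through the browser warning, and should be avoided for the same reasons.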

Another security approach when accessing public networks is to use a Virtual Private Network (VPN) for all communications over the Internet. In this approach, all communications between the user’s computer and a remote VPN server are encrypted. Whereas this affords protection from unwanted packet sniffing in the coffee shop or airport network, the communications beyond the VPN server to a remote web site or other server may not be encrypted. One solution is to use HTTPS even when a VPN is being used. This comes at a cost of speed and computation, but adds a measure of security in the communications. Many browsers support the use of HTTPS by default (if the server supports HTTPS), which is a convenient way to provide an added measure of security.

For security over private networks, using a firewall and a security protocol for protecting WiFi communications can protect against a number of attacks. Many home routers and gateway products are sold with built-in firewalls, and using these is a good first step to enhance security. Additionally, WiFi is popular in homes and businesses, and without adequate WiFi security, wireless communications are susceptible to sniffing by hackers. It must be noted, however, that not all WiFi security standards provide adequate security. For instance, WEP is a WiFi security standard that is no longer considered adequate and is easily broken, often in just a few seconds.[9] WiFi Protected Access 2 (WPA2) is generally considered to provide adequate security, although vulnerabilities have been discovered.[10, 11]

Risk Management
This discussion makes clear that, while a number of tools are available to increase cybersecurity, no single, foolproof security solution exists. The constantly evolving milieu of new threats and vulnerabilities means that even vigilant attention to cybersecurity cannot preclude new vulnerabilities, for which a patch or solution may be days or even months away.[12]

From a risk management perspective, it makes sense to prepare for what happens after an attack. While detailed risk management and disaster recovery plans depend on the specific situations, at a minimum, individuals and organizations must have an operational data backup plan that allows for recovery of lost data in the event of an attack. Additionally, an audit of information and computer assets, to identify what is valuable and what is essential, can go a long way in defining a plan for protection, as well as a plan for recovery in the event of an attack. An important, but often overlooked, aspect of a monitoring plan is the capability to detect an attack, even when it doesn’t cause obvious and visible disruption of our cyber-experience.
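A minimal sketch of the backup idea, with an integrity check so that corruption or tampering of the backup itself is detectable. The file names and contents are illustrative:

```python
import hashlib
import os
import shutil
import tempfile

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large files do not need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def backup(src: str, dst: str) -> str:
    """Copy a file, verify the copy against the original, and return the
    checksum; re-hashing the copy later detects silent corruption."""
    shutil.copyfile(src, dst)
    digest = sha256_of(src)
    assert sha256_of(dst) == digest, "backup does not match the original"
    return digest

# Demonstrate on a throwaway file.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "records.txt")
with open(src, "w") as f:
    f.write("important records\n")
digest = backup(src, src + ".bak")
print(len(digest))  # a SHA-256 digest is 64 hex characters
```

Storing the recorded checksum separately from the backup gives a simple way to audit, at any later date, that the recovery copy is still intact.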

References

[1] “cybersecurity,” http://www.merriam-webster.com/dictionary/cybersecurity

[2] “2012 Norton Cybercrime Report,” http://now-static.norton.com/now/en/pu/images/Promotions/2012/cybercrimeReport/2012_Norton_Cybercrime_Report_Master_FINAL_050912.pdf

[3] “Win32/DomaIQ – An annoying bundled adware,” http://www.totaldefense.com/blogs/2013/06/19/win32/domaiq-an-annoying-bundled-adware.aspx

[4] “Targeted attacks exploit now-patched Windows bug revealed by Google engineer,” http://www.computerworld.com/s/article/9240774/Targeted_attacks_exploit_now_patched_Windows_bug_revealed_by_Google_engineer

[5] “Payment Card Industry (PCI) Data Security Standard Requirements and Security Assessment Procedures Version 2.0,” https://www.pcisecuritystandards.org/documents/pci_dss_v2.pdf

[6] “Tumblr iPhone app fixed after plaintext password goof spotted,” http://www.slashgear.com/tumblr-iphone-app-fixed-after-plaintext-password-goof-spotted-17290712/

[7] “RockYou Hack: From Bad To Worse,” http://techcrunch.com/2009/12/14/rockyou-hack-security-myspace-facebook-passwords/

[8] “The Spy in the Middle,” http://www.crypto.com/blog/spycerts/

[9] “Security of the WEP algorithm,” http://www.isaac.cs.berkeley.edu/isaac/wep-faq.html

[10] (WiFi) “Security,” http://www.wi-fi.org/discover-and-learn/security

[11] “WPA2 vulnerability found,” http://www.networkworld.com/newsletters/wireless/2010/072610wireless1.html

[12] “Zero-Day Vulnerabilities,” http://www.symantec.com/threatreport/topic.jsp?id=vulnerability_trends&aid=zero_day_vulnerabilities

Software Security and Reliability
Author: Ernesto Staroswiecki, Ph.D.
Reviewer: Srinivasan Jagannathan, Ph.D., M.B.A.

Software security and software reliability are distinct, yet closely related, concepts. In the software development world, security and reliability features compete for limited resources; however, we claim herein that neither is truly achievable without care for the other: no significant software product can claim to be reliable if it does not include considerable security capability, nor can it claim to be secure without implementing important reliability measures.

Software Reliability
Software reliability generally deals with writing fault-free software, and has been a concern since the early days of software development, with classic textbooks dating from 1976.[1] As software has approached ubiquity in our world, software reliability has become a larger component of overall system reliability. While a software failure in a computer can lead to a program, or the whole computer, crashing and requiring a restart, software is running right now in a myriad of devices and locations, including consumer electronics, household appliances, vehicles and aircraft, medical devices, utilities, and in space. Thus, software failures can produce consequences ranging from mere inconvenience to economic calamity and even loss of life.

Formally, software reliability is the probability of failure-free software operation for a specified period of time in a specified environment.[2] In some situations, it may be simple to obtain an intuitive measure of software reliability: if we consider, for example, a failure corresponding to a computer crashing, we can ask, “What are the chances that my computer will not crash while streaming a movie tonight?” However, there are many subtle ways of defining software failure that might lead to very different estimates of software reliability.
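Under the simplest (and commonly assumed) constant-failure-rate model, the probability of failure-free operation over a period t is R(t) = exp(-λt). The failure rate below is purely illustrative:

```python
import math

def reliability(failure_rate_per_hour: float, hours: float) -> float:
    """R(t) = exp(-lambda * t): probability of failure-free operation for
    `hours` under a constant-failure-rate (exponential) model."""
    return math.exp(-failure_rate_per_hour * hours)

# A hypothetical program that fails on average once per 1,000 operating hours:
lam = 1 / 1000
print(round(reliability(lam, 2), 4))  # chance of surviving a 2-hour movie
print(round(1 / lam))                 # mean time to failure, in hours
```

Real software rarely has a truly constant failure rate (reliability typically improves as faults are found and fixed), so this model is a first approximation, not a law.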

In general, a failure can be defined as a behavior that is different from the required behavior for a given piece of software in a given situation. For a software product, this behavior is usually specified in an SRS (Software Requirements Specification) document. Another important behavior-related document for any system is the URS (User Requirements Specification). Current best practices in quality in general, and specifically in software quality, mandate that the user requirements be given the utmost consideration, including those that may not be spelled out in a formal document, but that would be expected de facto by a typical user. For example, a typical user would not consider the exposure of private information to be acceptable behavior for a software product. Side effects of an intrusion into a system (e.g., loss of bandwidth, the computer being “slow,” “pop-up” windows, etc.) are also typically considered to be different from the required behavior of a software product. In this sense, a lapse in software security can be considered a failure, leading to a loss of reliability.

The definition of software reliability leads to a probabilistic function that depends on a specific period of time. However, while this definition mimics that of reliability in general, there are some important differences between the reliability of software and that of hardware.

Unlike hardware, which can wear, corrode, or change with use or the passage of time, software can usually be considered not to age unless it is changed. The sources of software failures include errors, ambiguities, and misinterpretations of the software requirements, as well as incompetent coding, poor testing, incorrect usage, and unanticipated issues.[3] Unlike hardware failures, which may be manufacturing or wear-and-tear faults, all failures in software are design faults: i.e., faults are an innate part of the software and cannot be removed simply by re-downloading or making new copies of it.[4,5] Software faults are usually much more difficult to detect and correct, making good quality practices critical throughout the software development cycle.

With the increasing number of functions and modules in current software products, ensuring failure-free execution of programs is becoming a more difficult problem. It is generally understood that the reliability of a program is inversely proportional to the complexity of the software.[6] While software quality is imperative to achieving a reasonable degree of software reliability, software reliability is also a major component of software quality. It is therefore extremely important to apply good engineering and quality methods, processes, and techniques to generate the highest quality and most reliable software product possible. Some of these techniques include coding standards tailored toward reliability (e.g., boundary checking and plausibility checks), fault tolerance and avoidance methods, and compliance with standard processes and documentation practices. Testing, verification, and validation of software are essential to ensure a product with an acceptable level of quality and reliability. Additionally, several fault models and test coverage metrics are available for comparing and evaluating reliability techniques.[7]
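As an illustration of what boundary checking and plausibility checks look like in code (the limits and function names below are assumptions for illustration, not from the article):

```c
#include <stddef.h>
#include <stdbool.h>

/* Plausibility check (limits assumed for illustration): reject sensor
 * readings outside the physically meaningful range before use. */
#define TEMP_MIN_C (-40.0)
#define TEMP_MAX_C (125.0)

bool plausible_temperature(double celsius)
{
    return celsius >= TEMP_MIN_C && celsius <= TEMP_MAX_C;
}

/* Boundary check: refuse an out-of-range index instead of trusting the
 * caller; returns false rather than reading past the buffer. */
bool sample_at(const double *samples, size_t len, size_t index, double *out)
{
    if (samples == NULL || out == NULL || index >= len) {
        return false;            /* out-of-bounds access prevented */
    }
    *out = samples[index];
    return true;
}
```

The design choice in both functions is the same: invalid input is detected and rejected at the boundary of the routine, so a bad value can never propagate into a failure deeper in the program.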

Software Security

Software security refers to the ability of a software product to protect all the resources that are contained or controlled by the software. These resources can include data, computing cycles, memory, and the software itself. The process of ensuring that software is designed with an appropriate degree of software security is called Software Security Assurance (SSA).

Similar to software reliability, it is impossible to guarantee that a software product of a significant size is perfectly secure, or invulnerable to any past, current, or future attack. There is a constant “arms race” between attackers of software systems (devising new attacks, looking for new vulnerabilities to exploit, and finding ways to hide their tracks after an attack) and software security experts (finding and fixing vulnerabilities before an exploit is developed, building new barriers and defenses for attacks, and monitoring and tracking attackers). An appropriate level of software security for a product needs to be determined considering the value of the resource being protected, or the potential harm that a successful attack may cause. Expending more effort in protecting a system than the effort it would take to recover from an intrusion may not be a wise allocation of resources.

Security features usually are part of the SRS, so most security vulnerabilities are also software reliability failures. As such, they are usually the result of non-conformance to the requirements, errors or omissions in the requirements, or an inability to evaluate or predict the consequences of coding decisions. One aspect that is unique to software security is that a security failure is almost always caused directly by a malicious third party: an attacker intending to cause damage or to obtain resources for his or her own benefit. The constant, intentional search for vulnerabilities in software means that a product considered secure at the outset may have a vulnerability or flaw exposed in the future, becoming a faulty or insecure product that is now open to exploitation and may need to be upgraded or replaced.

Given the increasing complexity of software products, it has been argued that it is impossible to write an error-free piece of source code of any significant size. In these situations, it is critical to ensure the existence of "safe" states in the system, and to guarantee, often through hardware means, that a failure of the software will land the entire system in one of these "safe" states. What constitutes a "safe" state depends on the system being considered.

The level of security of a software product is influenced by almost every choice made during development of the code. Every programming language (e.g., Java, C, C++, C#, Python) has characteristic vulnerabilities that an attacker who knows the implementation language can target. The same is true for operating systems (e.g., MS Windows, Mac OS X, Linux), database management servers (e.g., MS SQL, MySQL), microprocessors, and the other applications or modules that make up a complete system. SSA processes include methods and techniques to evaluate the sensitivity of different resources (i.e., the level of damage that would be caused by compromising them), establish security requirements for a software product, review and evaluate those requirements, test and verify the conformance of the software product to its security requirements, assess known and potential vulnerabilities, and ensure that the appropriate defense mechanisms are present in the software product.

Just as it is an integral part of software reliability, SSA is also a major component of software quality. However, it is interesting to note that many techniques typically prescribed to improve software security can undermine software reliability, and vice versa. For example, a common security technique is to add layers of authentication, encryption, and misdirection to data, a step that dramatically increases the complexity of some sections of code and therefore reduces the reliability of those sections. Another security technique, code obfuscation, by definition makes maintaining and reviewing the code much more difficult, and therefore can also negatively affect the reliability of the obfuscated code. Furthermore, in every software life cycle, there are limited resources to be shared among all aspects of development. For example, while testing is a critical aspect of reliability, security assurance, and software quality, it may be impossible to run enough tests in a limited timeframe to ensure 100% coverage of all security and reliability features. Security and reliability may therefore compete for the allocation of development resources.

While software reliability and security may seem to be at odds, we will see that their interrelation is more complex, and that positive effects of security on reliability, and of reliability on security, dominate this relationship, making each one critical and essential to achieve a good measure of the other.

Software Reliability Through Security

Since security measures increase the software complexity and add further requirements that expand reliability requirements, one might erroneously think that software security decreases software reliability. However, well-designed security features will increase the system's reliability by protecting many aspects of the system, including the software program itself.


We have already discussed some of the ways in which improving software security improves software reliability. First, security features are part of a software product, and any security failure negatively affects the product’s reliability; therefore, improving the security of a software product also improves its reliability. Second, as discussed in more detail below, many techniques used to improve the security of a piece of software also improve its reliability. Last, it is important to remember our definition of software security assurance: the ability to protect all of a program’s resources, including the software itself. In that sense, software security leads to software reliability, a degree of security is a necessary condition for reliable software, and security features are themselves reliability features.

Let’s look at an example of how security features can improve the reliability of a system. One of the most common security features in a software system is restricted access by means of a user name and a password. With such a mechanism, access to a program can be restricted to users who have been trained to use it. Trained users are less likely to use the system improperly and cause it to fail. Similarly, protecting other resources, such as data, decreases the probability of those resources becoming corrupted and, in consequence, minimizes the chance of a software failure.
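As a toy sketch of such a gate (the user roster and function name are assumptions for illustration; a real system would also verify a password and store credentials securely):

```c
#include <string.h>
#include <stddef.h>
#include <stdbool.h>

/* Hypothetical roster of users who have been trained on this program. */
static const char *trained_users[] = { "alice", "bob" };

/* Access control doubling as a reliability measure: only users known to
 * be trained (and thus less likely to misuse the system) are admitted. */
bool access_permitted(const char *user)
{
    size_t n = sizeof trained_users / sizeof trained_users[0];
    for (size_t i = 0; i < n; i++) {
        if (strcmp(trained_users[i], user) == 0) {
            return true;
        }
    }
    return false;
}
```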

Software Security Through Reliability

We have just shown how software security leads to software reliability. The converse is also true: software reliability is critical for, and leads to, software security.

If an attacker’s intent is to make the software fail, then a reliability fault alone is enough to constitute a security fault. Furthermore, the failure mode can be manipulated by attackers to drive the system into a state that is not secure, where other resources (such as data or bandwidth) may become available to them.

Ensuring the reliability of the security features of a software product directly affects its security, but following good quality and reliability practices in the entire software development cycle also positively influences the security of the system. In the following paragraphs, we expand on some examples of this concept.

An example to which most people can relate involves using an ordinary personal computer. Most personal computer users have experienced a software product “crash,” usually closing with little or no warning at all, and landing the user on an operating system management program that typically allows access to applications and data. Similar situations can occur in many systems beyond personal computers, such as vehicles or utilities. An attacker may attempt to crash the program and gain access to the underlying operating system, gaining a degree of control and flexibility that was never intended for a user to achieve. On a typical system, while many features may be unavailable through the graphical interface to the system, they could still be accessible through a command prompt. Therefore, gaining access to a command prompt could be seen as a vulnerability.

Reliability Failures that Lead to Security Problems

A failure in a software product may be found by a well-intentioned user who attempts to use the software in a situation that, even though it should be allowable for the software, might not have been fully tested. However, since software faults are design faults, once a method for inducing failure in a software package is found, the failure is usually easy to reproduce. It is then exploitable by attackers.


A typical statement regarding software reliability is that “software will fail.” Regardless of whether this statement is universally true, it is important to treat it as true when designing a reliable system. Therefore, one of the expectations of a reliable system is that a software failure will land the system in a “safe” mode; considering the security aspects of the software, this safe mode must also be secure.
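This expectation can be sketched in C (the actuator scenario, structure, and names here are assumptions for illustration, not from the article): every failure path lands the system in a state that is both safe and secure.

```c
#include <stdbool.h>

/* Hypothetical actuator controller: every failure path must leave the
 * system in a defined state that is safe (output de-energized) and
 * secure (session locked until re-authentication). */
typedef struct {
    bool actuator_on;
    bool session_locked;   /* secure: no residual access after failure */
    bool in_safe_state;
} System;

void enter_safe_state(System *sys)
{
    sys->actuator_on = false;     /* safe: de-energize the output */
    sys->session_locked = true;   /* secure: require re-authentication */
    sys->in_safe_state = true;
}

/* Apply a 0-100% drive command; on an implausible input or a locked
 * session, fail into the safe state instead of acting on bad data. */
bool apply_command(System *sys, int level)
{
    if (sys->session_locked || level < 0 || level > 100) {
        enter_safe_state(sys);    /* refuse and fail safe */
        return false;
    }
    sys->actuator_on = (level > 0);
    sys->in_safe_state = false;
    return true;
}
```

Note that after any failure, further commands are refused until the session is re-established, so the failure mode itself cannot be leveraged to gain unintended access.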

Another good example is the failure called a buffer overflow, or buffer overrun. When a variable is declared in a software program, or some value is expected from a user or a function, the computer system reserves a place for it in memory, with enough room to store the values expected. If the user is somehow allowed to enter a value that exceeds the space reserved (e.g., too many characters in a text field, or too large a number), and the software does not prevent it (i.e., performs no boundary checking), the extra data can end up in regions of memory not meant for it. If done inadvertently, this can lead to memory or data corruption that crashes the program or, even worse, stays under the radar and silently alters the program's results. Furthermore, a malicious user can abuse this flaw by injecting specially crafted data into places in memory that specify the next step to be taken by the program, effectively taking over the execution of the software, and possibly the entire computer.
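The two patterns can be contrasted in a short C sketch (the buffer size and inputs are assumptions for illustration): the unbounded copy can spill past the reserved space, while the bounded copy truncates instead.

```c
#include <string.h>
#include <stddef.h>

/* Unsafe pattern: strcpy performs no boundary check, so input longer
 * than the destination buffer overwrites adjacent memory -- the classic
 * buffer overrun an attacker can exploit. Shown only as the vulnerable
 * pattern; never use it with untrusted input. */
void copy_unchecked(char *dst, const char *input)
{
    strcpy(dst, input);
}

/* Safe pattern: the copy is bounded by the destination size and the
 * result is always NUL-terminated; oversized input is truncated rather
 * than spilled into neighboring memory. */
void copy_bounded(char *dst, size_t dst_size, const char *input)
{
    if (dst_size == 0) {
        return;
    }
    strncpy(dst, input, dst_size - 1);
    dst[dst_size - 1] = '\0';    /* guarantee termination */
}
```

With an 8-byte destination, copy_bounded keeps only the first 7 characters of an oversized input plus the terminator, so no memory outside the reservation is ever written.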

One last example of a reliability issue turning into a security failure is that of the security system itself failing. While this seems a rather obvious example, it is also fairly common. If the part of a software product that maintains security were to fail, and the proper safeguards were not in place, many system resources would be exposed to users who were not supposed to have access to them. One of the most common failures of the security system occurs through its most vulnerable component: the user. Users may cause a failure of the security system by engaging in behavior that weakens the system as a whole, such as choosing passwords that are easy to guess, sharing passwords, leaving systems logged in, accessing questionable content on the Internet (and inadvertently installing malware or spyware on the system), or falling prey to phishing schemes. It is the responsibility of the software designers and implementers to minimize the probability of these failures (e.g., by enforcing password standards, using challenge questions, and providing server verification cues), improving both the reliability and the security of the software.
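Enforcing a password standard, one of the safeguards mentioned above, might be sketched as follows (the minimum length and character-class rules are assumptions for illustration):

```c
#include <ctype.h>
#include <string.h>
#include <stdbool.h>

/* Hypothetical password-standard check: require a minimum length plus
 * upper-case, lower-case, digit, and symbol characters, rejecting the
 * easy-to-guess passwords users tend to choose. */
bool password_meets_standard(const char *pw)
{
    bool upper = false, lower = false, digit = false, symbol = false;
    size_t len = strlen(pw);

    if (len < 12) {
        return false;            /* too short to resist guessing */
    }
    for (size_t i = 0; i < len; i++) {
        unsigned char c = (unsigned char)pw[i];
        if (isupper(c))       upper = true;
        else if (islower(c))  lower = true;
        else if (isdigit(c))  digit = true;
        else                  symbol = true;
    }
    return upper && lower && digit && symbol;
}
```

A check like this would accept a complex password such as the Yell0w$tonE79322@! example from the overview while rejecting short or single-class choices.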

Reliability Practices that Lead to Security Improvements

There are many well-known standard quality and reliability practices that reduce the number of faults and failures in a software product. Some of the most common general reliability methods can generate direct security improvements for software programs:

Documentation

Well-written documentation can assist developers and reviewers in effectively assessing which security threats have been addressed and in what manner, allowing them to propose and implement further steps to improve security.

Traceability

A database of changes made to a codebase can significantly shorten the amount of time necessary to fix a vulnerability that might have been introduced or exposed by said changes.

Auditing

Audits help determine the compliance of a software product and its development process with the appropriate standards and specifications, helping assess the degree of adherence to the security practices required.


Verification/Code Reviews

Having multiple sets of eyes examine every element of a software product, through either verification or code reviews, can lower the probability of defects in the code, including those related to security.

Validation

Validation and integration testing of the system should include foreseeable attack scenarios, helping to uncover vulnerabilities in the system.

Testing

Testing at each step of development (unit, system, integration, regression, etc.) helps, among other benefits, lower the number of defects present in the finished product, including those defects related to security.

Continuous Improvements

New security threats to software (or to hardware, though often addressable through software changes) emerge continuously, as attackers try to stay a step ahead of developers. For this reason, it is imperative to support an environment of continuous improvement, in which new security measures can be implemented as soon as a new threat is discovered.

Using Previous Knowledge

It is important to address even the best-known threats, because attackers are always trying old techniques to find easy targets. Dictionaries of measures [8] or handbooks [4, 6, 9] provide a minimum base of tools that should be used to counter security and reliability defects.

In conclusion, while software reliability and security are different concepts, they are intimately related, and no software system can be said to be secure if it is not reliable, nor reliable if it is not secure. While no modern software product of a significant size can be guaranteed to be defect free, completely reliable, and completely secure, a set of common-sense and previously documented measures, together with a high-quality development process, can generate software products with a high degree of both reliability and security.

References

[1] Software Reliability: Principles and Practices, Glenford J. Myers, John Wiley and Sons, Inc., New York, 1976.

[2] Software Engineering Quality Practices, Ronald K. Kandt, Auerbach Publications, Boca Raton, Florida, 2006.

[3] On the Use and the Performance of Software Reliability Growth Models, Peter A. Keiller and Douglas R. Miller, Software Reliability and Safety, Elsevier, 1991 (pp. 95-117).

[4] Handbook of Software Reliability Engineering, Michael R. Lyu, McGraw-Hill, New York, New York, 1995.

[5] Comparing Hardware and Software Reliability, S. J. Keene, Reliability Review 14(4), December 1994, pp. 5-7, 21.

[6] The Certified Software Quality Engineer Handbook, Linda Westfall, ASQ Quality Press, Milwaukee, Wisconsin, 2009.

[7] Metrics and Models in Software Quality Engineering, 2nd Ed., Stephen H. Kan, Addison-Wesley, Upper Saddle River, New Jersey, 2003.

[8] IEEE Standard Dictionary of Measures to Produce Reliable Software, IEEE Std. 982.1-1988, IEEE, 1989.

[9] Computer Security Handbook, 5th Ed., Seymour Bosworth, M. E. Kabay and Eric Whyne, (Editors), Wiley, New York, 2009.

For more information on Exponent capabilities, please visit our website, www.exponent.com.

About Exponent

Exponent is a leading engineering and scientific consulting firm dedicated to providing solutions to complex problems.

Exponent’s team of electrical engineers, computer engineers, and computer scientists performs investigations in a wide array of areas including electric power systems, semiconductor devices, computer networks, security, and software. We operate laboratories for testing both heavy equipment and light electronic equipment, and we analyze electric power systems, circuits, and other equipment configurations.

Contact

Shukri J. Souri, Ph.D.
Corporate Vice President & Practice Director

(212) 895-8126 [email protected]

