

Requirements Engineering https://doi.org/10.1007/s00766-018-0307-0

ORIGINAL ARTICLE

Extracting core requirements for software product lines

Iris Reinhartz‑Berger1 · Mark Kemelman1

Received: 28 August 2017 / Accepted: 14 December 2018 © Springer-Verlag London Ltd., part of Springer Nature 2019

Abstract

Software Product Line Engineering (SPLE) is a promising paradigm for reusing knowledge and artifacts among similar software products. However, SPLE methods and techniques require a high up-front investment and hence are profitable if several similar software products are developed. Thus in practice adoption of SPLE commonly takes a bottom-up approach, in which analyzing the commonality and variability of existing products and transforming them into reusable ones (termed core assets) are needed. These time-consuming and error-prone tasks call for automation. The literature partially deals with solutions for early software development stages, mainly in the form of variability analysis. We aim for further creation of core requirements—reusable requirements that can be adapted for different software products. To this end, we introduce an automated extractive method, named CoreReq, to generate core requirements from product requirements written in a natural language. The approach clusters similar requirements, captures variable parts utilizing natural language processing techniques, and generates core requirements following an ontological variability framework. Focusing on cloning scenarios, we evaluated CoreReq through examples and a controlled experiment. Based on the results, we claim that core requirements generation with CoreReq is feasible and usable for specifying requirements of new similar products in cloning scenarios.

Keywords Software Product Line Engineering · Systematic reuse · Requirements specification · Variability analysis

1 Introduction

Development of software commonly reuses existing artifacts. The reuse may be done ad hoc (“clone-and-own”) or systematically. Targeting toward systematic reuse, the field of Software Product Line Engineering (SPLE) [7, 32] emerged for developing reusable artifacts (known as core assets or domain artifacts) and guiding their reuse in particular software products. To this end, known software engineering processes, such as requirements engineering, design, and implementation, are conducted on families of software products, called Software Product Lines (SPL), rather than on individual products. These are considered domain engineering activities, whose artifacts (i.e., the core assets) can be adapted to create individual software products through application engineering activities. The adaptation is typically done by selecting relevant core assets, customizing them to satisfy the particular requirements, and extending the created product artifacts such that they do not resemble the core assets that spawned or generated them [14].

SPLE techniques have the potential to decrease time-to-market and increase product quality, yet they require a high up-front investment, particularly in the development of core assets [32]. Hence, SPLE techniques are commonly adopted in a bottom-up approach, either in an extractive or reactive manner, namely after several similar product variants have been created [4]. In these scenarios, creation of core assets based on existing product artifacts is feasible and may help improve future development and maintenance (see a recent survey of methods for reengineering legacy applications into SPLs in [2]). These studies address different development stages, but most notably requirements engineering (mainly through functional requirements) and implementation (code). Nevertheless, the actual creation of core assets from existing artifacts (the transformation part) is under-studied [2], especially in the early development stages.

The aim of our research is to address this gap in the context of requirements engineering. Requirements play a central role in many development processes: they serve for communication between clients, users, and developers, and they are associated with various development artifacts, including design, code, and testing artifacts. Early software reuse (i.e., requirements reuse) is considered the most beneficial form of software reuse [17]. We introduce in this paper an automatic extractive method, named CoreReq, for generating core requirements—reusable requirements that can be adapted for different members of an SPL—from existing product requirements. The method is based on an ontological framework for analyzing variability suggested in [34]. This framework refers to two dimensions, element and product, and arranges known variability aids (such as optionality, variants, and extensions) along these dimensions. The newly introduced CoreReq method analyzes the structure of individual (product) requirements using natural language processing (NLP) techniques, compares them using semantic measures, categorizes similar requirements according to the framework’s variability dimensions, and creates core requirements that capture both commonality and variability.

* Iris Reinhartz-Berger [email protected]

Mark Kemelman [email protected]

1 Department of Information Systems, University of Haifa, Haifa, Israel
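CoreReq's overall flow as described above (cluster similar product requirements, then derive core requirements from each cluster) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses character-level string similarity in place of CoreReq's semantic measures and SRL analysis, and the threshold value is arbitrary.

```python
from difflib import SequenceMatcher

def cluster_requirements(reqs, threshold=0.6):
    """Greedy one-pass clustering: a requirement joins the first cluster
    whose seed requirement is similar enough, otherwise it opens a new
    cluster. (Sketch only; threshold and similarity are placeholders.)"""
    clusters = []
    for r in reqs:
        for c in clusters:
            if SequenceMatcher(None, c[0].lower(), r.lower()).ratio() >= threshold:
                c.append(r)
                break
        else:
            clusters.append([r])
    return clusters

reqs = [
    "When a buyer buys a book, she inserts her identification number.",
    "When a borrower borrows a book copy, she inserts her identification number.",
    "The system shall export reports as PDF.",
]
for cluster in cluster_requirements(reqs):
    print(cluster)
```

The two check-out requirements land in one cluster and the unrelated reporting requirement opens its own, which is the grouping a core requirement would later be generated from.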

The contribution of our work is twofold. With respect to domain engineering activities, CoreReq automatically generates core requirements from product requirements. With respect to application engineering activities, CoreReq guides the use (i.e., systematizes the reuse) of the generated core requirements for specifying new software products in the same SPL. Focusing on cloning scenarios, which are common practices in requirements engineering [9], we discuss the feasibility and usability of CoreReq (through examples and a controlled experiment, respectively).

The rest of the paper is structured as follows. Section 2 reviews studies that address to some extent variability analysis and core asset creation, focusing on requirements engineering outcomes. Section 3 presents a running example which motivates the need for our method and is later used for demonstrating its stages. Section 4 briefly presents the variability framework and its theoretical foundations, while Sect. 5 elaborates on the CoreReq method. Section 6 presents the objectives, design, results, and conclusions of the method evaluations, and, finally, Sect. 7 summarizes the research and raises issues for future research.

2 Related work

The literature on reuse and variability in software engineering in general and within requirements engineering in particular, as well as on the strengths and limitations of NLP techniques in this context, is enormous. We narrow down our literature review to studies which deal with reengineering applications, software products, and systems into SPLs in order to increase reuse and improve development and maintenance. A recent systematic mapping (from 2017) [2] refers to 119 studies and provides an overview of the current state of the art. Three phases of reengineering are identified in that survey: (1) detection—relevant information is extracted from the input artifacts to understand the existing structure, data flow, relationships, features, and so on; (2) analysis—the information discovered is used to infer, design, and organize new partitions that cluster the features; and (3) transformation—the considered artifacts are changed in order to enable their systematic reuse. Here, we elaborate on these phases with respect to requirements engineering. Particularly, Sect. 2.1 refers to generation of variability models from requirements (the detection and analysis phases) and Sect. 2.2 reviews studies that address the creation or representation of core requirements (potentially tackling some aspects of the transformation phase). To complete the needed background, Sect. 2.3 briefly reviews typical uses of NLP techniques in requirements engineering in general.

2.1 Generating variability models from requirements

Variability analysis identifies and determines the differences among products [32]. Its outcomes are variability models, which commonly use the notation of feature diagrams, both in research [6] and in industry [4]. Feature diagrams are tree or graph structures whose nodes are features—prominent or distinctive user-visible aspects, qualities, or characteristics of software systems or products—and whose edges are relationships and dependencies that represent differences among products (including mandatory vs. optional features and variants) [19].

In order to automate variability analysis, different artifacts, including requirements, have been used to generate variability models. The systematic review in [3] concentrates on natural language requirements as inputs and identifies four phases of detecting and analyzing variability: (1) Requirement assessment—scraping product descriptions or retrieving legacy documents; (2) Term extraction—using text processing techniques, such as tokenization, part-of-speech tagging, stemming, and term occurrences; (3) Feature identification—using similarity measures and clustering algorithms; and (4) Feature diagram (or variability model) formation—converting clusters to models. The inputs of the reviewed studies are commonly Software Requirements Specifications (SRS), but can also be product descriptions, brochures, or user comments. The outputs are typically feature diagrams [19], but can also be clustered requirements, keywords, or direct objects.
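The term-extraction phase (2) can be made concrete with a deliberately crude sketch; the stopword list and suffix-stripping rules below are invented for this example and are far simpler than the tagging and stemming tools the reviewed studies actually use.

```python
import re
from collections import Counter

# Invented, tiny stopword list for the sketch.
STOPWORDS = {"the", "a", "an", "when", "she", "her", "of", "and", "shall"}

def extract_terms(requirement):
    """Phase 2 (term extraction): tokenize, lowercase, drop stopwords,
    and apply a naive suffix-stripping stemmer."""
    tokens = re.findall(r"[a-z]+", requirement.lower())
    terms = []
    for t in tokens:
        if t in STOPWORDS:
            continue
        for suffix in ("ing", "es", "s"):   # crude stand-in for stemming
            if t.endswith(suffix) and len(t) > len(suffix) + 2:
                t = t[: -len(suffix)]
                break
        terms.append(t)
    return terms

reqs = [
    "When a borrower borrows a book copy, she inserts her identification number.",
    "When a client rents a car, she inserts the requested dates.",
]
# Term occurrences across products feed the feature-identification phase (3).
occurrences = Counter(t for r in reqs for t in extract_terms(r))
print(occurrences.most_common(3))
```

Terms that recur across products (here the stem "insert") are exactly the candidates that the later clustering phase groups into features.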

The systematic literature review in [11] investigates variability handling in major software engineering phases including requirements engineering. Based on the findings, the authors propose eight dimensions of variability in software engineering. These dimensions are grouped into two clusters: type of variability—dealing with the introduction and specification of variability (requirement type, representation, artifact, and orthogonality), and mechanisms of variability—referring to the way variability is actually realized (trigger, realization technique, time of binding, and automation).

In [18], another comparison of variability analysis methods is done, based on the techniques used for calculating similarity, for structuring hierarchy, and for inferring variability. The perspectives (i.e., variability aspects) these methods highlight in the generated outputs are further discussed. Three categories of studies are analyzed: studies assuming unique terminology (meaning that similar features have the same name), studies using syntactic or semantic measures, and studies that do not use specific similarity metrics. The authors additionally observe that the reviewed studies generate outcomes that are derived from a predefined perspective of variability (most notably using nouns, verbs, or verbs and direct objects).

The generated variability models in all the studies reviewed in the aforementioned works depict the similarities and differences among the various product artifacts. They commonly treat whole requirements and associate them with features, which can be mandatory, optional, alternative, and so on. These approaches do not deal with the actual creation of core requirements.

2.2 Creation and representation of core requirements

As noted, creation of core assets is important for improving application engineering activities, but the transformation of product artifacts into core assets is under-studied [2] in software engineering in general and in requirements engineering in particular. Here we discuss a few studies that handle this challenge to some extent.

Moon et al. [29] proposed a method for collecting and generalizing requirements for families of similar systems. The method consists of scoping, identifying, and refining domain requirements, as well as developing a domain use-case model. Commonality decisions are guided using context and use-case matrices, and the variabilities in the requirements models are categorized into four types: three refer to behavioral aspects (computation, external computation, and control) and one to structural aspects (data). The method requires considerable manual work, especially for identifying and generalizing the domain requirements. We aim at automating the generation of core requirements, based on available product requirements.

Rubin et al. [35, 36] suggest a set of seven conceptual operators for managing cloned product variants that support: (1) the unification of the cloned variants into single-copy representations promoted by SPLE methods and (2) the construction of a management infrastructure on top of existing variants. Here, too, some of the suggested operators require involvement and refinement by domain experts.

Martinez et al. [25] propose a bottom-up approach to SPLE which can be applied to different types of artifacts, including requirements. Particularly, the requirement adapter of the approach uses requirements specified in the Requirements Interchange Format (ReqIF)1 as inputs, identifies the requirement elements based on the attributes and internal information in these files, computes similarity using the measure by Wu and Palmer [43],2 and synthesizes a feature model that captures the variability among software products or systems [1]. The approach requires involvement of domain experts in feature identification and location, and the inputs need to be provided in (or transformed into) ReqIF format. We target requirements written in a natural language, without making assumptions on their structure or content.
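The Wu and Palmer similarity (roughly, 2·depth(LCS) / (depth(a) + depth(b)) over a concept hierarchy) can be illustrated on a tiny hand-made is-a taxonomy; the real measure operates over WordNet, and the hierarchy below is invented for the sketch.

```python
# Toy is-a taxonomy: concept -> parent (the root "entity" has no parent).
PARENT = {
    "entity": None,
    "item": "entity",
    "publication": "item",
    "book": "publication",
    "vehicle": "item",
    "car": "vehicle",
}

def path_to_root(concept):
    """All ancestors of a concept, starting from the concept itself."""
    path = [concept]
    while PARENT[concept] is not None:
        concept = PARENT[concept]
        path.append(concept)
    return path

def depth(concept):
    return len(path_to_root(concept))  # the root has depth 1

def wu_palmer(a, b):
    """2 * depth(LCS) / (depth(a) + depth(b)); LCS is the least common
    subsumer, i.e., the deepest shared ancestor."""
    ancestors_a = set(path_to_root(a))
    lcs = next(c for c in path_to_root(b) if c in ancestors_a)
    return 2 * depth(lcs) / (depth(a) + depth(b))

print(wu_palmer("book", "car"))  # shared ancestor is "item"
```

In this toy hierarchy "book" and "car" meet at "item", giving a similarity of 0.5, while identical concepts score 1.0, matching the [0, 1] range the measure is used for.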

2.3 Uses of NLP in requirements engineering

The applications of NLP for requirements engineering are studied in [30]. Six possibly overlapping categories of studies are identified:

1. Classification: NLP can be used for classifying or categorizing requirements. The work in [40], for example, suggests a framework to automatically detect and classify non-functional requirements from textual natural language requirements. The work in [39] categorizes the requirements into critical, normal, and optional ones, based on a rule set that makes the distinction according to the verbs used in the sentences.

2. Prioritization: NLP can be used for extracting the importance of requirements and prioritizing them. The work in [26], for example, proposes a method for requirements prioritization and selection based on NLP and satisfiability modulo theories solvers. The work in [5] introduces two methods: one for classifying customer demands using NLP techniques in order to obtain customer expectations and the other for determining the revised priority of the customer demands using a fuzzy logic inference.

3. Ambiguity removal: The work in [38] surveys and analyzes approaches for resolving different types of ambiguities, including linguistic and conceptual ones, in natural language software requirements. Ambiguities can be resolved using checklist-based inspection, style guides, controlled language, knowledge-based, and heuristics-based approaches.

1 See http://www.omg.org/spec/ReqIF.
2 The measure by Wu and Palmer calculates similarity by considering the depths of the concepts in WordNet, a large, well-known lexical database for the English language, along with the depth of their Least Common Subsumer (LCS) concepts.

4. Requirements elicitation: Utilization of NLP techniques can help automatically or semi-automatically elicit requirements from textual descriptions. The work in [31], for example, identifies three core properties of domain ontologies suitable for requirements elicitation: explicit relational expression, qualified relation identification, and explicit temporal and spatial expressions. A rule-based approach is suggested for building such domain ontologies from natural language technical documents. The works in [16, 27] use NLP for extracting formal representations of the requirements.

5. Requirements assessment: Quality of requirements is defined in terms of different desirable properties, including completeness, consistency, unambiguity, understandability, validability, verifiability, modifiability, traceability, abstraction, precision, and atomicity [12]. NLP techniques can be used for evaluating requirements quality. The work in [12], for example, presents some morphological, lexical, analytical, and relational indicators for measuring quality in textual requirements.

6. Requirements analysis: NLP techniques can help extract information and synthesize models. The work in [8], for example, generates UML models (including use-case, analysis class, collaboration, and design class diagrams) from natural language requirements using a set of syntactic reconstruction rules. The work in [15] proposes a tool-supported method to facilitate the requirements analysis process and class diagram extraction from textual requirements supporting NLP and domain ontology techniques.
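As a toy illustration of the verb-driven classification mentioned in category 1 above, the sketch below decides a category from modal verbs; the rule set is invented for this example and is not the one used in [39].

```python
def classify_requirement(text):
    """Illustrative rule set: the modal verb decides the category.
    (The mapping shall/must -> critical, should -> normal, may/can ->
    optional is an assumption made for this sketch.)"""
    words = text.lower().split()
    if "shall" in words or "must" in words:
        return "critical"
    if "should" in words:
        return "normal"
    if "may" in words or "can" in words:
        return "optional"
    return "normal"  # default when no modal verb is present

print(classify_requirement("The system shall log every failed login."))
print(classify_requirement("The system may cache recent queries."))
```

Real rule sets of this kind consider full verb phrases and sentence structure rather than bare keyword lookup, but the decision shape is the same.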

Many of the approaches use tokenization, part-of-speech tagging, text chunking, and other parsing techniques [30]. We also use these NLP techniques, but for generating core requirements from existing ones (see next for a motivating example). Particularly, we use a Semantic Role Labeling (SRL) technique [13], an NLP technique that labels constituents of a phrase with their semantic roles in the phrase (Sect. 5.2 elaborates on our adoption of this technique to requirements engineering).
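To make the SRL idea concrete, the frames below show the kind of PropBank-style role assignment an SRL tool might produce for check-out clauses; the frames here are hand-written for illustration, not the output of an actual model.

```python
# Hand-written PropBank-style frames (not real SRL output): V is the
# predicate, A0 the agent, A1 the patient of the labeled clause.
srl_frame = {
    "V":  "buys",      # predicate
    "A0": "a buyer",   # agent: who performs the action
    "A1": "a book",    # patient: what the action is applied to
}

def roles_match(frame_a, frame_b):
    """Two clauses are structurally comparable if they fill the same
    set of semantic roles, regardless of the words that fill them."""
    return set(frame_a) == set(frame_b)

other = {"V": "rents", "A0": "a client", "A1": "a car"}
print(roles_match(srl_frame, other))
```

Clauses whose frames fill the same roles ("a buyer buys a book" and "a client rents a car") can then be compared role by role, which is what makes role labels useful for spotting variant slots.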

3 Running example: the Check‑In Check‑Out (CICO) domain

Reuse of requirements is very common. Empirically studying 11 real-world requirements specifications, Domann et al. [9] found that “a considerable amount of cloning exists” and that “most clones are indeed created through copy and paste and do not occur coincidentally.” For motivating the need for automatic generation of core requirements and for exemplifying the proposed method later, consider the following scenario: a software house has developed a management application for a second hand book shop. Its new client is a library which is interested in an application for borrowing and returning books. The software house reuses requirements from the second hand book shop, because of the similarities in items—books. The third and fourth clients, who are interested in a hotel management application and a car rental application, respectively, impose short schedules, and hence the software house tries to reuse requirements as much as possible, especially those dealing with borrowing and returning of items. Only after the development of the four applications does the software house realize that it specializes in the domain of Check-In Check-Out (CICO), namely, applications that deal with items (such as books, cars, and rooms) and provide functions for their checking-out (borrowing) and checking-in (returning). Traditionally, this domain cannot be considered a product line, since the applications are not very close together. Still, there are similarities between their requirements due to the process of elicitation and the common domain. Table 1 presents a check-out requirement in the four applications: buying a second hand book, borrowing a book copy in a library, ordering a room, and renting a car.

Reusing these requirements in future applications requires different types of adaptation and may be error-prone. Hence, the software house could benefit if it had a core requirement specifying the mandatory, optional, and variant parts of a

Table 1 Four similar requirements of item checking-out function

App. Requirement

1-Second Hand Book Shop: When a buyer buys a book, she inserts her identification number. The system displays the tracking number and updates the number of available copies.

2-Library: When a borrower borrows a book copy, she inserts her identification number. The system suggests related books and updates the number of available book copies.

3-Hotel: When a client orders a room, she inserts the requested dates. The system displays the reservation number and updates the number of available rooms.

4-Car Rental Agency: When a client rents a car, she inserts the requested dates. The system displays the rented car number and updates the number of available cars.


checking-out operation, based on the existing requirements. Such a core requirement can have the following form:

When a <stakeholder> <action> <item>, she inserts <input>. The system [displays <output> and] updates <info>.

The mandatory parts that specify commonality appear in bold, while the optional parts appear between square brackets []. Note that the second requirement (from the Library application) refers to retrieval functionality (“suggests related books”) rather than to displaying. Thus, displaying is considered optional. The retrieval functionality is not percolated to the core requirement, as it appears only in one requirement and is therefore considered an application-specific extension.

The variants, which appear between angle brackets—<>, can be further specified as follows:

• stakeholder: {buyer, borrower, client, …}
• action: {buys, borrows, orders, rents, …}
• item: {a book, a book copy, a room, a car, …}
• input: {her identification number, the requested dates, …}
• output: {the tracking number, the reservation number, the rented car number, …}
• info: {the number of available copies, the number of available book copies, the number of available rooms, the number of available cars, …}

The differences among variants in our example have several sources. First, the terminology differs: the stakeholder is buyer, borrower, or client and the action is buy, borrow, order, or rent. Second, the item being checked-out is different: book, room, or car. Third, the inputs provided by the stakeholders are different: stakeholder identification number or requested dates. Finally, the system provides different outputs: tracking number, reservation number, or car number.

Our aim is to automatically generate such core requirements from existing requirements, without using any predefined templates.
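The core of such template generation can be sketched with word-level diffing of two similar requirements: shared runs of words stay literal and differing runs become placeholders. This is an illustration only; CoreReq relies on NLP and semantic similarity, and its placeholders carry semantic names (stakeholder, action, item) rather than the numbered slots used here.

```python
from difflib import SequenceMatcher

def template_of(req_a, req_b):
    """Align two similar requirements word by word: equal runs are kept
    verbatim, differing runs are replaced by a numbered variant slot."""
    a, b = req_a.split(), req_b.split()
    out, slot = [], 0
    for tag, i1, i2, _j1, _j2 in SequenceMatcher(None, a, b).get_opcodes():
        if tag == "equal":
            out.extend(a[i1:i2])
        else:
            slot += 1
            out.append(f"<v{slot}>")
    return " ".join(out)

print(template_of(
    "When a client orders a room, she inserts the requested dates.",
    "When a client rents a car, she inserts the requested dates.",
))
```

The common prefix and suffix survive intact while the verb and item positions become slots, which is exactly the bold-versus-placeholder split shown in the core requirement above.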

4 Variability framework and its theoretical foundations

Before presenting our approach for core requirements generation, we first provide an overview of its formal foundations—based on an ontological variability framework suggested in [34]. The framework is generic and can be adopted for a variety of software artifacts, such as requirement documents, design models, and code. These artifacts are composed of elements, e.g., requirements in requirement documents, and classes in object-oriented design and implementation, whose similarities and differences are identified and analyzed. In Sect. 5, we elaborate how this framework is utilized for generating core requirements.

The variability framework [34] is based on elements mapping. We distinguish between two types of elements: product elements and core elements. Product elements are elements used in the context of a specific product (e.g., the requirements in Table 1), while core elements are elements intended to be reused in different products (e.g., the core requirement given below Table 1). We denote by PAi the set of all product elements of product i and by CA the set of all core elements.

The mapping CA → PAi maps a core element in CA to a product element in PAi if the latter can be obtained from the former by introducing modifications that concretize the core element and do not violate its specification.

Given a set of products {P1,…, Pn} and a set of core elements CA, we are interested in the set of mappings {CA → PAi}i=1…n. Returning to our running example, consider the phrase “a <stakeholder> <action> <item>” as a core element (part of a requirement) describing the precondition for the checking-out operation. This core element is mapped in our four mappings (one for each application) to the following product elements: (1) “a buyer buys a book” in the first application, (2) “a borrower borrows a book copy” in the second application, (3) “a client orders a room” in the third application, and (4) “a client rents a car” in the fourth application. All these cases can be considered concretizations of the core element, as they all deal with a stakeholder that performs some action on an object in order to trigger the checking-out operation.

Based on the aforementioned mappings, the variability framework is composed of two dimensions. The first one—product—focuses on the questions: Which core elements are reused? Which product elements are obtained by reuse? The second dimension—element—refers to how a single core element varies to create a single product counterpart.

With respect to the product dimension, we utilize two useful notions of mappings: non-total and non-onto sets. The non-total set of a similarity mapping CA → PAi is the set of all core elements in CA that have no corresponding product element in PAi. The non-onto set is the set of all product elements in PAi that have no corresponding core element in CA. Looking at a set of mappings {CA → PAi}i=1…n, we distinguish between the following cases in the product dimension (see Table 2):

1. Mandatory A core element x ∈ CA which is reused in all products. This holds whenever for every product i, there is ei ∈ PAi such that x is mapped to ei (in other words, x ∉ NonTotal of any similarity mapping CA → PAi). An example of a mandatory element in our CICO case is the aforementioned precondition: “a <stakeholder> <action> <item>.”


2. Optional A core element y ∈ CA which is reused in some but not all products. This holds when for some product i, there is no ei ∈ PAi such that y is mapped to ei (in other words, y ∈ NonTotal of a similarity mapping CA → PAi). The displaying functionality (“the system displays <output>”) in the CICO example is an optional element, as it appears in three out of the four input applications.

3. Extension A product element zi ∈ PAi which is a product-specific addition, i.e., is not derived (reused) from a core element. This holds whenever there is no c ∈ CA such that c is mapped to zi (in other words, zi ∈ NonOnto of the similarity mapping CA → PAi). The retrieval functionality that appears only in the library application (“the system suggests related books”) is a (product-specific) extension.
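These three cases can be checked mechanically once the mappings are available. The following sketch is our illustration only; the paper prescribes no implementation, and all names are ours. It classifies core elements as mandatory or optional and flags product elements as extensions, given one mapping per product:

```python
def classify_product_dimension(core_elements, products):
    """Classify elements along the product dimension.

    products: one (product_elements, mapping) pair per product, where
    mapping maps a core element to the product element derived from it.
    An absent key means the core element is not reused, i.e., it is in
    the NonTotal set of that similarity mapping.
    """
    n = len(products)
    reuse_count = {x: sum(1 for _, m in products if x in m)
                   for x in core_elements}
    mandatory = {x for x, c in reuse_count.items() if c == n}
    optional = {x for x, c in reuse_count.items() if 0 < c < n}
    # a product element is an extension if no core element maps to it
    # (it belongs to the NonOnto set of its similarity mapping)
    extensions = []
    for i, (elements, mapping) in enumerate(products):
        derived = set(mapping.values())
        extensions += [(i, e) for e in elements if e not in derived]
    return mandatory, optional, extensions
```

In the CICO example, the precondition element would come out mandatory (reused in all four mappings), the displaying element optional, and the library's retrieval element an extension.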

With respect to the element dimension, we utilize similarity metrics to estimate the degree of adaptation required for reusing a core element and creating a product element from it. For simplicity, we assume the existence of a similarity mapping sim: CA × ⋃i=1…n PAi → [0, 1], where 1 means identical (no adaptation is needed) and 0 indicates completely different elements. Section 5 elaborates how this mapping can be achieved in practice for requirements.

We distinguish between the following cases of reuse with respect to the element dimension:

1. Common: The product element is identical to the core element (sim = 1).

2. Variant: The product element is a variant of the core element (1 > sim ≥ TH, where TH is a predefined threshold).

The element dimension is characterized by the properties of the chosen similarity metric. Here we only distinguish between two types of relationships: the product element is identical or similar to the core element (see Table 3). If none of them holds (i.e., the elements are different), we assume that no reuse is feasible and the product element is product-specific (see the extension option in the product dimension). The common elements in the CICO example are marked in bold in the core requirement (see Sect. 3), whereas the variants have placeholders within angle brackets (<>), e.g., stakeholder, action, and item.
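Combined with a similarity mapping, the two relationships reduce to a simple threshold rule. A minimal sketch of ours follows; the threshold value 0.6 is an assumption for illustration, since the paper leaves TH as a parameter:

```python
def element_relation(sim, threshold=0.6):
    """Element-dimension relation between a core element and a product
    element, derived from their similarity score."""
    if sim == 1.0:
        return "common"     # identical: no adaptation needed
    if sim >= threshold:
        return "variant"    # similar: the core element is adapted
    return "different"      # no reuse is feasible (product-specific)
```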

5 CoreReq for core requirements generation

The framework presented above provides a concrete way of representing the association between a core asset and its product artifacts while abstracting away the details of the representation language. For guiding the creation of requirements based on similar, previously developed systems, our CoreReq method is composed of three activities, which are depicted in gray in Fig. 1 and elaborated next: clustering similar requirements, capturing variable parts, and generating core requirements following the variability framework presented in Sect. 4. At the end of this section, we further refer to a prototype tool that implements the method.

Table 2 The product dimension

Table 3 The element dimension

5.1 Clustering similar requirements

The inputs of CoreReq are requirements documents written in a natural language. In the first step, the products’ requirements are clustered using a hierarchical agglomerative clustering algorithm [21], which builds a bottom-up hierarchy of similar elements (requirements in our case). In each iteration, the closest clusters are merged, where the distance between clusters is computed as the distance between the farthest elements (least similar requirements) in the corresponding clusters. This kind of distance computation has been shown to provide compact and conservative clusters [24]. Merging stops when the distance between clusters exceeds some predefined threshold.
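The clustering step can be sketched in a few lines. The version below is our pure-Python illustration of complete-linkage (farthest-element) agglomerative clustering driven by a similarity threshold; the tool itself implements the algorithm of [21]:

```python
def agglomerative_clusters(items, sim, sim_threshold=0.6):
    """Bottom-up complete-linkage clustering (illustrative sketch).

    sim(a, b) returns a similarity in [0, 1]. Two clusters are scored
    by their *least* similar pair of elements (complete linkage); the
    closest pair of clusters is merged repeatedly until no pair of
    clusters reaches sim_threshold.
    """
    clusters = [[x] for x in items]
    while len(clusters) > 1:
        best, pair = -1.0, None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                link = min(sim(a, b)
                           for a in clusters[i] for b in clusters[j])
                if link > best:
                    best, pair = link, (i, j)
        if best < sim_threshold:     # stopping criterion
            break
        i, j = pair
        clusters[i] += clusters.pop(j)   # merge the closest clusters
    return clusters
```

With the pairwise similarities of Table 4 and a threshold of 0.6, all four checking-out requirements end up in a single cluster; raising the threshold to 0.8 leaves only Room ordering and Car renting merged.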

For measuring distance among elements, different measures can be used, e.g., Latent Semantic Analysis (LSA) [22], which is a corpus-based semantic technique, and Semantic and Ontological Variability Analysis (SOVA), which introduces ontological considerations and is tailored to analyze variability of functional requirements [18, 33]. We chose to particularly use a well-known knowledge-based semantic measure, suggested by Mihalcea, Corley, and Strapparava (MCS) [28]. This measure uses information drawn from semantic networks and has been shown to provide good results for measuring the semantic similarity of short texts (individual requirements in our case) [28].
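Schematically, MCS matches each word of one text to its most similar word in the other, weights the matches by idf, and averages the two directions. The sketch below is our simplified rendering: the word measure is pluggable (a knowledge-based measure such as Wu and Palmer [43] would be used in practice), and uniform idf weights are a simplifying assumption:

```python
def mcs_similarity(t1, t2, word_sim, idf=lambda w: 1.0):
    """Text-to-text similarity in the spirit of MCS [28]: match each
    word to its best counterpart in the other text, weight by idf,
    and average the two directions. t1 and t2 are non-empty word
    lists; word_sim(w, v) returns a score in [0, 1]."""
    def directed(src, dst):
        num = sum(max(word_sim(w, v) for v in dst) * idf(w) for w in src)
        den = sum(idf(w) for w in src)
        return num / den
    return 0.5 * (directed(t1, t2) + directed(t2, t1))
```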

Each of the emerging clusters is later associated with a core requirement.

As an example of this step, consider the requirements presented in Table 1. Table 4 shows their pairwise similarity values. Assuming a similarity threshold of 0.6, these requirements are clustered into one cluster as follows. First, Room ordering of Hotel and Car renting of Car Rental Agency are clustered due to their highest similarity value. Then, Book buying of Second Hand Book Shop is added due to its high similarity value to the previously clustered requirements. Finally, Copy borrowing of Library is added for a similar reason. The emerged cluster deals with checking-out items. We next analyze the common and variable parts of requirements jointly clustered.

Fig. 1 The process of the CoreReq method (using OPM notation [10]). [The figure depicts the three activities in gray: (1) clustering similar requirements, (2) capturing variable parts, and (3) generating core requirements, connected to their inputs and outputs (requirements of Products 1…n, clusters of similar requirements, common and variable parts of requirements, core requirements) and to their instruments (similarity measures, hierarchical agglomerative clustering, Semantic Role Labeling, NLP techniques, framework dimensions).]

Table 4 Similarity values and clustering order of the requirements from Table 1

First requirement                      Second requirement                  Similarity value  Clustering step
Book buying of second hand book shop   Copy borrowing of library           0.765             3
Book buying of second hand book shop   Room ordering of hotel              0.758             2
Book buying of second hand book shop   Car renting of car rental agency    0.782             2
Copy borrowing of library              Room ordering of hotel              0.640             3
Copy borrowing of library              Car renting of car rental agency    0.695             3
Room ordering of hotel                 Car renting of car rental agency    0.809             1

5.2 Capturing variable parts

In order to capture the variable parts of requirements belonging to the same cluster, we parse them utilizing the Semantic Role Labeling (SRL) technique [13]. This is an NLP technique that labels constituents of a phrase with their semantic roles in the phrase. We primarily refer to three semantic roles that are common in textual descriptions in general: (1) Agent—Who performs? (2) Action—What is performed? (3) Object—On what object is it performed? A single requirement may include several (ordered) triplets of the form (agent, action, object). We call these triplets requirement vectors. Some of the constituents of a requirement vector may be missing, e.g., the agent in passive sentences or the object for “simple” actions, such as paying. Note that although these roles are intuitively related to functional requirements, they are also very relevant for non-functional requirements. Table 5 exemplifies the parsing outcomes for a functional requirement (from our CICO example) and three non-functional requirements.

After parsing the requirements in the same cluster, CoreReq has to map similar requirement vectors. To this end, we use semantic similarity measures for comparing semantic roles (particularly, MCS [28] for phrases and Wu and Palmer [43] for words), as well as weighted averages for calculating the similarity of requirement vectors.
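For illustration, a requirement vector can be held as an (agent, action, object) triplet and two vectors compared by a weighted average of their per-role similarities. The equal role weights below are our assumption; the paper does not report its weighting scheme:

```python
from collections import namedtuple

RequirementVector = namedtuple("RequirementVector", "agent action object")

def vector_similarity(v1, v2, role_sim, weights=(1.0, 1.0, 1.0)):
    """Weighted average of per-role similarities between two
    requirement vectors. role_sim(a, b) returns a score in [0, 1].
    Roles missing in both vectors are skipped; a role present in
    only one vector scores 0."""
    total, weight_sum = 0.0, 0.0
    for w, a, b in zip(weights, v1, v2):
        if a is None and b is None:
            continue            # e.g., missing agent in passive voice
        both = a is not None and b is not None
        total += w * (role_sim(a, b) if both else 0.0)
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0
```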

Considering the borrowing requirement of the Library application and the renting requirement of the Car Rental application in our CICO example, we obtain the following mappings:

• “borrower borrows a book copy” and “client rents a car”—similar agents (0.76), similar actions (0.67)

• “borrower inserts her identification number” and “client inserts the requested dates”—similar agents (0.76), identical actions

• “borrower updates the number of available book copies” and “client updates the number of available cars”—similar agents (0.76), identical actions, similar objects (0.82)

The third requirement vectors of both systems (“the system suggests related books” and “the system displays the rented car number,” respectively) have no counterparts in the other application, as only the agents are identical. However, considering the additional applications (of the Hotel and the Second Hand Book Shop, presented in Table 1), it can be noticed that variants of the displaying requirement vector (the third vector in the Car Rental application) appear in other applications as well, while the retrieval requirement vector (the third vector in the Library application) is unique to that system.

5.3 Generating core requirements

After mapping the similar requirement vectors in each cluster, CoreReq generates core requirements (one for each cluster) using the variability framework discussed in Sect. 4.

With respect to the product dimension:

Table 5 Examples of requirements parsing using SRL

Requirement text: When a buyer buys a book, she inserts her identification number. The system displays the tracking number and updates the number of available copies.
Requirement type: Functional requirement
Parsing (agent, action, object):
  (Buyer, Buy, Book)
  (Buyer(a), Insert, Buyer(a) identification number)
  (System, Display, Tracking number)
  (System, Update, Number of available copies)

Requirement text: Time of changes to data must be recorded to the nearest second.
Requirement type: Precision (non-functional) requirement
Parsing: (-, Record, Time of changes to data); additional: To the nearest second

Requirement text: Only registered users can update data in the system.
Requirement type: Security (non-functional) requirement
Parsing: (Only registered users, Update, Data); additional: In the system

Requirement text: Full history of all changes must be maintained.
Requirement type: Maintainability (non-functional) requirement
Parsing: (-, Maintain, Full history of all changes)

(a) These replacements of pronouns by their anaphors (i.e., the nouns to which they refer) are done automatically utilizing a coreference resolution algorithm [23]

Page 9: Etrac or equiremen twar oducdownload.xuebalib.com/6f6erhnS9Fvm.pdf · Requirements Engineering 1 3 communicationbetweenclients,users,anddevelopers,and theyareassociatedtovariousdevelopmentartifacts,includ-ingdesign,code

Requirements Engineering

1 3

• Similar requirement vectors which appear in all analyzed products are defined as mandatory parts of the core requirements.

• Similar requirement vectors which appear in a significant number of products (e.g., half of them) are considered optional parts.

• All other similar requirement vectors are treated as product-specific extensions, even if appearing in more than one analyzed product (as long as the number of occurrences does not exceed the predefined threshold for determining optional parts).

Table 6 (the rightmost column) presents the product dimension variability for the requirement vectors extracted from the four requirements of checking-out items (listed in Table 1).

With respect to the element dimension, the semantic parts (roles) of requirement vectors which have identical values are classified as common, identifying the anchors of the core requirements. All other similar semantic parts are considered variants (see examples in the second rightmost column of Table 6). The variants may be the result of syntactic, semantic, or domain-specific differences among the input product requirements.
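For a single semantic role, this classification reads directly off the per-product values of the mapped vectors. A minimal sketch of ours:

```python
def role_classification(values):
    """Classify one semantic role across products: if all products
    that contain the vector use an identical value, the role is a
    common anchor; otherwise it is a variant. None marks a product
    in which the vector does not appear."""
    present = {v for v in values if v is not None}
    return "common" if len(present) == 1 else "variant"
```

Applied to Table 6, the action of vector II (Inserts in all four products) comes out common, while the actions of vector I (Buys, Borrows, Orders, Rents) come out as a variant.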

The generated core requirements contribute to the domain engineering activities, as they are automatically created from product requirements. They further contribute to application engineering activities, as they include guidance for specifying new similar software products in the form of mandatory, optional, variant, and extension parts. In Sect. 6, we report on the method evaluation with respect to these contributions.

5.4 CoreReq prototype

We implemented a prototype tool for CoreReq in Java. For clustering similar requirements, the tool uses SEMILAR³ and WS4J⁴. It further implements the hierarchical agglomerative clustering algorithm presented in [21]. For capturing variable (and common) parts, the tool uses a semantic role labeler⁵ (based on LTH and OpenNLP⁶) and coreference resolution (based on StanfordNLP⁷).

The generated core requirements are presented in a way that is convenient for requirements engineers to reuse them in new products (application engineering tasks). Particularly, variants appear in combo boxes (to allow both selection and addition) and optional parts and extensions appear close to check boxes (to allow their selection and deselection). Figure 2 depicts an example of the tool output. Note that embedding is allowed, e.g., definition of variants in optional parts. Extensions appear separately from the core requirements to avoid confusion, as there is not enough evidence to consider them part of the domain knowledge or the core asset. Yet, their future reuse may be required.

Table 6 The variability of four requirements of checking-out items (listed in Table 1)

Vector  Role      Second hand book shop       Library                          Hotel                       Car rental agency          Element  Product
I       Agent     Buyer                       Borrower                         Client                      Client                     Variant  Mandatory
        Action    Buys                        Borrows                          Orders                      Rents                      Variant
        Object    Book                        Book copy                        Room                        Car                        Variant
II      Agent(a)  She                         She                              She                         She                        Common   Mandatory
        Action    Inserts                     Inserts                          Inserts                     Inserts                    Common
        Object    Her buyer identification    Her borrower identification      Requested dates             Requested dates            Variant
                  number                      number
III     Agent     System                      -                                System                      System                     Common   Optional
        Action    Displays                    -                                Displays                    Displays                   Common
        Object    Tracking number             -                                Reservation number          Rented car number          Variant
IV      Agent     System                      System                           System                      System                     Common   Mandatory
        Action    Updates                     Updates                          Updates                     Updates                    Common
        Object    Number of available copies  Number of available book copies  Number of available rooms   Number of available cars   Variant
V       Agent     -                           System                           -                           -                          Common   Extension
        Action    -                           Suggests                         -                           -                          Common
        Object    -                           Related books                    -                           -                          Common

(a) The original form “she” represents different agents: buyer, borrower, client. Nevertheless, syntactically it can be perceived as a common part of the requirements, whereas the differences are captured in the agent part of the first requirement vector

3 http://www.semanticsimilarity.org/
4 http://ws4jdemo.appspot.com/
5 http://en.sempar.ims.uni-stuttgart.de/ or http://barbar.cs.lth.se:8081/
6 http://opennlp.apache.org/
7 https://nlp.stanford.edu/

6 Evaluation

The evaluation of CoreReq is divided into two parts: feasibility through examples (see Sect. 6.1) and usefulness through a controlled experiment (see Sect. 6.2). Section 6.3 discusses the benefits and limitations of the approach, based on the two parts of the evaluation.

6.1 Method feasibility

Although the examples used in these parts cannot be considered case studies, as they may not appear in their “natural” context, we followed the suggestion in [37] for reporting case studies in software engineering in order to systematically elaborate on the objectives, design, results, and conclusions of the method feasibility evaluation.

6.1.1 Objectives

Problem statement: Our claim is that the CoreReq method can be used for generating core requirements based on existing product requirements which share some similarities.

Research objectives and questions: We aim to assess to what extent the CoreReq method can generate core requirements. We concentrate on cloning versus no-cloning scenarios, where in cloning scenarios we found evidence of the (re)use of requirements across products and systems. We phrased the following research questions:

RQ1. To what extent (quantitatively and qualitatively) can the CoreReq method generate core requirements in cloning scenarios?

RQ2. To what extent (quantitatively and qualitatively) can the CoreReq method generate core requirements in no-cloning scenarios?

Context: The context in which CoreReq is expected to be used, and hence the context of our evaluation, is the existence of textual requirements of similar software products that have been developed in some cloning scenarios, namely, artifacts of previously developed products were available when developing newer products. In addition, there is a need to generate core requirements for future development and easy maintenance of other similar products.

6.1.2 Design

Case and subjects selection: The method feasibility was evaluated on four examples, listed in Table 7. We faced difficulties in finding “real” requirements documents that are not too similar (almost identical, e.g., versions of the same system) and yet are not completely different. Hence, the sources of our cases are mainly academic. The cases were taken from the internet and belong to different domains. The number of variants (systems) in each case ranged between 3 and 5, with the overall number of requirements ranging between 105 and 318.

Fig. 2 The core requirement of checking-out functionality

Data collection procedure: The textual requirements were extracted as they were from the selected sources. Each requirement was given a unique identifier. All requirements of a certain product appeared in the same file. Syntactic errors (only spelling and grammar) were manually corrected.

Analysis procedure: We ran the prototype of CoreReq described in Sect. 5.4 on the four examples. Each generated cluster (i.e., core requirement) was categorized according to the similarity of its requirements in terms of content (semantics) and style (syntax), resulting in four categories: A—similar contents and styles, B—similar contents but different styles, C—similar styles but different contents, and D—different contents and styles.

Validity procedure: The categorization was independently done by two people: an author of this paper and a system engineer with over 10 years of experience in development and requirements engineering. (The latter was not involved in the research and only got explanations on the meaning of the four categories.) The independent categorization was followed by a discussion session until full agreement on the outcomes was reached.

6.1.3 Results

Results: Table 7 presents our results. As can be seen, the medical systems case, which followed a classic cloning scenario in which the requirements from previously developed systems were copied as they were to the new system and then adapted to the particular context of the system, resulted in most requirements (89%) being clustered. The average size of clusters was 3.1 requirements, and they were all similar in terms of both content and style. Adaptation to the specific needs of the systems was observed in the form of optional, variant, and extension parts. This provides evidence that our method, CoreReq, can generate core requirements to a large extent in cloning scenarios (RQ1).

The photo sharing case, which was based on the known application of Picasa, also followed some cloning scenario. However, this time the differences were larger, especially with respect to the third system. Fifty clusters were generated, clustering 40% of the requirements. This demonstrates the feasibility of our approach in cases where cloning is only partially followed, as the requirements in many of these clusters referred to the same content (36 out of 50 clusters belong to category A or B).

The cases of library systems and e-shop applications originated from different sources, so no-cloning scenarios were observed. Indeed, they yielded small numbers of clusters (2 and 7, respectively), suggesting that our method, CoreReq, is limited in generating core requirements in no-cloning scenarios (RQ2). Yet, a few meaningful clusters emerged for the e-shop applications (the 2 clusters categorized as A and B). Generally speaking, the core requirements in these two cases (library systems and e-shop applications) included more optional and extension parts, as well as larger variants (as common anchor parts could not be identified).

Evaluation of validity: Several threats can be pointed out. The first threat refers to the selected cases. The limitations with respect to the selection procedure are described above. We indeed used four examples whose sources are academic, but we made the textual requirements extracted from these sources, as well as CoreReq outcomes, available.⁸ Yet, evaluation based on additional cases with different characteristics, preferably originating from industrial sources, is needed for assessing our method in a broader context. Second, we used a specific similarity measure (MCS [28]) for assessing the similarity of requirements and clustering them. MCS is a well-known metric; yet, as discussed in [18], it may be limited when analyzing functional requirements originating from different sources. Utilizing additional similarity metrics is needed to examine whether the outcomes of CoreReq can be improved. Finally, in order to assess the quality of the generated clusters, each cluster was associated with a category (see the explanations on categories A–D above). This procedure can be criticized as being subjective. Hence, to avoid bias, we asked an experienced engineer to independently categorize the clusters as well. We found only minor differences (in the e-shop and photo sharing cases) which did not change the overall analysis results. Yet, we discussed the sources of these differences until reaching complete agreement on the categorization. The experiment regarding the usefulness of CoreReq, which is described in Sect. 6.2, aims to complement this feasibility assessment by evaluating the usefulness of the generated outcomes to potential users.

Table 7 Method feasibility: characteristics of CoreReq’s outcomes for four cases

Domain                      Perceived reuse scenario              No. of systems  Overall no. of req.  No. of clusters  Clustered req. (%)  Categorization
Library systems             No-cloning                            4               266                  2                1                   D-2
E-Shop applications         No-cloning                            5               122                  7                20                  A-1; B-1; C-1; D-4
Photo sharing applications  Partial cloning (2 out of 3 systems)  3               318                  50               40                  A-26; B-10; C-3; D-11
Medical systems             Cloning                               4               105                  30               89                  A-30

8 See http://is.haifa.ac.il/~iris/research/CoreReq/generated-outputs.html.

6.1.4 Conclusions

CoreReq is able to generate core requirements in cloning scenarios. Similar requirements are first clustered together, and then a core requirement is generated for each cluster. The core requirement includes mandatory, optional, and variant parts, as well as extensions that are specific to certain systems or products. Returning to our research questions, the method’s outcomes are better in terms of quantity (number of clusters) and quality (categorization of clusters according to content, categories A and B) for (potentially partial) cloning scenarios. However, CoreReq may result in some interesting suggestions for no-cloning scenarios as well.

6.2 Method usefulness

As core requirements include many options and various reuse decisions, their usefulness for guiding the creation of new specifications may be questioned, even in cloning scenarios. Particularly, adapting existing product requirements may be easier in some scenarios. Hence, we conducted a controlled experiment whose purpose was to primarily examine the usefulness of reusing the outcomes of CoreReq, with respect to regular reuse of product requirements. Following the guidelines for reporting experiments in software engineering by Wohlin et al. [42], we next elaborate on the experimental goal, settings, execution, analysis, and results, as well as on threats to validity.

6.2.1 Experimental goal

The goal of the experiment was to analyze the usefulness of the core requirements generated by CoreReq for the purpose of creating requirements of a new similar product. The quality focus is on evaluating whether the generated core requirements are capable of providing assistance and guidance for creating requirements of new products. The researchers’ perspective is mainly on comparing the quality of requirements written for a new product (application engineering) and the time taken to complete the task.

6.2.2 Participants

The experiment was executed at the University of Haifa, Israel, in the fall semester of 2016–2017. Fifty-three students participated in the experiment through an undergraduate course dealing with design and development of Information Systems. All students were studying in the Information Systems department and had knowledge of programming and software development processes. Two students were graduates.

6.2.3 Experimental material

The material used in the experiment included requirements of four software products: Library, Hotel, Car Rental, and Second Hand Book Shop. As noted, all four applications deal with Check-In Check-Out (CICO) operations. We used a set of requirements of a library system written for other purposes and created the requirements of the other products following a typical cloning scenario, simulating a case in which all systems were developed in the same department or company and reuse involved copying the requirements and adapting them to the terminology and context of the particular systems. The requirements of the four systems were used to automatically generate core requirements utilizing the CoreReq tool described in Sect. 5.4. The core requirements included mandatory and optional parts, variants, and extensions.⁹

6.2.4 Tasks

The task was to write the requirements of a fifth system, Medical Equipment Rental, which can also be considered a CICO application. The participants got a textual description of this system and were requested to reuse (product or core) requirements as much as they could. The motivation for this request was that existing requirements are commonly linked to other development artifacts (such as design models, code, and test cases), and thus reuse of the requirements enables reuse of the related artifacts, decreasing development time and errors. Addition of requirements was allowed if suitable requirements for reuse (and adaptation) could not be identified. The requirements (both product and core requirements) appeared in Microsoft Word files which further included buttons for registering the start and end experiment times. The core requirements included controls, such as combo boxes and check boxes, to support reuse of variants and optional parts. The hard copy of the textual description of the fifth system was distributed to the participants for convenience.

9 The whole material of the experiment can be found at http://is.haifa.ac.il/~iris/research/CoreReq/exp.zip.

For each requirement written for the fifth (new) system, the participants were asked to rate their subjective opinion on the degree of difficulty of specifying it (1—very difficult, 5—very easy) and their confidence in the specification (1—not confident at all, 5—very confident).

Finally, the participants were requested to provide general feedback on the task (e.g., on difficulties they faced in performing it).

6.2.5 Hypotheses, parameters, and variables

Following our goal of analyzing the usefulness of generated core requirements for creating requirements of a new similar system, we phrased three research questions: the first one (RQ3) referred to the quality of the outcomes and particularly to their correctness; the two other research questions referred to properties of the requirements reuse process: efficiency (time, RQ4) and perceived complexity (difficulty and confidence, RQ5). The independent variable was in all cases the experiment’s group, which could be: core requirements (using CoreReq, the experimental group) or product requirements (utilizing regular reuse, the control group). The research questions, the corresponding sets of null hypotheses, and the variables are presented in Table 8.

6.2.6 Experimental design

We followed a between-subjects design. The participants were divided into two groups according to their achievements in their studies (GPA). The first group (the experimental group) included 26 students who got the core requirements (as generated by CoreReq) and had to reuse and adapt them to meet the description of the fifth system. The second group (the control group) included 27 students who had to do the same only with the product requirements, without having the core requirements. As can be seen from Table 9, there were no significant differences between the two groups in terms of achievements in studies (GPA) and familiarity with the domains-of-discourse (Library, Hotel, Car Rental, Second Hand Book Shop, and Medical Equipment Rental).

6.2.7 Analysis procedure

Before analyzing the data, we had to perform some preprocessing, including: (1) correction of spelling mistakes using Microsoft Word facilities; (2) automatic extraction of requirements and their parts using a macro; and (3) manual checking of the automatically extracted requirements to detect potential technical errors, such as incorrect classification of parts. The manual step (3) was done by the two authors of the paper to avoid bias and accidental mistakes.

Table 8 Research questions and the corresponding set of null hypotheses and variables

RQ3. Are requirements written more correctly when reusing CoreReq generated requirements than when reusing product requirements?
  Null hypothesis H0qlt: There is no difference in terms of correctness between requirements written reusing CoreReq generated requirements and those written using product requirements.
  Independent variable: Group: core or product. Dependent variable: Correctness (number between 0 and 1; 0—completely incorrect or missing; 1—completely correct).

RQ4. Are requirements written more efficiently when reusing CoreReq generated requirements than when reusing product requirements?
  Null hypothesis H0eff: There is no difference in terms of efficiency between writing requirements from CoreReq generated requirements and writing them from product requirements.
  Independent variable: Group: core or product. Dependent variable: Time (in s) to perform task.

RQ5. Is requirements writing perceived more complex (more difficult and with less confidence) when reusing CoreReq generated requirements than when reusing product requirements?
  Null hypothesis H0cmp: There is no difference in terms of complexity between writing requirements from CoreReq generated requirements and writing them from product requirements.
  Independent variable: Group: core or product. Dependent variables: Perceived difficulty; Perceived confidence.

After preprocessing, the quality of the requirements was assessed in terms of correctness. In the absence of a gold standard, we used the following procedure. Each requirement was divided into several parts, each of which was associated with a logical expression of the expected answer. The correctness score was calculated as the average of the F-measures of all parts of a given requirement. As an example, consider the index provided in Table 10 for checking-out items in the

Medical Equipment Rental system, and the two answers and their scores in Table 11.

The expected requirement, “When a client borrows a medical equipment, she inserts the id number of the item type, and the system updates the number of available items”, was divided into three parts. The logical expressions of expected answers were specified based on the description of the Medical Equipment Rental system, but also verified by searching for additional correct candidates in the participants’ answers.

Table 9 Differences between the experiment’s groups

                                          Experimental group (core req.)   Control group (product req.)   Stat. test^a   P value
                                          Mean    Median   SD              Mean    Median   SD
Achievements in studies (out of 100)      77.15   78.11    9.12            77.86   81.29    9.67           M              0.457 (2-tailed)
Familiarity with the domain^b (out of 5)  2.33    2.2      0.75            2.24    2.2      0.61           T              0.618 (2-tailed)

^a Statistical tests: T = independent samples t test, M = Mann–Whitney U test
^b Analyzed together due to the high internal consistency of the familiarity with the five domains (according to Cronbach’s alpha test)

Table 10 An index for examining checking-out items in the Medical Equipment Rental system

No.  Requirement part              Logical expression of expected answer
1a   client borrows                (borrower OR client) AND (borrow OR rent)
1b   a medical equipment           equipment OR item
2a   client inserts                (borrower OR client) AND insert
2b   id number of the item type    number AND (item OR equipment) AND (“requested type” OR type)
3a   system updates                system AND update
3b   number of available items     number AND available AND (equipment OR item)
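The logical expressions in such an index can be checked mechanically. The sketch below is our own illustration, not the authors' tooling: the nested-tuple encoding, the `holds` function, and the prefix-based word matching (a crude stand-in for the stemming a real checker would need) are all assumptions.

```python
# Hypothetical encoding of the index's logical expressions (cf. Table 10):
# a bare string matches if some answer word starts with it (crude stand-in
# for stemming); ("AND", ...) / ("OR", ...) nodes combine sub-expressions.
def holds(expr, answer):
    words = answer.lower().split()
    if isinstance(expr, str):
        return any(w.startswith(expr.lower()) for w in words)
    op, *args = expr
    combine = all if op == "AND" else any
    return combine(holds(a, answer) for a in args)

# Part 1a of the index: (borrower OR client) AND (borrow OR rent)
part_1a = ("AND", ("OR", "borrower", "client"), ("OR", "borrow", "rent"))
print(holds(part_1a, "When a client borrows a medical equipment"))        # True
print(holds(part_1a, "The system updates the number of available items")) # False
```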

Table 11 Two answers and their correctness scores

Answer 1: “When a client borrows a medical equipment, she inserts the id number of the item, the system updates the number of available items, and the system suggests related items”
  Part 1: (a) (1, 1; 1)      (b) (1, 1; 1)
  Part 2: (a) (1, 1; 1)      (b) (1, 0.67; 0.8) [“item” rather than “type”]
  Part 3: (a) (1, 1; 1)      (b) (1, 1; 1)
  Score (avg. F-measure): 0.97

Answer 2: “Borrowing medical equipment is carried out easily through the system. The system updates the number of available items of the chosen medical equipment”
  Part 1: (a) (1, 0.5; 0.67) [the agent is missing]      (b) (1, 1; 1)
  Part 2: (a) (0, 0; 0)      (b) (0, 0; 0) [insertion of the identification number of the equipment type by the client is missing]
  Part 3: (a) (1, 1; 1)      (b) (1, 1; 1)
  Score (avg. F-measure): 0.61

The cells’ content is in the format (P, R; F), where P is Precision, R is Recall, and F is F-measure; all values are in the range [0, 1].
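The scoring rule can be reproduced in a few lines. This is a minimal sketch (the function names are ours; the F-measure formula is the standard one), using the six per-part (P, R) pairs reported for the two answers above:

```python
# Correctness scoring: each part contributes the standard F-measure
# F = 2PR/(P+R); a requirement's score is the mean F over its parts.
def f_measure(p, r):
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

def correctness(parts):
    """parts: list of (precision, recall) pairs, one per requirement part."""
    return sum(f_measure(p, r) for p, r in parts) / len(parts)

answer1 = [(1, 1), (1, 1), (1, 1), (1, 0.67), (1, 1), (1, 1)]  # score 0.97
answer2 = [(1, 0.5), (1, 1), (0, 0), (0, 0), (1, 1), (1, 1)]   # score 0.61
print(round(correctness(answer1), 2), round(correctness(answer2), 2))
```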


All other dependent variables—time to complete task, perceived difficulty, and perceived confidence—were directly retrieved from the questionnaire.

All data analyses were done using SPSS 21. For continuous values (correctness and time), we analyzed data normality using the Shapiro–Wilk test [42]. If the data were distributed normally, we used the t test for independent samples. When the data were not distributed normally, we adopted the Mann–Whitney U test for independent samples [42]. Discrete values were analyzed using Cronbach’s alpha reliability test. In all the tests, we decided to accept a probability of 5% of committing a Type I error [42], i.e., rejecting the null hypothesis when it is actually true.
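The test-selection procedure just described (normality check, then a parametric or nonparametric test) can be illustrated with SciPy. This is only a sketch of the decision logic, not the authors' SPSS analysis; the function name is ours.

```python
# Sketch of the test-selection logic: independent-samples t test when both
# samples pass the Shapiro-Wilk normality check, otherwise Mann-Whitney U.
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    normal = (stats.shapiro(a).pvalue > alpha and
              stats.shapiro(b).pvalue > alpha)
    if normal:
        name, result = "t test", stats.ttest_ind(a, b)
    else:
        name, result = "Mann-Whitney U", stats.mannwhitneyu(
            a, b, alternative="two-sided")
    return name, result.pvalue
```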

6.2.8 Execution (preparation and deviations)

The experiment took place in a special session of the course. Participation was voluntary, but participants who took part in the experiment could get 2–5 extra points added to their final grade in the course, depending on their performance. Nevertheless, the course offered alternative ways to earn a bonus, so participation in the experiment was not mandatory.

The special session started with an introduction to requirements reuse, SPLE, and core requirements. This was followed by a tutorial on how to reuse product and core requirements. The introduction and the tutorial lasted about 1 h. Then, the participants had to answer some preliminary questions about their background (studies and experience) and degrees of familiarity with each one of the domains-of-discourse. Afterward, they had to press the start time button, perform the task depending on the group to which they were assigned, and press the end time button. Finally, they had to provide feedback on difficulties and complexity (a post-experiment question).

6.2.9 Results

Table 12 summarizes the statistical results relevant to the three hypotheses listed in Table 8.

Quality is calculated as the sum of the correctness scores of the different expected requirements (presented in percentages). As can be seen, the participants in the experimental group significantly outperformed the participants in the control group in terms of quality (averages 82.2% vs. 75.1%, medians 88.9% vs. 76.1%, respectively; P value = 0.012). With respect to efficiency (time to complete a task), participants in the control group spent significantly more time writing requirements for a fifth system than those in the experimental group (averages 49.49 vs. 38.56 min, medians 49.27 vs. 38.17 min, respectively; P value = 0.01). These results may be attributed to the “template” the experimental group got and the guidance for its reuse. The control group, on the other hand, had to search for relevant requirements and find ways to adapt them as required. Thus, the participants in the control group needed significantly more time (about 10 min more) to complete the relatively short task, and the quality of their outcomes was poorer.

Despite these significant results in favor of core requirements, the differences in perceived difficulty and confidence between the two groups were not found to be significant. This might be due to the fact that both groups could rely on existing artifacts and did not have to specify requirements from scratch.

Based on the above analyses, we can reject the null hypotheses H0qlt and H0eff, while H0cmp cannot be rejected. This means that requirements are written significantly more correctly and efficiently when reusing CoreReq generated requirements than when reusing product requirements (RQ3 and RQ4, respectively). However, there is no significant difference in terms of perceived difficulty and confidence between writing requirements from CoreReq generated requirements and writing them from product requirements (RQ5). Possible interpretations of these results are discussed in Sect. 6.3.

6.2.10 Threats to validity

The validity of our study is subject to several threats. We report these threats and the actions taken to minimize them, following the suggestion in [42].

Table 12 Raw statistics

                        Experimental group        Control group             Stat. test^a   P value
                        Mean    Median   SD       Mean    Median   SD
Quality (in %)          82.2    88.9     14.9     75.1    76.1     15.6     M              0.012 (1-tailed)
Efficiency (in min)     38.56   38.17    12.15    49.49   49.27    20.26    T              0.01 (1-tailed)
Difficulty (out of 5)   3.43    3.42     0.47     3.58    3.44     0.68     T              0.354 (2-tailed)
Confidence (out of 5)   3.62    3.65     0.79     3.71    3.6      0.67     T              0.658 (2-tailed)

^a Statistical tests: T = independent samples t test, M = Mann–Whitney U test
Bold marks results significant at the 5% probability of committing a Type I error accepted in all our statistical tests


Construct validity threats concern the relationships between theory and observation. They are mainly due to the method used to assess the outcomes of tasks. The experimental material was indeed created by the researchers, but we made sure to include different types of variability as reported in the literature (mandatory, optional, alternative, and product-specific extensions). We used a structured questionnaire and a robust preprocessing procedure (which includes macros and manual assessment done by the two authors of the paper). In order to avoid subjective evaluation as much as possible, the correctness of answers was evaluated using a predefined checking index (see Table 10 for an example). The construction of this index was very systematic, as described in Sect. 6.2.7. Time was automatically recorded by the use of the start and end buttons inside the questionnaire. Difficulty and confidence were collected using a common Likert scale.

Internal validity threats refer to whether an experimental condition makes a difference and deal with the causes and effects in the specific study. The participants’ knowledge and background, which could impact the results, were taken into account. First, the groups were divided considering the GPA of the participants. Second, the participants were asked about their familiarity with the different domains-of-discourse. As reported earlier, there were no significant differences between the groups with respect to both GPA and domain familiarity.

External validity threats concern the ability to generalize the results. The main threats in this area stem from the specific domains and tasks we used and from the type of participants. As for the domains, due to the complexity of the design used in the experiment, we could not use additional application domains. The tasks were very concrete and relatively small (using about 40 requirements overall and requesting the specification of 10 requirements for the fifth system). In addition, it can be argued that the requirements were too similar. However, they included examples of all variability aids, including optionality, variants, and extensions. We could not use larger examples due to time constraints and concentration capacity. In the future, it will be interesting to replicate our experiment with different domains, more complicated tasks, and other reuse strategies.

As for the participants, despite their limited experience in the field of requirements engineering and modeling, they got the required knowledge and performed the training exercise during the experiment. The experiment was not designed for experts, and therefore, the population is acceptable for evaluation [20]. Particularly, in the requirements engineering field, it has been noticed that students have a good understanding of the way industry behaves and may be the subjects of such empirical studies [41]. Further studies may confirm whether or not our results can be generalized to more experienced participants.

Finally, conclusion validity concerns refer to the relationship between the treatment and outcomes. The statistical analysis was performed using parametric t tests and nonparametric Mann–Whitney U tests, both for independent samples. These tests are well suited for small samples, such as the 53 participants in our experiment (resulting in 26 or 27 participants in each group). Moreover, Pearson correlation was used to detect possible relations between dependent variables (e.g., correctness and efficiency).

6.3 Discussion

Our findings indicate the potential feasibility and usefulness of the suggested approach. We discuss here strengths and sources of difficulties of the approach, as demonstrated in the two evaluations we performed, including the post-experiment feedback provided by half of the participants in the experiment. The discussion is integrative, but when appropriate we refer to the relevant research questions in parentheses. As reusing includes searching for appropriate artifacts and adapting them to the context at hand, we categorized the raised issues into those relevant to selection of requirements for reuse and those referring to adaptation of requirements to the specific context. We finally conclude with the implications for research and practice.

Selecting requirements for reuse: Reuse starts with the selection of the artifacts to be reused. This decision is not trivial due to the size and variety of requirements documents. It becomes even more complicated when the artifacts have been developed without considering reuse options, as is commonly the case with product requirements. We saw in the utilized examples that our approach clusters similar requirements and thus reduces the size and variety of the artifacts considered for reuse. The success of this stage depends on the degree of similarity of the given software products, both in content and style; the more cloning there is, the more core requirements are generated. Indeed, in the no-cloning cases we inspected, the numbers of clusters were small, there were many optional parts, and the length of the variants was relatively long (RQ2). In the cloning cases, on the other hand, many requirements were clustered into content-similar core requirements (categories A and B, RQ1). Hence, selecting the appropriate requirements for reuse in such cases was easier, as observed in the shorter times to complete the task (RQ4) and reported by the participants in the experimental group. Participants in the control group, on the other hand, did not always select the product requirement that is most


similar to the required one, and thus, the adaptation in these cases was more challenging and resulted in more time and incompleteness (namely, low recall, RQ3 and RQ4).

Adaptation of requirements: Part of the challenges in requirements reuse deals with the needed adaptations in terms of terminologies, configurations, and/or values. When the requirements are similar in style, format, and terminology, as in the cloning examples we analyzed, the variants can be clearly identified and clustered to create comprehensible core requirements (RQ1). These core requirements include guidance for adaptation, as opposed to product requirements. Indeed, we noticed in the experiment that when reusing product requirements, parts of the requirements remained without adaptation or modifications were inaccurate, probably due to oversights by the participants (RQ3). With the core requirements, on the other hand, which included some guidance in the form of mandatory and optional parts, variants, and extensions, reuse was mainly performed by selecting existing options or using them as a basis for conducting product-specific adaptations. This yielded more correct requirements (RQ3) in shorter times (RQ4).
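As an illustration of this selection-based reuse, a core requirement with mandatory, optional, and variant parts can be rendered into a product requirement by keeping mandatory parts, choosing among variants, and switching optional parts on or off. The data structure and function below are hypothetical simplifications; the paper's form-based notation is richer.

```python
# Hypothetical, simplified core-requirement structure: mandatory parts are
# always kept, a variant part offers alternatives, an optional part may be
# dropped. (Illustrative only; not the paper's actual form-based notation.)
core = [
    ("mandatory", "When a client borrows"),
    ("variant", ["a medical equipment", "a book"]),
    ("mandatory", "she inserts the id number of the item type,"),
    ("mandatory", "and the system updates the number of available items"),
    ("optional", "and suggests related items"),
]

def render(core, choose, keep):
    """Derive a product requirement: choose() picks one variant text,
    keep() decides whether an optional part is included."""
    out = []
    for kind, value in core:
        if kind == "mandatory":
            out.append(value)
        elif kind == "variant":
            out.append(choose(value))
        elif kind == "optional" and keep(value):
            out.append(value)
    return " ".join(out)

print(render(core, choose=lambda opts: opts[0], keep=lambda _: False))
```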

However, when adaptation that contradicts the guidance is needed, using core requirements may be confusing (RQ5). Particularly, mandatory parts which appeared in previously developed products may not be needed in new products. We noticed that participants in the experimental group tended to leave these parts untouched, even when the description of the new system did not refer to them (RQ3). This result may call for improved user guidance, potentially explaining to the requirements engineers what mandatory parts mean in this context. Creation of new variants and omission of optional parts, on the other hand, seemed easier for the participants in this group (RQ5).

Implications: CoreReq has implications for both research and practice. In terms of research, we add a method for generating core requirements to a growing body of studies that deal with reengineering of software applications and products into product lines [2]. As elaborated in Sect. 2, with respect to requirements, this corpus mainly handles generation of variability models and representation of core requirements that are created either manually or after intensive involvement of domain experts. We took these efforts one step forward, utilizing natural language processing (NLP) techniques, such as Semantic Role Labeling (SRL) [13], in order to automatically generate mandatory, optional, variant, and extension parts of requirements and consequently generate core requirements. As demonstrated in this section, the generation of core requirements is feasible, and the generated requirements are useful for creation of requirements of new similar products in cloning scenarios.

With respect to practice, CoreReq may provide a method for both analyzing the possibilities to adopt SPLE and realizing how to do so. A survey on variability modeling in industry [4] indicates that in practice SPLE is commonly adopted in a bottom-up approach, namely after several products have already been developed using ad-hoc reuse techniques, such as cloning. CoreReq is geared to such situations. In particular, its outcomes show the degree of similarity between the different products (in terms of the number of clusters/core requirements) and the degree of variability (in terms of the numbers of variants and extensions). Hence, examining the outcomes of CoreReq, managers can reach decisions regarding the feasibility of SPLE adoption.

7 Summary and future work

In this research, we addressed the need to generate core requirements, i.e., reusable requirements, and to guide their systematic reuse in new software products. The generation is based on existing product requirements and an ontological framework for analyzing variability of cloned artifacts [34]. The CoreReq method includes clustering similar requirements utilizing semantic measures, capturing variable parts using semantic role labeling, and generating core requirements that explicitly refer to product (mandatory, optional, extension) and element (common, variant) dimensions. The outcome of CoreReq is form-based core requirements from which the product requirements can be created through selection and adaptation. The empirical evaluation we conducted indicates the potential feasibility and usefulness of the approach.
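The clustering step mentioned above can be sketched as follows. CoreReq uses semantic similarity measures; plain Jaccard token overlap, a naive single-link agglomerative merge, and the 0.5 threshold stand in here purely for illustration.

```python
# Toy sketch of the clustering step: single-link agglomerative merging over
# a similarity measure (Jaccard token overlap as a placeholder for the
# semantic measures the method actually uses).
def jaccard(a, b):
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def cluster(reqs, threshold=0.5):
    clusters = [[r] for r in reqs]
    merged = True
    while merged:  # repeat until no pair of clusters is similar enough
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                if any(jaccard(a, b) >= threshold
                       for a in clusters[i] for b in clusters[j]):
                    clusters[i].extend(clusters.pop(j))
                    merged = True
                    break
            if merged:
                break
    return clusters

reqs = [
    "The system updates the number of available items",
    "The system updates the number of available books",
    "The user prints a monthly report",
]
print(len(cluster(reqs)))  # 2: the two "updates" requirements merge
```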

Future research can take several directions. First, the core requirements did not include dependencies and constraints between the variable parts. Improving the guidance of core requirements by handling configurations of variable parts requires further research. Second, analyzing the suitability of our approach to different cloning scenarios is needed. We assumed relatively small differences between requirements; thus, clustering algorithms and semantic similarities worked properly. When examining our approach on no-cloning scenarios, we got a small number of clusters with a lot of variability. It would be interesting to examine additional scenarios, such as systems developed in different companies but aiming to provide similar functionality. Third, CoreReq has to be evaluated with different domains, tasks, and populations (including non-students). Fourth, association of the elements in the ontological variability framework with variability mechanisms, such as specialization and parameterization, deserves further investigation. Finally, the continuous evolution of core requirements as a result of creating additional systems requires further research.


References

1. Andersen N, Czarnecki K, She S, Wąsowski A (2012) Efficient synthesis of feature models. In: Proceedings of the 16th international software product line conference, vol 1. ACM, pp 106–115

2. Assunção WK, Lopez-Herrejon RE, Linsbauer L, Vergilio SR, Egyed A (2017) Reengineering legacy applications into software product lines: a systematic mapping. Empir Softw Eng 22(6):1–45

3. Bakar NH, Kasirun ZM, Salleh N (2015) Feature extraction approaches from natural language requirements for reuse in software product lines: a systematic literature review. J Syst Softw 106:132–149

4. Berger T, Rublack R, Nair D, Atlee JM, Becker M, Czarnecki K, Wąsowski A (2013) A survey of variability modeling in industrial practice. In: Proceedings of the seventh international workshop on variability modeling of software-intensive systems. ACM, pp 7:1–7:8

5. Chen CY, Chen LC, Lin L (2004) Methods for processing and prioritizing customer demands in variant product design. IIE Trans 36(3):203–219

6. Chen L, Babar MA (2011) A systematic review of evaluation of variability management approaches in software product lines. Inf Softw Technol 53:344–362

7. Clements P, Northrop L (2002) Software product lines. Addison-Wesley, Reading

8. Deeptimahanti DK, Babar MA (2009) An automated tool for generating UML models from natural language requirements. In: Proceedings of the 2009 IEEE/ACM international conference on automated software engineering, pp 680–682

9. Domann C, Juergens E, Streit J (2009) The curse of copy and paste cloning in requirements specifications. In: Proceedings of the 3rd international symposium on empirical software engineering and measurement. IEEE Computer Society, pp 443–446

10. Dori D (2002) Object-process methodology: a holistic systems paradigm. Springer, Berlin

11. Galster M, Weyns D, Tofan D, Michalik B, Avgeriou P (2014) Variability in software systems—a systematic literature review. IEEE Trans Softw Eng 40(3):282–306

12. Génova G, Fuentes JM, Llorens J, Hurtado O, Moreno V (2013) A framework to measure and improve the quality of textual requirements. Requir Eng 18(1):25–41

13. Gildea D, Jurafsky D (2002) Automatic labeling of semantic roles. Comput Linguist 28(3):245–288

14. Halmans G, Pohl K, Sikora E (2008) Documenting application-specific adaptations in software product line engineering. In: Proceedings of the 20th international conference on advanced information systems engineering (CAiSE’2008). Lecture notes in computer science, vol 5074, pp 109–123

15. Ibrahim M, Ahmad R (2010) Class diagram extraction from textual requirements using natural language processing (NLP) techniques. In: IEEE 2010 second international conference on computer research and development, pp 200–204

16. Ilieva M, Ormandjieva O (2005) Automatic transition of natural language software requirements specification into formal presentation. In: Natural language processing and information systems, pp 427–434

17. Irshad M, Petersen K, Poulding S (2017) A systematic literature review of software requirements reuse approaches. Inf Softw Technol 93:223–245

18. Itzik N, Reinhartz-Berger I, Wand Y (2016) Variability analysis of requirements: considering behavioral differences and reflecting stakeholders’ perspectives. IEEE Trans Softw Eng 42(7):687–706

19. Kang KC, Cohen SG, Hess JA, Novak WE, Peterson AS (1990) Feature-oriented domain analysis (FODA) feasibility study (No. CMU/SEI-90-TR-21). Carnegie-Mellon University, Software Engineering Institute, Pittsburgh, PA

20. Kitchenham BA, Lawrence S, Lesley P, Pickard M, Jones PW, Hoaglin DC, Emam KE (2002) Preliminary guidelines for empirical research. IEEE Trans Softw Eng 28(8):721–734

21. Kurita T (1991) An efficient agglomerative clustering algorithm using a heap. Pattern Recognit 24(3):205–209

22. Landauer TK, Foltz PW, Laham D (1998) Introduction to latent semantic analysis. Discourse Process 25:259–284

23. Lee H, Recasens M, Chang A, Surdeanu M, Jurafsky D (2012) Joint entity and event coreference resolution across documents. In: Proceedings of the 2012 joint conference on empirical methods in natural language processing and computational natural language learning. Association for Computational Linguistics, pp 489–500

24. Manning CD, Raghavan P, Schütze H (2008) Introduction to information retrieval, vol 1(1). Cambridge University Press, Cambridge, p 496

25. Martinez J, Ziadi T, Bissyandé TF, Klein J, Le Traon Y (2015) Bottom-up adoption of software product lines: a generic and extensible approach. In: Proceedings of the 19th international conference on software product line. ACM, pp 101–110

26. McZara J, Sarkani S, Holzer T, Eveleigh T (2015) Software requirements prioritization and selection using linguistic tools and constraint solvers: a controlled experiment. Empir Softw Eng 20(6):1721–1761

27. Mich L (1996) NL-OOPS: from natural language to object oriented requirements using the natural language processing system LOLITA. Nat Lang Eng 2(2):161–187

28. Mihalcea R, Corley C, Strapparava C (2006) Corpus-based and knowledge-based measures of text semantic similarity. In: The 21st national conference on artificial intelligence (AAAI’2006), vol 1, pp 775–780

29. Moon M, Yeom K, Chae HS (2005) An approach to developing domain requirements as a core asset based on commonality and variability analysis in a product line. IEEE Trans Softw Eng 31(7):551–569

30. Nazir F, Butt WH, Anwar MW, Khattak MAK (2017) The applications of natural language processing (NLP) for software requirement engineering: a systematic literature review. In: International conference on information science and applications, pp 485–493

31. Omoronyia I, Sindre G, Stålhane T, Biffl S, Moser T, Sunindyo W (2010) A domain ontology building process for guiding requirements elicitation. In: International working conference on requirements engineering: foundation for software quality, pp 188–202

32. Pohl K, Böckle G, van der Linden F (2005) Software product-line engineering: foundations, principles, and techniques. Springer, Berlin

33. Reinhartz-Berger I, Itzik N, Wand Y (2014) Analyzing variability of software product lines using semantic and ontological considerations. In: Proceedings of the 26th international conference on advanced information systems engineering (CAiSE’14). Lecture notes in computer science, vol 8484, pp 150–164

34. Reinhartz-Berger I, Zamansky A, Kemelman M (2015) Analyzing variability of cloned artifacts: formal framework and its application to requirements. In: International conference on enterprise, business-process and information systems modeling. Springer, pp 311–325

35. Rubin J, Czarnecki K, Chechik M (2013) Managing cloned variants: a framework and experience. In: Proceedings of the 17th international software product line conference. ACM, pp 101–110

36. Rubin J, Czarnecki K, Chechik M (2015) Cloned product variants: from ad-hoc to managed software product lines. Int J Softw Tools Technol Transf 17(5):627–646

37. Runeson P, Höst M (2009) Guidelines for conducting and reporting case study research in software engineering. Empir Softw Eng 14(2):131–164


38. Shah US, Jinwala DC (2015) Resolving ambiguities in natural language software requirements: a comprehensive survey. ACM SIGSOFT Softw Eng Notes 40(5):1–7

39. Sharma A, Kushwaha DS (2011) Natural language based component extraction from requirement engineering document and its complexity analysis. ACM SIGSOFT Softw Eng Notes 36(1):1–14

40. Sharma VS, Ramnani RR, Sengupta S (2014) A framework for identifying and analyzing non-functional requirements from text. In: Proceedings of the 4th international workshop on twin peaks of requirements and architecture, pp 1–8

41. Svahnberg M, Aurum A, Wohlin C (2008) Using students as subjects: an empirical evaluation. In: Proceedings of the second ACM-IEEE international symposium on empirical software engineering and measurement, Kaiserslautern, Germany, pp 288–290

42. Wohlin C, Runeson P, Höst M, Ohlsson M, Regnell B, Wesslén A (2000) Experimentation in software engineering: an introduction. Kluwer Academic Publishers, Dordrecht

43. Wu Z, Palmer M (1994) Verbs semantics and lexical selection. In: Proceedings of the 32nd annual meeting on Association for Computational Linguistics. Association for Computational Linguistics

Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
