
DEGREE PROJECT IN MECHANICAL ENGINEERING, SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2019

Digital Waste
ELIMINATING NON-VALUE ADDING ACTIVITIES THROUGH DECENTRALIZED APPLICATION DEVELOPMENT

MACHTELD BÖGELS

KTH ROYAL INSTITUTE OF TECHNOLOGY
SCHOOL OF INDUSTRIAL ENGINEERING AND MANAGEMENT


Stockholm, Sweden

June 2019

MG213X – Degree Project

MSc Production Engineering & Management

Department of Production Engineering

School of Industrial Engineering & Management (ITM)

KTH Royal Institute of Technology

Author: Machteld Bögels

Academic Supervisor: Gunilla Franzén Sivard

Examiner: Lasse Wingård

Company Supervisor: Ron Augustus

Abstract

In an era where the network of interconnected devices is rapidly expanding, it is difficult for organizations to adapt to the increasingly data-rich and dynamic environment while remaining competitive. Employees experience that much of their time and resources is spent daily on repetitive, inefficient and mundane tasks. Whereas lean manufacturing has manifested itself as a well-known optimization concept, lean information management and the removal of waste is not yet used to its full potential, as its direct value is less visible. A case study was conducted to define which types of non-value adding activities can be identified within information flows and to determine whether decentralized application development can eliminate this digital waste. An internal information flow was modelled, analyzed and optimized by developing customized applications on the Microsoft Power Platform. Based on literature from the fields of manufacturing and software development, a framework was developed to categorize digital waste as well as higher-order root causes in terms of business strategy and IT infrastructure. While decentralized app development provides the ability to significantly reduce operational digital waste in a simplified manner, it can also enable unnecessary expansion of a common data model and requires application lifecycle management efforts as well as edge security to ensure data compliance and governance. Although limited to one case study, the suggested framework could give insights to organizations that aim to optimize internal workflows by identifying and eliminating digital waste and its root causes.

Keywords: digital waste, information flows, data flow diagram, lean information management, automation, decentralized application development

Sammanfattning

In a time when networks of interconnected devices are expanding rapidly, it is difficult for organizations to adapt to the increasingly computerized and dynamic environment while remaining competitive. Employees experience that much of their time and resources is spent on repetitive, inefficient and mundane tasks. Lean manufacturing has proven to be a well-known optimization concept; information management and the removal of waste, however, have not yet reached their full potential since their direct value is harder to see and quantify. A case study was conducted to define which types of non-value adding activities can be identified within information flows and to determine whether decentralized application development can eliminate this digital waste. An internal information flow was modelled, analyzed and optimized by developing customized applications on the Microsoft Power Platform. Based on literature from the fields of manufacturing and software development, a framework was developed to categorize digital waste as well as higher-order root causes in terms of business strategy and IT infrastructure. While decentralized app development makes it possible to significantly reduce operational digital waste in a simplified manner, it can also enable unnecessary expansion of a common data model and requires application lifecycle management as well as edge security to ensure data compliance and governance. Although limited to one case study, the suggested framework could give insights to organizations that aim to optimize internal workflows by identifying and eliminating digital waste and its root causes.

Nyckelord: digital waste, information flows, data flow diagram, lean information management, automation, decentralized app development

Contents

Abstract
Abbreviations
1. Introduction
   1.1 Background
   1.2 Problem statement
   1.3 Research scope
   1.4 Limitations
2. Literature Review
   2.1 Information management in organizations
   2.2 Digital Waste
   2.3 Eliminating waste through automation
3. Methodology
   3.1 Research questions
   3.2 Research methodology
   3.3 Software
4. Analysis, results and proposal
   4.1 Describe
   4.2 Identify
   4.3 Prescribe
   4.4 Develop
   4.5 Implement
   4.6 Evaluate
5. Discussion
6. Conclusions
7. Bibliography
8. Table of Figures
9. Appendix
   9.1 PowerApps code

Abbreviations

API – Application Programming Interface

ATU – Account Team Unit

BSO – Business & Sales Operations

CSU – Customer Success Unit

DFD – Data Flow Diagram

EOU – Enterprise Operating Unit

ER – Entity-Relationship

FY – Future Year

MSAccess – Revenue reporting/billing engine tool

MSCALC – Tool that manages account structure and IDs from MSX and MSSales

MSSales – Tool that reports revenue

MSX – Internal Customer Relationship Management (CRM) tool

SMB – Small-to-Medium Businesses

STU – Specialty Technology Unit


1. Introduction

1.1 Background

The increased usage of online devices has enabled a global interconnectivity impacting consumers as

well as organizations. Companies have found a strong incentive for digital transformation and cloud adoption to optimize internal operations and maintain a competitive position in the market. Data is collected on a continuous basis to enable strategic, real-time, data-driven decision making, which imposes significant challenges on maintaining an efficient work environment for employees. Decisions

that will impact employees and organizations are preferably made based on accurate and up to date

information. In many cases correct information is missing, or a significant amount of time is spent on

information gathering and knowledge exchange. It is estimated that on average 59% of managers are

missing valuable information daily and in general knowledge workers spend around 20% of their time

looking for the right information (Feldman, 2001).

Many organizations consist of siloed departments instead of having a shared information environment in

which accurate information is always available to the right person or team when required. Although

numerous software applications have been developed to support different purposes, typically each

department or employee maintains their own working method with respect to file storage and

information sharing.

Whereas lean manufacturing has manifested itself as a well-known optimization concept, lean

information management and the removal of waste is not yet being used to its full potential as its

direct value is less visible. Non-value adding efforts regarding information management can be

defined as digital waste, which exists in many forms. Within manufacturing, production line automation

has proven itself capable of optimizing production flow and eliminating many non-value adding

activities. Similarly, technological solutions that support rapidly changing business needs in an effective

and agile manner could potentially provide the same level of optimization for information flows.

Software companies such as Microsoft have grasped the opportunity to provide the ability for

employees inside an organization and outside of a typical IT department to rapidly develop

applications that can be completely customized to support a specific business process or information

flow in a simplified manner. The question that arises is whether these decentralized technological

solutions such as automated internal workflows will be able to eliminate non-value adding activities

and what would be necessary for such an implementation to be successful.

This research aims to investigate digital waste by analyzing an organizational information flow to

determine whether theories from other research fields can be applied to identify and categorize non-

value adding activities. The information flow considered within this case-study will then be optimized

through automated workflows and decentralized app development, to determine the ability of

customized technological solutions to remove digital waste. The software which will be used in the

attempt to eliminate digital waste is provided by Microsoft, from wherein this research will be

conducted.


1.1.1 Microsoft Corporation

The Microsoft Corporation (Redmond, WA, U.S.) is a multinational organization developing and selling

computer software, personal computing devices and services. Satya Nadella, the current CEO,

initiated a major shift in the company culture, for example through reformulating the global mission

statement, namely ‘to empower every person and every organization on the planet to achieve more’.

Over the past years, Microsoft has realized the necessity to move from on-premise software solutions

to focusing more on the development of Azure, Microsoft’s cloud platform, on which these existing

products can be deployed. There are five main solution areas on which the focus lies within product

development at Microsoft: Modern Workplace, Business Applications, Applications & Infrastructure,

Data & Artificial Intelligence and Gaming. Microsoft’s ambition is to reinvent productivity and business

processes by building and providing an intelligent cloud platform and enabling more personal

computing.

As organizations and their operations must adapt to rapidly changing circumstances, the need for

customized and agile application development to support specific business processes has increased

over the past years. In order to fulfill that need, Microsoft has developed the Power Platform which

decentralizes application development and enables employees from different departments and teams

to build applications that support customized scenarios, automated workflows that remove repetitive

tasks and dashboards that visualize data. These three solution areas are described in more detail in

section 3.3.

1.1.2 Microsoft Netherlands

The Dutch subsidiary of the Microsoft Corporation aims to enable organizations in the Netherlands to

become an icon for digital transformation worldwide. Over the past decades, companies have realized

the need to migrate their activities to a digital and even cloud-based environment to improve the

efficiency of their internal and external operations and to remain competitive in a highly dynamic

environment. The purpose of products and services that Microsoft has developed is to support and

enable organizations to realize their full potential in an effective and agile manner.

Microsoft Netherlands consists of different units which are responsible for varying operations from a

business-to-business perspective towards customers. These customers have implemented or are

currently deploying Microsoft products including for example the cloud platform (Azure), tools that

support business operations (Dynamics) as well as collaboration tools (Office) based on licenses that

are set for a certain period. Based on the number of individual licenses (seats) they have deployed,

organizations are either considered an enterprise or a small-to-medium sized business (SMB). The

Enterprise Operating Unit (EOU) is responsible for the contact and support of managed customers,

which are enterprise organizations within the public and commercial sector. For each of these

organizations, an account is created to which different Microsoft employees are assigned and involved

from three main units: the Account Team Unit (ATU), the Specialty Technology Unit (STU) and the

Customer Success Unit (CSU). Every role that is related to an account has a specific orientation and

area to focus on as well as their own roadmap, depending on the team and/or unit from which they

operate. For SMB customers there are no specific roles assigned to the account, i.e. they are not solely

managed, but they are supported batchwise.


1.1.3 Current situation

As was described before, the products that Microsoft provides to its customers are typically deployed

as a service based on a licensing contract that is established by employees that operate within a sales

role. For larger accounts, i.e. based on the number of seats which are licensed through the contract,

there are multiple roles assigned within the units that were stated before, i.e. the ATU, STU and CSU.

Whenever a new license is deployed, an invoice is sent to the customer for which an identifier is

created (MSSalesID). If that license would be renewed, another invoice would be sent to that same

customer and corresponding MSSalesID. Revenue received through invoices lands in MSSales, a tool

which is internally used to match revenue to a certain (existing) account. Employees from the Business

& Sales Operations team (BSO) are responsible for managing revenue and ensuring correct account

structures.

An important aspect to address within this scenario is that accounts can be subsidiary organizations

to other organizations. Revenue that is generated for a subsidiary account also contributes to the total

amount of revenue for its parent organization. Customers’ organizational structures are monitored

within Microsoft to ensure that revenue is reported in the right location. If an account has a parent

organization which is also a customer, the MSSalesID of that parent organization is added to the

subsidiary account using an additional identifier, the Top Parent ID (TPID). Over the past years, the

number of accounts that are incorrectly positioned in an organizational structure has increased

significantly. Employees from the ATU who establish license contracts with these customers are

responsible for monitoring their own portfolio in terms of the total revenue that they have generated

throughout the year. To reassign accounts to the correct parent organization, a request can be made

by an employee from the ATU which is to be approved by an employee from the BSO based on

certain conditions.

Currently, there is a Microsoft Excel file that is sent to employees from the ATU, i.e. sellers, in which

they can search for accounts that have generated revenue during that month which are located in the

wrong place. A lot of time is spent by both sellers who search for misaligned accounts as well as BSO

employees who need to approve or reject these requests, as there is no structured workflow in place.

Many emails are sent back and forth about a specific case and sometimes multiple requests are done

for one invoice, causing a lot of inefficiency and extra workload. As of now there is no possibility to

gain insight in requests from the past or those that are to be reviewed once again the year after.

Overall, a lot of time is spent on mundane and inefficient tasks such as gathering, managing and

distributing files related to these account revenue requests. It is an unstructured but reoccurring

process which can potentially be optimized if the right approach is used.


1.2 Problem statement

1.2.1 Definition

The problem that is defined within the scope of this research is that employees from the BSO and ATU

teams experience that a lot of their time is spent on non-value adding activities regarding account

revenue requests. It is an unstructured process that consists of a lot of inefficient, repetitive and

mundane activities which negatively impacts daily operations for many employees.

1.2.2 Stakeholders

From an operational perspective, the main stakeholders within this scenario are BSO employees who

are responsible for reviewing requests and ATU employees, as they are responsible for their own

customer portfolio and corresponding revenue. On a more strategic level, Microsoft as a whole is

considered another stakeholder in this scenario as the organization aims for high employee efficiency

as well as having up to date organizational structures in terms of revenue reporting. Additionally,

investigating whether Microsoft’s customized application development platform provides the ability

to remove non-value adding activities internally could be beneficial to understand its impact and how

to support customers in similar situations.

1.3 Research scope

To improve the existing level of internal collaboration and knowledge exchange, it is necessary to

critically reflect on internal information management efforts. Analyzing data flow from collection to

strategic decision making could provide the necessary insights into how data travels through organizations

and whether this data and its related activities add value to the overall process or not.

The aim of this research is to identify non-value adding activities through information flow modelling

in order to gain insights that support a potential categorization of digital waste and to optimize and

eliminate activities that are currently counteracting a collaborative work environment. It is expected

that the information model that is to be developed, assuming that it represents the actual functionality

of the real information system, can provide insights into the number of wasteful activities and potential

root causes that exist within the described scenario.

Analyzing the similarity between waste that is found within production lines and information flows can

be useful to determine whether optimization strategies such as lean manufacturing could be applied

in information management scenarios to eliminate wasteful processes. An optimized model will be

designed and developed in which information is exchanged in a structured and more efficient manner

using customized technical solutions. An analysis will be made to determine whether this optimized

information system contains fewer non-value adding activities and which prerequisites hold for such an

implementation to be successful.


1.4 Limitations

This research aims to identify which types of digital waste exist within a certain information exchange

scenario at Microsoft Netherlands to analyze the impact of a technical (automated) solution on the

level of digital waste. The research is narrowed down to one specific scenario which can be modelled

and for which an end-to-end solution can be developed using Microsoft products, inherently

evaluating the ability of these tools to solve internal situations in which digital waste is found. The

usage of these software applications limits the research in the sense that only these tools are evaluated

for their ability to support automated workflows and elimination of digital waste. A more detailed

description of the Microsoft tools that are used in this research is given in section 3.3.

2. Literature Review

2.1 Information management in organizations

As organizations are currently in their digital transformation journey, the amount of data that is

generated and collected daily increases at a significant rate. Long-term business sustainability depends

on the ability to acquire knowledge throughout an organization and which can promote the

development of better products and production processes (Lodgaard, 2018). Appropriate information

management strategies can provide the necessary guidelines for optimizing internal knowledge

exchange. Information modelling can be used to gain insights into current informational ecosystems and can serve as a foundation for software development. Different information models can be distinguished based on how they can be used to describe information exchange processes within organizations (Durugbo, Tiwari, & Alcock, 2013). Where information modelling efforts were previously aimed at improving efficiency, there is

now an increasing interest in evaluating information management efforts based on adaptability and

flexibility. Considering that there are numerous standards for information modelling, the specific requirements for the outcome of the modelling effort determine which standard is most suitable. In order to identify data latency, the Data Flow Diagram (DFD) is considered the most suitable information model to use (Hoitash, 2006). Within the DFD, information is modelled through processes

and data stores, enabling the identification of manual data entry as well as the different data formats

and software applications that are used. Developing and understanding the flow of data in and

through the system given the complexity of work processes is a challenging task (Murray, 2003).

2.2 Digital Waste

Toyota developed the concept of lean manufacturing which has been widely applied throughout the

production industry and has been introduced in other areas as well. Within this theory seven categories

of waste are identified, which are all non-value adding activities related to transport, inventory, motion,

waiting, overproduction, over-processing and defects. Additionally, an eighth category is identified

which describes the waste of not using the talent and capabilities of humans, i.e. the employees within

your organization (Liker, 2004).


The principles of lean thinking, the removal of waste and the pursuit of perfection can be applied to

any system where products flow to meet the demand of a customer, user or its consumer (another

system). More specifically, it can be applied to information management since information typically

flows through an organization and related efforts are aimed to add value to the product, e.g. when

data is generated to lead to or support a certain decision that is to be made (Hicks B. , 2007).

Fundamental to successful application of lean is the identification of value, understanding of flow and

characterization of waste. Within information management, identifying value as well as characterizing

waste is a complex task since it is less tangible and highly subjective. This becomes especially clear

when compared to a manufacturing environment where value and waste, due to their visibility, can be measured in key performance indicators, giving direct insights into flow efficiency. The potential barrier

of understanding value and waste is important to consider when modelling information flow and

developing possible improvements, as its effects are difficult to measure objectively.

Figure 1 conceptualizes a pyramid of knowledge: the raw data stored in a database will add value

towards the decision that is to be made only if the right information is presented in the right format

to the right people at the right time. It must be structured and presented as digestible information

such that a human can interpret it and knowledge is created. Over time, as knowledge is accumulated

and combined with experience and judgement, wisdom is developed (Bell, 2006).

FIGURE 1 – PYRAMID OF KNOWLEDGE (BELL, 2006)

Structuring and filtering raw data such that value-adding information can be presented to the right

person can be done by implementing and adopting the right technology, which can be challenging

within multidisciplinary organizations.

Evaluating information management issues within ten small and medium sized enterprises has led to

the identification of 18 core issues that occurred among these organizations (Hicks B. C., 2006). After

reevaluating it was determined that these issues were caused by four fundamental waste categories

(Hicks B. , 2007):

• Failure demand: the resources and activities necessary to overcome a lack of information.


• Flow demand: the time and resources spent trying to identify the information elements that

need to flow

• Flow excess: the time and resources necessary to overcome excessive information, i.e.

information overload

• Flawed flow: the resources and activities necessary to correct or verify information as well as

the unnecessary or inappropriate activities that result from its use.

Additionally, these waste categories were mapped directly onto the types of waste that were defined

for the Toyota Production System, suggesting that Failure demand, Flow demand, Flow excess and

Flawed flow are similar to over-processing, waiting, overproduction and defects. The remaining waste

categories, namely transport, inventory and motion, as well as not using people to their full potential, are not considered digital waste in this study.

Others argue that digital waste is to be defined beyond non-value adding activities by categorizing it

as having either a passive or active nature. Passive digital waste occurs when digital opportunities to unlock the power of (existing) data are missed. Active waste, on the other hand, results from a data-

rich environment that lacks the appropriate information management approach to derive the right

information to be provided at the right time to the right person, machine or information system for

decision-making (Romero, Gaiardelli, Powell, Wuest, & Thürer, 2018). Simultaneously, digital waste can

also be described more literally within four categories (unintentional data, used data, degraded data

and unwanted data) to which a waste elimination approach could be applied similar to how physical

waste (from households for example) is handled in daily life (Hasan & Burns, 2013). It can be concluded

that there is no significant consensus on the definition of digital waste, due to the fact that it depends

highly on the situation and environment to which it is applied.

2.3 Eliminating waste through automation

In terms of waste elimination strategies, automation arises as an optimization strategy for production

lines which has been widely applied across the industry and is being developed constantly. Automation

can be described as the ‘automatically controlled operation of an apparatus, process or system by

mechanical or electronic devices that replace human labor’. One distinction that must be made is

whether the process that is to be automated consists of continuous or discrete events, as it results in

either process automation (continuous) or factory automation (discrete). Another categorization that

can be made is whether automation is considered hard, i.e. when an industrial robot is programmed

to perform one specific task, or soft, i.e. when more flexibility is required through the ability to perform

different tasks (Wilson, 2015).

Automating manufacturing systems improves productivity and the overall efficiency of a production line, and yields a significant increase in quality due to the specific tolerances that can be applied to

automated assembly systems. Moreover, it replaces repetitive and mundane tasks previously

performed by humans. Successfully developing, implementing and maintaining automated solutions

is critical to optimizing a production line, as it should not lead to jeopardizing critical operations or

processes. A significant challenge during development is translating business processes into system logic

which is then supported by an automated workflow system (Murray, 2003).


An important aspect to consider regarding automation is the involvement of the people who will be using the solution

in the end, e.g. production employees. They understand the difficulties and variability of the system

and will therefore be able to provide useful insights during the development and implementation

phase (Wilson, 2015). Another crucial aspect of successful adoption is organizational support and

employing a cross-functional implementation team. Additionally, understanding the impact of

workflow automation on the organization is important to consider as well as the human interaction

and participation intrinsic to such solutions (Murray, 2003).

3. Methodology

3.1 Research questions

Based on previous research it can be stated that the definition of value within information

management is inconclusive due to its invisible nature and lack of measurability in terms of monetary

value. This leads to the suggestion that the first research question is to be formulated as: how to define

value within the concept of information management?

Furthermore, if there would be a common agreement on the definition of value, would there be a

possibility to find significant similarities between a production line and an information flow and more

specifically: which types of digital waste can be found within information flows?

Identifying and classifying potential types of digital waste are important steps in optimizing information

processes such as the scenario described within the scope of this research. The Power Platform,

developed by Microsoft to simplify decentralized application design and enable end-users to develop

their own tools that support their specific operational business needs, can then be used to investigate

the final research question: can decentralized application development be used to eliminate digital

waste?

3.2 Research methodology

Since this research is aimed at modelling a complex system by collecting information based on

interviews with employees inside the organization, different views on the systems and/or organization

can be expected based on experience and perspectives. Therefore, a Soft Systems Methodology (SSM)

approach (Checkland & Poulter, 2006) is used within the scope of this research since there might not

be one unified solution or suggestion to the problem that was stated before. The methodology is used

to identify and describe a situation to create (conceptual) models that describe the behavior of the

actual system, especially within complex systems where variables are unknown and the system can be

viewed from many perspectives. Capturing the overall functionality and behavior of an information

flow model in terms of people, information and the technology used is a complex task. Therefore, the

research is aimed towards modelling one specific scenario to be able to identify challenging or wasteful

areas that most likely exist.


The methodology that was selected to conduct this research is an interpretation of the engineering

design process (ITEA, 2007), which is a systematic problem-solving strategy that aims to develop a

solution that meets predefined requirements or demands. Figure 2 shows the research methodology that was developed for this research.

In this scenario, the describe, identify and prescribe steps combined lead to a set of requirements

which is used to develop, test and potentially redesign a solution before implementing it into the real

scenario and evaluating its impact. A more detailed description of each phase is given in the next

sections.

3.2.1 Describe

The first step is to describe the current status of internal collaboration and how data is handled by

creating an information flow model according to the standard of a Data Flow Diagram (DFD) which

consists of processes, data stores and conditional statements. These conditional statements will

provide the ability to identify decision points and introduce logic that determines whether, in this

specific scenario, requests are to be approved or rejected resulting in different subsequent flows. With

a DFD model it will also be possible to evaluate the flow from different perspectives to identify which

role is responsible for which part of the information exchange and how this potentially could be

optimized.
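To illustrate the kind of model this step produces, a minimal sketch is given below (in Python, with purely hypothetical process names and data stores rather than the actual DFD of this case): a flow is represented as a list of processes, each with an owning role, the data stores it reads and writes, and an optional decision point, which is sufficient to trace which role owns which part of the exchange.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Process:
    # One DFD process: who performs it and which data stores it reads/writes.
    name: str
    role: str                          # e.g. "BSO" or "ATU seller"
    reads: List[str] = field(default_factory=list)
    writes: List[str] = field(default_factory=list)
    condition: Optional[str] = None    # decision point, e.g. "approval conditions met?"

# Hypothetical fragment of a flow, not the DFD that was modelled in this study.
flow = [
    Process("export monthly SMB accounts", "BSO", reads=["MSSales"], writes=["Excel file"]),
    Process("search for misaligned accounts", "ATU seller", reads=["Excel file"]),
    Process("review parent request", "BSO", reads=["Excel file"],
            condition="approval conditions met?"),
]

# Group the steps per role to see who owns which part of the information exchange.
by_role: dict = {}
for p in flow:
    by_role.setdefault(p.role, []).append(p.name)
print(by_role)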

3.2.2 Identify

After modelling the information flow, it is necessary to define value within the concept of information

management which will be used to determine which activities are considered value-adding and which

are not. Given an established definition of informational value, different types of waste can be

identified as well as to what extent they exist within the flow, necessary to answer the first two research questions. Additionally, suggestions for potential root causes can be made, enabling stakeholders within this research to critically reflect on their information flow in other scenarios as well.

FIGURE 2 – RESEARCH METHODOLOGY

3.2.3 Prescribe

After that, the aim is to optimize the system by eliminating non-value adding activities and developing

an information model that prescribes how information can be exchanged in a more efficient manner

using technology. This will be done by developing an optimized DFD-model including only necessary

and value-adding activities including required data stores. These data stores are then incorporated

into an Entity-Relationship (ER) model that shows how various data sources are linked to each other.

Resulting from this phase is a set of requirements necessary to cover the overall functionality of the

optimized scenario which is used to design and validate the solution that is to be developed.

3.2.4 Develop

The next step is to use the ER-model to develop a solution using different Microsoft products from

the Power Platform, i.e. PowerApps, Flow and PowerBI. The specific functionality of these products will

be described in section 3.3. The aim is to provide an end-to-end solution that automates the non-

value adding activities which were identified before. The functionality of the developed solution is

tested by applying a use case scenario. The prescribe, develop and test phases are part of a sub-iterative cycle that is aimed at optimizing the suggested solution. It is expected that during development the solution will be tested and feedback will be provided such that changes to the ER-model will be made, developed and repetitively tested in an agile manner until the solution meets all

requirements.

3.2.5 Implement

After the solution is validated through a proof-of-concept, it can be implemented to replace the real

workflow. Successful adoption of the solution requires a thorough strategy that incorporates every

aspect of the implementation such as modifications to fit the actual situation, i.e. connecting specific

roles to tasks. Other elements include planning the launch to avoid disrupting daily operations but

also ensuring the adoption by employees through clear instructions and managing changes

appropriately.

3.2.6 Evaluate

The aim of this phase is to evaluate insights that were gained during the different steps that were

conducted within this research, to determine the validity of potential types of non-value adding

activities that were found and to establish which prerequisites hold for an implementation of Power

Platform solutions to successfully eliminate digital waste, contributing to answering the second and

third research question. Another important aspect to consider is how these insights can be used to

provide feedback to the prescribe phase, enabling potential improvements on this optimization

strategy when applied in other scenarios.


3.3 Software

Microsoft has developed a solution called the Power Platform which consists of three products, i.e.

PowerApps, Flow and PowerBI that have separate functions but can be combined to provide a

customized operating platform.

PowerApps is a cloud-based application that enables users to develop a personalized application that

consists of built-in functionalities, simplifying the application development process while focusing

more on the customization aspect. It has a built-in common data model which provides a database

in which entities are stored and data can be referred to and edited from multiple locations.

Additionally, it can establish connections to read and write data from over two hundred external data

sources such as Microsoft OneDrive, SQL servers as well as online services such as social media sites

and weather applications.

Microsoft Flow is a tool that can be used to automate repetitive tasks and business processes by

connecting different applications and databases to each other through connections and APIs based

on predefined workflows or triggers.
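Flows are configured in a visual designer rather than written as code; the Python sketch below is therefore only an analogy of the trigger-and-action pattern that such a workflow automates, with hypothetical event fields and function names: when an event matches the trigger, the defined actions run without any manual email being sent.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Workflow:
    # Trigger/action pattern: run every action when the trigger matches an incoming event.
    trigger: Callable[[dict], bool]
    actions: List[Callable[[dict], None]]

    def handle(self, event: dict) -> None:
        if self.trigger(event):
            for action in self.actions:
                action(event)

def notify_reviewer(event: dict) -> None:
    # Stand-in for an automated notification to the BSO reviewer.
    print(f"notify BSO: new parent request for account {event['account_id']}")

# Hypothetical automation: a newly submitted parent request notifies the reviewer.
on_new_request = Workflow(trigger=lambda e: e.get("type") == "parent_request_created",
                          actions=[notify_reviewer])
on_new_request.handle({"type": "parent_request_created", "account_id": 12345})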

PowerBI is a visualization platform that can give end-users actionable insights through customized

dashboards based on data collected through different sources and connectors such as SQL servers,

PowerApps and Flow.

FIGURE 3 – OVERVIEW OF APPLICATIONS AND DATA MODELS IN THE POWER PLATFORM

4. Analysis, results and proposal

4.1 Describe

4.1.1 Account Revenue

When revenue is generated on either a managed or unmanaged account, an entry is made within

MSSales, a tool that manages sales and account revenue globally. If revenue is generated for a new

account, a unique identifier is created, i.e. the MSSalesID. It could also occur that revenue is generated

for an unmanaged (small to medium) account which is a subsidiary of a managed (enterprise) account.


That information is stored in the MSCALC tool, which manages and combines information from both

Customer Relationship Management (CRM) software as well as MSSales data. In MSCALC, unmanaged

subsidiary accounts are assigned as a child organization to a managed parent organization. A

significant number of managed accounts (parents) have multiple subsidiaries related to them as child

accounts. Whenever revenue is generated on a subsidiary account, matching occurs to ensure that

the correct MSSalesID of the parent organization is added to that specific data entry as a Top Parent

ID (TPID). In those situations, the revenue of the unmanaged account is recorded under the managed

account, as it contributes to the overall revenue that was generated for the Enterprise Operating Unit

and partially determines the account budget that is to be defined for next year.
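A minimal sketch of this matching rule is given below, assuming a simple in-memory mapping from subsidiary MSSalesIDs to the parent's MSSalesID; the identifiers and the helper function are illustrative only, and the actual MSSales/MSCALC matching is considerably more involved.

# Hypothetical parent structure: subsidiary MSSalesID -> parent MSSalesID.
parent_of = {
    200100: 100001,   # unmanaged subsidiary of a managed enterprise account
    200101: 100001,
}

def top_parent_id(mssales_id: int) -> int:
    # The TPID of a revenue entry is the parent's MSSalesID if the account is a
    # known subsidiary; otherwise the account becomes its own parent (TPID == MSSalesID).
    return parent_of.get(mssales_id, mssales_id)

revenue_entry = {"mssales_id": 200100, "amount": 50_000}
revenue_entry["tpid"] = top_parent_id(revenue_entry["mssales_id"])
print(revenue_entry)   # the revenue is rolled up under TPID 100001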

4.1.2 Requests

According to employees from the BSO team, existing accounts are often not recognized when revenue

is generated as well as accounts not being matched to the right parent account. In those situations, a

novel MSSalesID is generated and the account becomes its own parent, i.e. the Top Parent ID (TPID)

is the same as the MSSalesID and it will be considered an independent SMB-account. On a monthly

basis, employees from the Sales Operations team send an Excel file to ATU sellers consisting of all

novel SMB-accounts that have been generated for that month, of which some are potentially wrongly

considered to be a SMB. Within that list, sellers can search for accounts that should have been assigned

or parented to one of their managed accounts. If that is the case, they can submit a parent request

by adding a name of the desired parent to the entry within that Excel file. An employee from the BSO

team then approves or rejects this request. One of the conditions for approving a parent request is that

the account (or MSSalesID) should have been created less than two months ago. When this period

has passed, employees can still submit a request, but it is less likely to be approved. Other

requirements for approval include that the subsidiary organization should be owned by the desired

parent organization for more than 51%, and that no previous revenue was generated for the subsidiary

organization in the past three years. For each request, employees from the BSO have to search for

information that either supports or refutes these conditions leading to a decision to either approve

the request, approve it to be included into next year's reports, or reject it. In some situations,

organizations no longer exist as a subsidiary to another organization, i.e. they have become

independent and therefore should be assigned to themselves as a parent. There are scenarios in which

organizations have merged but the new (merged) organization is still a customer for which revenue

could be generated. It is then necessary to submit a merge request to combine both accounts without

losing existing information about previously recorded revenue. One organization then becomes the

victim organization and the other the survivor; the MSSalesID of the survivor is the one that remains.

Additionally, sellers often do not prioritize the search for missing child accounts until the end of the

fiscal year when they realize that some of their revenue is not recorded. In many situations the two-

month period has then passed so the account remains unassigned. Over the years, these factors have

contributed to a steady increase in the number of unassigned accounts, to the point where the list now consists of around 700,000 entries.
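The review conditions described above can be summarized in a small sketch (Python). The thresholds follow the description in this section, but the function name and parameters are hypothetical, and mapping the late, post-window case to an approval for the future year is a simplification of what is in practice a reviewer's judgement call.

from datetime import date, timedelta

def review_parent_request(created_on: date, ownership_share: float,
                          revenue_last_3_years: float, today: date) -> str:
    # Rough encoding of the approval conditions: >51% ownership by the desired parent,
    # no revenue on the subsidiary in the past three years, and a roughly two-month
    # window since the MSSalesID was created.
    if ownership_share <= 0.51 or revenue_last_3_years > 0:
        return "reject"
    if today - created_on <= timedelta(days=61):
        return "approve"
    return "approve for future year"   # or reject, at the reviewer's discretion

print(review_parent_request(created_on=date(2019, 4, 1), ownership_share=0.80,
                            revenue_last_3_years=0.0, today=date(2019, 5, 15)))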


4.1.3 Data Flow Diagram

Figure 4 shows the Data Flow Diagram for the scenario that is described. The processes depicted in

the blue shaded area within the DFD are the most repetitive and time-consuming and therefore define

the scope of this project for information flow optimization. The black arrows depict the process flow

whereas the grey arrows describe the flow of information. Processes 1 to 4 describe the steps that

need to be taken to create a database with all necessary information about the account, e.g. name,

address, account manager as well as its current parent-child structure and recommended parent. In

processes 5 and 6 an Excel file is created with SMB-accounts that have landed that month. The actual

monthly revenue (visible in the cloud in Figure 4), is reported in MSSales and added to the database

in process 7. Prospected revenue for the coming year is reported in MSAccess, an additional reporting

tool. These values are visible for the BSO team through an Excel plugin in which they can run queries

to export the specific revenue that was created within SMB-accounts and add that to the database in

processes 8 and 9. In process 10, the recommended parent from the SQL database is added to the

Excel file. During process 11, the BSO team then sends this file to the sellers by email including empty

columns in which e.g. a new parent can be stated as part of the parent request. They can search for

missing accounts based on for example their name, MSSalesID or revenue that was generated (process

12). When a suspected missing account is found, the suggested new parent is written down in the

additional columns, and the request is sent back to the BSO team by email (process 13). In process 14,

the BSO team reviews the request based on certain predefined conditions which can lead to different

outcomes. In some specific situations, there exists doubt whether a request should be approved or

rejected, so it is reassigned to the SMB-team being responsible for all small-to-medium businesses

(unmanaged) accounts (process 15c). They can then decide whether that specific revenue is allowed to contribute to the overall EOU revenue or whether it should remain at SMB (process 16b). The ATU

seller is informed on the outcome of the request(s) through manual emails (process 16a). In process

17 and 18, the requests that were approved are then updated by copying the correct new MSSalesIDs

into MSCALC, which is done by someone from the BSO team. The last step in this scenario is for

another member of the BSO team to approve the changes that were made in MSCALC (process 19).

FIGURE 4 – DATA FLOW DIAGRAM OF CURRENT SITUATION

4.2 Identify

4.2.1 Value definition

To be able to identify the different types of waste that exist in the Data Flow Diagram, it is necessary to

define value in the context of information flows. As described by (Bell, 2006) and depicted in Figure 1,

information is data which is stored and structured until it is interpreted by someone enabling it to

become knowledge. Over time, this knowledge is transformed into wisdom based on experience and

insights. Within this transformation, data could be considered the raw material of a production line to

which more value is added until it becomes a finished product, i.e. wisdom. However, as

information is less tangible and more subjective in its nature, a less quantifiable definition would be

more appropriate. Moreover, as information travels throughout different departments and levels

within an organization, the perceived value of a certain set of data or acquired knowledge changes.

The concept of informational value can be viewed from many different perspectives as well as the

level on which it is determined, i.e. whether it is on a local, regional or global level as well as an

operational or more strategic level. For example, data that seems unnecessary and therefore of no value within an operational setting could be useful in the long term, on a higher level, to gain

predictive insights and determine data-driven strategies. The scenario that is depicted within this

research occurs on a more local and operational level, in which data as well as information typically

only provides value after it has been interpreted, e.g. by a human being or machine learning models.

In order to achieve a consensus on value definition within the scope of this project, it is therefore

suggested that any activity described in the DFD which does not require knowledge or wisdom is a non-value adding activity. Based on this definition, an evaluation is made of the DFD processes

considered within the scope of this project. The activities which do not add value are shown as the red

colored processes in Figure 5, implying that around 58% of the tasks can be considered digital waste.

The activities which are depicted in white require employees from either the BSO, ATU or SMB team

to interpret and evaluate the information that is presented to them during this process, e.g. reviewing

or approving a request.

FIGURE 5 – NON-VALUE ADDING ACTIVITIES IN THE DATA FLOW DIAGRAM
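The 58% figure follows directly from the value definition above: the share of in-scope DFD processes that require no knowledge or wisdom. The short calculation below illustrates this on a purely hypothetical set of process flags, not the actual DFD scope.

# Illustrative flags only: True = requires interpretation (value adding), False = does not.
in_scope_processes = {
    "export accounts": False, "copy IDs": False, "send file by email": False,
    "search for account": True, "review request": True, "update MSCALC": False,
    "approve changes": True,
}

waste_share = sum(not v for v in in_scope_processes.values()) / len(in_scope_processes)
print(f"{waste_share:.0%} of the in-scope tasks are non-value adding")
# Applying the same calculation to the actual in-scope DFD processes gives roughly 58%.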


4.2.2 Types of waste

The activities that are depicted in Figure 5 are highly repetitive and require a significant amount of

time and resources from the BSO team, as they consist of verifying, checking and correcting a lot of

information. Processes 11, 13, 15c, 16b and 16c all include communicating with employees through

emails regarding the potential approval of requests based on information that usually requires

additional searching. There is no historic data on previously submitted requests which means that in

many cases emails, searches and requests are done multiple times, creating additional effort and time

spent by all employees involved. For requests that cannot be approved or rejected based on given

information, it is necessary to communicate with the SMB unit to perform additional reviewing. Based

on the identification of digital waste by (Hicks B. , 2007), especially the wasteful efforts and resources

related to data verification, correction and duplication occur frequently within this scenario. Many

accounts are unassigned or assigned to the wrong parent and requests are done multiple times due

to a lack of monitoring. The monthly repetition of sending Excel files with new revenue that was

generated for SMB accounts leads to a lot of time spent on file transfer and version management. This

frustrates the involved employees as they experience that these activities are not adding any value to

their operations and have a negative impact on their daily routine.

For some processes it seems that the technology used is lacking; for example, the Excel files with around 700,000 entries that do not filter correctly are most likely a symptom of the software not functioning according to these requirements. The efforts and resources spent on data duplication and verification are not

only caused by inappropriate technological solutions but are also due to a lack of information

management, i.e. the process that determines the setup of this scenario and how information flows

throughout the organization. The people, process, technology framework has been widely applied in

organizations to improve software development and implementation (Chen & Popovich, 2003). It

implies that each aspect within this framework is crucial for any application to be successful, as the

technology itself should be functioning but can simultaneously be useless if it is not adopted by the

people or if it supports improperly designed processes in the first place. In an era where the

significance of data and data-driven insights for strategic decision making is constantly increasing, it can be

suggested to expand this framework into a more current interpretation of what is necessary for

successful digital transformation.

Figure 6 represents an interpretation of the people, process and technology framework with the

addition of categorizations of digital waste as depicted in previous literature. As data affects and is

affected by all three aspects of the framework, it is considered to exist in between the people, process

and technology elements. The definition of active and passive digital waste as described by (Romero,

Gaiardelli, Powell, Wuest, & Thürer, 2018) provides a useful categorization to be applied to the

framework suggesting that a lack of technology can cause passive waste and that insufficient

information management on the process side can generate active waste.


Additionally, the various non-value adding activities that were found within the scope of this research

can be defined according to the classic categorization of waste as defined by (Liker, 2004) for the

Toyota Production System. The traditional eight types of waste are also applied to the framework

presented in Figure 6 and a more detailed interpretation that was defined within the analyzed

information flow is presented in Table 1.

Active waste – all time and resources spent on:
• Motion: verifying data, i.e. through interpretation by a certain person/tool
• Overproduction: duplicating files and handling excess data
• Waiting: finding and overcoming a lack of data
• Defects: correcting faulty data

Passive waste – all time and resources spent on:
• Transport: moving data to the right location, e.g. file handling
• Inventory: migrating and handling legacy data
• Over-processing: manual data entry
• Unused potential: workarounds, e.g. shadow IT, and unused data

TABLE 1 - CATEGORIZATION OF DIGITAL WASTE
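Table 1 can be read as a lookup from waste type to its higher-order category and interpretation; a direct encoding of the table (Python) is given below for reference.

DIGITAL_WASTE = {
    # waste type: (category, "all time and resources spent on ...")
    "motion":           ("active",  "verifying data, i.e. through interpretation by a person or tool"),
    "overproduction":   ("active",  "duplicating files and handling excess data"),
    "waiting":          ("active",  "finding and overcoming a lack of data"),
    "defects":          ("active",  "correcting faulty data"),
    "transport":        ("passive", "moving data to the right location, e.g. file handling"),
    "inventory":        ("passive", "migrating and handling legacy data"),
    "over-processing":  ("passive", "manual data entry"),
    "unused potential": ("passive", "workarounds, e.g. shadow IT, and unused data"),
}

active_types = [t for t, (category, _) in DIGITAL_WASTE.items() if category == "active"]
print(active_types)   # ['motion', 'overproduction', 'waiting', 'defects']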

Within this scenario the non-value adding activities that are considered motion, overproduction,

waiting and defects can be attributed to inadequate processes or information management efforts

and are therefore categorized as active waste. Wasteful tasks that are considered transport, inventory,

over-processing and unused potential in this information flow are caused mainly by

a lack of sufficient technological solutions or integrations, implying that they are defined as passive

waste.

FIGURE 6 – FRAMEWORK OF FACTORS CAUSING DIGITAL WASTE


The types of waste that occur most frequently in this scenario are transport, motion and defect related

activities as most time is repetitively spent on file handling, verification and correcting faulty data.

Additionally, the unused potential of the data collected through requests, which could yield useful insights, is a significant waste, as is the amount of time spent on filtering and searching within the overproduced number of data entries in the current account database. Beyond the scope of this

scenario, legacy data is suggested as an inventory type of waste as it consists of large quantities of

data which typically requires significant resources to be migrated and accessed, thereby complicating overall data transfer processes. Shadow IT was added as an additional form of passive

digital waste which can be found when a novel software application is implemented without successful

adoption, i.e. the technology is not used to its full potential and therefore creates inefficient and

insecure workarounds.

In typical production flows, these types of waste can be eliminated by improving either the process

itself or the underlying technology through for example automated solutions that support these

processes. When it comes to information flows, it seems that the people involved in handling the

information play an important role in the level of waste that can be found. When more people are

involved in an information flow, logically a higher level of subjectivity can be expected. People evaluate

and interpret information differently, which is useful in many situations but can also add a level of

complexity when it is not necessarily required. The information management processes are usually

defined by people as well, with the right technology implemented to support these processes. In order

to improve both the people and the process elements, i.e. both active and passive waste within the

framework, it is necessary to critically reflect on the technology that is used and whether it could be

optimized in order to remove the resulting digital waste.

4.3 Prescribe

4.3.1 Functional Requirements

To significantly optimize the existing information flow, it is necessary to develop a robust technological

solution which automates the non-value adding tasks. The main group targeted in this scenario is the sellers (account managers), to whom the monthly dataset of newly created SMB accounts is currently sent. Instead of receiving a monthly update, they should be able to search a

database which is automatically updated and consists of all potentially misaligned accounts. Two

search scenarios are considered to distinguish between managed accounts and their related child

accounts as well as the unmanaged child accounts with corresponding parent accounts. Additional

information which is necessary for the sellers includes the currently assigned account manager of the

(managed) account, the revenue billed from the previous three years as well as the current or expected

revenue for this year.

A recommendation for the account that is most likely to be the right parent for a certain account should be included to simplify the matching process. The sellers then submit parent or merge requests, which automatically become available to employees from the BSO team and contain all the information required for them to approve the request, reject it or approve it for the future year (FY).


Requests which are approved should then be stored in the right format directly for the parent-child

structure of the accounts to be updated automatically in the MSCALC tool. Additionally, whenever a

request is updated with information from the approver(s), the seller should be informed directly of the approval status and corresponding comments.
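As an illustration of how these requirements translate into a shared request store that replaces the email-based status updates, the following Power Apps formula sketches the approval action in the back-office application. It is a minimal sketch: the data source, control and column names (such as '[dbo].[ParentRequests]' and GalleryRequests) are assumptions and not the exact identifiers used in the developed solution.

    // OnSelect of an 'Approve' button in the approvals screen (hypothetical names)
    Patch(
        '[dbo].[ParentRequests]',
        GalleryRequests.Selected,
        {
            Status: "Approved",
            ApproverComments: CommentsInput.Text,
            ApprovedBy: User().Email,
            ApprovedOn: Now()
        }
    )

Because the seller-facing application reads from the same table, the updated status and comments appear on the seller's request screen without any separate communication.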

4.3.2 Data Insights

The information regarding these requests, i.e. whether they have been approved or rejected, including the commentary provided by the BSO team, should be stored such that requests cannot be duplicated and no further communication is necessary to discuss how and why requests were handled.

Additionally, requests that are approved for the future year should be stored as well to ensure that no

information is lost in the process and necessary insights can be gained through visualization.

4.3.3 Optimized Data Flow Diagram

The optimized data flow diagram that incorporates the aspects described in section 4.3.1 is given in Figure 7. The activities that included communication through email are removed and will be done

automatically. In order to have one common data model, the suggestion is to locate the databases that store all current accounts and the parent and merge requests on the Microsoft SQL Server, such that this information can be accessed directly by the BSO team.

As was described before, the technology that will be used to develop this solution is based on the

Microsoft Power Platform, i.e. PowerApps, PowerBI and Flow. PowerApps has numerous existing APIs

through which different applications can be integrated. Unfortunately, there is currently no integration

with the MSCALC tool, which implies that updating the parent-child account structures will still be

done manually using an Excel file in the right format. Nevertheless, the suggested DFD still shows that

the total number of manual activities is reduced by half and that non-value adding activities have been

reduced from 58 to 28 percent of the total number of tasks. At the same time, this does not necessarily imply that the total amount of time and effort spent in this scenario will decrease to a similar degree, as the two situations are separate use cases on which a quantitative comparison is difficult to perform before implementation.

FIGURE 7 - OPTIMIZED DFD


4.3.4 Entity-Relationship model

The suggested data flow diagram as described in Figure 7 consists of four data stores, i.e. entities

which are necessary to conduct the processes in the optimized information flow. Figure 8 shows the

Entity Relationship model which is used to visualize the data model supporting the data flow diagram.

It includes all the existing entities necessary to retrieve information from MSCALC, the SQL server,

MSAccess and MSSales to generate an up to date account database as well as additional information

about internal users of the application. Additionally, the four entities required to support the DFD as

suggested in Figure 7 are highlighted in the blue area. These consist of three SQL databases: one containing the current accounts, one for parent requests and one for merge requests. The

fourth additional data store is used to retrieve user information through an existing API between

PowerApps and Office365. For simplicity only the primary key which relates each entity to one another

is included in the ER-model, namely either the OrgID or MSSalesID, both referring to one instance of

an account.
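To make these relationships concrete, the sketch below shows how a request record could be joined to the current-accounts entity through its key and how information about the signed-in user is retrieved through the Office 365 connector. The data source and column names are assumptions for illustration rather than the exact identifiers in the model.

    // Retrieve the account that a selected request refers to, using MSSalesID as the key (hypothetical names)
    LookUp(
        '[dbo].[CurrentAccounts]',
        MSSalesID = GalleryRequests.Selected.ChildMSSalesID
    )

    // Retrieve profile information about the signed-in user through the Office365Users connector
    Office365Users.MyProfile()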

4.4 Develop

To support all requirements defined in the previous section, two applications are developed within Microsoft PowerApps with customized functionalities, for which the developed code can be found in Appendix 1, ensuring that the designed solutions can be recreated. All images depicted in this section are anonymized to ensure GDPR compliance regarding customer data.

FIGURE 8 – ENTITY-RELATIONSHIP MODEL


4.4.1 CALC_Cleanup

The first application is developed for account managers, forming a target group of around 40 to 50

employees. As was stated before, two search scenarios are suggested from an account manager, i.e.

through managed and unmanaged accounts. Figure 9 shows the designed screen in the

CALC_Cleanup application including a gallery on the left side showing all managed accounts including

their segment (public or commercial) their subsegment (specific industry) as well as the responsible

account manager. This account database is updated daily, showing the most current version of the

parent-child structure as it is stored in MSCALC.

FIGURE 9 - MANAGED ACCOUNT SCREEN IN CALC_CLEANUP APPLICATION (ANONYMIZED)

There are multiple filtering options designed to simplify the search for any specific account, such as

the option to only show the accounts managed by the seller who is logged in. When a managed

account is selected, all related child organizations are shown in the gallery in the middle of the screen.

When a specific child organization is selected, there are two options for the user to choose from: to

create either a parent or merge request. When a new parent is required, the user can either find a

parent in the dropdown list or select the unparent checkbox, implying that the current child

organization becomes an independent account and the Top Parent ID will be the same as the MSSales

ID. This scenario will be used mostly to remove child organizations which are mistakenly assigned to

the wrong parent organization or for organizations that have become independent.
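A minimal sketch of how these options can be expressed as Power Apps formulas is given below; the control, data source and column names are assumptions and differ from the exact code in Appendix 1.

    // Items property of the managed-accounts gallery: optionally show only
    // the accounts managed by the seller who is logged in (hypothetical names)
    Filter(
        '[dbo].[CurrentAccounts]',
        IsManaged = true,
        !MyAccountsToggle.Value || AccountManagerEmail = User().Email
    )

    // Requested Top Parent ID: when the unparent checkbox is selected, the child
    // organization becomes independent and its Top Parent ID equals its own MSSales ID
    If(
        UnparentCheckbox.Value,
        GalleryChildren.Selected.MSSalesID,
        ParentDropdown.Selected.OrgID
    )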


Similarly, Figure 10 represents a separate screen which is designed to show all the unmanaged

accounts, including all the SMB-accounts which are mistakenly independent but instead should

potentially be assigned to a managed account.

FIGURE 10 - UNMANAGED ACCOUNT SCREEN IN CALC_CLEANUP APPLICATION (ANONYMIZED)

As was stated before, there are some conditions which predefine whether accounts are eligible to be

parented by a seller or not, such as whether the account was created within the last two months, if

there was no revenue billed the last three years and if for example the child organization is owned by

its parent organization for more than 51%. If any of these conditions is not met, the Parenting Allowed property will become ‘No’, which informs the seller beforehand that it is unlikely for the request

to be approved. It does not prevent the request from being submitted as some requests can also be

approved for next year. The seller is obligated to add comments to validate the request, otherwise the

submit button will provide an error message. Additionally, when an organization is selected, a search

query is executed in the parent request database to determine whether a request for that account has been submitted previously, which new parent was requested and what the status of that request is. In that way, duplication of requests is prevented, as the same request cannot be submitted twice.
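The following formulas sketch how these checks could be implemented in Power Apps; they are simplified for illustration and all data source, control and column names are assumptions.

    // Parenting Allowed: all eligibility conditions described above must hold (hypothetical names)
    If(
        ThisItem.CreatedDate >= DateAdd(Today(), -2, Months) &&
        ThisItem.RevenueLastThreeYears = 0 &&
        ThisItem.ParentOwnershipPercentage > 51,
        "Yes",
        "No"
    )

    // OnSelect of the submit button: require a comment and prevent duplicate requests
    If(
        IsBlank(CommentsInput.Text),
        Notify("Please add a comment to justify the request.", NotificationType.Error),
        !IsBlank(LookUp('[dbo].[ParentRequests]', ChildMSSalesID = GalleryChildren.Selected.MSSalesID)),
        Notify("A request for this organization has already been submitted.", NotificationType.Warning),
        Patch(
            '[dbo].[ParentRequests]',
            Defaults('[dbo].[ParentRequests]'),
            {
                ChildMSSalesID: GalleryChildren.Selected.MSSalesID,
                RequestedParentOrgID: ParentDropdown.Selected.OrgID,
                RequestedBy: User().Email,
                Status: "Pending",
                Comments: CommentsInput.Text
            }
        )
    )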


Figure 11 shows a third screen designed for the CALC_Cleanup application, which lists the user’s requests and their status, including comments made by the approver.

FIGURE 11 - REQUEST SCREEN IN CALC_CLEANUP APPLICATION (ANONYMIZED)


4.4.2 CALC_Approvals

The second application that is developed to support the prescribed scenario is the CALC_Approvals

application which functions as a back-office application for BSO employees. Figure 12 shows a screen

with pending requests which are created in the CALC_Cleanup tool and can be approved directly,

approved for next year or rejected, along with explanatory comments made by the approver. Approved requests can be filtered, and when the export button is pressed, all necessary information will be patched to an Excel file in the exact format used for importing into the MSCALC tool.

In that way, the number of manual activities is minimized as much as possible.
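As an indication of how such an export could be wired up, the sketch below copies every approved request into an Excel-backed table that mirrors the MSCALC import format. The connector, table and column names are assumptions, and in practice this step could equally be handled by a Microsoft Flow.

    // OnSelect of the export button (hypothetical names)
    ForAll(
        Filter('[dbo].[ParentRequests]', Status = "Approved"),
        Patch(
            MSCALCImportTable,
            Defaults(MSCALCImportTable),
            {
                MSSalesID: ChildMSSalesID,
                TopParentID: RequestedParentOrgID
            }
        )
    )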

FIGURE 12 – OVERVIEW REQUESTS IN CALC_APPROVALS APPLICATION (ANONYMIZED)

4.4.3 Challenges during development

The main challenge during design and development of these solutions was to establish connections

between the SQL server and Power Platform using an on-premise gateway. The properties within the

entities that were interconnected through this gateway consisted of different formats and naming structures, which caused some inefficiency due to a lack of standardization. Outside of the scope of

this particular project, the decision was made to move from an on-premise SQL server to a cloud-

based server, implying that the gateway will have to be established once again.

Applications within PowerApps are always developed in a specific environment, either for trial,

development or production purposes with corresponding functionalities. The production environment

is most suitable for applications that are used by multiple employees and need to be scalable and secure for actual usage. If a production environment is to be used internally at Microsoft, a specific

request must be made to monitor which employees are building which tools for what purpose. This

turned out to be another hurdle in the development process as it required more time than anticipated.

Another challenging aspect is that an on-premise SQL gateway is not supported in a production

environment, implying that the cloud gateway has to be established before further testing and

development in the production environment could be realized.


4.4.4 Testing

Developing technology that supports a certain process requires the involvement of the stakeholders, who are asked to critically reflect on the process they are using. It also gives the end user insight into the data model that supports their daily operations. This resulted in a series of internal meetings with the BSO team to state the requirements, which were then translated into functionalities in the tool. These functionalities were then presented in subsequent meetings in which feedback was provided in a cyclic and agile manner, until all requirements were met and no additional improvements were necessary.

4.5 Implement

4.5.1 Scalability and adoption

As was mentioned before, the target group for the applications consists of account managers and employees from the BSO team. Some important aspects to consider during the implementation phase

are the deployment, scalability and permissions for the designed solutions. Additionally, to ensure that

the applications will be used, an appropriate adoption strategy is to be determined as well as a plan

for managing the applications and incorporating potential changes and updates after implementation.

For the applications to be deployed amongst multiple teams and departments, it is necessary to

establish a secure production environment on the Power Platform in which the applications can run.

The applications are aimed to be launched at the start of the new fiscal year, but for testing purposes

it is necessary to give read/write rights within the production environment to different users to simulate

a real-world scenario.

For a novel tool to be successfully adopted, it can be useful to establish early adopters within the

organization, i.e. stakeholders in the ecosystem who can play an important role in engaging and

motivating other employees to use the tool. These early adopters could, for example, be employees in a managing role who motivate their own team members, or employees within the team itself who get a first glimpse of the tool and then use their experience to get other team members on board. An introductory session

with a presentation and/or workshop can provide a simple explanation on the functionality of the

application and the scenarios that it involves.

4.5.2 Application management

An important aspect is to investigate the actual usage of the tool, to determine which teams use the tool more than others and to identify the underlying causes. It is crucial to share knowledge and

experience about the applications in terms of how they are structured and how potential changes and

updates can be made easily. This is necessary to ensure that this knowledge does not remain siloed

but that instead multiple admins can edit the application and a more agile software developing

environment is established.


4.6 Evaluate

The case study that was examined within this research provided an example of how digital waste can be found in day-to-day operations and how it makes employees less efficient in terms of time and effort spent on non-value adding activities. The tool that was developed to eliminate these activities

will be implemented beyond the timeframe of this project, so a thorough evaluation of its impact

cannot be made at this point. Nevertheless, more insights on the identification of potential root causes

of digital waste can be investigated further.

Throughout the whole organization, a lot of information is collected and stored about accounts in

different manners. For example, information that is collected regarding potential sales opportunities

ends up in the Customer Relationship Management (CRM) tool, whereas information regarding

revenue is stored in the MSSales tool. In both tools, a specific identifier is created even though both refer to the same account. The MSCALC tool, which is used to form a bridge between both tools, can be

considered a master data management effort ensuring that all information related to an account is

stored in one place to create a single point of reference for different departments, employees as well

as applications. The inaccurate alignment of new revenue onto existing accounts is mainly caused by

a variety of billing systems in which revenue can be reported, depending on the type of product for

which a license is deployed. This mismatching within MSSales occurs on a global level in the

organization and generates a bulk of erroneous account structures in the MSCALC tool. The tool that

was developed within this research may optimize daily operations for the subsidiary in the Netherlands, but within this scenario the technology and processes that originate on a global or strategic level are the underlying causes of this type of operational waste.

This leads to the suggestion that, from an operational perspective, the framework for digital waste

should include a hierarchical aspect to incorporate the level on which the root causes of waste can be

identified, whether it is operational, tactical or strategical as well as local, regional or global. The

suggested framework is presented in Figure 13, in which the lower, operational level in the pyramid

consists of the types of waste as presented in Figure 6. These are the types of waste that are found in

daily and repetitive information flows such as the one investigated in this project. Solutions such as

the Power Platform can be used to rapidly automate many of these tasks as they are able to support

specific and customized business needs or processes with a short and adaptive lifecycle. These

solutions are also used mainly on a local level such as the CALC_Cleanup application that will be used

within the Dutch subsidiary.


FIGURE 13 - FRAMEWORK FOR DIGITAL WASTE CATEGORIZATION

Passive waste that can be found on an operational level could be caused by a lack of integrated

solutions on a tactical level which are perhaps not technically possible to provide. For example,

extensive transporting of files could be caused by a lack of APIs that interconnect necessary functional

areas, i.e. the applications are siloed. Similarly, this lack of integrations could be caused by certain

occurrences on a strategical level such as organizational mergers which have significantly impacted

master data management efforts. Active waste on the other hand, could be caused by the existence

of siloed departments that focus on a specific business area without intra-functional standardization

efforts, i.e. there are no established information management efforts which enable a simplified

knowledge exchanging environment. For example, the time and effort spent on finding information

to support decisions is amplified by a lack of standardized file management throughout the

organization, i.e. in terms of naming and storing files. Typically, this also leads to duplicated or

incorrectly versioned files which are stored separately within departments throughout the

organization. In turn, these siloed departments can be the result of business decisions on a more

strategic level which realized a certain organizational structure.

Additionally, a pace-layered application strategy as developed by Gartner (Mesaglio, 2016) can be

applied to the digital waste framework, as it provides a categorization of different levels of IT-systems

within an organization based on the business processes they support including their corresponding

lifecycle. Gartner describes that applications supporting critical business processes on a strategical

level such as supply chain operations and customer relationship management (CRM) tools belong to

the Systems-of-Records. These applications typically exist for longer than ten years since the processes

they support are usually well established within the organization and are less subjected to change.


Applications that support more specific business processes such as product development exist on the

tactical level and are considered Systems-of-Differentiation, i.e. they have a shorter lifecycle of around

one to three years as they need to be updated more frequently in order to adapt to changing

functional requirements. The applications with the shortest lifecycle exist on an operational level within

the Systems-of-Innovation, with a lifecycle of zero to twelve months, as they are mainly used to

support highly customized and adaptable business processes such as the ones that can be developed

using the Power Platform. The prospected lifecycle as well as the robustness of the application

increases for each higher level in the hierarchy.

To put the current scenario into perspective of the suggested framework, it can be stated that the

billing systems as well as MSSales and MSCALC exist on a Systems-of-Records level within the

infrastructure as they support critical business processes to receive income. Simultaneously, these

applications cause the digital waste that is found on the operational level. Despite the efforts of

cleaning up this waste through implementing a novel application on the Systems-of-Innovation level,

the number of accounts being incorrectly assigned a new MSSalesID or parent will remain unchanged.

In order to optimize these systems and to minimize the error rate of accounts being assigned to the

wrong parent account, efforts are made on a tactical and strategical level to improve mapping of new

accounts based on an accurate recommendation model. As was described before, current

recommendations are made based on a fuzzy match, i.e. the similarity between organization names,

whereas this model compares the organization name to the existing parent accounts based on

additional properties that increase the likelihood of an account being assigned to a certain parent

account. If there is only one recommended parent account and no revenue has been booked on the

account yet, the mapping can be done automatically. If that is not the case, an employee will have to

approve reassigning the account to a suggested parent organization, similar to how the

CALC_Cleanup tool was developed within this project.
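As a minimal sketch of the decision rule described above, and assuming hypothetical table names, the automatic mapping step could be expressed as follows; the actual recommendation model operates on the global systems and is not part of the applications developed in this project.

    // Auto-map only when there is exactly one recommended parent and no revenue booked yet
    If(
        CountRows(ParentRecommendations) = 1 && Sum(BilledRevenue, Amount) = 0,
        "Map the account automatically to the single recommended parent",
        "Route the account to an employee for manual approval"
    )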

Implementing such efforts on a Systems-of-Records or Systems-of-Differentiation level will reduce the

amount of time and resources spent on unnecessary activities on an operational level as well as the

effort spent to solve these issues within the Systems-of-Innovation level. Whereas these innovative customized applications may provide a short-term solution to pressing problems, this is less preferable than addressing the issue where the waste originates, as it entails a potentially unnecessary expansion of the (common) data model.


5. Discussion

Although the scope of this research was narrowed down to only one case study, useful insights can

still be gained from the analysis and suggested optimization of the selected information flow. After

describing the current scenario in terms of processes and data stores, the aim was to answer the first

research question by defining value within the concept of information management in order to apply

a lean optimization strategy. It can be stated that due to the subjective nature of information, it is

difficult to determine one definition of value which is measurable within information flows. It depends

on the perspective that is applied to the scenario as well as the level on which the flow occurs, i.e.

information can have different value on an operational level compared to a strategical level within an

organization. Since an operational scenario is considered within this research, a conceptual pyramid

of knowledge as described within literature is used to support the definition that distinguishes between

non-value adding information and value-adding knowledge, i.e. before and after interpretation.

This definition of value was then used to answer the second research question, which described the

necessity to determine which types of digital waste can be found within information flows. Through

visualizing the data flow diagram while distinguishing between value-adding and non-value adding

processes, it could be determined that most activities are digital waste. In terms of defining the

different types of digital waste that can be found within an information flow, the people, process,

technology framework was used to distinguish between active waste being caused by a lack of

information management and passive waste being attributed to a lack of appropriate technological

solutions. The traditional categorization of waste within the Toyota Production System provided the

ability to directly translate waste found in production lines to this particular information flow.

Additionally, the digital waste framework was expanded to incorporate a level of hierarchy as

presented in Figure 13. It provides the categorization of various forms of digital waste, whether they

can be considered passive or active and which underlying causes can be identified from both a tactical

and strategical level.

A solution to overcome or eliminate operational digital waste is to develop technological solutions

that automate repetitive and mundane tasks. As within a completely automated production line,

focusing on technology can help deliver products with high quality and efficiency. Typically,

applications which are used for one specific business process require months of software development

and testing before the sudden realization that those predefined requirements are not met or have

changed in the meantime. The third research question was stated to determine whether decentralized

application development, i.e. automated flows or customized applications such as Microsoft Flow and

PowerApps can be used to eliminate digital waste. It may be noted that an agile software developing

environment such as the Power Platform allows employees outside of the IT department to build their

own solution which meets their particular demands without spending a significant amount of time or

resources on traditional development and implementation. In a relatively simplified manner, these

innovative approaches enable employees to remove the wasteful activities they encounter themselves,

as they have gained the most experience with that specific process. It can give additional insights to

employees on how the organization is structured in terms of data and underlying architectures,

leading to more understanding in how data flows from generation towards different end points in the


organization. It can help to establish a collaborative environment in which multiple employees are

involved in the design and development of the application in an agile and efficient manner.

Additionally, collaborative application development while using the digital waste framework can

enable employees to critically reflect on their internal workflows and define the exact requirements

that need to be incorporated in the technology.

On the other hand, decentralized app development also involves potential risks, such as data ownership,

management, security and compliance. If a common data model is established for a certain

organization, it is important to consider which employees can read/write to certain parts of the model.

The data model can easily be expanded, adding a potentially unnecessary level of complexity and

dependencies as well as additional challenges in guarding the edges of the model in terms of data

security, compliance and governance. A suggestion would be to supply the common data model with

a meta data model that consists of all the additional information required to track where data is used

in what manner and by whom. Logic could be applied to that underlying data model to ensure that

no data is used in an insecure or inappropriate manner, if possible. If the number of customized

applications on the operational level increases, it is also necessary to keep appropriate application

lifecycle management (ALM) efforts in place to keep track of versions and updates as business

processes and employee roles may change as well as their corresponding requirements and

permissions. Another important concept that should be incorporated during the democratization of

app development is standardization. Establishing rules of conduct such as naming entities or

properties according to a specific format can be useful to simplify the design and enable more efficient

development.
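As a rough illustration of the suggested meta data model, the record shape below logs which entity is used, by which application and user, and in what manner; all names are assumptions, and a real implementation would likely live in the underlying data platform rather than in an individual app.

    // Hypothetical usage-log record written alongside the common data model
    Collect(
        colDataUsageLog,
        {
            EntityName: "ParentRequests",
            UsedByApplication: "CALC_Cleanup",
            UsedByUser: User().Email,
            AccessType: "Write",
            Timestamp: Now()
        }
    )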


6. Conclusions

In an era where networked interconnectivity provides abundant information sources, it is crucial for

organizations to remain competitive through critically reflecting and optimizing their business

operations. Organizations typically struggle with siloed departments that lack appropriate information

management while employees experience that much of their time and effort is spent on mundane

and repetitive tasks. More specifically, a scenario within the Dutch subsidiary of Microsoft was

considered for analysis to determine whether similar types of waste as within traditional production

lines can be found and if similar strategies can be applied for optimization. A thorough analysis and

visualization of the current scenario was made using a data flow diagram. The concept of value within

information flows was then defined from an operational perspective and applied to the diagram, in

order to identify which of the current activities did not require interpretation and could therefore be

classified as non-value adding. A categorization was then made based on the traditional eight types

of waste as defined within the Toyota Production System, as well as another definition that

distinguishes between active and passive waste. Additionally, these types of waste were placed in a

people, process and technology framework, as each aspect can contribute to the existence of digital waste to a certain extent. After identifying the current types of digital waste, an optimized data flow

diagram and corresponding entity-relationship model was designed which resulted in eliminating

nearly all digital waste. Based on this redesigned information flow, a set of requirements was

determined to create a decentralized application using the Power Platform in an iterative cycle of

designing, developing and testing two applications. Intermediate feedback sessions were held with

stakeholders to ensure that all previously and newly defined requirements were met in an agile

manner. The scalability and technical deployment of the solution turned out to be challenging aspects

during the design and development phases of these two applications. They will be implemented

beyond the scope of this research, so unfortunately a thorough evaluation of the impact on current

levels of digital waste cannot be conducted at this point.

Regardless, it may be concluded that a decentralized application developing environment can provide

the ability to optimize information flows on an operational level and can be used as an attempt to

significantly reduce non-value adding activities or digital waste in a simplified manner. Simultaneously,

it is important to consider that such a democratized and decentralized ecosystem of customized

applications enables the expansion and increased complexity of the common data model, which

requires application lifecycle management efforts as well as edge security to ensure data compliance

and governance. It must be noted that the suggestions made within the scope of this research are

limited by the fact that it consists of one case study, narrowing the possibility to state whether this can

be applied generally to information flows in which mundane tasks can be found.

Nevertheless, the findings and suggested framework could still be used to guide organizations that

aim to optimize internal workflows through the identification of operational digital waste as well as

underlying root causes within their business strategy or IT infrastructure.


7. Recommendations for future research

As this research consists of a single case study, it is recommended to investigate multiple scenarios in

order to validate the suggested digital waste framework. Especially when scenarios on different levels

within an organization are considered, an evaluation can be made on the current definition of value

in an information flow as well as the types of waste that are found. This would provide the possibility

to define more potential root causes within the current framework such that it is applicable to a variety

of organizations and corresponding information flows.

Additionally, investigating multiple scenarios could also give more insight in the occurrence of

potential consequences such as the ones suggested in this research, e.g. data model expansion and

corresponding compliance and security issues. Further research could therefore be done to determine

guidelines or rules of conduct when it comes to developing such applications to find the prerequisites

for an optimal balance between a traditional organization with a separate IT department and a perhaps less structured, decentralized development environment.


8. Bibliography

Bell, S. (2006). Lean Enterprise Systems: Using IT for Continuous Improvement. Hoboken, New Jersey: John

Wiley & Sons.

Checkland, P., & Poulter, J. (2006). Learning for action: a short definitive account of soft systems

methodology and its use for practitioners, teachers and students. John Wiley & Sons.

Chen, I. J., & Popovich, K. (2003). Understanding customer relationship management (CRM): People, process

and technology. Business Process Management Journal, 9(5), 672-688.

Durugbo, C., Tiwari, A., & Alcock, J. R. (2013). Modelling information flow for organisations: a review of

approaches and future challenges. International Journal of Information Management, 33(3).

Feldman, S. S. (2001). The high cost of not finding information. An IDC White Paper.

Hasan, R., & Burns, R. C. (2013). The life and death of unwanted bits: Towards proactive waste data

management in digital ecosystems. arXiv: Emerging Technologies, 144-148.

Hicks, B. (2007). Lean information management: Understanding and eliminating waste. International Journal

of Information Management, 27(4), 233-249.

Hicks, B. C. (2006). A study of issues relating to information management across engineering SMEs.

International Journal of Information Management, 26(4), 261-283.

Hoitash, R. K. (2006). Measuring information latency. The International Journal of Digital Accounting

Research, 6(11), 1-24.

ITEA. (2007). Standards for Technological Literacy. Reston: ITEA (International Technology Education

Association).

Liker, J. (2004). The Toyota Way. McGraw Hill.

Lodgaard, E. S. (2018). Knowledge Sharing in Product Development Teams. IWAMA (International Workshop

of Advanced Manufacturing and Automation) (pp. 432-438). Changzhou, China: Springer.

Mesaglio, M. H. (2016). Pace-Layered Application Strategy and IT Organizational Design: How to Structure

the Application Team for Success. Gartner.

Murray, M. (2003). Strategies for the successful implementation of workflow systems within healthcare: a

cross case comparison.

Romero, D., Gaiardelli, P., Powell, D., Wuest, T., & Thürer, M. (2018). Digital Lean Cyber-Physical Production

Systems: The Emergence of Digital Lean Manufacturing and the Significance of Digital Waste. International

Conference APMS (Advances in Production Management Systems) Production Management for Data-

Driven, Intelligent, Collaborative and Sustainable Manufacturing, (pp. 11-20). Seoul, Korea.

Wilson, M. (2015). Implementation of Robot Systems: an introduction to robotics, automation and successful

systems integration in manufacturing. London, England: Elsevier.


9. Table of Figures

Figure 1 – pyramid of knowledge (Bell, 2006) .......................................................................................................9

Figure 2 – research methodology .......................................................................................................................... 12

Figure 3 – overview of applications and data models in the power platform ............................................ 14

Figure 4 – data flow diagram of current situation ............................................................................................. 16

Figure 5 – non-value adding activities in the data flow diagram ................................................................... 17

Figure 6 – framework of factors causing digital waste ..................................................................................... 19

Figure 7 - optimized dfd .......................................................................................................................................... 21

Figure 8 – entity-relationship model .................................................................................................................... 22

Figure 9 - managed account screen in calc_cleanup application (anonymized) ...................................... 23

Figure 10 - unmanaged account screen in calc_cleanup application (anonymized) ............................... 24

Figure 11 - request screen in calc_cleanup application (anonymized) ......................................................... 25

Figure 12 – overview requests in calc_approvals application (anonymized) .............................................. 26

Figure 13 - framework for digital waste categorization ................................................................................... 29

TRITA-ITM-EX 2019:161

www.kth.se

