A Service Management System for Ticketing Maintenance Engineers' Time
Dovile Baranauskaite
Submitted in accordance with the requirements for the degree of
BSc Computer Science
2016/2017
School of Computing
FACULTY OF ENGINEERING
The candidate confirms that the following have been submitted:

Submitted as Report and Appendices to SSO (10/05/17):
1. Final Report
2. Comparative study of existing Service Management Systems
3. Wireframe of the Service Management System
4. Database UML diagram
5. Minutes of meetings with the client
6. User evaluation questionnaires

Submitted as Source Code URLs to Client, Supervisor and Assessor (10/05/17):
- SQL database of the Service Management System
- Web API of the Service Management System

Submitted as signed forms in an envelope to SSO (10/05/17):
- User evaluation participant consent forms
Type of Project: Software Product
The candidate confirms that the work submitted is their own and the appropriate credit has been given
where reference has been made to the work of others.
I understand that failure to attribute material which is obtained from another source may be
considered as plagiarism.
(Signature of student) ___________________________________
© 2017 The University of Leeds and Dovile Baranauskaite
Summary
This report details the development of a Service Management System for Ticketing Maintenance Engineers' Time. An external client, Sycous, is involved in this project and will adopt and extend the required deliverables – a web Application Programming Interface and a database – in the future.
The report initially discusses problem research and requirements analysis for the system, followed by the software development process, testing, evaluation and the final outcomes of the project, along with a discussion of future extensions.
Table of Contents
Summary ............................................................................................................................... iii
Table of Contents ................................................................................................................. iv
1 Introduction ........................................................................................................................ 1
1.1. Context ................................................................................................................ 1
1.2. Problem Statement .............................................................................................. 1
1.3. Aim and Objectives ............................................................................................. 3
1.4. Deliverables ......................................................................................................... 4
1.5. Relevance to Degree Program ............................................................................. 4
1.6. Summary ............................................................................................................. 5
2 Project Management .......................................................................................................... 6
2.1. Choice of Methodology .......................................................................................... 6
2.1.1. Waterfall Method ...................................................................................... 6
2.1.2. Agile Iterative Development ..................................................................... 7
2.1.3. Justification of Choice ............................................................................... 8
2.2. Project Schedule ..................................................................................................... 9
2.3. Iteration and Task Management ........................................................................... 10
2.4. Source Control ..................................................................................................... 11
2.5. Communication with Client ................................................................................. 13
2.5.1. Regular Meetings .................................................................................... 13
2.5.2. Communication Alternatives ................................................................... 13
2.6. Risk Assessment and Mitigation .......................................................................... 14
2.7. Summary .............................................................................................................. 15
3 Problem Research ............................................................................................................ 16
3.1. Requirement Analysis .......................................................................................... 16
3.1.1. Background Research .............................................................................. 16
3.1.2. Interviews ................................................................................................ 17
3.1.3. Sample Documents .................................................................................. 18
3.2. Existing Systems .................................................................................................. 18
3.3. Wireframe ............................................................................................................ 20
3.4. Summary .............................................................................................................. 22
4 Delivery ............................................................................................................................. 23
4.1. Database ............................................................................................................... 23
4.1.1. Choice of Language and Technology ...................................................... 23
4.1.2. Database Management Systems .............................................................. 24
4.1.3. Design and Architecture .......................................................................... 24
4.1.3.1. UML Diagram ............................................................................ 24
4.1.3.2. Relational Model ........................................................................ 27
4.1.3.3. Normalisation ............................................................................. 28
4.1.3.4. File Handling .............................................................................. 28
4.1.4. Implementation ........................................................................................ 29
4.1.5. Unit Testing ............................................................................................. 30
4.2. Application Programming Interface ..................................................................... 30
4.2.1. Choice of Language and Technology ...................................................... 31
4.2.2. Implementation ......................................................................................... 32
4.2.2.1. Model-View-Controller Pattern .................................................. 32
4.2.2.2. Data Access Layer ...................................................................... 33
4.2.2.3. Controller Routing ................................................................................ 34
4.2.2.4. HTTP Responses and Validation.......................................................... 35
4.2.2.5. Endpoint Models .................................................................................. 35
4.2.2.6. Unit Testing .......................................................................................... 36
4.3. Summary .............................................................................................................. 36
5 Evaluation ......................................................................................................................... 37
5.1. User Evaluation .................................................................................................... 37
5.1.1. Choice of Technique ............................................................................... 37
5.1.2. Conduct ................................................................................................... 37
5.1.3. Results ..................................................................................................... 38
5.2. Self-Evaluation ..................................................................................................... 39
5.3. Summary .............................................................................................................. 40
6 Conclusion ........................................................................................................................ 41
6.1. Aim, Objectives and Minimum Requirements ..................................................... 41
6.2. Project Management ............................................................................................. 42
6.3. Future Extensions ................................................................................................. 42
6.4. Summary .............................................................................................................. 43
List of References ................................................................................................................ 44
Appendix A External Materials ......................................................................................... 48
Appendix B Ethical Issues Addressed ............................................................................... 49
Appendix C Minutes of Regular Meetings with the Client ............................................. 50
Appendix D Minutes of Interview with the Engineer ...................................................... 51
Appendix E The Designed Wireframe of the System ...................................................... 52
Appendix F User Evaluation Questionnaires ................................................................... 55
1 Introduction
The following chapter introduces the problem and its context, along with the aim, objectives and requirements of the project. In addition, it specifies what deliverables have been produced and how the project is relevant to the degree programme.
1.1. Context
An external client, Sycous Ltd, is involved in this project and will be adopting and extending the final product in the future for business needs. They are a new, growing company based in Leeds. Sycous are providers of smart metering and remote data collection hardware, as well as administration software for use in the district heating and communal energy market (Sycous, 2017).
They supply hardware to measure and collect consumption in the form of utility meters. They
interface with this hardware in a number of ways, either through human interaction where a person
would go and collect the data from the units or by using the internet to transmit data to their servers.
The company has an in-house development team which manages the software services. The main
software product of the company is their metering and billing platform, called “MySycous”. It allows
network operators to manage their energy meters, generate reports and bills for residents without the
need to outsource to third parties.
The company also have their own stock management system. Sycous are currently building a new
version of their billing platform and are planning to create a mobile application for their customers.
Sycous currently have one engineer and use a range of contractors with whom they have relationships and who are brought in when a project requires extra resources. They usually work in the field, which could be in people's homes, commercial units or boiler rooms. The company operates all over the UK; the engineer will travel to locations, and contractors will be brought in from all over the country based on the geography of the project. As the company grows, more engineers will be joining Sycous soon.
1.2. Problem Statement
A web-based Field Service Management System (SMS) is required to be built for the external
client. There are several currently existing Field Service Management Software Systems available to
purchase online, such as ClickSoftware, OneServe and ServMan (Capterra, 2017). These systems help companies manage the work of mobile staff – engineers, field workers, technicians and sub-contractors. Most existing software products support actions such as scheduling, reporting, workflow reviews, inventory management and real-time information exchange.
Sycous, as a young company, cannot always afford to use software from external sources. They also require the system to be tailored to their specific needs. First of all, the system needs to provide a web portal which would ease communication and time management between the company's engineers, office staff and clients. Users will be able to log in to their accounts and, depending on their role (engineer, office staff or company client), perform particular actions, which include:
- Allocating engineers to perform their duties, based on customers' service requirements (office)
- Viewing timetables and the tasks allocated to engineers (office)
- Requesting the arrival of engineers due to problems or installation of hardware on site (clients)
- Viewing assigned tasks, signing them off once completed and recording evidence of the work done (primarily by uploading pictures) (engineers)
- Managing a daily work schedule (engineers)
- Recording daily actions and their duration, and signing in and out of work (engineers)
Secondly, it is very important for the system to be built as a web Application Programming Interface (API). A web API is an interface that is defined in terms of a set of functions and procedures, and enables a program to gain access to facilities within an application. The use of such facilities enables users to customize the application for their own purposes and to integrate the application into a customized development environment (Butterfield & Ngondi, 2016). In short, an API is not complete software designed for end users; it is a set of methods that access the database to view, insert, update and delete data, which software developers can use and extend by building client applications on top of it. However, this is where the beauty of an API lies – any possible user interface can be built for it, varying in choice of programming language, platform, operating system, technological solution, and the application design itself. More than one user interface can be plugged into the API, allowing the same program functionality on both desktop and mobile platforms. It is even possible for distinct software applications to call just one or a few specific functions of the API as part of their own processes, for steps they cannot complete alone. Figure 1 illustrates this definition of an API – a core system that provides functionality for external applications.
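To make the "set of methods over a database" idea concrete, the sketch below models a tiny API as plain functions wrapping view, insert, update and delete queries. It is a hypothetical illustration using Python and an in-memory SQLite table; the table, column and function names are invented for this example and do not reflect the project's actual schema or technology.

```python
import sqlite3

# Illustrative only: a minimal "API" as a set of CRUD functions over a
# database, which any client application (web, desktop, mobile) could call.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY,"
             " description TEXT, done INTEGER DEFAULT 0)")

def create_task(description):
    # insert: add a new task record and return its id
    cur = conn.execute("INSERT INTO tasks (description) VALUES (?)",
                       (description,))
    conn.commit()
    return cur.lastrowid

def view_tasks():
    # view: return all task records
    return conn.execute("SELECT id, description, done FROM tasks").fetchall()

def update_task(task_id, done):
    # update: mark a task as done or not done
    conn.execute("UPDATE tasks SET done = ? WHERE id = ?", (done, task_id))
    conn.commit()

def delete_task(task_id):
    # delete: remove a task record
    conn.execute("DELETE FROM tasks WHERE id = ?", (task_id,))
    conn.commit()

tid = create_task("Install meter at site 12")
update_task(tid, done=1)
print(view_tasks())  # the record is visible to any client via the view method
```

In the real deliverable these operations would be exposed as HTTP endpoints rather than local function calls, but the principle – a reusable set of data-access methods with no user interface of its own – is the same.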
The definition also reflects the client's reasons for requiring an API solution: Sycous would like to integrate their other existing systems with this one, and having an API will allow them to do so efficiently. It will also be possible to use the API for future systems and integrations (for example, the upcoming mobile app).
In order to store the system’s data, there is a requirement to build a database for it. The database needs
to follow common standards, and be designed in such a way that future system improvements could
be implemented easily.
As sensitive business data is going to be handled while using this Service Management
System, the deliverables have to be secure and thoroughly tested.
1.3. Aim and Objectives
The aim is to design and deliver a web Service Management System API for ticketing engineers' time, based on the external client's requirements. The system needs to provide certain functionalities specified by the client and to store its data in a database.
The objectives of the project are as follows:
1. To collect the client's requirements for the system and understand the IT infrastructure of the business.
2. To design and deliver an API with the following functionalities:
a) Managing a daily work timetable for field engineers, adding time records of actions
b) Allocating tasks to field engineers
c) Viewing allocated tasks
d) Signing off completed tasks and adding files
3. To design and deliver a secure database for storing the data entered into the system.

Figure 1. Web API – the core system that provides functionality for external front-end applications (Miller, 2014).
1.4. Deliverables
The core deliverables of the project include:
- A database for storing the system's data
- Tests for the database
- A web API for the Service Management System
- Tests for the web API

Several extra deliverables are included in this report:
- A comparative study of existing Service Management Systems
- A wireframe of the Service Management System
- A UML diagram of the database design
- Minutes of meetings with the client
- User evaluation questionnaires
1.5. Relevance to Degree Program
The complexity of this project allows a wide range of Computer Science subjects and fields to be demonstrated. A number of them were used only for particular parts of the implementation, whereas others were practised and revisited from start to finish.
First of all, project management techniques were used throughout the whole duration of the project. They are crucial to the discipline of successfully creating and implementing any product. Although no actual teamwork was involved in delivering the software product, the same methodologies that benefit software development teams were successfully adopted for an individual project.
Secondly, the contents of the Information Management and Security module were vitally important for the project, as it involves a relational database. The data storage element is one of the most significant parts of this system; even a small failure in design could lead to a breakdown of the complete functionality and affect the business. In addition, a database is always a main target for web attackers, so ensuring a secure and professionally tested way to store the data is always a priority.
Thirdly, software engineering and programming techniques learned at university were continuously applied while developing the Application Programming Interface (as well as the database code). In more detail, requirements analysis techniques and object-oriented programming skills were used extensively. The importance of keeping software code universally understandable, clean and structured, emphasized in the Software Engineering module, has also been taken into account throughout the development period.
1.6. Summary
The chapter has introduced the problem and its context. The aim, objectives and requirements have also been listed. In addition, it has specified what deliverables have been produced and how the project is relevant to the degree programme.
2 Project Management
The following chapter analyses the project management methodologies that were considered for this project and justifies the final choice. The project schedule is provided. In addition, the technological solutions that supported the project management process are discussed, as well as communication with the external client.
2.1. Choice of Methodology
Project management is the discipline of using established principles, procedures and policies to
manage a project from conception through completion (Nokes, 2007). Any undertaken project,
regardless of its size or type, requires a management methodology, which would lead to its
completion. Nowadays there are many tested and trusted techniques available, both for software and other types of projects. As the range of possibilities is large, it is always important to analyse and select the most suitable way of managing a project beforehand. A suitable project management technique can eliminate some potential risks, help achieve a better final result and improve the developer's performance.
There are several project management techniques suitable for this type of project. Some of the most widely used are the Waterfall (classical) and Agile (more flexible) methodologies. Both have their benefits and drawbacks, so in this section I analyse them in the context of this project. In addition, I justify the choice I have made and demonstrate how the technique was applied over time.
2.1.1. Waterfall Method
As mentioned previously, one of the most popular methodologies is Waterfall. It is a classical software life-cycle model, rigid and highly structured (Efford, 2015). It represents the successive phases as boxes and the onward progression of partially completed software as connecting arcs (Butterfield & Ngondi, 2016). The first phase is shown as the top box and the other phases flow downwards (Figure 2). All stages happen in chronological order, so once a phase is finished, developers do not go back to it. This methodology is favoured by some project managers because it naturally sets milestones for tasks.
Figure 2. Waterfall model (Efford, 2015)
The Waterfall method can be an appropriate choice in some circumstances. For example, if requirements are well defined by the client and changes to them are unlikely, it may lead the project to success. Also, if developers are familiar with the problem and understand the chosen technology well, the Waterfall methodology is a good fit for the project (Efford, 2015).
On the other hand, these circumstances are rarely true in real software projects (Efford, 2015). It is often hard for clients to shape the requirements clearly at the first attempt. As a result, the client may keep adding new requirements during the development life-cycle, so the Waterfall model gets violated. Secondly, developers may be unfamiliar with the problem to be tackled as well as with the technology. This may lead to many errors and changes in the product. In the Waterfall method, a program is tested only at the last stage, so developers' mistakes would cause problems, forcing a return to the first stages of development, redoing the process and delaying production (Efford, 2015). In such cases, a test-driven or iterative development approach would be more appropriate and would eliminate these potential risks.
2.1.2. Agile Iterative Development
Another widely known methodology is Agile software development. There are various distinct types of Agile, but they all follow the same ideas, described in the Agile Manifesto – twelve principles of software project management.
As stated in the manifesto, Agile focuses on improving communication between developers and the client. This is done in several ways. Firstly, it suggests that the best form of communication is face-to-face, with close cooperation involved. Secondly, Agile allows the client to change the requirements even late in development. Thirdly, it advocates collaborating with the client during the whole development process rather than just the first stages (Beck & Beedle, 2001). This is achieved by regularly delivering small working increments to the client and getting feedback on the work.
In addition, Agile methodology follows the main principles of iterative development, a concept widely known and used in modern project management. Development work is broken into small iterations, which normally last from one to four weeks. Each iteration involves activities such as planning, analysis, design, coding, unit testing and acceptance testing. At the end of an iteration, a working product is demonstrated to stakeholders (Moran, 2014). Then another iteration follows the same process (Figure 3).
Agile iterative development has a lot of advantages to offer. First of all, continuous client feedback from early iterations improves later ones and allows mistakes to be fixed quickly (Efford, 2015). Secondly, regular testing of the software minimizes the risk of production failure and saves time. Adopting a testing routine also makes it easier for developers to tackle unfamiliar problems. Even if a test fails, it will not lead to a complete redesign, redevelopment and retesting of the product, because only an increment is failing – it can be fixed quickly. Thirdly, short iterations give developers a more realistic idea of what can be done in the time period. Data shows that lower-complexity steps in large software projects are completed more productively and result in more successful final products (Larman, 2004).
Of course, there are some minor drawbacks to this approach, too. For example, later iterations may force changes to work done in earlier ones, since the work quality keeps improving over time (Efford, 2015), so some time may be lost. Also, keeping focus on small increments might make software creators lose sight of the bigger picture.
2.1.3. Justification of Choice
After considering the advantages and disadvantages of the discussed project management techniques, the most suitable choice was made. Agile iterative development was selected for this project instead of the Waterfall method, for several reasons.
Figure 3. Agile Iterative Development (Larman, 2004)
Firstly, Agile iterative development is more appropriate since the problem was not familiar – I had never built an API before, nor did I know any details about the concept before the start. In this case, a lot of research and learning was required to produce a successful software product. Adopting the Waterfall method here would have introduced a high risk of failure, because the product would only be tested at the very end. The strict timeline of the project does not provide any time for fixing major mistakes or returning to the first stages of development if the final stage of testing fails.
Secondly, because a client who defines the requirements is involved in the project, the Waterfall method would again not be the best choice. Waterfall development does not offer continuous communication with the client, so no feedback would be received during development. This is a huge risk which may leave the client dissatisfied in the end, especially if the product turns out to be different from what they imagined. Moreover, another problem may appear if the client decides to change the requirements or add a new feature. With Waterfall these changes would simply not be possible, whereas Agile iterative development can handle the situation better – especially considering the possibility for the client to give feedback.
Thirdly, Agile iterative development is more suitable because of the size and complexity of this
project. It is not a small project, so breaking the work into small increments helps to manage it better
and stay realistic.
Last but not least, an iterative development model is recommended for software products by the School of Computing at the University of Leeds, based on other students' past experiences.
However, the only point specified in the Agile Manifesto that could not be fully accepted in this project is: “Welcome changing the requirements, even late in development“ (Beck & Beedle, 2001). Adopting this idea might make it impossible to meet the deadline. With this issue in mind, basic requirements were set out with the client at the very beginning of the project. The client was notified that it would be ideal not to have any later changes in requirements, but very minor ones would still be accepted if really needed.
2.2. Project Schedule
Before the start of the development process, a Gantt chart was created to illustrate the schedule of this project. Gantt charts are commonly used for task planning and project management. With the help of a Gantt chart, it is easy to visualize the general tasks to be achieved during the project, as well as milestones for the stages of project management.
Following iterative development practices, the project time was split into two-week iterations, each of them covering the usual iterative processes and a time estimate for their completion.
However, the initial Gantt chart, created as part of the Planning and Scoping Document, was later revised following advice from the project Assessor. Figure 4 illustrates the improved Gantt chart for this project.
2.3. Iteration and Task Management
As the Gantt chart only reflects general tasks to be completed, another way of logging more specific tasks was needed. For this reason, I decided to find a suitable task management tool. Various options are available nowadays, both open source and commercially licensed. Most of them offer a similar design, but the functionality differs slightly.
At first, the Trello project management tool was considered, as it had proved extremely useful in previously completed team projects. It is internationally recognised, simple to use and free of charge. Trello allows tasks to be set as tickets on a board, assigned, edited and signed off once complete (Trello Inc., 2017). However, Trello lacks some features, and the client would have to register themselves in order to see the progress.

Figure 4. Project schedule – Gantt chart.
Another alternative for iteration planning, which I had found useful before and which the client currently uses for all of their projects, is YouTrack. The company provides a license for it, and YouTrack makes communication between developers and other team members easier. All company members can track the work; team members can add comments to tasks, similar tasks and issues can be linked, activity boards can be customized to suit the business model, and much more. As a result of these features, which are useful for this particular project, YouTrack was selected. A separate project was added in the tool, boards were separated into two-week iterations (matching the Gantt chart), and tickets were created for each upcoming task. As a task was taken up, it was set to in progress and then signed off once completed. The client was always able to see and filter tasks that were in progress, done or waiting, comment under them, or add new ones.
2.4. Source Control
All of the submitted code is source controlled using Git, currently the most popular version control system in the world (G2 Crowd, 2017). As recommended by the School of Computing at the University of Leeds and the client, the GitHub web-based source code management service was selected for the duration of the project. Two separate repositories were created on GitHub: one for the database and one for the Application Programming Interface. Both repositories are set to private access, so that only people related to the project (the client and the developer) can view the progress, and the data is not exposed to the public.
Although Git is a reliable technology for backing up and updating source code, there are still some risks on the developers' side. For example, even though code history cannot easily be lost, a developer might accidentally overwrite clean, working code with errors by committing new code on top of it. Most of the time it is still easy to revert commits and fix the problem, but if the issue lies deeper, a developer might need to revert the code to a state from a long time ago. In this case all the features that were recently added would be lost. Furthermore, if a whole development team is involved in writing the code, the scale of the potential damage only grows.
Figure 5. Source control branching method, involving two main branches: master and develop (Driessen, 2010)
With these potential problems in mind, a Dutch
developer Vincent Driessen has introduced an
effective Git branching methodology, which many
software engineers tend to follow successfully
nowadays. The suggested way involves two main
working branches on Git, called master and develop.
The master branch would always contain source code
that is faultless and production-ready. The
development branch would contain the latest changes
for next release of the program, which would
eventually be merged into the master branch (Figure
5).
Similarly, more supporting branches can be added to the approach, such as feature branches, release branches or hotfix branches. Additional branches add more discipline to code handling and also make code changes easier to understand. A feature branch represents a new piece of functionality added to the program, a hotfix branch a quick fix of a bug, and a release branch lets users additionally test the program before launching it to production. Moreover, supporting branches prove useful when handling source code conflicts: the same parts of code changed in different branches. In this case a conflict would never emerge in either of the main branches; instead, it would appear on a feature branch and could easily be resolved without breaking the main application. However, supporting branches are more useful in a team development process than in individual projects, as the two main branches can handle all the risks when just one developer is involved (Driessen, 2010).
As a result, the core branching technique was adopted in this project, for both the database and the API code repositories. No supporting branches were adopted, as this is an individual project, but it is always possible to extend the applied branching model in the future, as more developers may continue developing the product. The newest code changes were always pushed to the develop branch first; once ready, they were merged from develop into the master branch.
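The adopted two-branch workflow can be sketched with plain Git commands. This is only a throwaway demonstration repository, and the committed file name is purely illustrative:

```shell
# Create a throwaway repository whose first branch is 'master'
repo=$(mktemp -d)
cd "$repo"
git init -q
git symbolic-ref HEAD refs/heads/master
git config user.name "Demo" && git config user.email "demo@example.com"
git commit -q --allow-empty -m "Initial commit"      # production-ready baseline

# Day-to-day changes are committed on 'develop' first
git checkout -q -b develop
echo "-- schema change" > schema.sql
git add schema.sql
git commit -q -m "Add schema change on develop"

# Once the change is ready, it is merged into 'master'
git checkout -q master
git merge -q --no-ff develop -m "Merge develop into master"
git rev-list --count master                          # master now holds all three commits
```

Merging with --no-ff keeps an explicit merge commit, so the history shows exactly when develop was promoted into master.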
In addition, to secure the master branch and prevent it from receiving direct pushes of source code (which could be pushed by accident and/or be faulty), an additional layer of protection was configured in GitHub itself. The configuration allows new code to be merged into the master branch only from a pull request, which does not add code automatically. A person needs to log in to GitHub to approve the merge process manually and make sure that no conflicts appear. This way the code was double-checked before submission to the final branch, and the potential risks were successfully mitigated.
2.5. Communication with Client
The selected project management technique – Agile Iterative Development – suggests that communication with the client is key in any development process. By ensuring regular and effective communication, product development can be improved and the risk of delivering a result that does not meet the client's needs can be mitigated. In this project the communication took two forms: face-to-face (regular meetings) and remote.
2.5.1. Regular Meetings
Face-to-face communication with the client has always been valued by all kinds of businesses. Communicating in person not only boosts effectiveness but also improves the mutual understanding of the two parties.
In the context of this project, regular communication with the client was essential to achieve effective results in the development process. Following the iterative development model, regular meetings with Sycous' representatives were arranged at the end of each iteration – every two weeks. These meetings normally focused on updating the client about progress, demonstrating the work that had been done, and getting feedback or answers to project-related questions. In addition, there were several extra meetings regarding the collection of requirements, but they were informal and not documented. The minutes of the meetings with the client are included as Appendix C.
2.5.2. Communication Alternatives
It is not always possible to ensure a flawless development process just by having regular meetings with the client. Questions or issues regarding the business model may arise at any time and place, and usually need to be addressed as quickly as possible in order to proceed with development. One solution is to communicate through email, but this also has risks: email communication can be time-consuming, and emails can be missed among loads of other sent items. Therefore, an instant messaging technology was selected to improve remote communication with the client.
One of the most popular instant messaging applications specifically aimed at developers is Slack. In addition to regular messaging, it allows various integrations with software development tools, such as version control systems or continuous integration build servers (Slack, 2017). When software changes are pushed to source control or pull requests are opened (for merging code into other branches), team members, including sales personnel, receive notifications (Figure 6). These automatic messages contain direct links to the integrated application, so anyone who has access can just click and see more details about the implemented features or coding activities of the author.
This application proved to be very useful, as the client was always notified of the progress in development. It simplified the process of explaining what changes had been added during a period of time – people had usually already seen the GitHub messages, so only the details needed to be explained. In addition, it was always easy to ask additional questions and get answers almost immediately, or to arrange future meetings and demonstrations with the client's representatives or members of their development team.
2.6. Risk Assessment and Mitigation
The main possible risks in this project have either been related to the client's involvement or to source code management:
- The client may not have been available to answer some questions at short notice;
- The client may have wanted to refine the requirements by adding extra workload to the existing project, delaying its completion;
- Source code stored in the version control system could have been overwritten with errors, causing the developer to revert changes and lose the latest work.
In order to reduce the possibility of the risky scenarios stated above, the following actions were taken as a mitigation strategy:
- Continuous communication and regular meetings with the client were assured (discussed in section 2.5);
- The client and student agreed, at an early stage of the project, on basic/primary requirements which were not supposed to be amended over time;
- An effective branching solution was applied to the code version control system (discussed in section 2.4).
Figure 6. Example of Slack notification about changes in source control system GitHub.
2.7. Summary
This chapter has analysed the two project management methodologies that were taken into consideration for this project and justified the choice of the Agile Iterative Development methodology. The schedule for the project has been provided. In addition, the technological solutions that supported the project management process have been discussed, as well as the communication strategies with the external client.
3 Problem Research
This chapter discusses the research conducted prior to implementation of the product. In addition, a comparative study of existing Service Management Systems is provided and discussed, as well as a proof-of-concept wireframe for the system.
3.1. Requirement Analysis
For the purpose of capturing requirements and understanding the client's needs, systematic investigation techniques needed to be applied. A very common requirement investigation procedure used in software engineering projects is SQIRO. The abbreviation represents five fact-finding techniques: Sample documents, Questionnaires, Interviewing, Reading/research and Observation (McRobb, et al., 2010). Each technique is useful in its own way, as each returns different kinds of information. The situations in which they are appropriate also differ, so it is not necessary to use all five at once.
In order to gather and understand Sycous' requirements, three of the five SQIRO techniques were applied: Research, Interviews and Sample Documents. The remaining two, Observation and Questionnaires, were not conducted for several reasons. Observation requires investigating the fieldwork, in this case an engineer's regular working day. Even though it could have proved useful, it was not possible to spend a day with an engineer – the current Sycous engineer spends a lot of time outside Leeds, so doing this would have been costly and time-consuming. The Questionnaire technique requires a questionnaire to be designed in order to gather the requirements. It is mostly useful when the views or knowledge of a large number of people need to be obtained, or when people are geographically dispersed, for example in a company with many branches or offices around the country or the world (McRobb, et al., 2010). As Sycous is a small company and the future users of the system are all in one office, organising interviews was sufficient for the research.
3.1.1. Background Research
Background reading was performed as a first step of analysis in the research process, because it is
always beneficial to get an understanding of an organization before proceeding to further
investigations. Although some of the facts about Sycous were already known, more information,
especially regarding the services of engineers, needed to be collected. One of the sources was the company's website, where they explain what kind of services they offer and how they collaborate with other companies. Reading the website helped to visualize the business model and get an idea of who the clients of Sycous that use the services of engineers are. Another investigated source was their current
metering and billing platform, a software product of Sycous. The platform can only be used by administrators and clients, so access had to be granted. Experimenting with the billing platform helped to understand what kinds of users the required engineer ticketing system may have, what services the users get, and how they are managed by the company. Overall, the background research process was beneficial for getting a broad view of the company, its clients and its services.
3.1.2. Interviews
Another applied SQIRO technique was Interviews. This is one of the most effective techniques in many software projects: interviews can provide in-depth information about the existing system and people's requirements of a new system (McRobb, et al., 2010). Interviews can be either fixed (having a strict set of questions) or investigative (more informal, with open questions) (Efford, 2015). However, it is still extremely important to conduct interviews with care and to prepare questions in advance – this boosts the effectiveness of an interview. Also, if different potential users of the system are involved, it is a good idea to interview each of them (McRobb, et al., 2010). In this case the requirements will be seen from different points of view, so decisions can be made about what is actually necessary.
The interviews conducted with Sycous involved two types of users: engineering and operations personnel. Both interviews were of the investigative type, in which more open questions are allowed and the interview follows a general direction of topic rather than a strict order of questions.
The first interview was conducted with two of the client's representatives, both leaders of the operations team. The discussion was mainly about the functionality of the system – what the expected features are, who the users would be, and what information needs to be handled by the API. In addition, a set of basic/primary requirements was structured and agreed on, including possible deliverables. The client was informed that only very minor changes to the basic requirements would be allowed at later stages, due to the strict timing of the project. Moreover, possible technological choices for the system were discussed.
The second interview was conducted with a currently working engineer, who agreed to answer project-related questions. The purpose of this interview was to investigate what a normal working day of an engineer looks like and what kind of task management system is currently used. The Sycous engineer kindly described how he currently manages his work: tasks are added to his email calendar system, and he contacts the office staff if any questions arise. He then travels to the site and works on the requested task. The details of his completed work are later sent by email to the business manager, who forwards them to a client. The engineer also agreed to provide some sample files showing what kind of information is recorded before and after his work (discussed in section 3.1.3). The minutes of this interview can be found in Appendix D.
Both interviews proved to be extremely useful, as the whole idea of the system was discussed with different types of future users. Both the office staff managers and the engineer provided clear, informative answers to the interview questions and went the extra mile trying to explain the details. To my mind, this was the most beneficial investigation technique of the three.
During development, more question-and-answer sessions were arranged as regular meetings, but they did not take the shape of formal investigative interviews – they were part of the client feedback process.
3.1.3. Sample Documents
As part of the interview process with the engineer, several sample work documents were collected. The purpose of collecting these documents was to understand what kind of data needs to be manipulated in the new API of the ticketing system. The documents included copies of email attachments that were sent to and from the engineer. The first was a site attendance file sent to the engineer, describing the details of a task that needs to be completed. The second was a copy of a commissioning data return – a file sent back to the office after completion of work on site. The file contains comments on the completed task and details about the meter installation and meter readings.
Collecting sample documents proved to be almost as useful as the interviews. During later stages of development, such as designing the proof-of-concept wireframe or the database, the files helped to organize the data that needs to be logged into separate entities with attributes. In addition, the collected files helped to visualize possible future improvements to the system, such as sending automatic emails to clients. With this in mind, the system was later designed in such a way that possible improvements are easy to add.
3.2. Existing Systems
In addition to the SQIRO requirement analysis techniques performed, a study of existing systems and their functionalities was conducted. The aim was to prepare for designing the architecture of the database and API, and to compare the features of existing field service management systems with the functionalities of the product to be created for the client. Figure 7 displays the feature comparison.
Three of the top field service management systems of 2017 were chosen to be analysed and compared (Capterra, 2017). Unfortunately, it was not possible to download and try any of the analysed programs, as charges apply and trials are only provided to organizations planning to purchase the product. It was still possible, however, to investigate the features from the officially provided descriptions and screenshots of the software. The feature analysis was based on the requirements of Sycous' product, comparing whether the planned features appear in currently existing platforms and what possible extensions can be made to the program in the future to meet business standards. With these possible extensions in mind, it was later ensured during development that the application is designed to be flexible for improvements.
Figure 7. Feature comparison of existing systems and the required system.
As the table shows, having its own API service management system will be beneficial for Sycous. Unlike existing solutions, the API-centric system will allow the client to integrate other software solutions used in the company, such as the stock management system or the online metering and billing platform – a huge advantage. Also, separate desktop and mobile back-end solutions will not need to be created – only user interfaces that access the API will require implementation. Considering functionality, many of the core features are similar to existing systems, but the required solution is additionally designed to handle attachment files, log shift time and save the coordinates of locations, to be displayed on a map in the front-end solution. Last but not least, Sycous will minimize its costs, as the system will be maintained in-house and will always remain flexible for improvements.
To sum up, the analysis of existing solutions in comparison with the client's basic needs helped to shape the requirements and envisage possible ways in which the required functionality can be handled. Furthermore, it gave an idea of what extensions the client may want in the future, so that the architecture of the system could be designed to allow a wide range of flexibility. In addition, it broadened the understanding of the need for field service management systems in a global business scope.
3.3. Wireframe
In order to test the understanding of, and assumptions about, the shaped client requirements, a proof-of-concept visual wireframe of the system was created and later discussed with the client. A Proof of Concept (POC) is an example used to test a discrete design idea or assumption about functionality (Gottlieb, 2007). This is why the wireframe was designed as a low-fidelity product, meaning that it only represents the expected functionality and the data to be manipulated; it does not depict the final user interface design or interactions. The future desktop user interface, which the client is going to build on top of the API, may have a very different design and user experience, but the idea of the functionality displayed in this wireframe will remain the same or be extended to some level.
The proof-of-concept wireframe was created using Lucidchart, a professional online tool for flowcharts, diagrams, UI mock-ups, etc. (Lucidchart Software Inc., 2017). Every screen diagram created as part of this Proof of Concept displays a different function of the system, from one of the two possible user roles: engineer or client. The data expected to be displayed to the user is also shown, along with ways to access, add or edit it.
For example, Figure 8 shows one of the created screens – the feature of completing a given task from the engineer's portal. The data requirements for completing a task can be understood from this view. First of all, when completing a task, the user must be able to add a comment, which may consist of more than one line. The engineer must then be able to attach files and enter a meter reading, if the task requires it (an option handled automatically). The background is also informative, as it shows that this feature can be accessed when the task itself is selected from a list. Other possible functionalities are also visible, such as ‘Manage Tasks’, which would display the list of all tasks for this engineer, or ‘Timesheet’, which would load information about hours worked. Similarly to Figure 8, these options are displayed in detail in the other diagrams, illustrating the required information and further available functionalities. The full set of created wireframes can be found in Appendix E.
After completion of the wireframe, the result was demonstrated to the client. Referring to the agreed basic requirements, every part of the wireframe was displayed and explained in detail. The client suggested improvements to the functionality and the data that would be handled, and answered questions. After the meeting, some time was spent refining the wireframe based on the received feedback.
Creating a proof-of-concept wireframe definitely proved to be a useful step in requirement analysis. It not only allowed the client to visualize their requirements, but also tested whether the assumptions made in previous steps of analysis were interpreted correctly. As a consequence, the possible risks of client dissatisfaction or of changes in requirements at later stages were mitigated. This step of analysis was beneficial for both sides: the developer gained a deeper understanding of the specifications, and the client revisited the desired features, evaluated them and double-checked that the result would meet their expectations.
Figure 8. Wireframe - completing a task from Engineer’s portal. Background – details of a selected task.
3.4. Summary
This chapter has discussed the requirement analysis techniques applied in the project, including SQIRO techniques, the analysis of existing systems and the creation of a wireframe – a visual representation of the expected functionality. In addition, the outcomes have been discussed, along with the justified benefits of these techniques.
4 Delivery
The following chapter discusses the development process of the two main deliverables of this project: the database and the web API. The technological choices for the implementations are discussed, along with details about standards, design and architecture, and unit testing.
4.1. Database
The Service Management System must serve the purpose of a data management tool: users must be able to add, retrieve and alter previously created data. As the amount of information in this kind of system is normally extensive (and never stops growing), it is hardly possible to handle it without saving it to a database – an organised collection of data which represents the model of an organisation (Beynon-Davies, 2004). There are numerous possible technological solutions for database creation nowadays, so in this chapter I am going to discuss and justify the choices for the design, technology and implementation of the Service Management System database.
4.1.1. Choice of Language and Technology
There are plenty of database query languages available for use these days, spanning both SQL and NoSQL systems. As the informal division into two database types already suggests, the most popular nonprocedural programming language for modern database systems is SQL (Structured Query Language). SQL was developed by IBM in the 1970s and has remained the first choice of developers ever since (Bhamidipati, 1998). NoSQL databases include solutions such as MongoDB, Redis, Cassandra, etc.
There are some key differences between these two kinds of systems. First of all, they follow different data models, so the data collection can be organised in different ways. SQL-based database systems mainly follow a relational data model – they store values in tables – whereas NoSQL systems can store data as documents, graphs, keys and values, or tuples (Beynon-Davies, 2004). Secondly, the data formats that need to be handled also vary, as do the data size (Big Data may not be handled well using a relational model) and the required access speed.
However, for this kind of system, which is aimed at commercial usage and is not going to handle rare or very specific data types, using an SQL-oriented database for data management is an ideal choice. As suggested, SQL is easy to learn and universally accepted, as well as applicable to a wide variety of database applications and products (Kline & Kline, 2001). It is widely used in web, commercial/business and finance applications all over the world, and integrates with most programming languages.
4.1.2. Database Management Systems
There are several implementations of the current SQL standard, such as MySQL, Oracle and Microsoft's SQL Server. All these management systems use the SQL query language, but the syntax, speed, way of presenting the data and platform availability usually differ. The most popular DBMS (Database Management System) in March 2017 was Oracle, which had held this position for quite a long time (DB-Engines, 2017).
In this project it was decided to use Microsoft SQL Server. It only runs on the Windows platform, but is easy to use and offers low cost and high performance (Kline & Kline, 2001). Most of the client's other databases are also handled using Microsoft SQL Server, so the choice of this DBMS keeps the data collections consistent, organised and accessible together in one go.
4.1.3. Design and Architecture
Database design is a vital part of any database implementation. Without a design carefully prepared in advance of implementation, many risks may arise, such as implementing a faulty, inflexible and insecure database (Litwin, 1990). These problems tend to accumulate across the whole system that uses the database, causing endless flaws that are difficult to fix. For this reason, it is very important for any database to strictly follow professional and trusted architectural practices.
In this section I would like to discuss the process of designing the database for Service Management
System. In addition to this, design choices and techniques will be justified.
4.1.3.1. UML Diagram
Designing a database takes three main steps:
- Requirement analysis;
- Gathering data and organizing it into tables and columns;
- Creating relationships (Hock-Chuan, 2010).
As the first step (requirement analysis) was completed in the previous stage of development, the other two had to be completed before moving on to the actual implementation of the SQL database. In order to do this, I decided to create a Unified Modelling Language (UML) diagram. UML is a general-purpose, developmental modelling language in the field of software engineering that is intended to provide a standard way to visualize the design of a system (Booch, 2005). Database UML diagrams are simple and straightforward, universally recognised and understood by many developers.
The UML diagram for the database was created using Lucidchart, a professional online tool for flowcharts, diagrams, UI mock-ups, etc. (Lucidchart Software Inc., 2017). The choice was determined by the fact that this software tool had already proved beneficial earlier in development, when creating the wireframe. In addition to its ease of use, Lucidchart offers an automatic way of generating SQL code from UML diagrams (Figure 9). This feature saved some time later in development, as the actual database could be created instantly from the professional database diagram. The full, most recent UML diagram of the database design, with all relationships and details about data types, is shown in Figure 10.
Figure 9. Export SQL code option in Lucidchart.
Figure 10. The full UML diagram of the designed database.
Figure 11. Engineer and Type tables
linked by table ‘EngineerType’.
4.1.3.2. Relational Model
Every database must follow the principles of some data model, such as relational, hierarchical or
object-oriented data model (Beynon-Davies, 2004). As it was mentioned previously, SQL databases
normally follow a relational data model.
The relational data model has only one data structure, the relation itself, saved in the format of a table. Because the relation is modelled on a mathematical construct, every table based on the relational model must obey a certain restricted set of rules:
- Data is presented as a collection of relations, each relation is depicted as a table
- Columns are attributes that belong to the entity modelled by the table
- Table and column names within a relation must be unique
- All entries in a column must be of the same kind
- Ordering of columns and rows in a relation is not significant
- Each row in a relation (table) must be distinct, tuples must be unique
- Multiple values are not allowed in any cell of a relation (Beynon-Davies, 2004)
The architecture created for this database follows the rules of the relational model. First of all, the entities of the data required to be stored (known from requirement analysis) are organized into interconnected tables. Each entity (category) is assigned a set of attributes that belong to it. For example, the Meter Read table, shown in the full UML diagram (Figure 10), has attributes Meter Serial, Reading, etc., which clearly belong to their entity. The same idea was applied to all the tables.
After organizing the entities into tables and attributes, the remaining rules were applied. In order to ensure uniqueness of identity, all the tables have a numerical primary key, represented as the Id column. The data is interconnected using foreign keys, which link one table to another based on the relation type (one-to-one, one-to-many, etc.). In the case of a many-to-many relationship, link tables were introduced, containing foreign keys to each of the related tables and following a naming convention that concatenates the related table names. Figure 11 shows an example of a link table: as engineers are allowed to have more than one type (and types can be assigned to many engineers), a junction table ‘EngineerType’ handles this many-to-many relationship between engineers and types.
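The junction-table idea from Figure 11 can be sketched in a few lines of SQL. The actual system uses Microsoft SQL Server; the sketch below uses Python's built-in sqlite3 purely for brevity, and any column names beyond those visible in the figure are assumptions:

```python
import sqlite3

# In-memory database standing in for the real SQL Server schema
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Engineer     (Id INTEGER PRIMARY KEY, Name TEXT NOT NULL);
    CREATE TABLE Type         (Id INTEGER PRIMARY KEY, Description TEXT NOT NULL);
    CREATE TABLE EngineerType (
        Id         INTEGER PRIMARY KEY,
        EngineerId INTEGER NOT NULL REFERENCES Engineer(Id),
        TypeId     INTEGER NOT NULL REFERENCES Type(Id),
        UNIQUE (EngineerId, TypeId)        -- one row per engineer/type pair
    );
""")
conn.execute("INSERT INTO Engineer VALUES (1, 'A. Smith')")
conn.executemany("INSERT INTO Type VALUES (?, ?)",
                 [(1, 'Heat meters'), (2, 'Electricity meters')])
# One engineer holding two types -- not expressible without the link table
conn.executemany("INSERT INTO EngineerType (EngineerId, TypeId) VALUES (?, ?)",
                 [(1, 1), (1, 2)])
rows = conn.execute("""
    SELECT t.Description FROM Type t
    JOIN EngineerType et ON et.TypeId = t.Id
    WHERE et.EngineerId = 1 ORDER BY t.Id
""").fetchall()
print(rows)   # both types assigned to engineer 1
```

The UNIQUE constraint on the pair of foreign keys enforces that the same type cannot be assigned to the same engineer twice.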
4.1.3.3. Normalisation
The next step in the design process was database normalisation. Normalisation helps to organise the database in a format that avoids processing difficulties. The reasons for normalisation include minimising the amount of space needed to store data and simplifying the maintenance of data. Normalisation puts data into a form that can accommodate future changes and updates (Wambler, 2013). In short, a normalised database is flexible to changes, extendable and fast – exactly what is required of the database for this project, as future extensions to it are quite likely.
There are five normal forms that can be applied to a database. The first three are the most significant and are usually sufficient for most applications; the third normal form is good enough for a regular Entity-Relationship approach (Ritchie, 2002). For this reason, a decision was made to normalize the database to the third normal form.
Several things had to be checked to ensure that the created database design follows the third normal form. Firstly, it was made sure that no column in a table could contain more than one entry per cell. Then, the design was refactored so that no database entries would be stored repetitively. Next, it was verified that no table has attributes which depend on other attributes rather than on the table itself. Where such a case was detected, the table was split up. For example, Figure 12 displays how the Site and Address tables were split into separate entities. At first, Site contained the attributes of both tables, but during the normalisation process it was noticed that Address Lines, Postcode and Coordinates belong to an Address entity rather than to Site. As a result, splitting up the tables made them more reusable – the Address table can now store the addresses not just of sites, but also of clients, engineers, etc.
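The Site/Address split described above can be sketched as follows. Again, sqlite3 stands in for SQL Server, and all column names not visible in Figure 12 are assumptions:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Address is now its own entity, reusable by sites, clients, engineers, ...
    CREATE TABLE Address (
        Id        INTEGER PRIMARY KEY,
        Line1     TEXT NOT NULL,
        Postcode  TEXT NOT NULL,
        Latitude  REAL,
        Longitude REAL
    );
    -- Site keeps only its own attributes plus a foreign key to Address
    CREATE TABLE Site (
        Id        INTEGER PRIMARY KEY,
        Name      TEXT NOT NULL,
        AddressId INTEGER NOT NULL REFERENCES Address(Id)
    );
""")
conn.execute("INSERT INTO Address (Id, Line1, Postcode) VALUES (1, '1 Example St', 'LS1 1AA')")
conn.execute("INSERT INTO Site (Id, Name, AddressId) VALUES (1, 'Demo Site', 1)")
# Retrieve a site together with its address via the foreign key
row = conn.execute("""
    SELECT s.Name, a.Line1, a.Postcode
    FROM Site s JOIN Address a ON a.Id = s.AddressId
""").fetchone()
print(row)
```

Because an address lives in one row only, changing it for a shared location is a single UPDATE rather than edits scattered across every table that mentions it.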
4.1.3.4. File Handling
One of the client's requirements involved handling file attachments. Files are usually difficult to handle in an SQL database, and the possible ways of storing them have many disadvantages. For instance, it is possible to create file tables which specify paths to local directories on a server. However, this way the local storage can quickly be filled up by files, causing the server to slow down or even break down (Guyer, 2016).
Figure 12. Result of normalisation – the 'Site' table was split up into two entities.
As a result, an alternative file storage solution had to be found. The most secure and problem-free way to store files nowadays is to use a remote cloud service. There are many cloud storage options available, but most of them require additional fees, so the client's approval was required before any decision could be made. The external client agreed to configuring file storage on the cloud without concerns and recommended Amazon Simple Storage Service (S3), which they already use for the same purpose.
To comply with S3 storage requirements, several extra tables had to be added to the database design. These included the S3File and S3Bucket tables (displayed in Figure 13). The S3File table links attachments with S3 buckets, which act as specific folders in the cloud. Buckets can be accessed with a unique key and ID pair provided by the S3 service (Amazon Inc., 2017). A particular file in a bucket is then accessed by its ID, which is the same as the ID stored in the database.
This configuration of tables will allow the system to add and retrieve files from the cloud at high speed. The tables are also not limited to handling files related to specific engineer tasks; they can be reused for other future extensions of the system.
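To illustrate how a future client of these tables might use the stored bucket credentials and file IDs, the following is a minimal sketch of retrieving an attachment from S3 with the AWS SDK for .NET. The method name, parameters and region shown are assumptions for illustration; in practice the access key, secret and bucket name would come from the S3Bucket table and the file ID from the S3File table.

```csharp
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
using System.IO;
using System.Threading.Tasks;

public static class AttachmentStore
{
    // Downloads a file from S3, identified by the ID stored in the S3File table.
    // The key pair and bucket name would be read from the S3Bucket table.
    public static async Task<byte[]> DownloadAsync(
        string accessKey, string secretKey, string bucketName, string fileId)
    {
        using (var client = new AmazonS3Client(accessKey, secretKey, RegionEndpoint.EUWest1))
        {
            var request = new GetObjectRequest { BucketName = bucketName, Key = fileId };
            using (GetObjectResponse response = await client.GetObjectAsync(request))
            using (var memory = new MemoryStream())
            {
                await response.ResponseStream.CopyToAsync(memory);
                return memory.ToArray();
            }
        }
    }
}
```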
4.1.4. Implementation
After completion of the full database design process, implementing the database in SQL was simple and straightforward. As Lucidchart, the tool used to create the design, offers SQL code export, it did not take long to run the exported table creation queries and add the relations between tables (foreign keys). Immediately afterwards, the new database was linked to a remote source control repository. To make sure the implemented database architecture works as expected, testing was carried out as the next step.
Figure 13. Database tables handling file storage on Amazon S3.
4.1.5. Unit Testing
Database unit testing is essential in order to verify the overall behaviour of not just the data storage layer, but also the final application. Without extensive testing, countless faults and inconsistencies can occur in use of the software due to database implementation mistakes. Furthermore, it can sometimes be extremely complicated to diagnose what causes a software failure. Unit testing the database, as well as the other parts of the product, is a great, time-saving way to avoid such problems in current and future versions of the application (Kuznetsov, 2012).
To unit test the created database, the tSQLt testing framework was selected. tSQLt is a popular and widely recommended framework for SQL database unit testing in Microsoft SQL Server. It offers an easy way of unit testing databases without the need to switch between various tools to write code and unit tests (SQLity, 2017).
Unlike regular unit tests, database tests focus on testing all tables at once in a specified way. For example, they may test whether all tables have primary keys, or whether the expected relations are enforced with foreign keys. When the framework was applied to the solution, a number of default tests covering such checks were run, and they passed. Several more tests were then added, mainly focused on the consistency between attribute names and their data types. These tests spotted mistakes such as attribute names ending in 'Date' actually having the data type 'DateTime', and vice versa. Such consistency mistakes were fixed, and in the end all implemented unit tests passed.
4.2. Application Programming Interface
The main required deliverable of the system, which will handle the functionality desired by the external client, is the Application Programming Interface. A web API is an interface that is defined in terms of a set of functions and procedures, and enables a program to gain access to facilities within an application (Butterfield & Ngondi, 2016).
A web API is a back-end product which allows software developers to call its functions from other applications, normally a user interface (a front-end solution). More than one external program can be created to access the functions of an API, which eases the work of developers who would like to create applications for different platforms (e.g. mobile, desktop). Developers no longer need to create full separate products with the same functionality; with an API in place, they can simply create user interfaces that request to view, add, update or delete predefined sets of data from it. Figure 14 illustrates the architecture of a web API and the client systems that can access it.
Figure 14. Web API architecture. The left side (Database and API) is the server-side architecture; the right side displays a variety of client applications that can use the functionality of the API (Kearn, 2015).
In this section I will discuss the choice of technology, and the details of the architecture, implementation and unit testing of the Application Programming Interface for the Service Management System.
4.2.1. Choice of Language and Technology
There are many technological choices available nowadays for building web APIs. Almost any web
programming framework, such as Ruby, Java, Flask, NodeJS, ASP.NET, etc., allows to build a web
API. The main difference between these frameworks and programming languages is the level of
simplicity and flexibility that they offer, but the speed and quality does not differ much (Duvander,
2013). Considering these facts, when selecting an appropriate programming framework for the API, I
decided to rely on factors, such as framework popularity and modernity levels, compatibility with
SQL databases, recommendations by external client’s development team and personal preferences of
programming languages.
Two of the most popular web API frameworks these days are Flask (which uses the Python programming language) and ASP.NET Core (which uses C#). Flask offers a lot of flexibility, as it is available cross-platform, and it combines well with an SQL database (Ronacher, 2017). What is more, I have used Python before, so it would not be time-consuming to learn how to build an API with Flask. However, Python was designed with simplicity in mind, and this brings a number of limitations. The main one is performance – Python is slower than classical C-family languages and even newer ones such as Go (Krill, 2015). In addition, because Python is dynamically typed, it requires more testing and produces errors that only show up at runtime.
ASP.NET Core, on the other hand, uses the C# programming language, which offers faster performance than Python and can also be used successfully with SQL databases (P. Carter, 2016). It includes LINQ (Language Integrated Query), which easily converts objects from the API into SQL table format (Dinesh Kulkarni, 2007). Thanks to this, data conversion from the SQL database to the API (and back) can be handled almost automatically, with just a simple structured query involved.
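To give a flavour of the kind of query involved, the sketch below shows the LINQ syntax using an in-memory list in place of a database table; the TaskItem class and its fields are illustrative names, not the project's actual schema. With an Entity Framework context, the same expression would be translated into a single SQL SELECT.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class TaskItem
{
    public int EngineerId;
    public string Status;
}

class Program
{
    static void Main()
    {
        // In the real API the source would be a database table mapped by the
        // framework; here an in-memory list stands in for it.
        var tasks = new List<TaskItem>
        {
            new TaskItem { EngineerId = 5, Status = "Open" },
            new TaskItem { EngineerId = 5, Status = "Completed" },
            new TaskItem { EngineerId = 7, Status = "Open" },
        };

        // One declarative query instead of manual row-by-row conversion.
        var openTasksForEngineer5 = tasks
            .Where(t => t.EngineerId == 5 && t.Status == "Open")
            .ToList();

        Console.WriteLine(openTasksForEngineer5.Count); // prints 1
    }
}
```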
Moreover, ASP.NET Core is the newest, improved version of the ASP.NET framework, released in June 2016 (Anderson, et al., 2016). It offers the most modern features to date, especially for building web APIs. For example, both server-side applications (such as an API) and client-side apps (web applications) can be built using the same ASP.NET Core framework (Anderson, et al., 2016). The latter feature may ease the implementation of a front-end solution in the future, as the client's development team prefers using ASP.NET.
Considering these facts, a decision was made to implement the product in the ASP.NET Core framework due to its superior features and flexibility. It is a more modern approach than Flask and has been improved over time. In addition, the client prefers their products to be built in either ASP.NET or ASP.NET Core, although this is not a requirement. What is more, even though I was not familiar with the framework before, learning the basics does not usually take long, as C# is similar to other C-family languages. In my opinion, it is ideal for a project of this size.
As the official ASP.NET code editor is Visual Studio, also developed by Microsoft, no other editors were considered. Visual Studio is a professional tool, free to use in its Community edition, with premium packages available for an extra charge. However, all the development work can be done successfully without paid extensions.
4.2.2. Implementation
The following section discusses the architectural solutions and design patterns applied in the implementation of the Service Management System API.
4.2.2.1. Model-View-Controller Pattern
The implementation of the system followed a database-first development approach. Database-first, as the name suggests, is a development strategy applied when a database already exists (Singh, 2015). Following this approach, the database is first mapped into the system – in this case, the API – and only then is the further functionality built.
The ASP.NET Core framework offers an efficient way to accomplish this, as it follows the Model-View-Controller (MVC) pattern (Figure 15). Although MVC is considered an architectural pattern designed for user interfaces, it also applies to APIs (Anderson, et al., 2016). However, Views cannot be implemented as part of an API – they are normally handled by a separate user interface application – so the View in an API context refers to the external application rather than the API itself. The other two components, Models and Controllers, are vital parts of an API implementation in ASP.NET Core.
Figure 15. The MVC approach (Wikipedia, 2017).
Model objects are the parts of the application that implement the logic for the application's data domain; Models retrieve and store data in a database (Microsoft Inc., 2017). Following the database-first approach, the first step in creating the API was to map the database tables into model classes. ASP.NET Core offers an automated way of doing this if the initial setup of the API is correct, so it did not take long to accomplish.
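To give a flavour of the result, a model class mapped from a database table looks roughly like the sketch below. The Site and Address entities and their properties are illustrative, not the project's exact schema.

```csharp
using System.ComponentModel.DataAnnotations;

// A model class mapped from a database table. Each property corresponds to a
// column, and AddressId holds the foreign key to the Address table.
public class Site
{
    [Key]
    public int SiteId { get; set; }

    [Required]
    public string Name { get; set; }

    public int AddressId { get; set; }
    public Address Address { get; set; }   // navigation property to the related entity
}

public class Address
{
    [Key]
    public int AddressId { get; set; }
    public string AddressLines { get; set; }
    public string Postcode { get; set; }
    public string Coordinates { get; set; }
}
```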
Controllers are the components that handle user interaction and work with the model (Microsoft Inc., 2017). In a web API, a controller is an object that handles HTTP requests from a user interface application (Wasson, 2015). Requests to add, edit or view data can be sent to API controller methods, each of which has a URL assigned to it. Depending on the chosen URL, a function is called which performs the desired action. Data is then either displayed to the user or sent to the Model layer, which accesses the database and, for instance, updates existing data. However, controllers must not make any direct connections to the database, as this would create security risks (Wasson, 2015). For this reason, a separate Data Access Layer was introduced during the implementation of the API; it is discussed in the next section.
4.2.2.2. Data Access Layer
As API controllers strictly must not implement any requests to the database, the following architectural solution was applied. A controller method, which a user (or another application) calls by URL, first makes a call to an interface and looks up the required method for accessing the database. Interfaces do not implement any functionality themselves – they only declare methods that other classes implement. Using interfaces is good practice in C# programming, partly because the language does not support multiple inheritance of classes (Wagner & Wenzel, 2017). Once the required method is found, its actual implementation is looked up and run by the system. Data is then returned to the controller and displayed to the user. The full cycle is shown in detail in Figure 16.
Figure 16. Architecture of the implemented Service Management System API.
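The controller-to-interface-to-implementation chain can be sketched as follows; the names (ITaskRepository, TaskRepository, TasksController) are illustrative, not the project's actual class names.

```csharp
using System.Collections.Generic;

// The interface only declares what the Data Access Layer can do.
public interface ITaskRepository
{
    IEnumerable<string> GetTasksForEngineer(int engineerId);
}

// The concrete implementation is the only place that talks to the database.
public class TaskRepository : ITaskRepository
{
    public IEnumerable<string> GetTasksForEngineer(int engineerId)
    {
        // In the real system this would query the SQL database;
        // here a fixed list stands in for the result set.
        return new List<string> { "Replace meter", "Site inspection" };
    }
}

// The controller depends only on the interface, never on the database directly.
public class TasksController
{
    private readonly ITaskRepository _repository;

    public TasksController(ITaskRepository repository)
    {
        _repository = repository;   // injected by the framework at runtime
    }

    public IEnumerable<string> GetByEngineer(int engineerId)
    {
        return _repository.GetTasksForEngineer(engineerId);
    }
}
```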
The Data Access Layer implements the calls to the SQL database. Wherever possible, it was made sure that methods return as much information as the user requires without making additional connections to the database, which would slow down the process. For example, if an engineer (user) would like to see the details of a task assigned to him, two implementations are possible: either fetching each detail (such as the address or the contacts of the people involved) through separate calls to the database, or constructing a single method that returns all the required details at once. The latter is clearly more efficient, so this idea was applied throughout the solution.
However, in some cases the user does not want to see additional details, and repeatedly retrieving extensive data can also cause the program to slow down. To handle this, separate methods were created in both the controllers and the Data Access Layer that return only a small, specific amount of data. For instance, Figure 17 illustrates two methods implementing requests to the database: one returns a task along with all related details, the other just the main information about it.
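The two flavours of retrieval method might be sketched like this; the class and field names are illustrative, not the project's actual code.

```csharp
public class TaskSummary
{
    public int TaskId;
    public string Title;
}

public class TaskDetails : TaskSummary
{
    public string Address;
    public string ContactName;
}

public class TaskReader
{
    // Returns only the main information about a task - in the real system,
    // a single lightweight database query.
    public TaskSummary GetTask(int taskId)
    {
        return new TaskSummary { TaskId = taskId, Title = "Replace meter" };
    }

    // Returns the task together with all related details (address, contact)
    // in one call, so the caller avoids several round trips to the database.
    public TaskDetails GetTaskDetails(int taskId)
    {
        return new TaskDetails
        {
            TaskId = taskId,
            Title = "Replace meter",
            Address = "1 Example Street, Leeds",
            ContactName = "J. Smith"
        };
    }
}
```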
Figure 17. Two requests to the database to get a task: left – requests basic information, right – requests detailed information.
4.2.2.3. Controller Routing
As mentioned previously, users and external applications send requests to an API via URLs. Because of this, the structure of a URL needs to tell the user what data can be expected from it. For example, by calling http://example.co.uk/engineer/5 the user would most likely expect to receive information about engineer number five. The recommended way to define URL routing in an API is attribute routing, where each separate attribute gives information about the action (Wasson, 2014). For instance, in a URL such as http://example.co.uk/tasks/engineer/5/date/2017-05-03, every attribute gives a hint about the action: tasks means that a list of tasks will be retrieved,
filtered by engineer number five and a specific date.
This routing method is applied in every controller of the implemented API. This way, the actions of the API are structured and understandable, which will help the future developers of the user interface to easily construct a mental model of the system.
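In ASP.NET Core, attribute routing of this kind looks roughly as follows; the route templates and controller shown are illustrative, not the project's actual routes.

```csharp
using Microsoft.AspNetCore.Mvc;

[Route("tasks")]
public class TasksController : Controller
{
    // Handles GET /tasks/engineer/5/date/2017-05-03
    [HttpGet("engineer/{engineerId}/date/{date}")]
    public IActionResult GetByEngineerAndDate(int engineerId, string date)
    {
        // The route attributes above make the URL self-describing:
        // 'tasks' is the resource, 'engineer/5' and 'date/...' are filters.
        return Ok(new { engineerId, date });
    }
}
```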
4.2.2.4. HTTP Responses and Validation
Any computer application must have a way to handle errors, and APIs are no exception. As user input is processed by the API, the risk of receiving incorrect requests or data is extremely high. For this reason, the controller methods in the implemented API have validation added to them. For example, if a non-existent task is requested, a system without validation would crash and probably output sensitive information about what went wrong. This is a dangerous situation, as potential attackers could exploit it to break the system completely or steal information.
The standard way of validating API controller methods is to output appropriate HTTP responses in error cases. For example, if a requested resource does not exist, the API must return code 404 (Not Found); if there is no content to return, 204 (No Content); and so on. Similarly, successful requests to get data should return 200 (OK) and successful data input should return 201 (Created) (Wikipedia, 2017).
The implemented Service Management System API controllers are validated following these HTTP status code standards. The validation was tested by sending invalid requests and works as expected.
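In ASP.NET Core this pattern typically looks like the sketch below. The controller, its helper methods and route templates are illustrative assumptions, not the project's actual code; the framework's NotFound, Ok, BadRequest and CreatedAtAction helpers produce the corresponding status codes.

```csharp
using Microsoft.AspNetCore.Mvc;

[Route("tasks")]
public class TasksApiController : Controller
{
    // GET /tasks/5 - returns 200 with the task, or 404 if it does not exist.
    [HttpGet("{id}")]
    public IActionResult GetTask(int id)
    {
        var task = FindTask(id);          // would call the Data Access Layer
        if (task == null)
            return NotFound();            // 404 Not Found
        return Ok(task);                  // 200 OK
    }

    // POST /tasks - returns 201 with the location of the created task.
    [HttpPost]
    public IActionResult CreateTask([FromBody] object task)
    {
        if (task == null)
            return BadRequest();          // 400 Bad Request for invalid input
        int newId = SaveTask(task);       // would call the Data Access Layer
        return CreatedAtAction(nameof(GetTask), new { id = newId }, task);  // 201 Created
    }

    private object FindTask(int id) => null;   // placeholder for the real lookup
    private int SaveTask(object task) => 1;    // placeholder for the real insert
}
```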
4.2.2.5. Endpoint Models
In addition to the previously described Data Access Layer, a further configuration that improves data security was added at a later stage of development. At first, the data returned by the API was raw and contained extra fields, such as the IDs of other elements. Exposing such information is not good development practice, so Endpoint Models were introduced to the solution. Their purpose is to reduce the amount of data exposed to the user to a minimum and avoid security risks (Wasson, 2015). This configuration does not just improve the security of the API; it also formats data in a human-readable way. An example of the improved formatting is displayed in Figure 18.
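An endpoint model is essentially a data transfer object mapped from the raw entity; a minimal, illustrative sketch (the entity and field names are assumptions, not the project's actual schema):

```csharp
// Raw database entity - exposes internal IDs that clients should not see.
public class TaskEntity
{
    public int TaskId;
    public int EngineerId;     // internal foreign key
    public int AddressId;      // internal foreign key
    public string Title;
    public string Status;
}

// Endpoint model - only the fields the client actually needs, in readable form.
public class TaskEndpointModel
{
    public string Title;
    public string Status;
    public string EngineerName;

    public static TaskEndpointModel From(TaskEntity entity, string engineerName)
    {
        // Internal IDs are deliberately dropped during the mapping.
        return new TaskEndpointModel
        {
            Title = entity.Title,
            Status = entity.Status,
            EngineerName = engineerName
        };
    }
}
```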
4.2.3. Unit Testing
As part of the development process, unit testing was carried out. A decision was made to unit test the Data Access Layer instead of the controllers. This is because controller behaviour was tested after each new implementation, and the controllers do not contain much code to test. The methods in the Data Access Layer, on the other hand, are extensive and logically complex, so there is a high risk of them containing flaws. As a result, unit tests were set up, along with mock data models that imitate the database. Unfortunately, due to strict timing, only the most complex functions were unit tested; the simple ones were tested manually by sending requests and comparing the data in the database. Both ways of testing were successful: the data that the functions retrieve or add was found to be as expected, with no flaws.
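As a sketch of the approach, a mock repository can imitate the database with an in-memory list while the test exercises the logic. This assumes an xUnit-style test framework and illustrative names (ITaskRepository, MockTaskRepository); it is not the project's actual test code.

```csharp
using System.Collections.Generic;
using System.Linq;
using Xunit;

// Illustrative Data Access Layer interface.
public interface ITaskRepository
{
    IEnumerable<string> GetTasksForEngineer(int engineerId);
}

// A mock repository that imitates the database with an in-memory lookup.
public class MockTaskRepository : ITaskRepository
{
    public IEnumerable<string> GetTasksForEngineer(int engineerId)
    {
        return engineerId == 5
            ? new List<string> { "Replace meter" }
            : new List<string>();
    }
}

public class DataAccessTests
{
    [Fact]
    public void GetTasksForEngineer_ReturnsOnlyThatEngineersTasks()
    {
        ITaskRepository repository = new MockTaskRepository();

        var tasks = repository.GetTasksForEngineer(5).ToList();

        // Only the task belonging to engineer 5 should be returned.
        Assert.Single(tasks);
        Assert.Equal("Replace meter", tasks.First());
    }
}
```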
Figure 18. Improved formatting of data after the introduction of Endpoint Models.
4.3. Summary
The chapter has discussed the development process of the two main deliverables of this project: the database and the web API. The technological and architectural choices for each deliverable were justified, along with the applied development standards and patterns, error handling and unit testing.
5 Evaluation
This chapter discusses the methodology and results of the conducted user evaluation. In addition, a self-evaluation of the main deliverables and implementation choices is included.
5.1. User Evaluation
The implemented deliverables – the database and the web Application Programming Interface – will be extended with a front-end solution in the future. For this reason, it is critical that other developers can understand the code of the core system at a glance; no unreasonable time should be spent figuring out the details and architecture of the system. To establish whether the implemented system is usable and extendable, an evaluation had to be conducted with developers who may potentially work with the system in the future.
5.1.1. Choice of Technique
API evaluation can be carried out using several existing techniques. Possible approaches include traditional Human-Computer Interaction techniques, the Cognitive Dimensions Framework and the API Walkthrough method. The first two methodologies focus on whether the API meets its specification and whether the created functions work as expected. The API Walkthrough method, by contrast, focuses on the technical side of an API: whether it is usable for developers and of good technical quality (Barth, 2011). Considering that the API functionality had already been tested during development with unit testing, and that the API will require future extensions by other developers, the API Walkthrough method was selected. Moreover, as technical excellence is also a critical requirement for the underlying database, the same evaluation methodology was applied to the SQL database.
5.1.2. Conduct
The chosen API Walkthrough methodology aims to evaluate technical features of a product, such as:
- whether the participants can construct an accurate mental model of the system
- whether the input and output formats and/or classes make sense to the participants
- whether the names of the functions, classes, methods and/or properties give participants a clear understanding of the system
- the readability of the code (Barth, 2011)
Based on the listed aims, a questionnaire targeting an audience of developers was created. It includes two sets of questions: the first part asks participants to evaluate whether particular technical and scientific requirements were met in the database design and implementation; the second, whether technical excellence applies to the API. The questions are posed in a closed format, with four answers available – 'not at all', 'in some cases', 'mostly' and 'absolutely yes'. The last question of each part has an open format, so any additional points not covered by the questionnaire can be added.
Along with participant consent forms and project information sheets, two copies of the questionnaire were given to current members of the Sycous software development team without any preference – the participants volunteered to take part in the evaluation. First, the developers were introduced to the project and its requirements. Then they were granted access to the database and API deliverables. It was explained what tasks the system is supposed to carry out, how it can currently be tested, and what future extensions will be required before it is released into production. Afterwards, the participating developers looked through the database UML diagram and the API code, and tested some features. Finally, the questionnaires were filled in, along with additional comments, and returned. Copies of the returned questionnaires can be found in Appendix F.
5.1.3. Results
The results of the conducted user testing reveal the quality of the deliverables. Both participants provided the same or similar answers to most of the questions, so the following analysis is summarised, with exceptions noted separately.
As mentioned previously, the first set of questions focuses on the quality of the delivered database. Participants were asked to evaluate whether the database follows the traditional standards of a relational data model, whether tables and relations are grouped logically, and whether the database is normalised to third normal form. The answers to these questions were positive ('absolutely yes'/'mostly'), which indicates that the database design proved to be clearly understandable, well structured and up to standard. No additional comments on the database were added by the participants, which again suggests a good technical level with no obvious flaws. The only problem found was that one of the questions was not formulated clearly enough and was skipped, but this is a problem of the questionnaire itself rather than of the deliverable.
The second set of questions focuses on the technical and logical excellence of the Application Programming Interface. As required by the chosen evaluation method, participants had to evaluate whether it is possible to construct a mental model of the system just by looking at the code, and whether the code is well structured and follows good API programming principles. Again, the answers summarised from the two participants were positive. This demonstrates that the code structure of the API is easy for developers to understand, so no additional time would need to be spent figuring out the system's architecture. Other questions, which focused on the system's flexibility and the appropriateness of its error handling, had varying answers – one participant gave very positive feedback, whereas the other was more neutral. This may indicate that one of the participants spotted flaws or bugs in the system which could affect future usage if not fixed.
In addition to answering the questions, the participants provided some insights about the analysed API. One participant noted that the code could be shortened in places while still providing the same functionality; he also found some code comments unnecessary, although this kind of feedback is more subjective than technical. The other participant expressed more positive feedback, stating that the API features he had a chance to test are very robust and accept input data in flexible formats. Moreover, the system's responses during testing were correct, which also means that both successful requests and errors are handled in a suitable manner, without exposing unintended data to the public.
To summarise, the user evaluation provided professional, constructive feedback on the implemented deliverables and allowed minor mistakes, which had previously gone unnoticed, to be spotted. It also confirmed that most of the technological choices in both deliverables proved to be correct, and that the system can successfully be adopted by the client for extension and the creation of a user interface.
5.2. Self-Evaluation
Taking both deliverables into account, I believe the overall system is of a good professional level. The database and the Application Programming Interface successfully connect with each other and work as expected. As a lot of research was done on the best standards and practices for building similar systems, most of them were applied to the implementation and proved successful.
Considering the process and results of the database implementation separately, I believe that a very good level of quality was achieved. Although I had not been through a full database development cycle before, I had learned the best relational database practices in theory as part of the university curriculum. In my opinion, this prior knowledge was the main reason the implementation was so successful. It was also very easy to learn to work with Microsoft SQL Server Management Studio, so no time was wasted in this part of development.
However, this does not mean the implemented database is perfect. No system is perfect – improvements are always possible. Looking at the database, I can see minor possible improvements: for example, I might edit some attributes in tables or add new ones, or change the names of tables or attributes. Nonetheless, if given the chance to redesign the same database from scratch, I would most likely follow the same approaches used in this project and spend the same amount of time doing it.
Regarding the process of API design and development, I believe a good level of professionalism was achieved. Creating an API was completely new to me, so a lot of research and learning had to be done in order to build a working program. A number of failures happened during development, but they were all part of the learning process. It was especially challenging at the very beginning to set up the system, install the correct supporting packages and make it run without errors. Once the initial setup succeeded, the rest of the development was much smoother, though I believe the result could have been more extensive had the first phase gone better. If given the chance to redo this deliverable from scratch, I would most likely select the same development framework (ASP.NET Core), as it is convenient and fast. The same development practices and architectural solutions would probably also be followed, unless I came across new, cutting-edge approaches. No problems or slow-downs were caused by the chosen coding environment, Visual Studio, so I would definitely use it again. The only thing I would change is to allocate extra time for the actual development wherever possible, as unexpected mistakes tend to happen and slow down development of an unfamiliar system.
The learning outcomes of this project do not just involve gaining technical skills and learning the best database and API programming practices. They also stem from the involvement of an external client: through this, I learned how professional communication between developers and their clients should be maintained, and successfully managed to do so throughout the development.
5.3. Summary
The chapter has discussed the conducted user evaluation, carried out by other developers, and its outcomes. In addition, a self-evaluation of the project's main deliverables and development choices has been included.
6 Conclusion
This chapter concludes the report by reflecting on the final achievements compared with the initial goals for the deliverables and project management. In addition, possible future extensions of the system are discussed.
6.1. Aim, Objectives and Minimum Requirements
The aim of this project was to design and deliver a web Service Management System API for ticketing engineers' time, based on the external client's requirements. The system had to have a database in which to store its data.
The objectives of the project were as follows:
1. To collect the client's requirements for the system and understand the IT infrastructure of the business.
2. To design and deliver an API system with the following functionality:
a) Managing a daily work timetable for field engineers, adding time records of actions
b) Allocating tasks to field engineers
c) Viewing allocated tasks
d) Signing completed tasks off, adding files
3. To design and deliver a secure database for storing data entered into the system.
Looking at the achieved result, the aim of the project has been fulfilled: both systems have been successfully delivered. The requirements were collected at an early stage of development and the business model of the company was understood.
All the described API functionality has been implemented and works as expected. It is possible to add, edit and retrieve data related to tasks, and to find tasks filtered by their assigned engineer and by date. In addition, it is possible to complete tasks (change their status) and provide the additional details required on completion, such as comments and meter reads. Although it is not possible to upload files through the web API without a user interface (the user interface would handle uploads and downloads), the database implementation can store the locations of, and access keys to, files held on a cloud service. Time records of actions can also be added to the system in the form of a completed task. In addition, invalid requests to the API are handled correctly and no unnecessary data, which could be sensitive, is exposed to users. The database has also been implemented in detail, following all the standard practices and requirements of a relational database.
6.2. Project Management
The prepared schedule was followed throughout the project almost to the desired level. All planned development procedures were completed on time, with only a few exceptions. For example, during the API development process some unexpected setup mistakes occurred, which slowed it down at the very beginning. More work had to be done once the problems were resolved, but the requirements were met and the API works as expected. The problems did not affect the development iterations that followed.
The selected project management technique, Agile iterative development, was applied successfully throughout. The client was actively involved in the development process, with regular meetings arranged at the end of every iteration. During these meetings the client provided feedback on the implemented deliverables and features, and answered questions. Every iteration produced either a full deliverable, such as the database design or the database implementation, or some working features of a deliverable, for instance new working features of the API.
The chosen task-logging tool, YouTrack, was also used successfully throughout the project. Although there were cases where tasks were not updated on time, it never took more than a day or two to catch up and stay up to date.
The branching methodology for source control was successfully adopted in both repositories. Two branches were used, develop and master: work was first done on the develop branch and then merged into master. The master branches currently contain the final state of the deliverables.
6.3. Future Extensions
Both the database and the API are designed to be flexible, so it will be easy to extend the system's functionality with additional features. Naturally, the first possible future extension would be a client application – a user interface – accessing the API. This is entirely achievable at this stage; however, user authentication would need to be added to separate the functions that should only be accessible by office staff, by engineers, or, in some cases, by both user roles.
Other possible extensions are already supported by the database but not by the API, as they were outside the required scope. For example, the 'Client' table in the database has a relation to the 'Logo' table. At the moment the field for storing a logo can be left empty, but in the future clients could potentially upload their company logos and personalise their user interfaces.
A similar extension was added to the 'Address' table. A currently optional attribute called Coordinates may be used in the future to store the coordinates of addresses. This way, if a user interface included a map, engineers could simply look up the address on the map rather than search for the location in external applications.
What is more, the currently unused 'Type' and 'EngineerType' tables could store one or more job types assigned to each engineer. If office staff did not know which engineer should be assigned to a certain task, they could look up the types of job each engineer is capable of doing and decide.
There are many more ways to extend this system beyond those mentioned. Although the current implementation does not cover it, it would also be useful to add email services to the system, for example to instantly send clients copies of completed work or notifications about upcoming site visits.
6.4. Summary
This chapter has concluded the report by reflecting on what was achieved compared to the initial goals of the project deliverables and on the project management techniques used. In addition, possible future extensions of the system have been discussed.
List of References
Amazon Inc., 2017. Amazon S3. [Online]
Available at: https://aws.amazon.com/s3/
[Accessed 29 04 2017].
Anderson, R., Dykstra, T., Pasic, A. & Latham, L., 2016. Introduction to ASP.NET Core. [Online]
Available at: https://docs.microsoft.com/en-us/aspnet/core/
[Accessed 01 05 2017].
Barth, M., 2011. API Evaluation: An Overview of API Evaluation Techniques.
Beck, K. & Beedle, M., 2001. Principles Behind the Agile Manifesto. [Online]
Available at: http://agilemanifesto.org/iso/en/principles.html
[Accessed 14 04 2017].
Beynon-Davies, P., 2004. Database Systems. 3rd ed., Palgrave Macmillan.
Bhamidipati, K., 1998. SQL Programmer's Reference. Osborne.
Booch, G., 2005. Unified Modeling Language User Guide. 2nd ed.
Butterfield, A. & Ngondi, G. E., 2016. A Dictionary of Computer Science. [Online]
Available at: http://0-www.oxfordreference.com.wam.leeds.ac.uk/view/10.1093/acref/9780199688975.001.0001/acref-9780199688975-e-160
[Accessed 05 02 2017].
Capterra, 2017. Top Field Service Management Software Products. [Online]
Available at: http://www.capterra.com/field-service-management-software/
[Accessed 08 02 2017].
DB-Engines, 2017. DB-Engines Ranking. [Online]
Available at: http://db-engines.com/en/ranking
[Accessed 22 03 2017].
Kulkarni, D., 2007. LINQ to SQL: .NET Language-Integrated Query for Relational Data. [Online]
Available at: https://msdn.microsoft.com/en-us/library/bb425822.aspx
[Accessed 03 2017].
Driessen, V., 2010. A Successful Git Branching Model. [Online]
Available at: http://nvie.com/posts/a-successful-git-branching-model/
[Accessed 21 04 2017].
Duvander, A., 2013. What Programming Language is Most Popular with APIs?. [Online]
Available at: https://www.programmableweb.com/news/what-programming-language-most-popular-apis/2013/06/03
[Accessed 05 05 2017].
Efford, D. N., 2015. Software Engineering: Approaches to Software Development (Lecture Slides). University of Leeds.
Efford, D. N., 2015. Software Engineering: SQIRO Investigation Techniques (Lecture Slides). University of Leeds.
G2 Crowd, 2017. Best Version Control Systems. [Online]
Available at: https://www.g2crowd.com/categories/version-control-systems
[Accessed 18 04 2017].
Gottlieb, S., 2007. POC, Prototype or Pilot? When and Why. [Online]
Available at: http://www.contenthere.net/2007/03/poc-prototype-or-pilot-when-and-why_92.html
[Accessed 24 04 2017].
Guyer, C., 2016. FileTables (SQL Server). [Online]
Available at: https://docs.microsoft.com/en-us/sql/relational-databases/blob/filetables-sql-server
[Accessed 01 05 2017].
Hock-Chuan, C., 2010. A Quick-Start Tutorial on Relational Database Design. [Online]
Available at: https://www.ntu.edu.sg/home/ehchua/programming/sql/relational_database_design.html
[Accessed 01 05 2017].
Kearn, M., 2015. Introduction to REST and .net Web API. [Online]
Available at: https://blogs.msdn.microsoft.com/martinkearn/2015/01/05/introduction-to-rest-and-net-web-api/
[Accessed 04 05 2017].
Kline, K. & Kline, D., 2001. SQL in a Nutshell. O'Reilly.
Krill, P., 2015. A developer's guide to the pros and cons of Python. [Online]
Available at: http://www.infoworld.com/article/2887974/application-development/a-developer-s-guide-to-the-pro-s-and-con-s-of-python.html
[Accessed 05 05 2017].
Kuznetsov, A., 2012. Close Those Loopholes: Lessons learned from Unit Testing T-SQL. [Online]
Available at: https://www.simple-talk.com/sql/database-administration/close-those-loopholes-lessons-learned-from-unit-testing-t-sql/
[Accessed 03 05 2017].
Larman, C., 2004. Agile and Iterative Development: A Manager's Guide. s.l.: Pearson Education Inc.
Litwin, P., 1990. Fundamentals of Relational Database Design
Lucidchart Software Inc., 2017. Homepage. [Online]
Available at: https://www.lucidchart.com/
[Accessed 26 04 2017].
McRobb, S., Bennett, S. & Farmer, R., 2010. Fact Finding Techniques. In: Object-Oriented Analysis
& Design using UML
Microsoft Inc., 2017. ASP.NET MVC Overview. [Online]
Available at: https://msdn.microsoft.com/en-us/library/dd381412(v=vs.108).aspx
[Accessed 01 05 2017].
Miller, D., 2014. The Web API business layer anti-pattern. [Online]
Available at: http://bizcoder.com/the-web-api-business-layer-anti-pattern
[Accessed 02 05 2017].
Moran, A., 2014. Agile Risk Management
Nokes, S., 2007. The Definitive Guide to Project Management. 2nd ed. London
Carter, P., 2016. Choosing between .NET Core and .NET Framework for server apps. [Online]
Available at: https://docs.microsoft.com/en-us/dotnet/articles/standard/choosing-core-framework-server
[Accessed 03 05 2017].
Ritchie, C., 2002. Relational Database Principles. 2nd ed. London: Continuum.
Ronacher, A., 2017. Flask - API Documentation. [Online]
Available at: http://flask.pocoo.org/docs/0.12/api/
[Accessed 03 05 2017].
Scotland, K., 2014. Aspects of Kanban. [Online]
Available at: http://www.methodsandtools.com/archive/archive.php?id=104
[Accessed 20 04 2017].
Singh, R. R., 2015. Mastering Entity Framework. 1st ed. Birmingham: Packt Publishing.
Slack, 2017. Product. [Online]
Available at: https://slack.com/is
[Accessed 05 04 2017].
SQLity, 2017. TSQLT Homepage. [Online]
Available at: http://tsqlt.org/
[Accessed 02 05 2017].
Sycous, 2017. About Us. [Online]
Available at: www.sycous.com
[Accessed 02 03 2017].
Trello Inc., 2017. Trello. [Online]
Available at: https://trello.com/
[Accessed 2017].
Wagner, B. & Wenzel, M., 2017. Interfaces (C# Programming Guide). [Online]
Available at: https://docs.microsoft.com/en-us/dotnet/articles/csharp/programming-guide/interfaces/index
[Accessed 01 05 2017].
Wambler, S., 2013. Introduction to Data Normalization: A Database "Best" Practice. [Online]
Available at: http://agiledata.org/essays/dataNormalization.html
[Accessed 01 05 2017].
Wasson, M., 2014. Attribute Routing in ASP.NET Web API 2. [Online]
Available at: https://docs.microsoft.com/en-us/aspnet/web-api/overview/web-api-routing-and-actions/attribute-routing-in-web-api-2
[Accessed 03 05 2017].
Wasson, M., 2015. Getting Started with ASP.NET Web API 2. [Online]
Available at: https://docs.microsoft.com/en-us/aspnet/web-api/overview/getting-started-with-aspnet-web-api/tutorial-your-first-web-api
[Accessed 01 05 2017].
Wikipedia, 2017. List of HTTP status codes. [Online]
Available at: https://en.wikipedia.org/wiki/List_of_HTTP_status_codes
[Accessed 03 05 2017].
Wikipedia, 2017. Model-View-Controller. [Online]
Available at: https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller
[Accessed 01 05 2017].
Appendix A
External Materials
No external materials were used for this project.
Appendix B
Ethical Issues Addressed
An ethical issue involving user testing was addressed in this project.
An API evaluation technique called an API Walkthrough was used in this project. The deliverables were evaluated by participants in written form (a questionnaire). Common data-collection procedures were carried out, such as providing information about the project itself and explaining how the collected data would be used. The participants were not coerced into providing feedback and signed a participant consent form if they agreed to take part. It was ensured that the participants' personal data remained anonymous.
Appendix C
Minutes of Regular Meetings with the Client
14/02/2017
The first meeting with the client, regarding the collection of requirements. A set of basic requirements was agreed on, the requirements were explained in detail, and important questions were answered.
16/02/2017
Spoke to an engineer who will be using the system in the future. Asked several questions about his work: how he currently manages his working day, who allocates his tasks, what he uses to log completed work, and what kind of time-logging tools he uses. The engineer currently receives the tasks that need to be done on his Office 365 calendar and sends emails with proof of work done. The tasks are allocated by the office, and he currently needs a way to log his travel time and time on site. In addition, sample documents were collected.
24/02/2017
Presented the created wireframe to the client. The client suggested several additions and improvements to the prototype. The user types were also revised, because another external client of the company would also like to use the system and should therefore have access to allocating engineers. This is not a big change: the 'office' and 'client' roles will be treated as one and the same, just with different permissions for engineer access.
09/03/2017
Presented the database diagram to the client. Received answers to my questions and advice on
potential improvements.
24/03/2017
Presented the created database, received answers to some questions.
11/04/2017
Displayed to the client several working features of the API. Received answers to my questions.
24/04/2017
Presented most of the working features to the client. Received feedback.
Appendix D
Minutes of Interview with the Engineer
16/02/2017
09:10
Attended the company's office and spoke to an engineer who will be using the system in the future. Asked several questions about his work: how he currently manages his working day, who allocates his tasks, what he uses to log completed work, and what kind of time-logging tools he uses. The engineer currently receives the tasks that need to be done on his Office 365 calendar and sends emails with proof of work done. The tasks are allocated by the office, and he currently needs a way to log his travel time and time on site.
11:00
Collected additional files from the engineer. These are sample files showing how he logs his work on site and how he reports back to a client after finishing work.
The files are:
- Draft site attendance – shows all the information the engineer needs about a task
- Copy of Commissioning Data Return - Cambran Towers – shows how work completion is reported back to a client
- Property to Meter Serial List Template – a template listing the meter serials associated with properties, for the engineer to review before going to site (he first receives a task list – the Draft site attendance)
Appendix E
The Designed Wireframe of the System
Engineer’s Portal
Office Portal
Appendix F
User Evaluation Questionnaires
Participant 1
Participant 2