Game Artificial Intelligence Library (GAIL)

Jake Passmore (14734231)

BSc Computer Games Development

Contents

1 | Project Overview
1.1 | Aim
1.2 | Objectives
1.3 | Risk Assessment
2 | Research
2.1 | Background Artificial Intelligence
2.2 | IDE Research
3 | Design
3.1 | Features
3.2 | Class Diagram
3.3 | Design Methodologies
3.3.1 | Waterfall
3.3.2 | Incremental
3.3.3 | Scrum - Agile
3.3.4 | Chosen Methodology
4 | Testing
4.1 | Alpha Testing
4.2 | Beta Testing
4.2.1 | Closed Beta
4.2.2 | Open Beta
4.3 | AI Testbench
5 | Evaluation
5.1 | User Feedback
5.2 | Objectives Revisited
5.2.1 | Objective One: Produce an AI library that is up to industry standards
5.2.2 | Objective Two: Fill a gap in the AI library market
5.2.3 | Objective Three: Gather feedback from users for future development
5.4 | Planning Revisited
6 | Conclusion
6.1 | Project Summary
6.2 | Future Development
7 | Legal, Social, Ethical and Professional Issues (LSEPI)
8 | References
9 | Appendix
9.1 | Statement of Originality
9.2 | Ethics Checklist
9.3 | Library UML Class Diagram
9.4 | Gantt Chart
9.5 | Release
9.6 | Issue Tracking
9.7 | Beta Test Tasks
9.8 | Artificial Intelligence for Game Developers A* Tutorial
9.9 | Meeting Minutes

List of Figures

Figure 1: Technique priority pyramid [Diagram].
Figure 2: A simple example of a guard state diagram [Diagram].
Figure 3: Unity market share (TNW Deals, 2016) [Diagram].
Figure 4: AI technique priority order [Diagram].
Figure 5: GAIL UML class diagram [Diagram].
Figure 6: Vision Cone GUI [Screenshot].
Figure 7: Map Grid GUI [Screenshot].
Figure 8: The Waterfall Model [Diagram].
Figure 9: The Incremental Model [Diagram].
Figure 10: The Scrum Lifecycle [Diagram].
Figure 11: The Hybrid Model [Diagram].
Figure 12: One of GAIL's methods, containing a summary and line-by-line commentary [Screenshot].
Figure 13: A pop-up users will experience when interfacing with GAIL code [Screenshot].

1 | Project Overview

1.1 | Aim

The aim of this project is to create a useful artificial intelligence (AI) library that can be easily incorporated into games during development.

1.2 | Objectives

The objectives of this project will be to:

1. Produce an AI library that is up to industry standards (optimised, documented, easily-expandable, and thoroughly bug tested).

2. Fill a gap in the AI Library market.

3. Gather feedback from users for future development of the library.

1.3 | Risk Assessment

To ensure this project stays on track for the deadline, any possible risks have been pre-emptively evaluated.

Incompatibility with future Unity updates.

At any moment Unity could release a major update to their engine that breaks or interferes with this library. To combat this, the library will be developed purely for Unity 5.6. This version has been chosen because it is the latest and last update to Unity 5 (Bibby, 2016), a cycle that has been supported and rigorously tested by the community for over two years (as of November 2017), whereas the latest Unity 2017 has been publicly available for less than six months.

Data loss

Data loss is an ever-present and potentially catastrophic risk; however, regular backups massively decrease its effect. A source code control system, Git, will be used to back up files after each change. This stores a remote copy of all versions of the library in the cloud, ensuring the local build can be restored after any corruption or loss.

Availability of the library.

Due to the tight restrictions and lengthy approval queues of the Unity Asset Store (Unity, 2017a), this library is available through the public GitHub repository hosted at https://github.com/JakeP121/GameAILibrary. This allows users to download it without requiring any third-party software, but it also requires additional installation instructions. Depending on the timeframe, support for the Unity Asset Store may be added at a later date.

Availability of feedback.

To ensure this library receives ample feedback, beta versions have been planned for release a significant time before the final deadline. This will allow developers as much time as possible to test the library, which in turn provides more time to fix bugs and consider feedback that could prove helpful for milestone three.

Unforeseen Circumstances.

Although precautions such as creating a Gantt chart (appendix 9.4) have been taken to ensure this project stays on track for the final deadline, unforeseen circumstances could result in missed internal deadlines. As the coding of this project potentially poses the most problems, all AI behaviours will be sorted into three levels of priority:

Priority one: Basic, generalised techniques that could be obtained from elsewhere.

Priority two: Specialised techniques that are only available in this library.

Priority three: Advanced, niche developments on previous techniques.

The reasoning behind this level system is that level three, or if necessary level two, techniques can be dropped while still retaining a working AI library. The library will be developed in such a way that lower-level techniques have no dependencies on subsequent levels.

Figure 1: Technique Priority Pyramid.

2 | Research

2.1 | Background Artificial Intelligence

The term artificial intelligence (AI) is defined by the Oxford Dictionary (2017) as 'a computer system that can perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making and translation between languages'. Techniques that fall under this definition range from relatively simple search and path finding algorithms to advanced machine learning.

Early computer games used artificial intelligence as a way of extending a game's solo playability, but as the games industry has grown, a demand for more extensive and lifelike AI systems has arisen (Woodcock, 2000). By today's gaming standards, smart AI can make or break a game. For example, Monolith's adaptable AI is one of the main selling points of their title Middle-earth: Shadow of Mordor (2017), while the notorious AI-driven partner in Resident Evil 5 (Capcom, 2009) is known to be more of a hindrance than a support.

The goal of AI is to solve a broad problem many times faster than a human ever could. For example, satellite navigation systems make use of the A* path finding algorithm (PC Plus, 2010). This algorithm is known to plot the quickest route between two points separated by countless intermediate nodes more efficiently than any other algorithm that does not pre-process the map (Zeng and Church, 2009, p. 18). Since this technique solves a very broad problem, it fits a variety of different cases; in this case, game characters that need to move intelligently to a target location on the map. Fernández, González and Chang (2012) extended the default A* algorithm with their A*mbush family: a set of four algorithms (A*mbush, P-A*mbush, R-A*mbush and SAR-A*mbush) that create a pseudo-ambush tactic when applied to several pursuing agents. A similar technique is used in the classic game Pac-Man (Namco, 1980), where each enemy agent pursues the player through a slightly different algorithm, causing them to approach from different angles and trap the player.
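To make the technique concrete, the following is a minimal sketch of A* over a 4-connected grid, written in plain C#. It is illustrative only; the grid representation, method names and uniform move cost are assumptions, not GAIL's actual implementation.

```csharp
using System;
using System.Collections.Generic;

// Minimal A* over a 4-connected grid with uniform move costs.
// Illustrative only; GAIL's AStarSearch wraps an equivalent search
// behind Unity components rather than exposing a static method.
public static class AStarSketch
{
    public static List<(int x, int y)> FindPath(bool[,] walkable, (int x, int y) start, (int x, int y) goal)
    {
        int w = walkable.GetLength(0), h = walkable.GetLength(1);
        var gCost = new Dictionary<(int, int), int> { [start] = 0 };
        var parent = new Dictionary<(int, int), (int, int)>();
        var open = new SortedSet<(int f, int x, int y)> { (Heuristic(start, goal), start.x, start.y) };

        while (open.Count > 0)
        {
            var current = open.Min;                    // node with the lowest f = g + h
            open.Remove(current);
            var node = (current.x, current.y);
            if (node == goal) return Reconstruct(parent, goal);

            foreach (var step in new[] { (1, 0), (-1, 0), (0, 1), (0, -1) })
            {
                var next = (x: node.Item1 + step.Item1, y: node.Item2 + step.Item2);
                if (next.x < 0 || next.y < 0 || next.x >= w || next.y >= h || !walkable[next.x, next.y])
                    continue;

                int g = gCost[node] + 1;               // uniform cost of 1 per tile
                if (!gCost.TryGetValue(next, out int best) || g < best)
                {
                    gCost[next] = g;                   // found a cheaper route to this tile
                    parent[next] = node;
                    open.Add((g + Heuristic(next, goal), next.x, next.y));
                }
            }
        }
        return null;                                   // no path exists
    }

    // Manhattan distance: an admissible heuristic on a 4-connected grid.
    static int Heuristic((int x, int y) a, (int x, int y) b) =>
        Math.Abs(a.x - b.x) + Math.Abs(a.y - b.y);

    static List<(int x, int y)> Reconstruct(Dictionary<(int, int), (int, int)> parent, (int x, int y) goal)
    {
        var path = new List<(int x, int y)> { goal };
        while (parent.TryGetValue(path[path.Count - 1], out var prev))
            path.Add(prev);
        path.Reverse();                                // parent links run goal-to-start
        return path;
    }
}
```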

A very basic but extremely useful AI technique is the finite state machine (FSM). FSMs are modelled using a state diagram, which breaks a machine or program down into several states and events whose effects depend on the current state. An FSM can only be in one state at a time, and transitions between states through events, but not every event is valid in every state. Wright (2005) demonstrated this by modelling the behaviour of H₂O with a state diagram. H₂O has three possible states (solid, liquid and gaseous) and two events (cooling and heating). Using the cool event on H₂O in the liquid state will transition it to the solid state, changing its variables to coincide with the new state. However, if the cool event is called again, the solid H₂O's variant of cool is used, which does nothing.

Figure 2: A simple example of a guard state diagram.

Finite state machines can also be used for video game characters, allowing them to switch between different behaviours, each with its own events and variables, depending on the situation. Figure 2 shows a simple state diagram illustrating how a guard character in a game might work using this technique. In this diagram, events such as 'shoot' and 'search hiding spot' are limited to the 'attacking' and 'searching' states respectively. Constraining the agent to perform certain actions only in certain states gives it a more natural predictability, and the robust design of a state machine means the AI is equipped to act plausibly in any situation it may find itself in.
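A minimal sketch of how the guard of figure 2 might be expressed in code is shown below. The state and event names follow the figure's description; everything else is an illustrative assumption rather than GAIL code.

```csharp
// A guard FSM in the style of figure 2. State and event names follow the
// figure's description; the structure is illustrative, not GAIL's code.
public enum GuardState { Patrolling, Attacking, Searching }

public class GuardFsm
{
    public GuardState State { get; private set; } = GuardState.Patrolling;

    // Events drive transitions, but only from states that permit them.
    public void PlayerSpotted() => State = GuardState.Attacking;

    public void PlayerLost()
    {
        if (State == GuardState.Attacking)          // can only lose a player mid-attack
            State = GuardState.Searching;
    }

    public void Shoot()
    {
        if (State != GuardState.Attacking) return;  // 'shoot' is ignored outside attacking
        // ... fire at the player ...
    }

    public void SearchHidingSpot()
    {
        if (State != GuardState.Searching) return;  // limited to the searching state
        // ... check the nearest hiding spot, then give up and resume patrol ...
        State = GuardState.Patrolling;
    }
}
```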

Artificial intelligence can be divided into two categories: pre-programmed (e.g. path following and FSMs) and machine learning. Machine learning refers to a program receiving a set of training data consisting of multiple inputs and the correct output, and over time learning how to manipulate the input values to calculate that correct output through its own internal methods. With enough training examples, the program learns the correlation and can calculate the correct output for inputs it has never seen before. This has a wide range of uses since, at a base level, it is simply a pattern recognition algorithm; it was recently used in a program that could identify skin cancer from photographs (Gallagher, 2017). The method is also used in games to let enemy characters learn a player's style in order to counter it. For example, in the game Hello Neighbour (tinyBuild, 2017), if the enemy catches the player sneaking through a window, it starts to learn the player's technique and barricades the window in subsequent rounds. This is called predictive modelling, since it predicts what the player is going to do next from past attempts (Geisser, 1993, p. 1). The advantage of predictive modelling is its ability to help keep gameplay fresh, and its portability allows the AI to work in many different scenarios without much additional hard-coding.
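At its simplest, predictive modelling of this kind can be reduced to a frequency table: record what the player has done, then predict the most common choice. The sketch below illustrates the idea only; it is not how Hello Neighbour or GAIL implements learning, and all names are invented.

```csharp
using System.Collections.Generic;
using System.Linq;

// Predictive modelling reduced to a frequency table: record what the player
// has done and predict the most common choice. Purely illustrative.
public class RoutePredictor
{
    private readonly Dictionary<string, int> counts = new Dictionary<string, int>();

    // Called each time the player is observed using an escape route, e.g. "window".
    public void Record(string route) =>
        counts[route] = counts.TryGetValue(route, out int n) ? n + 1 : 1;

    // The route the player is most likely to use next, or null with no data yet.
    public string PredictNext() =>
        counts.Count == 0 ? null : counts.OrderByDescending(kv => kv.Value).First().Key;
}
```

An enemy that barricades whatever PredictNext() returns at the start of each round reproduces, in miniature, the behaviour described above.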

The main downside of machine learning techniques is the time it takes to teach the AI the correct responses in each test scenario. This training can either take place before release, producing a quicker-to-react but more predictable bot, or on the user's own hardware, producing a less predictable but more custom-suited bot.

As well as improving gameplay, AI is also used to quality assurance (QA) test products before launch. This was seen in the development of The Witness (Thekla Inc., 2016), which used an artificial intelligence algorithm to continuously traverse the game's world in order to find any game-breaking bugs. Although developing artificial QA testers creates an initial delay before testing can begin, they benefit the process both by having the capacity to test for longer periods than their human counterparts and by painstakingly examining small details of the maps that most humans wouldn't think of. An example of an exploit found by artificial QA that would very likely have been missed by human testers was mentioned by Muratori (2012) when developing The Witness's 'Walk Monster' QA system: the bot could walk through a small part of seemingly impenetrable rocks, but only in one direction.

2.2 | IDE Research

When deciding which Integrated Development Environment (IDE) to program this library to work with, there were many options available. The criteria an IDE must meet to be considered for this project are as follows:

It must be free for public use.

It must be able to create 3D games.

These criteria were chosen to ensure the library will have the largest possible target audience while not being constrained to only include simple 2D AI behaviour.

Figure 3: Unity Market Share (TNW Deals, 2016).

This narrowed the search to game engines such as Unity 5 (2015), Epic Games' Unreal Engine 4 (2014) and Crytek's CryEngine (2002). At this point Microsoft's Visual Studio (2017) was also an option, but the additional dependency on a graphics library such as OpenGL or DirectX to create a 3D environment would not only pose an unnecessary risk but also increase the time taken to create rapid prototypes.

Since the Unity engine currently holds 45% of the global game engine market share (figure 3), and therefore has the largest target audience and support community, it was chosen for this project. Unity also supports a large number of platforms, allowing the library to work on anything from virtual reality (VR) and mobile devices to game consoles and smart televisions (Unity, 2017b).

The AI techniques themselves will be written in the C# programming language, as opposed to JavaScript, which Unity also supports. C# has been chosen not only because of greater personal experience with the language, but also because TNW Deals (2016) reports that it is the language of choice for 80% of the Unity userbase.

3 | Design

3.1 | Features

Unity (2017c) also has its own asset store, where users can share and sell their creations for the engine. This store shows a clear lack of stealth-based AI libraries, providing a gap this project could fill. As of 22 October 2017, there are few assets in the store that fall under the stealth AI category, and the ones that do only incorporate basic elements such as vision cones (Anufriev, 2017) and guard patrolling (Studio Leaves, 2014). Basic techniques like these will be included in this library, along with more advanced AI patterns not currently available through the asset store. The additional techniques currently planned for this library are as follows:

The A* path finding algorithm, used in most game AI to allow them to efficiently calculate the optimal path around a map.

Other path finding algorithms could also be included to provide a bit of variety when used as a pursuing technique.

Fernández, González and Chang's (2012) A*mbush family and the Pac-Man (Namco, 1980) ghosts' algorithms could be viable options for exploring variation.

Dynamic trail creation as seen in Metal Gear Solid (Konami Computer Entertainment Japan, 1998) where the player would step on snow and leave a trail of footprints that the enemy AI could trace back to them.

Sound awareness, such as in Grand Theft Auto V (Rockstar North, 2013), where if a player makes a noise, any enemies in the sound radius will be alerted, with a larger radius for louder sounds.

Hiding spot searching as seen in Amnesia: The Dark Descent (Frictional Games, 2010), where after the player has escaped a chase, the enemy will search any nearby objects that the player could be hiding in.

Extending from this behaviour, machine learning could be incorporated to make the enemy record whether the player was found in the specific hiding spot and adjust the likelihood of searching the same place the next time they are in this situation.

The basic vision cone behaviour could also be improved with multi-point tracking. This is an advanced technique that allows the behaviour to represent real-world vision more accurately by testing whether several different parts of an object are visible, as opposed to just the centre.
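To illustrate the multi-point idea, the sketch below raycasts at several sample points on a target's collider bounds and reports the visible fraction. The class name and the choice of sample points are assumptions for illustration; this is not GAIL's VisionCone implementation.

```csharp
using UnityEngine;

// Multi-point visibility sketch: raycast at several points on the target's
// bounds instead of just its centre, and report the visible fraction.
public class MultiPointVisibility : MonoBehaviour
{
    public Collider target;

    public float VisibleFraction()
    {
        Bounds b = target.bounds;
        Vector3[] samples =
        {
            b.center,
            b.center + Vector3.up * b.extents.y,      // top of the target
            b.center - Vector3.up * b.extents.y,      // bottom
            b.center + Vector3.right * b.extents.x,   // sides
            b.center - Vector3.right * b.extents.x,
        };

        int visible = 0;
        foreach (Vector3 point in samples)
        {
            Vector3 origin = transform.position;
            // A point counts as visible if the first thing the ray hits is the target.
            if (Physics.Raycast(origin, point - origin, out RaycastHit hit) && hit.collider == target)
                visible++;
        }
        return visible / (float)samples.Length;
    }
}
```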

These techniques take the following places in the order of priority outlined in section 1.3:

Figure 4: AI Technique Priority Order.

3.2 | Class Diagram

To make this library as simple to understand and use as possible, a large focus of the design was to make a highly modular, object-oriented system where users could pick and choose what techniques they wanted. Figure 5 shows how each technique could fit together, revolving around the Unity GameObject class. This class is the base for every physical object in a Unity scene, therefore theoretically allowing the AI techniques to work on everything. All classes within this library can be divided into one of four categories: data types, objects, behaviours and GUI (Graphical User Interface) elements.

Figure 5: GAIL UML Class Diagram (full size in appendix 9.3).

Objects refer to any physical object that can be instantiated within a Unity scene. Behaviours use these object scripts to tell GameObjects apart and decide whether they can be used (e.g. the Hide behaviour only interacts with GameObjects that have a HidingSpot object script attached).

Behaviour classes each represent a single AI technique, including things like A* search and the vision cone. Multiple behaviours can be applied to a single GameObject and can be toggled on or off by the end user. These classes were designed following Martin and Martin's (2006) Single Responsibility Principle (SRP), meaning that each is responsible for only one function and entirely encapsulates its method of carrying it out. For example, the A* search behaviour simply finds and returns a path from the GameObject (the AI agent) the script is attached to, to a target specified as an input variable. The agent itself cannot modify A* search's internal computation, nor can it interact with its variables (e.g. the global map grid). To get the agent to follow the path, the path follower behaviour must then be used, since this function falls outside A* search's responsibility.
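The sketch below illustrates this split. The component names mirror the library's AStarSearch and PathFollower (see appendix 9.7), but the bodies shown here are simplified placeholders, not GAIL's actual implementation.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of the single-responsibility split described above. The names mirror
// the library's AStarSearch and PathFollower (appendix 9.7), but the bodies
// are simplified placeholders rather than GAIL's actual code.
public class AStarSearch : MonoBehaviour
{
    public Transform target;

    // Sole responsibility: produce a path. How the search works stays hidden.
    public List<Vector3> FindPath()
    {
        // ... run A* over the map grid from transform.position to target ...
        return new List<Vector3> { transform.position, target.position };
    }
}

public class PathFollower : MonoBehaviour
{
    public float speed = 2f;
    private List<Vector3> path;
    private int index;

    // Sole responsibility: move the agent along a path produced elsewhere.
    public void Follow(List<Vector3> newPath) { path = newPath; index = 0; }

    private void Update()
    {
        if (path == null || index >= path.Count) return;
        transform.position = Vector3.MoveTowards(transform.position, path[index], speed * Time.deltaTime);
        if (transform.position == path[index]) index++;   // reached this waypoint
    }
}
```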

In most cases, AI agents themselves should not interact with physical objects directly, since such interaction will almost always take place through a behaviour.

GUI elements use Unity's UnityEditor library to draw script-specific interfaces on top of the scene view. These are purely a visual aid and are only available while a GameObject with the relevant script is selected in the editor (custom GUIs are not visible in exported scenes). Unity GUIs update constantly, both as variables are modified and as the game runs, allowing the user to tweak options precisely and see their effects in real time.
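A minimal sketch of this editor pattern is shown below, assuming a simple VisionCone behaviour with a viewDistance field; the real GAIL editors are more elaborate, and this stub is not the library's class.

```csharp
using UnityEditor;
using UnityEngine;

// Stand-in for the behaviour being edited; the real VisionCone has far more.
public class VisionCone : MonoBehaviour
{
    public float viewDistance = 5f;
}

// Draws a script-specific overlay in the scene view while a VisionCone object
// is selected. Editor classes like this must live in an Editor folder.
[CustomEditor(typeof(VisionCone))]
public class VisionConeEditor : Editor
{
    private void OnSceneGUI()
    {
        var cone = (VisionCone)target;
        Handles.color = Color.white;
        // The view-distance circle around the agent, similar to figure 6.
        Handles.DrawWireDisc(cone.transform.position, Vector3.up, cone.viewDistance);
    }
}
```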

Figure 6: Vision Cone GUI.

Figure 7: Map Grid GUI.

3.3 | Design Methodologies

When creating software, it is often recommended to use some sort of design methodology in order to manage the project efficiently. Design methodologies divide projects into smaller stages such as planning, implementation and testing, then specify things like how long each stage should last and how many times a stage should be revisited. There are a number of different design methodologies suitable for this project, each with advantages and disadvantages that should be considered.

3.3.1 | Waterfall

The waterfall model (or linear-sequential life cycle model) is the most straightforward method to consider, because it is very fixed and linear: each stage can only be started when the previous one is finished. However, this also means that once a stage has been completed, it cannot be modified later, since all subsequent stages depend on it. This is ideal for projects that benefit from a large initial planning phase that leaves no ambiguity later on. The downside is that, since the planning and design phases take up so much time, the actual implementation and testing of the project is left until very late in the cycle.

To follow this model, all AI behaviours in this library would have to be planned thoroughly, with consideration of how the behaviours would interact with each other and with the engine, before a prototype could be made.

Figure 8: The Waterfall Model (Powell-Morse, 2016).

3.3.2 | Incremental

Incremental development divides a project up into multiple features. Each feature then has its own life cycle consisting of design, development, testing and implementation phases. Breaking the project up this way allows each feature to be developed independently before being pieced together for the final release. This suits this project well, since the individual AI behaviours are largely independent anyway, making each one a prime example of an incremental feature.

The incremental model is great for allowing the flexibility of changing the scope of the project mid-development, since features can be easily added or removed from the plan. This is essential since focusing on implementing multiple small features can distract from the timeline of the overall project itself.

Figure 9: The Incremental model.

3.3.3 | Scrum - Agile

Scrum is another dynamic framework that could suit this project. It is similar to the incremental model in that it splits a large project into many smaller features in the initial planning stage. However, as opposed to potentially having multiple features worked on independently at the same time, the whole project team decides on a small set of features to work on together. This is called a sprint backlog, and features are chosen for it depending on their priority to the overall project. The features are then all planned and developed together in a timeframe known as the sprint. A sprint is completed when all items of the backlog are successfully implemented and considered potentially shippable (Scrum Alliance, 2018). This coincides very well with the project, since a prioritised product backlog has already been created, with sprints potentially made up of each priority level.

Scrum suffers from the same problem as the incremental model, where too much focus could be put on the small parts of the project, causing scope creep (Techopedia, 2018). However, as multiple features are grouped together in Scrum, having to drop a planned sprint due to time constraints would have a much bigger effect than dropping a single technique.

Figure 10: The Scrum Lifecycle (Boer, 2018).

3.3.4 | Chosen methodology

Since all models suggested have their advantages and disadvantages, it would be best for this project if a hybrid process was used, incorporating the best aspects of each model.

As some AI behaviours share common objects, the waterfall model's initial project-wide planning and design phase is essential. This ensures code doesn't need to be revisited and modified to work together near the end of development, which will be vital for behaviours such as the A* search and path follower that both need to work with the same path object. After this point, following the rest of the linear waterfall approach would be both inferior to and riskier than the two dynamic models, since there would be no guarantee of a working product until the very end.

Here, a Scrum-influenced product backlog benefits the project by assigning priorities to each feature of the library. Just as in the Scrum model, features will be selected for implementation in order of highest priority. Although the AI technique priority order (figure 4) is an ideal template from which to build three Scrum-style sprints, a more incremental approach to development is more suitable: once a feature is complete, it is potentially shippable in and of itself, naturally matching the incremental model.

Although in the incremental model each feature has its own testing stage before its dedicated cycle is complete, a second testing phase of the overall library should also be conducted with beta testers. This, along with the comprehensive maintenance the library will need after both the beta test and the main release, shows the project slipping back into the waterfall methodology. This is perfectly fine since, at this stage, the main disadvantage of the waterfall model (a singular, rigid planning phase) has already passed, making this problem a non-issue.

Figure 11: The Hybrid Model.

4 | Testing

4.1 | Alpha Testing

Following the specification of the chosen design methodology, each feature in this library was alpha tested immediately after development. This was done before the implementation stage to make sure that any bugs found were isolated to the behaviour currently being tested and were not caused by some other technique interacting with it.

The main goal of testing at this stage was to ensure the product was stable enough for other people to use in beta testing. All bugs found during this stage were fixed before the library was released for external testing.

4.2 | Beta Testing

To ensure this project was up to industry standards upon release, the library went through two stages of user testing. The first stage was an observed closed beta test to ensure no major bugs were present in the library before releasing a wider open beta.

4.2.1 | Closed Beta

The closed beta involved gathering a small group of volunteers proficient in Unity and assigning them a set of tasks (appendix 9.7) to complete. The tasks included objectives such as downloading and importing the library, as well as applying some of the AI techniques to example game characters, to test how easy GAIL was to use. The users were provided with exactly the same materials a real user would have access to upon release (appendix 9.5) and were left to carry out the tasks. Each user was interviewed after completing the tasks to discover what they found difficult, whether they encountered any bugs and what they'd recommend to improve the product. Bug reports gathered from this stage were used to tweak the code before the open beta.

4.2.2 | Open Beta

Once the bug reports from the closed beta were acted upon, the library was released to the public for an open beta. This was advertised both online through social media and locally through a BSc Computer Games Development course tutor at the University of South Wales. Targeting both university students and a broader range of users online provided fundamental feedback from full-time programmers as well as an insight into how difficult less-skilled users might find the library. Users were given an email address to which they could send feedback and bug reports.

4.3 | AI Testbench

A fully functional version of the A* path finding algorithm and path follower behaviours, along with their map grid and path dependencies, was released separately from this library to form an AI testbench developed by the University of South Wales. This testbench was built upon and used by third-year BSc (Hons) Computer Games Development students for a coursework in an Artificial Intelligence for Game Developers module. A large proportion of these students implemented these behaviours and interfaced with the map grid and path objects in their coursework, providing an excellent example of a real use-case scenario.

The scripts were accompanied by a GitHub post (appendix 9.8) explaining how to use them both, and feedback and bug reports were left in the comments below.

5 | Evaluation

5.1 | User Feedback

User feedback on the library was very positive. The aspect most praised in feedback reports was the encapsulation of AI behaviours behind classes, which allowed users to drag and drop them onto game characters without delving into any code. Because of this, all closed beta testers said the library was relatively easy to use and that they would rather implement it in their games than program their own behaviours. Users also said they liked the graphical representations available with the Map Grid object and Vision Cone behaviour.

A notable area of improvement that became visible through user feedback was that, although all code was documented through comments, there was a lack of documentation of private code in the Installation Instructions and Code Documentation file. This was intentionally left out to stop users modifying essential code, in line with Meyer's (1997) famous open-closed principle. This principle states that a module should be open for extension (i.e. public methods/variables through inheritance) but closed against modification (i.e. fundamental private methods/variables remain unchanged). A fair compromise would be to keep Meyer's open-closed principle in place in the code but adopt a more open-source design philosophy, giving users the knowledge of how to edit the essential code if they feel the need to.
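In code, that compromise between extension and modification might look like the following sketch: the fundamental search logic stays private, while a protected virtual hook keeps the class open to user extension. The class and method names are invented for illustration.

```csharp
using UnityEngine;

// Open-closed sketch: the core routine stays private (closed to modification)
// while a virtual hook leaves the class open to extension. Names are invented.
public class HidingSpotSearcher : MonoBehaviour
{
    public void SearchNearest()
    {
        Transform spot = FindNearestSpot();   // fundamental logic, not overridable
        if (spot != null) OnSpotSearched(spot);
    }

    private Transform FindNearestSpot()
    {
        // ... essential search code users should not need to modify ...
        return null;
    }

    // Extension point: subclasses may react without touching the core.
    protected virtual void OnSpotSearched(Transform spot) { }
}

// A user extends the behaviour (open) without editing its source (closed).
public class LearningSearcher : HidingSpotSearcher
{
    protected override void OnSpotSearched(Transform spot)
    {
        Debug.Log($"Searched {spot.name}; a learning weight could be updated here.");
    }
}
```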

As well as being asked for feedback on the current version of the library, users were asked which additional AI behaviours they would like to see in future iterations. The most requested behaviour was a general search mechanic. Possible implementations of this ranged from advanced solutions, such as mapping the environment and creating a plausible escape-route path to follow, to simple ones like wandering around. This behaviour is very high-level compared to the rest of the library, which focuses on more general low-level techniques, but could be considered in the future.

One user suggested the incorporation of a flocking technique, where agents move around as part of a group as opposed to moving independently, modelled on real-world animals such as sheep and birds (Bourg and Seemann, 2004). This would be a relatively quick and easy technique to implement since it builds off the already-included vision cone.

Multiple users also suggested broadening the stealth focus of this library to include more basic techniques such as wandering and fleeing. This would widen the scope of the library and massively increase its target audience. However, the large array of techniques that could potentially interact with each other could also make the codebase very difficult to maintain.

Other small suggestions included improvements such as making the MapGrid tile sizes show up in the GUI before runtime, and more prefabs to allow the user to simply instantiate the specific type of AI agent they want.

5.2 | Objectives Revisited

The design of this project centred on three fundamental objectives:

5.2.1 | Objective One: Produce an AI library that is up to industry standards

When creating a professional library, the code itself needs to be:

a) Well optimised

Most AI techniques included with the package are a product of multiple iterations, each providing either an increase in execution speed or a decrease in memory footprint. This was proven effective through thorough alpha testing on low-end PCs and laptops.

b) Bug tested

Apart from regular first-party testing, two waves of beta testing have been used to ensure that any common bugs users may come across are known. These bugs were recorded at https://github.com/JakeP121/GameAILibrary/issues?utf8=%E2%9C%93&q=label%3Abug before being recreated, fixed and updated on the remote copy of the library.

c) Documented

All source files within this project contain not only XML (Extensible Markup Language) method summaries but also line-by-line commentary. This commentary acts as pseudocode formatted in parallel with the actual code, so even a user who isn't familiar with C# can understand what the code is doing. The use of XML allows pop-ups to be shown to the user in supported editors, aiding them when interfacing with hidden code.
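The style described above might look like the following sketch. The method and its distance-falloff model are invented for illustration; the Listener and Sound names echo the library's classes, but these bodies are assumptions, and only the documentation format (an XML summary plus parallel line-by-line commentary) reflects the convention being described.

```csharp
using UnityEngine;

// Illustrative only: the method and its falloff model are invented to show
// the documentation style. Listener and Sound echo the library's class names,
// but these bodies are assumptions, not GAIL's code.
public class Sound { public Vector3 position; public float volume; }

public class Listener : MonoBehaviour
{
    public float hearingThreshold = 0.1f;

    /// <summary>
    /// Returns true if the given sound is loud enough to be heard from this
    /// listener's position. Supported editors show this summary as a pop-up.
    /// </summary>
    /// <param name="sound">The sound event to test.</param>
    public bool CanHear(Sound sound)
    {
        float distance = Vector3.Distance(transform.position, sound.position); // how far away the sound is
        float heardVolume = sound.volume / (1f + distance * distance);         // volume falls off with distance (assumed model)
        return heardVolume >= hearingThreshold;                                // audible only above this listener's threshold
    }
}
```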

As well as this, the library is bundled with a code documentation manual that contains more in-depth information about what each class does and their accessible methods and variables.

Figure 12: One of GAIL's methods, containing a summary and line-by-line commentary.

Figure 13: A pop-up users will experience when interfacing with GAIL code.

d) Expandable

As mentioned in section 5.1, this library was designed with Meyer's open-closed principle in mind to encourage users to expand the code to suit their needs. Extensive documentation was provided to aid the user in understanding the code. However, as evident in the user feedback, this could have been improved to include private variables and functions.

Since all of these aspects have been met, objective one has been achieved.

5.2.2 | Objective Two: Fill a gap in the AI library market

There would be no point in producing an AI library that mimics other libraries already widely available online. This was a heavy influence when deciding which techniques should be incorporated into the project, and it formed the basis of the priority system in figure 1. Priority two techniques were defined as those that would make the library stand out among alternatives and were therefore the target for achieving this objective.

Although the priority three techniques had to be dropped from the project due to time constraints, the priority two techniques were fully developed, tested and documented by the time of release. This means that objective two has been achieved.

5.2.3 | Objective Three: Gather feedback from users for future development

During the two waves of beta testing, feedback was obtained from all participants through observation, interviews and questionnaires. This feedback was used to tweak and improve the library before the final release as well as note new features that could be added in post-release updates. This signifies that objective three has been achieved.

5.4 | Planning Revisited

When initially planning this project, a Gantt chart was created (appendix 9.4) to provide a rough timescale of what should be under development at each point in time from October to April. This chart broke the project down into smaller tasks, but due to the unknown time it would take to program each feature of the library, all programming was grouped under a single development heading. All deadlines set in this chart up until March were either met on time or earlier than expected, leading to confidence that all planned behaviours and techniques would be fully functional and tested in time for the beta release. However, due to a cluster of university module deadlines throughout March, development on the library slowed down dramatically at that point.

The conscious decision to drop level three techniques from the initial release was made in order to focus on the usability and stability of level one and two behaviours.

Thankfully, a design methodology was set in place from the start to easily handle the addition and removal of planned behaviours, preventing any need to redesign the techniques already implemented.

6 | Conclusion

6.1 | Project Summary

Libraries are widely used in software engineering to save time and encapsulate complicated processes behind a simpler API (Application Programming Interface). This is especially common in game development due to the tight deadlines commonly forced on studios by their corporate publishers. Although large development studios may have the time and resources to create their own reusable libraries, smaller independent developers must frequently rely on whatever third-party AI assets are available. An investigation into these assets exposed a lack of fully-fledged libraries containing numerous AI behaviours, forcing users to download each behaviour they want from a different source. A gap in the market was found in the absence of dedicated stealth-genre AI, with the only dedicated technique available being a vision cone.

Over the course of six months, a library was designed and developed to fill this gap, providing techniques not yet available online along with useful basic techniques, all in one place. Behaviours were split into three priority levels depending on whether they were:

1. Basic behaviours available elsewhere.

2. Specialised behaviours specific to this library.

3. Advanced, niche developments of other behaviours.

The first two stages of this were fully implemented and documented by the time of release, totalling six different behaviours:

Patrolling.

Vision cones.

A* path finding.

Dynamic trail creation.

Sound awareness.

Hiding spot search.

Although the final stage of the library was not completed, the first two provide more than enough purpose for this library's existence and still allow it to compete with the resources currently available online. The incremental development methodology followed when creating the library allowed the third stage to be cut from the final release without a cascading effect throughout the rest of the project.

Along with the standard alpha testing mid-development, each of the behaviours implemented went through two extensive rounds of beta testing: the first a closed test in which four users were observed carrying out a list of tasks before being interviewed, and the second released online to a larger audience. Feedback gathered from these tests provided bug reports, possible improvements and ideas for future development of the library.

6.2 | Future Development

Since all bugs found within the library at beta release have been fixed, future development consists of implementing new features and improvements. One viable option would be to implement the stage three features that were dropped from the final build due to time constraints. Since these are advanced developments of behaviours that have already been implemented, the groundwork for them has already been laid.

Another option for future development would be to work off the feedback gathered from the beta tests and implement behaviours such as the flocking and level searching suggested by the beta testers. This would be more beneficial than working on the stage three techniques, since these are proven to be features that users want. However, since they would need to be planned and designed from scratch, the timescale for implementing them is unknown.

A suggestion from the beta testing that could also be explored is the possibility of broadening the library into a general-purpose AI library as opposed to centring on the stealth genre. This would increase the target audience of the library but also massively increase the workload needed to implement and maintain the large number of basic behaviours expected of it.

As well as broadening the library, more prefabs could provide users with specific solutions to common problems. This was the idea of one of the project's beta testers, who suggested creating prefabs that represent specific AI types. For example, one prefab could be a patrol AI that uses the PathFollower script together with the VisionCone, attached through a custom script. Users could then easily drag this prefab into a level and inject their logic into the script to create a common but highly customisable agent. This would be very quick to implement since the only code that needs to be produced is an implementation class that references the behaviours being utilised.
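Such an implementation class might look like the sketch below: a thin script that only references the behaviours it composes. It is hypothetical; the commented line stands in for whatever logic a user injects, and the method it hints at is not guaranteed to exist in GAIL.

```csharp
using UnityEngine;

// Hypothetical glue script for the suggested patrol prefab: it only references
// the behaviours it composes. The commented line stands in for user logic and
// does not correspond to any guaranteed GAIL method.
[RequireComponent(typeof(PathFollower), typeof(VisionCone))]
public class PatrolAgent : MonoBehaviour
{
    private PathFollower follower;
    private VisionCone vision;

    private void Awake()
    {
        follower = GetComponent<PathFollower>();
        vision = GetComponent<VisionCone>();
    }

    private void Update()
    {
        // Users inject their own logic here, e.g.:
        // if (vision.CanSeeTarget()) follower.enabled = false;
    }
}
```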

An improvement that should be implemented is the inclusion of private variables and functions in the code documentation file bundled with the library. This would be a relatively small change, since everything is already documented in the code and would just need to be copied into the documentation file itself.

7 | Legal, Social, Ethical and Professional Issues (LSEPI)

All code within this project was designed and written by myself, apart from the FOVCone_GUI.cs script, which was adapted from Lague's (2015) design in Field of View Visualisation (E01). Lague's code is licensed under the MIT License, permitting users to 'deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software', as long as the copyright notice and permission notice supplied with the code are included in the project (Open Source Initiative, no date). This license is compatible with the GNU General Public License (GPL) under which the rest of the library is licensed. Lague has been referenced accordingly, both within the code itself and in this document.

All rules stated in the ethics checklist (appendix 9.2) were followed when using participants to test this library and no personal records were kept.

8 | References

Anufriev, A. (2017) Field of View. Available at: https://www.assetstore.unity3d.com/en/#!/content/100143 (Accessed 22 October 2017).

Bibby, B. (2016) Unity 5.6 wraps Unity 5 cycle, whats next in 2017, Unity Blogs, 13 December. Available at: https://blogs.unity3d.com/2016/12/13/unity-5-6-wraps-unity-5-cycle-whats-next-in-2017/ (Accessed: 22 November 2017).

Bourg, D. M. and Seemann, G. (2004) AI for Game Developers. California: O'Reilly Media, Inc.

Dynamic Pixels (2017) Hello Neighbour (Edition Standard) PC [Computer Game]. tinyBuild.

Fernández, K., González, G. and Chang, C. (2012) 'A*mbush Family: A* Variations for Ambush Behaviour and Path Diversity Generation', Lecture Notes in Computer Science, 7660, pp. 314-325.

Frictional Games (2010) Amnesia: The Dark Descent (Edition Standard) PC [Computer Game]. Frictional Games.

Gallagher, J (2017) Artificial Intelligence as good as cancer doctors. Available at: http://www.bbc.co.uk/news/health-38717928 (Accessed 02 November 2017).

Geisser, S. (1993) Predictive Inference: An Introduction. London: Chapman & Hall.

Boer, G. (2018) What is Scrum? Available at: https://www.visualstudio.com/learn/what-is-scrum/ (Accessed: 17 April 2018).

IO Interactive (2012) Hitman: Absolution (Edition Standard) PlayStation 3 [Computer Game]. Square Enix.

Konami Computer Entertainment Japan (1998) Metal Gear Solid (Edition Standard) PlayStation [Computer Game]. Konami.

Konami Computer Entertainment Japan (2004) Metal Gear Solid 3: Snake Eater (Edition Standard) PlayStation 2 [Computer Game]. Konami Corporation.

Lague, S. (2015) Field of View Visualisation (E01). Available at: https://www.youtube.com/watch?v=rQG9aUWarwE (Accessed: 05 January 2018).

Martin, R.C. and Martin, M. (2006) Agile Principles, Patterns and Practices in C#. New Jersey: Prentice Hall.

Meyer, B. (1997) Object-Oriented Software Construction 2nd Edition. New Jersey: Prentice Hall.

Microsoft (2017) Visual Studio 2017 (Community 15.4.4) [Computer Programme]. Available at: https://www.visualstudio.com/thank-you-downloading-visual-studio/?sku=Community&rel=15# (Accessed: 23 November 2017)

Monolith Productions (2017) Middle-earth: Shadow of Mordor (Edition Standard) PC [Computer Game]. Warner Bros. Interactive Entertainment.

Muratori, C. (2012) Mapping the Island's Walkable Surfaces. Available at: http://the-witness.net/news/2012/12/mapping-the-islands-walkable-surfaces/ (Accessed 20 April 2018).

Namco (1980) Pac-Man (Edition Standard) Arcade [Computer Game]. Namco.

Open Source Initiative (no date) The MIT License. Available at: https://opensource.org/licenses/MIT (Accessed 19 April 2018).

Oxford (2017) Artificial Intelligence. Available at: https://en.oxforddictionaries.com/definition/artificial_intelligence (Accessed 02 November 2017).

PC Plus (2010) How your sat-nav works out the best route. Available at: http://www.techradar.com/news/car-tech/satnav/how-your-sat-nav-works-out-the-best-route-677682/2 (Accessed 02 November 2017).

Powell-Morse, A. (2016) Waterfall Model: What Is It and When Should You Use It? Available at: https://airbrake.io/blog/sdlc/waterfall-model (Accessed 17 April 2018).

Rockstar North (2013) Grand Theft Auto V (Edition Standard) Xbox 360 [Computer Game]. Rockstar Games.

Scrum Alliance (2018) Learn About Scrum. Available at: https://www.scrumalliance.org/learn-about-scrum (Accessed 15 April 2018).

Studio Leaves (2014) FOV Cone of Visibility And Patrolling for Stealth Game. Available at: https://www.assetstore.unity3d.com/en/#!/content/22169 (Accessed 22 October 2017).

Techopedia (2018) What is Scope Creep? Available at: https://www.techopedia.com/definition/24779/scope-creep (Accessed 15 April 2018).

Thekla Inc. (2016) The Witness (Edition Standard) PC [Computer game]. Thekla, Inc.

TNW Deals (2016) This engine is dominating the gaming industry right now. Available at: https://thenextweb.com/gaming/2016/03/24/engine-dominating-gaming-industry-right-now/ (Accessed 22 October 2017).

Unity (2015) Unity 5.x (Personal 5.6.4) [Computer Programme]. Available at: https://unity3d.com/get-unity/download/archive (Accessed 23 October 2017).

Unity (2017a) Submission Guidelines. Available at: https://unity3d.com/asset-store/sell-assets/submission-guidelines (Accessed 22 November 2017).

Unity (2017b) Build once, deploy anywhere. Available at: https://unity3d.com/unity/features/multiplatform (Accessed 22 October 2017).

Unity (2017c) Asset Store. Available at: https://www.assetstore.unity3d.com/en/ (Accessed 19 October 2017).

Unreal (2014) Unreal Engine (4.18) [Computer Programme]. Available at: https://www.unrealengine.com/download?dismiss=https%3A%2F%2Fwww.unrealengine.com%2Fen-US%2Fwhat-is-unreal-engine-4 (Accessed 23 November 2017).

Woodcock, S. (2000) Game AI: The State of the Industry. Available at: https://www.gamasutra.com/view/feature/131975/game_ai_the_state_of_the_industry.php?page=1 (Accessed 27 October 2017).

Wright, D.R. (2005) Finite State Machines, [lecture notes]. 14 July.

Zeng, W. and Church, R.L. (2009) Finding shortest paths on real road networks: the case for A*, OpenAire. Zenodo [Online]. Available at: https://zenodo.org/record/979689 (Accessed: 22 November 2017).

9 | Appendix

9.1 | Statement of Originality

STATEMENT OF ORIGINALITY

CS3D660 Individual Project

This is to certify that, except where specific reference is made, the work described within this project is the result of the investigation carried out by myself, and that neither this project, nor any part of it, has been submitted in candidature for any other award other than this being presently studied.

Any material taken from published texts or computerized sources have been fully referenced, and I fully realize the consequences of plagiarizing any of these sources.

Student Name (Printed): JAKE PASSMORE

Student Signature: J. Passmore

Registered Course of Study: Computer Games Development

Date of Signing: 13/04/18

9.2 | Ethics Checklist

1. Participants were not exposed to any risks greater than those encountered in their normal working life. Investigators have a responsibility to protect participants from physical and mental harm during the investigation. The risk of harm must be no greater than in ordinary life. Areas of potential risk that require ethical approval include, but are not limited to, investigations that occur outside usual laboratory areas, or that require participant mobility (e.g. walking, running, use of public transport), unusual or repetitive activity or movement, that use sensory deprivation (e.g. ear plugs or blindfolds), bright or flashing lights, loud or disorienting noises, smell, taste, vibration, or force feedback.

2. The experimental materials were paper-based, or comprised software running on standard hardware. Participants should not be exposed to any risks associated with the use of non-standard equipment: anything other than pen-and-paper, standard PCs, mobile phones and PDAs.

3. All participants explicitly stated that they agreed to take part, and that their data could be used in the project. If the results of the evaluation are likely to be used beyond the term of the project (for example, the software is to be deployed, or the data is to be published), then signed consent is necessary. A separate consent form should be signed by each participant. Otherwise, verbal consent is sufficient, and should be explicitly requested in the introductory script.

4. No incentives were offered to the participants. The payment of participants must not be used to induce them to risk harm beyond that which they risk without payment in their normal lifestyle.

5. No information about the evaluation or materials was intentionally withheld from the participants. Withholding information or misleading participants is unacceptable if participants are likely to object or show unease when debriefed.

6. No participant was under the age of 16. Parental consent is required for participants under the age of 16.

7. No participant has an impairment that may limit their understanding or communication. Additional consent is required for participants with impairments.

8. Neither I nor my supervisor is in a position of authority or influence over any of the participants. A position of authority or influence over any participant must not be allowed to pressurise participants to take part in, or remain in, any experiment.

9. All participants were informed that they could withdraw at any time. All participants have the right to withdraw at any time during the investigation. They should be told this in the introductory script.

10. All participants have been informed of my contact details. All participants must be able to contact the investigator after the investigation. They should be given the details of both student and module co-ordinator or supervisor as part of the debriefing.

11. The evaluation was discussed with all the participants at the end of the session, and all participants had the opportunity to ask questions. The student must provide the participants with sufficient information in the debriefing to enable them to understand the nature of the investigation.

12. All the data collected from the participants is stored in an anonymous form. All participant data (hard-copy and soft-copy) should be stored securely, and in anonymous form.

Student Name: Jake Passmore

Student ID: 14734231

Students Signature: J. Passmore

Date: 02/11/17


9.3 | Library UML Class Diagram

9.4 | Gantt Chart

9.5 | Release

The current release of this project is available at: https://github.com/JakeP121/GameAILibrary/releases.

This contains:

The library in the form of a Unity package file.

Installation instructions and code documentation.

An example Unity project showcasing the library.

The entire GitHub repository in the form of both zip and tar.gz files.

9.6 | Issue Tracking

Development of this project was tracked through GitHubs issue tracking. Available at: https://github.com/JakeP121/GameAILibrary/issues.

9.7 | Beta Test Tasks

GAIL Beta Test Tasks

Task One: Installation

1. Create a fresh Unity Project.

2. Use the installation instructions (page 4) to download and import the GAIL Unity Package into this project.

Task Two: Use of AI Techniques

Note: This task uses the example Unity project; you can close/delete the project created in task one.

Path Following

1. Create a new scene.

2. Using the Path and PathNode prefabs, create a path around the scene.

3. Place an AI prefab into the scene and attach the PathFollower script to it.

4. Set this PathFollower script to use the Path you created in step 2.

5. Create and assign a new PathFollower_Agent script to the AI. This script should:

a. Inherit from the Agent script.

b. Create a reference to the PathFollower script.

c. Get the direction vector from PathFollower and assign it to the base Agent's directionVector variable every frame (use the code documentation to find out how, and don't forget to call the base Agent's Update function). A sketch of one possible solution follows after this task.

6. (Optional) Experiment with PathFollower's public variables to see how they change the AI's pathing.
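For reference, a completed PathFollower_Agent might look like the following sketch. The Agent base class and its directionVector come from the task description above; the GetDirection() method name is a hypothetical stand-in for whatever the code documentation actually specifies, and the virtual Update is likewise an assumption.

```csharp
using UnityEngine;

// One possible completed PathFollower_Agent. The Agent base class and its
// directionVector come from the task description; GetDirection() is a
// hypothetical name standing in for whatever the code documentation specifies.
public class PathFollower_Agent : Agent                  // a. inherit from Agent
{
    private PathFollower pathFollower;                   // b. reference to PathFollower

    private void Awake()
    {
        pathFollower = GetComponent<PathFollower>();
    }

    protected override void Update()                     // assumes Agent.Update is virtual
    {
        directionVector = pathFollower.GetDirection();   // c. steer along the path every frame
        base.Update();                                   // don't forget the base Agent's Update
    }
}
```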

Dynamic Path Creation

1. Attach the DynamicPathCreation script to the AI you previously made.

2. Experiment with DynamicPathCreation's variables to see how they change the new path.

A* Path Finding

1. Open the A Star Terrain scene.

2. Place a Map Grid (Terrain) prefab in the centre of the Terrain object.

3. Pass the Terrain object into Map Grid (Terrain)'s Terrain variable slot and modify the X and Y values until the white square in Scene view encompasses the Terrain object. Set the Sea Level variable to the y value of the Water object.

4. Run the scene and modify Map Grid (Terrain)'s Height Threshold variable to your liking (red squares show unwalkable tiles). Remember this value and enter it again when you stop the scene.

5. Experiment with Map Grid (Terrain)'s Tile Size variable and experience the trade-offs between high accuracy and high performance to find a sweet spot.

6. Place two AI prefab objects into the scene, in a place where a path could be made between them. Name one Start and one End.

7. Attach an AStarSearch script to Start and give it Map Grid (Terrain) and End in its Map Grid and Target variable slots.

8. Run the scene, select Start and make sure a path is drawn between it and End.

9. (Optional) Carry out the Path Following tasks on Start to make it follow the path AStarSearch has created.

Vision Cone

1. Create a new scene.

2. Place two AI prefab objects into the scene. Call one Watcher and one Orbiter.

3. Apply the Orbiter_AI script to Orbiter, set its target to Watcher.

4. Apply the VisionCone script to Watcher and extend the View Distance variable until the Orbiter is in range of the white circle in scene view.

5. Create a new tag and apply it to Orbiter. Add this tag to Watcher's Target Tags list.

6. Run the scene, select Watcher and make sure a line is drawn between it and Orbiter.

7. (Optional) Place a 3D object with a rigid body in a place where it would obscure the line of sight between Watcher and Orbiter.

8. (Optional) Apply a Visibility script to Orbiter to slow down the speed at which the Watcher can notice it.

9. (Optional) Create a script for Watcher that obtains its VisionCone and moves Watcher towards Orbiter when it is visible.

Audio Sensing

1. Create a new scene.

2. Create a GameObject and script that creates a new Sound every couple of seconds.

3. Place an AI prefab object into the scene (within range of the Sound's volume) and apply the Listener script to it.

4. Create and apply a new script to the AI that obtains Listener, checks for sounds every frame and moves towards any sound that is heard.

Hiding Spots

1. Create a new scene.

2. Place a Hiding Spot prefab object into the scene.

3. Place an AI prefab object next to the Hiding Spot in the scene and apply the Hide script to it.

4. Increase/decrease Hide's Reach variable until the Hiding Spot is just in range.

5. Create and apply a new script to the AI that obtains Hide and uses it to hide inside the Hiding Spot after 2 seconds.

6. Place another AI prefab object on the other side of the Hiding Spot, apply the Hide script to it and adjust the Reach.

7. Create and apply another new script to the new AI that obtains Hide and, after 4 seconds, uses it to search the Hiding Spot and remove AI 1.

8. (Optional) Move AI 1 away from the Hiding Spot and use VisionCone to navigate to it before hiding.

9.8 | Artificial Intelligence for Game Developers A* Tutorial

9.9 | Meeting Minutes

Meeting 1 (10/10/17)

Make list of objectives to be completed.

Prepare for evaluation of objectives.

Find a problem with AI in games to try to solve.

Evaluate which API I'll use (Unity/C#, Unreal/C++, C++).

Background reading, create a bibliography.

Meeting 2 (17/10/17)

Finalise IDE decision.

Study Unity.

Narrow down scope, pick a genre.

Research AI used in target game genre.

Ideally find interviews with developers.

Failing that, analyse games from the genre and try to pinpoint the AI patterns.

Start introduction.

IDE decision.

AI techniques used in games.

Research to back up all points.

Meeting 3 (24/10/17)

Introduction feedback.

Expand, more researched based.

Include background research section of overall AI research.

Feedback on the scope of the library.

Similar projects

AI Libraries

Unity Asset store.

Create a Gantt chart.

Meeting 4 (31/10/17)

Gantt chart feedback - Realistic.

Introduction idea - Broad research into AI, get more specific into how they apply to my topic.

How am I going to get feedback?

Methods

Questionnaires

Interviews

Subjects

Classmates

Online - Social media

Start thinking about sections up to the 'LSEPI (feedback questionnaires)' heading in project handbook appendix III.

Attach additional planning into the appendix.

Meeting 5 (07/11/17)

What AI techniques from the ones researched can be implemented in my library?

Path finding

A*

Greedy best-first search

Line of sight

Hearing

Chasing

Swarming

How difficult will they be to implement?

Prioritise

Meeting 6 (14/11/17)

Finalise decision on included AI techniques.

Think about design methodologies

Agile

Rapid prototypes

Multiple iterations improving each time

If something new is learned at a late stage (e.g. development), it cannot be implemented until the next iteration.

Waterfall

Longer planning, everything must be known at the start

Sequential approach

Write about decision

Meeting 7 (21/11/17)

Feedback on design and planning.

What is needed for milestone 1

Introduction to show basic understanding.

Research.

Initial plan.

Meeting 8 (05/12/17)

Milestone 1 feedback.

Decided there was enough to start development.

Try to generalise my specific solutions so they'll be adaptable.

Adapting A* from AI coursework

Meeting 9 (08/01/18)

Post-Christmas progress report.

Good pace; switch to meetings every two weeks.

Emailed milestone 1 feedback sheet.

Meeting 10 (23/01/18)

Progress report.

What is needed for milestone two?

Skeleton build of library with basic techniques implemented, to show I am on track.

Short written section to talk about what I've done.

Meeting 11 (06/02/18)

Meeting postponed for time to mark milestone 2.

Meeting 12 (13/02/18)

Milestone 2 feedback.

Progress report.

Meeting 13 (27/02/18)

Lack of progress due to other modules' coursework.

Meeting 14 (13/03/18)

Clarification on word count: only needs to be around 7k with a good implementation.

How am I going to get feedback over Easter? Group Facebook page; email Mike about his year twos doing Unity.

Meeting 15 (20/03/18)

Group meeting to compare progress.

What is expected of an evaluation and conclusion?

Meeting 16 (17/04/18)

Final meeting before milestone 3.

Deliverables expected:

Source code file (not full Unity project).

Dissertation.

Presentation.

15 Minutes + 15 minutes of questions

Poster

Name.

Video (Highly recommended).

Second assessor.

[Displaced content of figure 4, the AI technique priority order: Path Following / Patrolling, A* Path Finding, Sound Awareness, Vision Cone, Dynamic Trail Creation, Hiding Spot Search, Variant Path Finding, Hiding Spot Search Learning, Multi-Point Vision Cone Tracking.]