July 23-25, 2008 – CGA Seattle 2008

Making a Game “Just Right” Through Testing and Play Balancing

James C. Smith, Co-founder / Producer
Reflexive Entertainment
About James C. Smith (in 60 seconds or less)

• Co-founded Reflexive Entertainment, 1997
• Producer (vision holder) & lead programmer
  – Ricochet Xtreme
  – Ricochet Lost Worlds
  – Big Kahuna Reef
  – Big Kahuna Words
  – Big Kahuna Reef 2
  – Ricochet Infinity
  – Build in Time
• Editor in Chief: CasualCharts.com
World Map (agenda)

• List types of testing, their goals, and methods
• Usability Testing
• Play Balancing
• Q&A / Resources
Types of Testing

• Focus Testing
• Usability Testing
• Play Balancing
• Bug Testing
• Compatibility Testing
Types of Testing

• Focus Testing, Usability Testing, Play Balancing, Bug Testing, Compatibility Testing
• Differences
  – Different goals
  – Done at different times
  – May need different types of testers, or different interaction with testers (in person vs. remote)
• You may end up combining them
  – You should still think of their goals independently
BETA Testing

• Ambiguous term
• Means many different things to different people
• I never use this term in formal discussions
• Informally, it often refers to any or all of the kinds of testing we will discuss today
Focus Testing

• Goal: See if people like the game mechanic, theme, and style
• When: As early as possible, before the game is finished or even much past started
• You can accomplish a lot with no game implementation by using mocked-up screenshots, storyboards, and paper
• You can accomplish more with a prototype, even if there is no tutorial and hardly any levels are finished
Usability Testing

• Goal: See if players understand all the features of the game
• When: After the tutorial and the early levels are finished
• How: Watch people play for 1 hour
• More on this later
Play Balancing

• Goal: Figure out which levels are too hard or too easy, and which items are too expensive or too powerful
• When: The game is “gameplay feature complete”
  – After all the levels and items are made
  – Every part of the play mechanic works
  – Meta-game structure and shell may still be incomplete: trophies, story screens, configuration menus, and maybe even tutorials
• How: Have off-site people play an instrumented build all the way through the whole game. Collect data, analyze, and adjust game settings. Repeat.
Bug Testing

• Goal: Find features of the game that don’t work as intended
• When: After the game is feature complete (everything is implemented, including meta structure and shell)
• How: Internal QA staff, a contract testing company, or a volunteer community of players
• Tools Reflexive uses:
  – Bug tracking: Bugzilla
  – Forums (phpBB, vBulletin, …)
  – Video recording software: CamStudio
  – Volunteers from the customer base (the same people who did the play balancing)
Compatibility Testing

• Goal: Make sure the game works with every kind of hardware and software imaginable
• When: Could start as early as when the engine is stable and 90% of content is in; may wait until the game is feature complete
• How: An in-house lab, a contract testing company, or feedback from beta testers
• What Reflexive does:
  – Reuse a mature framework, battle-tested for years
  – Rely on reports from play balancing testers and bug testers
Types of Testing – Review

• Focus Testing
• Usability Testing
• Play Balancing
• Bug Testing
• Compatibility Testing
Schedule
Naive Schedule
Schedule
Better Schedule
World Map (agenda)
Usability Testing

• Goal: See if players understand all the features of the game
• When: After the tutorial and the early levels are finished
  – Don’t wait for the game to be finished
• How:
  – Tester plays the game for about 45 minutes
  – Moderator watches and takes notes
  – Tester answers a survey
Usability Testing – The Tester (player)

• Requirements
  – Must be players who haven’t played this game before
  – Should be a “casual” player
  – Really needs to be done in person
• Can’t be
  – Remote customers
  – Development team members
• Candidates (from easiest to hardest, worst to best)
  – Employees not on the team
  – Family and friends
  – Random people off the street
  – Customers who happen to be local
Usability Testing – Stay Neutral

• Don’t tell the tester that you made the game
  – People tend to be polite to the creator
• Be careful not to ask leading questions
  – Wrong: Isn’t this level fun?
  – Wrong: How do you like this level I made?
  – Correct: Would you say this level is boring or fun?
• Tell the player that you are not testing them
  – If the player can’t figure out what to do, then the game designer failed, not the player
• The player is never wrong or stupid
  – It doesn’t matter that the answer is flashing in their face; if they don’t see it, then you need to change something
Usability Testing – The Test

• Watch the tester play the game and don’t help her in any way
• Encourage the tester to think out loud and even ask questions, with the understanding that they will not be answered
• Take notes
• Video recording is also recommended
  – Record the screen, and the player too if you can
Usability Testing – Exit Survey

• Exit survey questions
  – What did you like or dislike?
  – What part was too hard or too easy?
  – Do you know what feature X does?
  – Explain how to use feature Y.
  – What part was most confusing?
• Often the survey won’t reveal what the player really got stuck on; that is why you watch and take notes
• Other times the survey will reveal that your notes were wrong
  – Wik story
Lessons Learned About Game Design – Usability

• Tutorials are hard and need lots of testing
• What you think is perfectly clear… never really is
  – Test it on real players, adjust, and then test again
• Optional moves (like combos) are the hardest to teach
• Forcing a move doesn’t teach it
  – Players often read it, then do it, and still don’t get it (Big Kahuna Net)
• Adding more text is usually the wrong solution
• Visual effects can help show cause and effect
• Don’t add a feature you can’t teach
GDC Session by User Research Engineers from Microsoft
• Do-It-Yourself Usability: How to Use User Research to Improve your Game
• store.cmpgame.com/product.php?id=28&cat=51
World Map (agenda)
Play Balancing Overview

• Adjust the difficulty of every level and item in the game
• Recruit 50+ testers
• Give them a full version of the game with special instrumentation to collect a playlog
• Collect and analyze the playlogs
When to Start Play Balancing

• When the game is “gameplay feature complete”
• Needed:
  – Every gameplay feature done
  – All upgrades and items implemented
  – Every level complete
• Not needed:
  – Meta-game structure (stories, trophies)
  – Final art
  – Fixing all bugs, performance, or compatibility issues
Recruit Testers for Play Balancing

• Testers can be located anywhere in the world
• Only the first play-through is accurate; a replay by the same person is invalid
• Cannot be QA staff or a contracted testing company
• Should be a sampling of the target audience for your game
• Contact your customers (or players of other casual games)
  – Post in forums
  – Mention it in your e-mail newsletter
• Give them the full version of the game (not a 60-minute trial)
• The first build should go to a limited subset of your testers so the rest stay “fresh”
Creating a PlayLog File

• A special version of the game logs player actions
• Usually a simple text file with one line per level played
• Comma-separated values (CSV) are easy to write and easy to import into Excel
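The one-line-per-level CSV format can be sketched as follows. This is an illustrative assumption, not Reflexive’s actual schema: the field names, file name, and sample values are all made up.

```python
import csv

# Hypothetical per-level playlog fields; the real schema is whatever your game needs.
PLAYLOG_FIELDS = ["level", "status", "seconds", "score", "hints_used", "build"]

def append_playlog_row(path, record):
    """Append one level's results as a single CSV line (one line per level played)."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=PLAYLOG_FIELDS)
        if f.tell() == 0:  # brand-new file: emit the header row first
            writer.writeheader()
        writer.writerow(record)

# Called once at the end of each level:
append_playlog_row("PlayLog.csv", {
    "level": 7, "status": "failed", "seconds": 212,
    "score": 4350, "hints_used": 1, "build": 1043,
})
```

Because the file is plain CSV, it imports directly into Excel or any database for the analysis step later in the talk.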
Sample Playlog
Playlog Fields (player behavior on a level)

• Level number
• Status (completed, failed, aborted)
• Seconds played
• Score / revenue
• Optional goals achieved
• Upgrades purchased
• Items used
• Mouse clicks
• Invalid moves
• Hints used
Playlog Fields (universal)

• Player name
• Whether this is the first play
• Minutes running the game
• Build number
• Build number the player profile was created with
• Avg. FPS
• IP address
• Date & time
• Hardware info (RAM, CPU, video resolution)
Collect Playlog Files From Testers – E-Mail (Used for Big Kahuna Reef & Ricochet games)

• Ask players to e-mail the file as an attachment
  – Pro:
    • No implementation on your part
    • Players know what is being tracked
  – Con:
    • Most players won’t do it
    • Lots of files for you to manage
• Usually the testing coordinator imports each log file into one master Excel document or database
Collect Playlog Files From Testers – FTP (Used for Wik and Mosaic)

• The game uses a simple FTP batch file to upload the playlog to a server
  – Pro:
    • Simple implementation on your part
    • Players can find out what is being tracked
  – Con:
    • Lots of files for you to manage
• Usually the testing coordinator imports each log file into one master Excel document or database
Online Playlog Using HTTP (Used for Build in Time)

• After each level, send one record to the server using an HTTP POST
• Also save a copy in a local PlayLog.CSV file for curious testers
  – Pro:
    • Real-time data as the game is being played
    • All data in one database rather than a separate file from each user
    • Data from EVERY player
    • Can also handle disabling old builds
  – Con:
    • Requires more work to set up, but not much
Online Playlog Using HTTP – Details

• The game engine collects all playlog fields during level play
• At the end of the level, all fields are sent to a web server using an HTTP POST
  – Just like a form post in a browser
  – libcurl can help with this
• The server can be programmed using simple web frameworks and languages like PHP
  – A trivial PHP script can capture all the posted form fields and save them in a MySQL database
• Retrieving the data from MySQL can be made simpler with another trivial PHP program that outputs the database as CSV
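A minimal sketch of the client side of that POST, in Python rather than the C++/libcurl a shipping game would use; the endpoint URL and field names are placeholders for illustration.

```python
from urllib.parse import urlencode
from urllib.request import Request

def build_playlog_post(url, record):
    """Build the end-of-level HTTP POST. It is shaped exactly like a browser
    form submission, so a few lines of PHP reading $_POST can capture it."""
    body = urlencode(record).encode("ascii")
    return Request(url, data=body,
                   headers={"Content-Type": "application/x-www-form-urlencoded"})

# One record per level played (example.com is a placeholder, not a real endpoint):
req = build_playlog_post("http://example.com/playlog.php",
                         {"level": 7, "status": "completed", "seconds": 180})
```

Nothing is sent here; a real game would pass `req` to `urllib.request.urlopen`, or in C++ hand the same URL-encoded body to libcurl via `CURLOPT_POSTFIELDS`.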
Reflexive’s Playlog Tools

• Source code available at BuildInTime.com/playlog
  – Not intended to be a turnkey solution
  – It’s a huge head start when rolling your own
  – Requires a web host with PHP and MySQL
• Makes no assumptions about what fields are in your playlog
  – Automatically adds new columns to the SQL table
• Contents (about 300 lines of PHP code)
  – playlog.php collects posted fields and stores them in the DB
  – download.php dumps the DB as a CSV file for Excel
  – gateway.php handles beta authorization
  – dbconnect.php has the MySQL password
• Or roll your own in ASP.NET, Ruby, or whatever
  – Should be trivial for any experienced web developer
Analyzing the Playlog

• Excel, Access, or any spreadsheet or database
• Averages per level
• Averages per player
• A PivotTable (or crosstab) is your best friend
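As a sketch of what those per-level averages compute (a hand-rolled crosstab; the field names and sample data are illustrative, not from a real playlog):

```python
import csv
from collections import defaultdict
from io import StringIO

# Tiny in-memory playlog; real data would come from the collected CSV files.
SAMPLE = """level,status,seconds
1,completed,60
1,completed,80
2,failed,200
2,completed,240
"""

def summarize_by_level(rows):
    """Average play time and failure rate per level -- two numbers that
    quickly flag a level as too hard or too easy."""
    acc = defaultdict(lambda: {"n": 0, "seconds": 0, "failed": 0})
    for r in rows:
        a = acc[r["level"]]
        a["n"] += 1
        a["seconds"] += int(r["seconds"])
        a["failed"] += (r["status"] == "failed")
    return {lvl: {"avg_seconds": a["seconds"] / a["n"],
                  "fail_rate": a["failed"] / a["n"]}
            for lvl, a in acc.items()}

summary = summarize_by_level(csv.DictReader(StringIO(SAMPLE)))
```

A spreadsheet PivotTable does the same grouping with no code, which is why the slides call it your best friend.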
Make Adjustments and Test Again

• Use a new build number that is tracked in the playlog file
• Disable the old build if you have online validation
• Get “fresh” testers for the new build, but also let the original testers replay on it
• Don’t balance for experienced players; look at averages for first-time players
• This is where it is important to know which logs are from replays
Using External Testers

• Clearly label the game as beta and give it a kill date
  – All Reflexive beta builds “expire” 21 days after they were compiled
  – Each time you launch the game, it pops up a message box with the number of days until expiration
• Online validation is even better (makes sense when using an online playlog)
  – The beta game pings the playlog server before allowing the player to start a game
  – Verifies that the player’s firewall is configured to allow the playlog to work
  – Allows the server to stop the game from running if it is an out-of-date beta
• Wrapping the game in DRM may help
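The 21-day kill date boils down to a trivial check at launch. The dates below are placeholders; a real build would stamp its compile date in at build time.

```python
import datetime

EXPIRY_DAYS = 21  # per the slides: Reflexive beta builds expire 21 days after compile

def days_until_expiry(today, build_date, lifetime=EXPIRY_DAYS):
    """Remaining days of beta life; zero or negative means refuse to run
    and tell the tester to download a newer build."""
    return lifetime - (today - build_date).days

build_date = datetime.date(2008, 7, 1)  # placeholder compile date
remaining = days_until_expiry(datetime.date(2008, 7, 15), build_date)
```

At launch the game would show `remaining` in the pop-up message box and exit when it is not positive. Online validation is stricter, since the server refusing old build numbers cannot be dodged by changing the system clock.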
Lessons Learned About the Testing Process

• Automate playlog collection as much as possible
• Make all test builds expire
  – The final test build of BKR2 did not
• Online validation is even better than expiration
• The game should ask the player…
  – to enter their FORUM user name
  – if they have played before
• The log file should include the answers to the above, the build number, and the FIRST build number used
• Logging failed levels is not enough; be sure to log aborts too
• Reserve some testers for the 2nd or 3rd play balancing build
Play Balancing – Data
“Speed” Levels
Build in Time Play Log Analysis
Too Easy
Build in Time Play Log Analysis
Too Difficult
New Version
Old Version
Build in Time Play Log Analysis
World Map (agenda)
Resources

• James C. Smith – [email protected]
• Reflexive’s Playlog Tools: www.BuildInTime.com/playlog
• libcurl (HTTP library): sourceforge.net/projects/curl/
• Surveys: www.SurveyMonkey.com
• Forums: www.vBulletin.com, www.phpbb.com
• Bug tracking: www.Bugzilla.org
• Video recording: www.CamStudio.org
• Market research: www.CasualCharts.com
• GDC session: Do-It-Yourself Usability – store.cmpgame.com/product.php?id=28&cat=51