Neural Network Based Player Retention Prediction in Free to Play Games
Amr Koura
First Examiner: Prof. Dr. Christian Bauckhage
Second Examiner: Prof. Dr. Stefan Wrobel
Advisor: Rafet Sifa
1
Content
• Introduction
• Related Work
• Solution Approach
• Future Work
• Conclusion
2
https://www.statista.com/statistics/234649/percentage-of-us-population-that-play-mobile-games/ 3
https://wallpaperscraft.com/download/call_of_duty_modern_warfare_3_france_eiffel_tower_soldiers_battle_tanks_19772/1920x1200
4
Call of Duty MW3
1.5 million people on night #1
6.5 million copies sold on launch day
$775 million in the first 5 days
$1 billion in 16 days
https://en.wikipedia.org/wiki/Call_of_Duty:_Modern_Warfare_3
5
Freemium
Free-to-play game http://www.techlamb.com/2014/11/candy-crush-soda-saga-for-pc-download.html 6
Freemium
https://www.pinterest.com/pin/508554982898643116/ 7
“Who will leave the game in the future?”
8
Defining the Problem
9
The Big Picture
Game Player
Player Activities
Churn Prediction
10
The Big Picture
Game Player
Player Activities
Churn/Retention Prediction
13
Related Work
14
Related Work
Hadiji et al., 2014 studied churn prediction across five free-to-play games using several machine learning algorithms.
The decision tree achieved the best prediction performance.
15
Related Work
Sifa et al., 2015 studied player purchase decisions in mobile free-to-play games, over 100,000 players.
16
Related Work
Drachen et al., 2016 studied player retention in free-to-play mobile games.
17
Solution Approach
18
• Classical Machine Learning Algorithms
• Deep Learning Approach
19
Classical Machine Learning Algorithms
20
Data set
We have historical data with the following features.

Description                    Field Name
Day of event                   Observation day
Sum of daily move count        Moves Count
Sum of daily opponents         Active Opponents
Sum of daily world number      World Number
Sum of daily Skill 1 scores    Skill 1
Sum of daily Skill 2 scores    Skill 2
Sum of daily Skill 3 scores    Skill 3
Sum of daily reached goals     Goals reached
Sum of daily time in game      Time in game
Which day of week              Week Day
Player country code            Country code
21
Data set
The same 11 daily features are collected over 7 observation days:
11 * 7 = 77 features per player
22
Classical Machine Learning
● Logistic regression
● Decision tree
● SVM (polynomial kernel, RBF kernel)
● Random forest (100 trees)
(a minimal training sketch follows below)
23
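A minimal scikit-learn sketch of how these baselines could be trained on the 77-dimensional weekly feature vectors; the data arrays `X` and `y` and any hyper-parameters not named on the slide are assumptions, not the thesis code.

```python
# Sketch: train the classical baselines on [n_players, 77] feature vectors.
# Assumption: X holds the flattened weekly features, y the churn/retention labels.
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "Logistic regression": LogisticRegression(max_iter=1000),
    "Decision tree": DecisionTreeClassifier(),
    "SVM (polynomial kernel)": SVC(kernel="poly"),
    "SVM (RBF kernel)": SVC(kernel="rbf"),
    "Random forest (100 trees)": RandomForestClassifier(n_estimators=100),
}

for name, model in models.items():
    model.fit(X_train, y_train)              # fit on the 77 features per player
    pred = model.predict(X_test)
    print(name, "F1 =", f1_score(y_test, pred))
```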
Classical Machine Learning
24
Deep learning Approach
25
Motivation
26
“To achieve its impressive performance in tasks such as speech perception or object recognition, the brain extracts multiple levels of representation from the sensory input.” Hinton, 2007.
Motivation
27
Motivation
Distributed Representation of Visual Data
http://www.slideshare.net/WillStanton/deep-learning-with-text-v4
28
Motivation
Shallow neural network vs. deep neural network
http://stats.stackexchange.com/questions/182734/what-is-the-difference-between-a-neural-network-and-a-deep-neural-network
29
Feed Forward Neural Network
30
Deep Learning
[Feed-forward network: input layer (77 neurons) → hidden layer (200 neurons) → output layer (2 neurons), with weight matrices W between the layers]
31
Deep Learning
[The same network with ReLU activation in the hidden layer and softmax in the output layer]
Hint: learning rate = 0.3, batch size = 2700, training iterations = 1000
32
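A minimal Keras sketch of the feed-forward architecture above (77 → 200 ReLU → 2 softmax, learning rate 0.3, batch size 2700). The slides do not name the framework or optimizer, so TensorFlow/Keras, plain SGD, and the reading of "training iterations" as passes over mini-batches are assumptions.

```python
# Sketch of the 77 -> 200 (ReLU) -> 2 (softmax) network described on the slide.
# Assumptions: Keras/TensorFlow, plain SGD, one-hot labels in y_train.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(77,)),              # 77 features per player
    tf.keras.layers.Dense(200, activation="relu"),   # hidden layer, 200 neurons
    tf.keras.layers.Dense(2, activation="softmax"),  # churn vs. retention
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.3),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# "Training iterations = 1000" is read here as 1000 passes over mini-batches
# of 2700 players; adjust if the thesis meant something else.
model.fit(x_train, y_train, batch_size=2700, epochs=1000, verbose=0)
```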
Deep Learning
33
Deep Learning: Supervised pre-training
34
Recurrent Neural Network
35
Motivation
Recurrent Neural Network
https://leonardoaraujosantos.gitbooks.io/artificial-inteligence/content/recurrent_neural_networks.html
36
Recurrent Neural Network Pre-processing:
Create 3D arrays [#players, #days, #features].
[Diagram: Player 1 Data [7 x 11], Player 2 Data [7 x 11], …, Player N Data [7 x 11], each paired with a 0/1 label]
37
Recurrent Neural Network Pre-processing:
In case of missing data, we pad the player records with zeros.
[Diagram: a player's day-by-feature matrix, with days that have no records filled with zeros]
38
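A minimal NumPy sketch of this preprocessing step: each player's daily records are written into a fixed [7, 11] slot of a 3D array, and days without data simply stay zero. The dictionary layout of the raw records is an assumption for illustration.

```python
# Sketch: build the [#players, #days, #features] tensor with zero padding.
# Assumption: records[player_id] maps a day index (0..6) to an 11-dim feature
# vector for the days the player was actually observed.
import numpy as np

n_days, n_features = 7, 11
player_ids = sorted(records.keys())

X = np.zeros((len(player_ids), n_days, n_features), dtype=np.float32)
for i, pid in enumerate(player_ids):
    for day, features in records[pid].items():
        X[i, day, :] = features        # missing days are left as zeros

print(X.shape)   # (#players, 7, 11)
```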
Unfolding
Recurrent Neural Network Unfolding
http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/
39
Unfolding through time
[Diagram: the recurrence unrolled over the steps t-3, t-2, t-1, t]
42
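Written out, the unrolled recurrence in the figure above corresponds to the standard simple-RNN update (the notation here is assumed, not taken from the slides):

```latex
h_t = \tanh\left(W_{xh}\, x_t + W_{hh}\, h_{t-1} + b_h\right), \qquad
\hat{y}_t = \operatorname{softmax}\left(W_{hy}\, h_t + b_y\right)
```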
RNN Modes
RNN Modes
http://www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/
43
LSTM
LSTM addresses one of the main RNN training problems: vanishing and exploding gradients.
Gated LSTM Cell
https://en.wikipedia.org/wiki/Long_short-term_memory
44
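For reference, the gated cell in the figure computes the following in the standard formulation (notation assumed, not taken from the slides):

```latex
\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) && \text{forget gate}\\
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) && \text{input gate}\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) && \text{output gate}\\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) && \text{candidate cell state}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t\\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
```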
LSTM Network
LSTM Network
https://deeplearning4j.org/lstm
45
Bidirectional LSTM
Bidirectional LSTM Network
http://www.cs.cmu.edu/afs/cs/user/zhouyu/www/ASRU.pdf
46
Dynamic LSTM
Dynamic LSTM Network
https://arxiv.org/pdf/1406.1078.pdf
47
Dynamic Bidirectional LSTM
Accuracy: 0.899078983403
Precision: 0.827267953923
Recall: 0.884941525098
F1-score: 0.854869382267
G-mean: 0.632014710972
48
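A minimal Keras sketch of a bidirectional LSTM classifier over the padded [7, 11] player sequences, with a Masking layer standing in for the "dynamic" handling of variable-length series. This illustrates the idea only; the layer sizes, optimizer, and training settings are assumptions, not the thesis implementation.

```python
# Sketch of a bidirectional LSTM churn classifier over zero-padded sequences.
# Assumptions: X from the preprocessing step, one-hot labels in y_onehot,
# 64 LSTM units, Adam optimizer; only the [7 x 11] input shape is from the slides.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(7, 11)),
    tf.keras.layers.Masking(mask_value=0.0),                  # skip zero-padded days
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),  # forward + backward pass
    tf.keras.layers.Dense(2, activation="softmax"),           # churn vs. retention
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(X, y_onehot, epochs=50, batch_size=256, validation_split=0.2)
```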
Evaluation Metric
49
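The scores on the previous slide could be computed along the following lines. Note that the G-mean here is the geometric mean of sensitivity and specificity, which is one common definition and may differ from the exact definition used in the thesis; `y_true` and `y_pred` are assumed 0/1 label arrays.

```python
# Sketch: compute the evaluation metrics from test-set predictions.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, confusion_matrix)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1-score :", f1_score(y_true, y_pred))
print("G-mean   :", np.sqrt(sensitivity * specificity))  # one common G-mean definition
```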
Future Work
50
Predict player retention for the next month rather than the next week.
Future Work
51
Use a sequence-to-sequence learning model rather than sequence-to-label.
Future Work
52
Use unsupervised pre-training with a stacked autoencoder before training the deep learning model.
Future Work
53
Solve the variable-length time series embedding using an autoencoder.
Future Work
Gianniotis et al., 2016 Model-Coupled Autoencoder for Time Series Visualisation 54
Conclusion
• Churn prediction is a very important task for game development companies.
• This is the first time the problem has been solved using memory-based computation.
• RNNs are a very powerful tool for learning sequences.
• The dynamic bidirectional LSTM handles variable-length time series and gives the best learning results.
55
Thank you
56
Appendix
57
Backpropagation
58
Back-propagation through time (BPTT)
59
BPTT
BPTT
http://www.wildml.com/2015/10/recurrent-neural-networks-tutorial-part-3-backpropagation-through-time-and-vanishing-gradients/
60
Gated LSTM Cell
61
Bidirectional LSTM
62
Thank you
63