PROBABILISTIC GRAPHICAL MODELS & LDA
Yilun Wang
Chu Kochen Honors College, Zhejiang University
OUTLINE
WHAT DOES A PROBABILISTIC MODEL DO?
- What are the mechanisms underlying gene expression data? Colon cancer research.
- How to predict prices of stocks and bonds from historical data? Hedge fund dynamics.
- Given a list of movies that a particular user likes, what other movies would she like? Netflix Prize.
- How to identify aspects of a patient's health that are indicative of disease? Heart disease classification.
- Which documents from a collection are relevant to a search query? Google research.
HOW
Steps:
1. Formulate questions about the data. (Clarify what we want to do, what quantities to solve for, and which parameters are involved.)
2. Design an appropriate joint distribution. (Build the model: determine the data's structure, the latent variables, and the conjugate priors, i.e., specify the graphical model.)
3. Cast our questions as computations on the joint. (Decompose the target probabilities, via integration and conditional independence, into tractable pieces.)
4. Develop efficient algorithms to perform or approximate the computations on the joint. (For example, Gibbs sampling or variational inference.)
PROBABILITY REVIEW
R1. Joint Distributions
R2. Marginal Probabilities
R3. Conditional Probabilities (R1+R2): Conditional = Joint / Marginal
R4. Independence
PROBABILITY REVIEW
Bayes' rule
\[ P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}, \qquad \text{posterior} = \frac{\text{likelihood} \times \text{prior}}{\text{evidence}} \]
where the evidence marginalizes out A:
\[ P(B) = \int_A P(B \mid A)\,P(A)\,dA \]
"Bayesian Inference with Tears"
Probability Estimation (R2+R3):
\[ P(A \mid B) \propto P(B \mid A)\,P(A) \]
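To make the pieces concrete, here is a toy numeric check of Bayes' rule (the numbers are illustrative, not from the slides); for a discrete A the integral over A becomes a sum.

```python
# Toy numeric check of Bayes' rule (illustrative numbers, not from the slides).
p_A = {True: 0.3, False: 0.7}            # prior P(A)
p_B_given_A = {True: 0.8, False: 0.1}    # likelihood P(B | A)

# Evidence: P(B) = sum over A of P(B|A) P(A)  (the integral, discretized).
p_B = sum(p_B_given_A[a] * p_A[a] for a in p_A)

# Posterior: P(A|B) = P(B|A) P(A) / P(B).
posterior = {a: p_B_given_A[a] * p_A[a] / p_B for a in p_A}
print(posterior)  # {True: ~0.774, False: ~0.226}
```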
GRAPHICAL MODELS
A family of probability distributions defined in terms of a directed graph (DGM / DAG / Bayesian network), an undirected graph (Markov network), or a mixed (chain) graph.
GRAPHICAL MODELS
A more economical representation of the joint: a graphical model depicts the relationships among random variables. Nodes represent random variables, and (missing) edges encode conditional independence assumptions, so the graph provides a compact representation of the joint distribution.
Advantages of GMs:
- allow us to articulate structural assumptions about collections of random variables.
- provide general algorithms to compute conditionals, marginals, expectations, independencies, etc.
- provide control over the complexity of these operations.
- decouple the factorization of the joint from its particular functional form.
CONDITIONAL INDEPENDENCE
Independence: \( X \perp Y \iff P(X, Y) = P(X)\,P(Y) \)
Conditional independence: \( X \perp Y \mid Z \iff P(X, Y \mid Z) = P(X \mid Z)\,P(Y \mid Z) \)
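A quick numeric sanity check of the conditional-independence identity (the distributions below are arbitrary illustrative choices, not from the slides): build a joint in which X and Y are independent given Z by construction, then verify the defining factorization.

```python
import numpy as np

# Build P(X, Y, Z) = P(Z) P(X|Z) P(Y|Z), so X ⊥ Y | Z by construction.
p_z = np.array([0.4, 0.6])
p_x_given_z = np.array([[0.2, 0.8],    # row z=0
                        [0.7, 0.3]])   # row z=1
p_y_given_z = np.array([[0.5, 0.5],
                        [0.1, 0.9]])
joint = np.einsum('z,zx,zy->xyz', p_z, p_x_given_z, p_y_given_z)

cond = joint / joint.sum(axis=(0, 1), keepdims=True)   # P(X, Y | Z)
px = cond.sum(axis=1, keepdims=True)                   # P(X | Z)
py = cond.sum(axis=0, keepdims=True)                   # P(Y | Z)
print(np.allclose(cond, px * py))                      # True: X ⊥ Y | Z holds
```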
CONDITIONAL INDEPENDENCE
Take the graphical model of LDA as an example:
CONDITIONAL INDEPENDENCE
Sometimes we want to evaluate whether a given conditional independence holds in the model.
PROBABILISTIC GRAPHIC MODEL
Graphical models are the study of probabilistic models. Just because a diagram has nodes and edges doesn't mean it's a graphical model. These, for example, are not graphical models:
Xiaojin Zhu, Tutorial on Graphical Models at KDD-2012, http://pages.cs.wisc.edu/~jerryzhu/
DIRECTED GRAPHICAL MODELS
Binary variables
EXAMPLE: ALARM
Compute P(B, ~E, A, J, ~M).
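A minimal sketch of this computation, assuming the textbook CPT values from Russell & Norvig (the slide's network figure is not reproduced here): the DAG lets us write the joint as a product of local conditional probabilities.

```python
# Burglary-alarm network; CPT values are the Russell & Norvig textbook ones,
# assumed here since the slide's figure is unavailable.
P_B = 0.001                       # P(Burglary)
P_E = 0.002                       # P(Earthquake)
P_A = {(True, True): 0.95, (True, False): 0.94,
       (False, True): 0.29, (False, False): 0.001}   # P(Alarm | B, E)
P_J_given_A = {True: 0.90, False: 0.05}              # P(JohnCalls | A)
P_M_given_A = {True: 0.70, False: 0.01}              # P(MaryCalls | A)

# Factorization from the DAG:
# P(B, ~E, A, J, ~M) = P(B) P(~E) P(A | B, ~E) P(J | A) P(~M | A)
p = P_B * (1 - P_E) * P_A[(True, False)] * P_J_given_A[True] * (1 - P_M_given_A[True])
print(p)  # ~2.53e-4
```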
EXAMPLE: NAÏVE BAYES
Used extensively in natural language processing.
Plate representation on the right.
EXAMPLE: PROBABILISTIC LSI
Eric Xing, Topic Models, Latent Space Models, Sparse Coding , and All That
EXAMPLE: LATENT DIRICHLET ALLOCATION
Generative model:
- Models each word in a document as a sample from a mixture model.
- Each word is generated from a single topic; different words in the document may be generated from different topics.
- A topic is characterized by a distribution over words.
- Each document is represented as a list of admixing proportions for the mixture components (i.e., a topic vector).
- The topic vectors and the per-topic word rates each follow a Dirichlet prior; essentially a Bayesian pLSI.
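The generative process above can be sketched directly in code. The sizes and hyperparameter values below are illustrative, using the usual α, β, θ, φ notation.

```python
import numpy as np

rng = np.random.default_rng(0)
K, V, D, N = 3, 20, 5, 30       # topics, vocabulary size, documents, words/doc
alpha, beta = 0.5, 0.1          # Dirichlet hyperparameters (illustrative)

phi = rng.dirichlet(beta * np.ones(V), size=K)     # K topic-word distributions
docs = []
for d in range(D):
    theta = rng.dirichlet(alpha * np.ones(K))      # per-document topic proportions
    z = rng.choice(K, size=N, p=theta)             # one topic per word
    words = [rng.choice(V, p=phi[k]) for k in z]   # word from that topic
    docs.append(words)
```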
CONDITIONAL INDEPENDENCE
D-SEPARATION CASE 1: TAIL-TO-TAIL
A ← C → B: the path is blocked when C is observed, so A ⊥ B | C.
D-SEPARATION CASE 2: HEAD-TO-TAIL
A → C → B: again the path is blocked when C is observed, so A ⊥ B | C.
D-SEPARATION CASE 3: HEAD-TO-HEAD
A → C ← B: the opposite. A ⊥ B marginally, but observing C (or any of its descendants) unblocks the path and makes A and B dependent ("explaining away").
D-SEPARATION
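The three cases combine into a general reachability test. Below is a hedged sketch of the standard "Bayes ball" algorithm (helper names are mine, not from the slides): an active trail passes through unobserved chain and fork nodes, and through colliders only when the collider or one of its descendants is observed.

```python
from collections import deque

def d_separated(children, x, y, observed):
    """Return True if x and y are d-separated given `observed` in a DAG.
    `children` maps each node to a list of its children; assumes x, y not observed."""
    parents = {n: [] for n in children}
    for n, cs in children.items():
        for c in cs:
            parents.setdefault(c, []).append(n)

    # Nodes with an observed descendant (strict ancestors of observed nodes).
    anc = set()
    stack = list(observed)
    while stack:
        for p in parents.get(stack.pop(), []):
            if p not in anc:
                anc.add(p)
                stack.append(p)

    # BFS over (node, arrival direction); 'up' means we arrived from a child.
    visited, queue = set(), deque([(x, 'up')])
    while queue:
        node, direction = queue.popleft()
        if (node, direction) in visited:
            continue
        visited.add((node, direction))
        if node == y:
            return False                      # found an active trail
        if direction == 'up' and node not in observed:
            # Tail-to-tail and head-to-tail: pass only if node is unobserved.
            for p in parents.get(node, []):
                queue.append((p, 'up'))
            for c in children.get(node, []):
                queue.append((c, 'down'))
        elif direction == 'down':
            if node not in observed:          # head-to-tail continues downward
                for c in children.get(node, []):
                    queue.append((c, 'down'))
            if node in observed or node in anc:
                # Head-to-head: bounce back when node or a descendant is observed.
                for p in parents.get(node, []):
                    queue.append((p, 'up'))
    return True

# Usage on the alarm network: B and E are independent, but not given A.
g = {'B': ['A'], 'E': ['A'], 'A': ['J', 'M'], 'J': [], 'M': []}
print(d_separated(g, 'B', 'E', set()))    # True
print(d_separated(g, 'B', 'E', {'A'}))    # False (explaining away)
```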
UNDIRECTED GRAPHICAL MODELS
The joint factorizes over cliques: \( p(x) = \frac{1}{Z}\prod_C \psi_C(x_C) \), where Z is the partition function.
FACTOR GRAPH
A factor graph makes the factorization explicit with separate variable and factor nodes: \( p(x) = \frac{1}{Z}\prod_s f_s(x_s) \).
WHERE DOES A COMPLICATED MODEL SUCH AS LDA COME FROM?
THE ORIGIN OF LDA
Dice Model: is the dice model a generative model?
Unigram Model
(Plate diagram: words x_i, in plates of size N and D.)
Language Model
(Plate diagram: words w drawn from a distribution φ, in plates of size N and D; figure labels: Probability, Vocabulary, Corpus, Topic, Dice Model.)
THE EVOLUTION PROCESS
E1: Add a conjugate prior. Why a conjugate prior? Because the posterior then stays in the same family as the prior, so Bayesian updating has a closed form (see the sketch after this slide).
E2: Sampling with repeated choice of dice
(Plate diagrams: the dice model's words x_i, in plates of size N and D, gain a Dirichlet prior α, giving the Bayesian (completed) dice model; likewise, the language model's φ and w gain the prior α.)
Bayesian (completed) Dice Model
Language Model
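A minimal sketch of what conjugacy buys us, in the dice setting (the numbers are illustrative): with a Dirichlet prior over face probabilities and multinomial counts, the posterior is again a Dirichlet, obtained by simply adding the counts to the prior.

```python
import numpy as np

alpha = np.ones(6)                      # symmetric Dirichlet prior over 6 faces
counts = np.array([3, 0, 1, 5, 2, 1])   # observed rolls (illustrative data)

# Conjugacy: Dirichlet prior + multinomial likelihood -> Dirichlet posterior.
posterior_alpha = alpha + counts
posterior_mean = posterior_alpha / posterior_alpha.sum()
print(posterior_mean)                   # smoothed face probabilities
```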
THE EVOLUTION PROCESS
E3: Turn DM-E2 into a Bayesian mixture model
Mixture of unigrams
(Plate diagram: each document d draws a single topic indicator z_d from mixing proportions Π with prior α; its N words w_di are then drawn from the chosen topic's word distribution ψ_{z_d}, one of K word distributions with prior β.)
THE EVOLUTION PROCESS
Mixture of unigrams
(Figures: the corpus partitioned among Topic 1, Topic 2, and Topic 3, each document assigned to a single topic; and a refined plate diagram in which the mixing proportions Π move inside the document plate, making the topic proportions document-specific.)
THE EVOLUTION PROCESS
Finally, we reach pLSA/LDA.
(Figure: each document is now an admixture of Topic 1, Topic 2, and Topic 3, rather than belonging to a single topic.)
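Having reached LDA, step 4 of the opening recipe (inference) can be sketched as well. Below is a minimal collapsed Gibbs sampler; the structure and hyperparameters are an illustrative sketch, not the talk's own code.

```python
import numpy as np

def lda_gibbs(docs, K, V, alpha=0.5, beta=0.1, iters=200, seed=0):
    """Sketch of collapsed Gibbs sampling for LDA.
    docs: list of lists of word ids in range(V)."""
    rng = np.random.default_rng(seed)
    D = len(docs)
    ndk = np.zeros((D, K))            # document-topic counts
    nkv = np.zeros((K, V))            # topic-word counts
    nk = np.zeros(K)                  # total words per topic
    z = [rng.integers(K, size=len(doc)) for doc in docs]
    for d, doc in enumerate(docs):    # initialize counts from random assignments
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d, k] += 1; nkv[k, w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]           # remove the current assignment
                ndk[d, k] -= 1; nkv[k, w] -= 1; nk[k] -= 1
                # p(z=k | rest) ∝ (ndk + alpha) (nkv + beta) / (nk + V beta)
                p = (ndk[d] + alpha) * (nkv[:, w] + beta) / (nk + V * beta)
                k = rng.choice(K, p=p / p.sum())
                z[d][i] = k           # resample and restore counts
                ndk[d, k] += 1; nkv[k, w] += 1; nk[k] += 1
    return ndk, nkv
```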
LDA VARIATIONS
REVISITING K-MEANS: NEW ALGORITHMS VIA BAYESIAN NONPARAMETRICS
Bayesian nonparametrics can be used for modeling infinite mixtures, and hierarchical Bayesian models can be utilized for sharing clusters across multiple data sets.
Revisiting the k-means clustering algorithm from a Bayesian nonparametric viewpoint
RECALL
Mixture of Gaussians
RECALL
Hjort, N., Holmes, C., Mueller, P., and Walker, S. Bayesian Nonparametrics: Principles and Practice. Cambridge University Press, Cambridge, UK, 2010.
Dirichlet process mixture: infinite mixture
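One standard construction of the DP mixture weights is stick-breaking; here is a short truncated sketch (α and the truncation level are illustrative choices).

```python
import numpy as np

# Stick-breaking (GEM) weights for a DP mixture, truncated at T components.
rng = np.random.default_rng(0)
alpha, T = 1.0, 50
v = rng.beta(1, alpha, size=T)
pi = v * np.concatenate(([1.0], np.cumprod(1 - v)[:-1]))  # pi_k = v_k * prod_{j<k}(1 - v_j)
print(pi[:5], pi.sum())   # weights decay; the sum approaches 1 as T grows
```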
DP-MEANS
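The paper's DP-means algorithm falls out of a small-variance limit of the DP mixture. Here is a hedged sketch of the resulting k-means-like loop (parameter names mine): a point whose squared distance to every centroid exceeds the penalty λ opens a new cluster, so the number of clusters is inferred rather than fixed.

```python
import numpy as np

def dp_means(X, lam, max_iter=100):
    """Sketch of DP-means: k-means where a point farther than `lam` (squared
    distance) from every centroid spawns a new cluster."""
    centroids = [X.mean(axis=0)]                 # start with one global cluster
    assignments = np.zeros(len(X), dtype=int)
    for _ in range(max_iter):
        changed = False
        for i, x in enumerate(X):
            d2 = np.array([np.sum((x - c) ** 2) for c in centroids])
            if d2.min() > lam:
                centroids.append(x.copy())       # open a new cluster here
                k = len(centroids) - 1
            else:
                k = int(d2.argmin())
            if assignments[i] != k:
                assignments[i], changed = k, True
        for k in range(len(centroids)):          # recompute non-empty means
            members = X[assignments == k]
            if len(members):
                centroids[k] = members.mean(axis=0)
        if not changed:
            break
    return np.array(centroids), assignments
```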
THE CONTEXTUAL FOCUSED TOPIC MODEL (CFTM)
cFTM infers a sparse (“focused”) set of topics for each document, while also leveraging contextual information about the author(s) and document venue.
hierarchical beta process
Xu Chen, Mingyuan Zhou, Lawrence Carin, Duke University, The Contextual Focused Topic Model
(Figure: graphical models of LDA and of cFTM+HBP, side by side.)
PROS
(1) It automatically infers the number of topics by combining properties from the Dirichlet process and hierarchical beta process, allowing an unbounded number of topics for the entire corpus, while inferring a focused (sparse) set of topics for each individual document.
(2) The cFTM nonparametrically clusters the authors and venues, thereby increasing statistical strength while also inferring useful relational information.
(3) Instead of pre-specifying the importance of author/venue information (as was done in [6]), the cFTM automatically infers the document-dependent, probabilistic importance of the author/venue information on word assignment.
Data: DBLP+NSF
TM-LDA: EFFICIENT ONLINE MODELING OF LATENT TOPIC TRANSITIONS IN SOCIAL MEDIA
Much of the textual content on the web, and especially in social media, is temporally sequenced and comes in short fragments: microblog posts on sites such as Twitter and Weibo, status updates on social networking sites such as Facebook and LinkedIn, and comments on content-sharing sites such as YouTube.
Yu Wang, Eugene Agichtein, Michele Benzi, Emory University, TM-LDA: Efficient Online Modeling of Latent Topic Transitions in Social Media
TM-LDA efficiently mines text streams, such as a sequence of posts from the same author, by modeling the topic transitions that naturally arise in these data.
TM-LDA learns the transition parameters among topics by minimizing the prediction error on topic distribution in subsequent postings. After training, TM-LDA is thus able to accurately predict the expected topic distribution in future posts.
Space of topic distributions: given the topic distribution vector x of a historical document, the estimated topic distribution of a new document is \( \hat{y} = f(x) \).
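A hedged sketch of this idea under simplifying assumptions (function names and the projection step are mine, not the paper's exact procedure): learn a linear topic-transition map T by least squares on consecutive posts, then predict the next post's topic distribution as a normalized x·T.

```python
import numpy as np

def fit_transition_matrix(X, Y):
    # Minimize ||X T - Y||_F^2: X holds topic vectors of posts at time t,
    # Y those of the same authors' next posts.
    T, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return T

def predict_next(x, T):
    y = np.clip(x @ T, 0, None)   # crude projection back onto the simplex
    return y / y.sum()

# Usage with stand-in topic vectors (illustrative only):
rng = np.random.default_rng(0)
X = rng.dirichlet(np.ones(5), size=100)   # current posts' topic distributions
Y = rng.dirichlet(np.ones(5), size=100)   # following posts' topic distributions
T = fit_transition_matrix(X, Y)
print(predict_next(X[0], T))
```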
EXPERIMENT
COMSOC: ADAPTIVE TRANSFER OF USER BEHAVIORS OVER COMPOSITE SOCIAL NETWORK
Accurate prediction of user behaviors is important for many social media applications, including social marketing, personalization and recommendation, etc.
Goals:
1. Alleviate the data sparsity problem.
2. Enhance the predictive performance of user modeling.
Erheng Zhong, Wei Fan, Junwei Wang, Lei Xiao, and Yong Li, HKUST, IBM Research Center, Tencent, ComSoc: Adaptive Transfer of User Behaviors over Composite Social Network
ComSoc
Thank you!