Implementation of variational Bayesian estimation of the topic model in Python. The book ["Topic Model" (Machine Learning Professional Series, Iwata)](https://www.amazon.co.jp/dp/4061529048) was used as the textbook.
Structure of this article
The explanation of the topic model itself is omitted here, since it is covered in [Maximum likelihood estimation of the topic model in python](http://qiita.com/ta-ka/items/18248abf0135cca02b93).
First, the necessary formulas are derived.
The beta function can be expressed as follows.
\begin{align}
\int_0^1 \phi^{\alpha - 1} (1 - \phi)^{\beta - 1} d\phi &= \left[ \cfrac{\phi^{\alpha}}{\alpha} (1 - \phi)^{\beta - 1} \right]_0^1 + \int_0^1 \cfrac{\phi^{\alpha}}{\alpha} (\beta - 1) (1 - \phi)^{\beta - 2} d\phi \\
&= \cfrac{\beta - 1}{\alpha} \int_0^1 \phi^{\alpha} (1 - \phi)^{\beta - 2} d\phi \\
&= \cfrac{\beta - 1}{\alpha} \cdots \cfrac{1}{\alpha + \beta - 2} \int_0^1 \phi^{\alpha + \beta - 2} d\phi \\
&= \cfrac{(\beta - 1) \cdots 1}{\alpha \cdots (\alpha + \beta - 1)} \\
&= \cfrac{(\beta - 1)!}{\cfrac{(\alpha + \beta - 1)!}{(\alpha - 1)!}} \\
&= \cfrac{(\alpha - 1)!(\beta - 1)!}{(\alpha + \beta - 1)!} \\
&= \cfrac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha + \beta)} \equiv B(\alpha, \beta)
\end{align}
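As a quick numerical sanity check of this identity (with arbitrary test values, not part of the original derivation; `scipy.special.beta`, `scipy.special.gamma`, and `scipy.integrate.quad` are the standard functions):

```python
from scipy.special import beta as beta_fn, gamma as gamma_fn
from scipy.integrate import quad

a, b = 3.0, 5.0  # arbitrary test values for alpha and beta
integral, _ = quad(lambda p: p ** (a - 1) * (1 - p) ** (b - 1), 0, 1)
print(integral, gamma_fn(a) * gamma_fn(b) / gamma_fn(a + b), beta_fn(a, b))  # all three agree
```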
Using the beta function above, the following equation also holds.
\begin{align}
\int_0^{1 - q} x^{\alpha - 1} (1 - q - x)^{\beta - 1} dx
&= \int_0^1 \bigl( (1 - q) y \bigr)^{\alpha - 1} \bigl( (1 - q) (1 - y) \bigr)^{\beta - 1} (1 - q) dy \ \ \ \ \because x = (1 - q)y \\
&= (1 - q)^{\alpha + \beta - 1} \int_0^1 y^{\alpha - 1} (1 - y)^{\beta - 1} dy \\
&= (1 - q)^{\alpha + \beta - 1} B(\alpha, \beta) \\
\end{align}
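The exponent $\alpha + \beta - 1$ can be confirmed the same way (again a sketch with arbitrary test values):

```python
from scipy.special import beta as beta_fn
from scipy.integrate import quad

a, b, q = 2.0, 4.0, 0.3  # arbitrary test values
integral, _ = quad(lambda x: x ** (a - 1) * (1 - q - x) ** (b - 1), 0, 1 - q)
print(integral, (1 - q) ** (a + b - 1) * beta_fn(a, b))  # both agree
```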
The Dirichlet distribution is expressed as follows.
\begin{align}
{\rm Dirichlet}(\boldsymbol \phi \mid \boldsymbol \beta)
= \cfrac{\displaystyle \prod_{v = 1}^V \phi_v^{\beta_v - 1}}{\displaystyle \int \prod \limits_{v = 1}^V \phi_v^{\beta_v - 1} d \boldsymbol \phi} \\
\end{align}
Expand the normalization term.
\begin{align}
\int \prod_{v = 1}^V \phi_v^{\beta_v - 1} d\boldsymbol \phi
&= \int_0^1 \phi_1^{\beta_1 - 1} \int_0^{1 - \phi_1} \phi_2^{\beta_2 - 1} \cdots \int_0^{1 - \sum\limits_{v = 1}^{V - 2} \phi_v} \phi_{V - 1}^{\beta_{V - 1} - 1} \left( 1 - \sum_{v = 1}^{V - 1}\phi_v \right)^{\beta_V - 1} d\phi_{V - 1} \cdots d\phi_2 d\phi_1 \\
&= \int_0^1 \phi_1^{\beta_1 - 1} \int_0^{1 - \phi_1} \phi_2^{\beta_2 - 1} \cdots \int_0^{1 - \sum\limits_{v = 1}^{V - 3} \phi_v} \phi_{V - 2}^{\beta_{V - 2} - 1} \left( 1 - \sum_{v = 1}^{V - 2}\phi_v \right)^{\beta_{V - 1} + \beta_V - 1} B(\beta_{V - 1}, \beta_V) d\phi_{V - 2} \cdots d\phi_2 d\phi_1 \\
&= \int_0^1 \phi_1^{\beta_1 - 1} \int_0^{1 - \phi_1} \phi_2^{\beta_2 - 1} \cdots \int_0^{1 - \sum\limits_{v = 1}^{V - 4} \phi_v} \phi_{V - 3}^{\beta_{V - 3} - 1} \left( 1 - \sum_{v = 1}^{V - 3}\phi_v \right)^{\beta_{V - 2} + \beta_{V - 1} + \beta_V - 1} B(\beta_{V - 2}, \beta_{V - 1} + \beta_V) B(\beta_{V - 1}, \beta_V) d\phi_{V - 3} \cdots d\phi_2 d\phi_1 \\
&= B \left( \beta_1, \sum_{v = 2}^V \beta_v \right) \cdots B(\beta_{V - 2}, \beta_{V - 1} + \beta_V) B(\beta_{V - 1}, \beta_V) \\
&= \cfrac{\Gamma(\beta_1) \Gamma \left( \sum\limits_{v = 2}^V \beta_v \right)}{\Gamma \left( \sum\limits_{v = 1}^V \beta_v \right)} \cdots \cfrac{\Gamma(\beta_{V - 2}) \Gamma(\beta_{V - 1} + \beta_V)}{\Gamma(\beta_{V - 2} + \beta_{V - 1} + \beta_V)} \cfrac{\Gamma(\beta_{V - 1}) \Gamma(\beta_V)}{\Gamma(\beta_{V - 1} + \beta_V)} \\
&= \cfrac{\prod\limits_{v = 1}^V \Gamma(\beta_v)}{\Gamma \left( \sum\limits_{v = 1}^V \beta_v \right)} \\
\end{align}
Substitute the normalization term back in.
\begin{align}
{\rm Dirichlet}(\boldsymbol \phi \mid \boldsymbol \beta)
= \cfrac{\displaystyle \Gamma \left( \sum_{v = 1}^V \beta_v \right)}{\displaystyle \prod_{v = 1}^V \Gamma(\beta_v)} \prod_{v = 1}^V \phi_v^{\beta_v - 1} \\
\end{align}
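The normalization term can be cross-checked against scipy's implementation of the Dirichlet density (a quick check with arbitrary values, not from the original article; `scipy.stats.dirichlet` is the standard API):

```python
import numpy as np
from scipy.special import gamma as gamma_fn
from scipy.stats import dirichlet

beta = np.array([2.0, 3.0, 4.0])   # arbitrary test parameters
phi = np.array([0.2, 0.3, 0.5])    # a point on the probability simplex
manual = gamma_fn(beta.sum()) / gamma_fn(beta).prod() * np.prod(phi ** (beta - 1))
print(manual, dirichlet.pdf(phi, beta))  # both agree
```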
The expectation of the logarithm of the $v$-th element $\phi_v$ of a random variable $\boldsymbol \phi$ following the Dirichlet distribution can be expressed as follows.
\begin{align}
\int p(\boldsymbol \phi \mid \boldsymbol \beta) \log \phi_v d \boldsymbol \phi
&= \int \cfrac{\Gamma(\hat \beta)}{\prod\limits_{v' = 1}^V \Gamma(\beta_{v'})} \prod_{v' = 1}^V \phi_{v'}^{\beta_{v'} - 1} \log \phi_v d \boldsymbol \phi \\
&= \cfrac{\Gamma(\hat \beta)}{\prod\limits_{v' = 1}^V \Gamma(\beta_{v'})} \int \cfrac{\partial \prod\limits_{v' = 1}^V \phi_{v'}^{\beta_{v'} - 1}}{\partial \beta_v} d \boldsymbol \phi \\
&= \cfrac{\Gamma(\hat \beta)}{\prod\limits_{v' = 1}^V \Gamma(\beta_{v'})} \cfrac{\partial}{\partial \beta_v} \int \prod_{v' = 1}^V \phi_{v'}^{\beta_{v'} - 1} d \boldsymbol \phi \\
&= \cfrac{\Gamma(\hat \beta)}{\prod\limits_{v' = 1}^V \Gamma(\beta_{v'})} \cfrac{\partial}{\partial \beta_v} \cfrac{\prod\limits_{v' = 1}^V \Gamma(\beta_{v'})}{\Gamma(\hat \beta)} \\
&= \cfrac{\partial}{\partial \beta_v} \log \cfrac{\prod\limits_{v' = 1}^V \Gamma(\beta_{v'})}{\Gamma(\hat \beta)} \\
&= \cfrac{\partial}{\partial \beta_v} \left( \log \prod_{v' = 1}^V \Gamma(\beta_{v'}) - \log \Gamma(\hat \beta) \right) \\
&= \cfrac{\partial \log \Gamma(\beta_v)}{\partial \beta_v} - \cfrac{\partial \log \Gamma(\hat \beta)}{\partial \hat \beta} = \Psi(\beta_v) - \Psi(\hat \beta) \\
\end{align}
Here, $\hat \beta$ is the sum of the parameters and $\Psi(x)$ is the digamma function.
\begin{align}
& \hat \beta = \sum_{v = 1}^V \beta_v \\
& \Psi(x) = \cfrac{d}{dx} \log \Gamma(x) \\
\end{align}
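This identity is easy to confirm by Monte Carlo (a quick check with arbitrary test parameters, not part of the original derivation):

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(0)
beta = np.array([2.0, 3.0, 4.0])              # arbitrary test parameters
samples = rng.dirichlet(beta, size=100_000)   # draws of phi
print(np.log(samples).mean(axis=0))           # Monte Carlo estimate of E[log phi_v]
print(digamma(beta) - digamma(beta.sum()))    # Psi(beta_v) - Psi(beta_hat)
```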
Variational Bayesian estimation estimates the posterior distributions of the unknown variables. The topic model has three kinds of unknown variables.
| Unknown variable | Notation |
|---|---|
| Topic set | $\boldsymbol Z = \{ z_{dn} \}$ |
| Topic distribution set | $\boldsymbol \Theta = \{ \boldsymbol \theta_d \}$ |
| Word distribution set | $\boldsymbol \Phi = \{ \boldsymbol \phi_k \}$ |
$D$ is the number of documents, $K$ the number of topics, $V$ the vocabulary size, and $N_d$ the length of document $d$. $z_{dn}$ is the topic of the $n$-th word of document $d$, $\theta_{dk}$ is the probability that topic $k$ is assigned in document $d$, and $\phi_{kv}$ is the probability that topic $k$ generates vocabulary word $v$. The topic distribution set $\boldsymbol \Theta$ is generated from a Dirichlet distribution with parameter $\alpha$, and the word distribution set $\boldsymbol \Phi$ from a Dirichlet distribution with parameter $\beta$.
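In summary, the generative process can be written compactly as follows (a restatement of the description above, with categorical distributions over topics and words).

\begin{align}
\boldsymbol \theta_d &\sim {\rm Dirichlet}(\alpha) & \boldsymbol \phi_k &\sim {\rm Dirichlet}(\beta) \\
z_{dn} &\sim {\rm Categorical}(\boldsymbol \theta_d) & w_{dn} &\sim {\rm Categorical}(\boldsymbol \phi_{z_{dn}}) \\
\end{align}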
Use Jensen's inequality to obtain the variational lower bound $F$ of $\log p(\boldsymbol W \mid \alpha, \beta)$.
\begin{align}
\log p(\boldsymbol W \mid \alpha, \beta)
&= \log \int \int \sum_{\boldsymbol Z} p(\boldsymbol W, \boldsymbol Z, \boldsymbol \Theta, \boldsymbol \Phi \mid \alpha, \beta) d\boldsymbol \Theta d \boldsymbol \Phi \\
&= \log \int \int \sum_{\boldsymbol Z} q(\boldsymbol Z, \boldsymbol \Theta, \boldsymbol \Phi) \cfrac{p(\boldsymbol W, \boldsymbol Z, \boldsymbol \Theta, \boldsymbol \Phi \mid \alpha, \beta)}{q(\boldsymbol Z, \boldsymbol \Theta, \boldsymbol \Phi)} d\boldsymbol \Theta d \boldsymbol \Phi \\
&\geq \int \int \sum_{\boldsymbol Z} q(\boldsymbol Z, \boldsymbol \Theta, \boldsymbol \Phi) \log \cfrac{p(\boldsymbol W, \boldsymbol Z, \boldsymbol \Theta, \boldsymbol \Phi \mid \alpha, \beta)}{q(\boldsymbol Z, \boldsymbol \Theta, \boldsymbol \Phi)} d\boldsymbol \Theta d \boldsymbol \Phi \\
&= \int \int \sum_{\boldsymbol Z} q(\boldsymbol Z) q(\boldsymbol \Theta, \boldsymbol \Phi) \log \cfrac{p(\boldsymbol W, \boldsymbol Z, \boldsymbol \Theta, \boldsymbol \Phi \mid \alpha, \beta)}{q(\boldsymbol Z) q(\boldsymbol \Theta, \boldsymbol \Phi)} d\boldsymbol \Theta d \boldsymbol \Phi \equiv F \\
\end{align}
The last step assumed the factorization $q(\boldsymbol Z, \boldsymbol \Theta, \boldsymbol \Phi) = q(\boldsymbol Z) q(\boldsymbol \Theta, \boldsymbol \Phi)$. Define $F(q(\boldsymbol Z))$ as follows, with a Lagrange multiplier for the normalization constraint, and take its functional derivative with respect to $q(\boldsymbol Z)$.
\begin{align}
F(q(\boldsymbol Z))
&\equiv \int \int \sum_{\boldsymbol Z} q(\boldsymbol Z) q(\boldsymbol \Theta, \boldsymbol \Phi) \log \cfrac{p(\boldsymbol W, \boldsymbol Z, \boldsymbol \Theta, \boldsymbol \Phi \mid \alpha, \beta)}{q(\boldsymbol Z) q(\boldsymbol \Theta, \boldsymbol \Phi)} d \boldsymbol \Theta d \boldsymbol \Phi + \lambda \left( \sum_{\boldsymbol Z} q(\boldsymbol Z) - 1 \right) \\
\cfrac{\partial F(q(\boldsymbol Z))}{\partial q(\boldsymbol Z)}
&= \int \int q(\boldsymbol \Theta, \boldsymbol \Phi) \bigl( \log p(\boldsymbol W, \boldsymbol Z, \boldsymbol \Theta, \boldsymbol \Phi \mid \alpha, \beta) - \log q(\boldsymbol Z) - \log q(\boldsymbol \Theta, \boldsymbol \Phi) - 1 \bigr) d\boldsymbol \Theta d \boldsymbol \Phi + \lambda \\
&= \mathbb{E}_{q(\boldsymbol \Theta, \boldsymbol \Phi)} \bigl[ \log p(\boldsymbol W, \boldsymbol Z, \boldsymbol \Theta, \boldsymbol \Phi \mid \alpha, \beta) \bigr] - \log q(\boldsymbol Z) + \lambda - C \\
\end{align}
Solve $\displaystyle \cfrac{\partial F(q(\boldsymbol Z))}{\partial q(\boldsymbol Z)} = 0$.
\begin{align}
q(\boldsymbol Z)
&\propto \exp \Bigl( \mathbb{E}_{q(\boldsymbol \Theta, \boldsymbol \Phi)} \bigl[ \log p(\boldsymbol W, \boldsymbol Z, \boldsymbol \Theta, \boldsymbol \Phi \mid \alpha, \beta) \bigr] \Bigr) \\
&= \exp \Bigl( \mathbb{E}_{q(\boldsymbol \Theta, \boldsymbol \Phi)} \bigl[ \log p(\boldsymbol \Theta \mid \alpha) + \log p(\boldsymbol \Phi \mid \beta) + \log p(\boldsymbol Z \mid \boldsymbol \Theta) + \log p(\boldsymbol W \mid \boldsymbol Z, \boldsymbol \Phi) \bigr] \Bigr) \\
&\propto \exp \Bigl( \mathbb{E}_{q(\boldsymbol \Theta, \boldsymbol \Phi)} \bigl[ \log p(\boldsymbol Z \mid \boldsymbol \Theta) + \log p(\boldsymbol W \mid \boldsymbol Z, \boldsymbol \Phi) \bigr] \Bigr) \\
&= \exp \Bigl( \mathbb{E}_{q(\boldsymbol \Theta)} \bigl[ \log p(\boldsymbol Z \mid \boldsymbol \Theta) \bigr] + \mathbb{E}_{q(\boldsymbol \Phi)} \bigl[ \log p(\boldsymbol W \mid \boldsymbol Z, \boldsymbol \Phi) \bigr] \Bigr) \\
&= \exp \Bigl( \mathbb{E}_{q(\boldsymbol \Theta)} \bigl[ \log \prod_{d = 1}^D \prod_{n = 1}^{N_d} \theta_{dz_{dn}} \bigr] + \mathbb{E}_{q(\boldsymbol \Phi)} \bigl[ \log \prod_{d = 1}^{D} \prod_{n = 1}^{N_d} \phi_{z_{dn}w_{dn}} \bigr] \Bigr) \\
&= \exp \Bigl( \mathbb{E}_{q(\boldsymbol \Theta)} \bigl[ \sum_{d = 1}^D \sum_{n = 1}^{N_d} \log \theta_{dz_{dn}} \bigr] + \mathbb{E}_{q(\boldsymbol \Phi)} \bigl[ \sum_{d = 1}^D \sum_{n = 1}^{N_d} \log \phi_{z_{dn}w_{dn}} \bigr] \Bigr) \\
&= \prod_{d = 1}^D \prod_{n = 1}^{N_d} \exp \Bigl( \mathbb{E}_{q(\boldsymbol \Theta)} \bigl[ \log \theta_{dz_{dn}} \bigr] + \mathbb{E}_{q(\boldsymbol \Phi)} \bigl[ \log \phi_{z_{dn}w_{dn}} \bigr] \Bigr) \\
&= \prod_{d = 1}^D \prod_{n = 1}^{N_d} \exp \Bigl( \Psi \bigl( \alpha_{dz_{dn}} \bigr) - \Psi \bigl( \sum_{k = 1}^K \alpha_{dk} \bigr) + \Psi \bigl( \beta_{z_{dn}w_{dn}} \bigr) - \Psi \bigl( \sum_{v = 1}^V \beta_{z_{dn}v} \bigr) \Bigr) \\
\end{align}
From $\displaystyle q(\boldsymbol Z) = \prod_{d = 1}^D \prod_{n = 1}^{N_d} q_{dnz_{dn}}$, $q_{dnk}$ becomes the following. Here $\alpha_{dk}$ and $\beta_{kv}$ are the parameters of the variational Dirichlet distributions, defined later.
\begin{align}
q_{dnk}
&\propto \exp \Bigl( \Psi \bigl( \alpha_{dk} \bigr) - \Psi \bigl( \sum_{k' = 1}^K \alpha_{dk'} \bigr) + \Psi \bigl( \beta_{kw_{dn}} \bigr) - \Psi \bigl( \sum_{v = 1}^V \beta_{kv} \bigr) \Bigr) \\
\end{align}
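In code, this responsibility can be computed with numpy and `scipy.special.digamma`; a minimal sketch in per-word form (the toy program below instead aggregates by word type; the helper name `responsibilities` is hypothetical):

```python
import numpy as np
from scipy.special import digamma

def responsibilities(alpha_d, beta, w_d):
    """q_dnk for one document d: alpha_d has shape (K,), beta has shape (K, V),
    and w_d holds the word ids of the document."""
    dig_alpha = digamma(alpha_d) - digamma(alpha_d.sum())                # E[log theta_dk]
    dig_beta = digamma(beta) - digamma(beta.sum(axis=1, keepdims=True))  # E[log phi_kv]
    q = np.exp(dig_alpha[None, :] + dig_beta[:, w_d].T)                  # (N_d, K), unnormalized
    return q / q.sum(axis=1, keepdims=True)                              # normalize over topics
```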
Similarly, define $F(q(\boldsymbol \Theta, \boldsymbol \Phi))$ as follows and take its functional derivative with respect to $q(\boldsymbol \Theta, \boldsymbol \Phi)$.
\begin{align}
F(q(\boldsymbol \Theta, \boldsymbol \Phi))
&\equiv \int \int \sum_{\boldsymbol Z} q(\boldsymbol Z) q(\boldsymbol \Theta, \boldsymbol \Phi) \log \cfrac{p(\boldsymbol W, \boldsymbol Z, \boldsymbol \Theta, \boldsymbol \Phi \mid \alpha, \beta)}{q(\boldsymbol Z) q(\boldsymbol \Theta, \boldsymbol \Phi)} d \boldsymbol \Theta d \boldsymbol \Phi + \lambda \left( \int \int q(\boldsymbol \Theta, \boldsymbol \Phi) d \boldsymbol \Theta d \boldsymbol \Phi - 1 \right) \\
\cfrac{\partial F(q(\boldsymbol \Theta, \boldsymbol \Phi))}{\partial q(\boldsymbol \Theta, \boldsymbol \Phi)}
&= \sum_{\boldsymbol Z} q(\boldsymbol Z) \bigl( \log p(\boldsymbol W, \boldsymbol Z, \boldsymbol \Theta, \boldsymbol \Phi \mid \alpha, \beta) - \log q(\boldsymbol Z) - \log q(\boldsymbol \Theta, \boldsymbol \Phi) - 1 \bigr) + \lambda \\
&= \mathbb{E}_{q(\boldsymbol Z)} \bigl[ \log p(\boldsymbol W, \boldsymbol Z, \boldsymbol \Theta, \boldsymbol \Phi \mid \alpha, \beta) \bigr] - \log q(\boldsymbol \Theta, \boldsymbol \Phi) + \lambda - C \\
\end{align}
Solve $\displaystyle \cfrac{\partial F(q(\boldsymbol \Theta, \boldsymbol \Phi))}{\partial q(\boldsymbol \Theta, \boldsymbol \Phi)} = 0$.
\begin{align}
q(\boldsymbol \Theta, \boldsymbol \Phi)
&\propto \exp \Bigl( \mathbb{E}_{q(\boldsymbol Z)} \bigl[ \log p(\boldsymbol W, \boldsymbol Z, \boldsymbol \Theta, \boldsymbol \Phi \mid \alpha, \beta) \bigr] \Bigr) \\
&= \exp \Bigl( \mathbb{E}_{q(\boldsymbol Z)} \bigl[ \log p(\boldsymbol \Theta \mid \alpha) + \log p(\boldsymbol \Phi \mid \beta) + \log p(\boldsymbol Z \mid \boldsymbol \Theta) + \log p(\boldsymbol W \mid \boldsymbol Z, \boldsymbol \Phi) \bigr] \Bigr) \\
&= \exp \Bigl( \mathbb{E}_{q(\boldsymbol Z)} \bigl[ \log p(\boldsymbol Z \mid \boldsymbol \Theta) \bigr] + \log p(\boldsymbol \Theta \mid \alpha) \Bigr) \times \exp \Bigl( \mathbb{E}_{q(\boldsymbol Z)} \bigl[ \log p(\boldsymbol W \mid \boldsymbol Z, \boldsymbol \Phi) \bigr] + \log p(\boldsymbol \Phi \mid \beta) \Bigr) \\
\end{align}
From the above equation, $q(\boldsymbol \Theta, \boldsymbol \Phi)$ decomposes as $q(\boldsymbol \Theta, \boldsymbol \Phi) = q(\boldsymbol \Theta) q(\boldsymbol \Phi)$, and each factor can be expressed as follows.
\begin{align}
& q(\boldsymbol \Theta) \propto \exp \Bigl( \mathbb{E}_{q(\boldsymbol Z)} \bigl[ \log p(\boldsymbol Z \mid \boldsymbol \Theta) \bigr] + \log p(\boldsymbol \Theta \mid \alpha) \Bigr) \\
& q(\boldsymbol \Phi) \propto \exp \Bigl( \mathbb{E}_{q(\boldsymbol Z)} \bigl[ \log p(\boldsymbol W \mid \boldsymbol Z, \boldsymbol \Phi) \bigr] + \log p(\boldsymbol \Phi \mid \beta) \Bigr) \\
\end{align}
$q(\boldsymbol \Theta)$ can be calculated as follows.
\begin{align}
q(\boldsymbol \Theta) &\propto \exp \Bigl( \mathbb{E}_{q(\boldsymbol Z)} \bigl[ \log p(\boldsymbol Z \mid \boldsymbol \Theta) \bigr] + \log p(\boldsymbol \Theta \mid \alpha) \Bigr) \\
&= \exp \Bigl( \mathbb{E}_{q(\boldsymbol Z)} \bigl[ \sum_{d = 1}^D \sum_{n = 1}^{N_d} \log \theta_{dz_{dn}} \bigr] + \log \prod_{d = 1}^D p(\boldsymbol \theta_d \mid \alpha) \Bigr) \\
&= \exp \Bigl( \sum_{d = 1}^D \sum_{n = 1}^{N_d} \sum_{k = 1}^{K} q_{dnk} \log \theta_{dk} + \sum_{d = 1}^D \log \cfrac{\Gamma(\alpha K)}{\Gamma(\alpha)^K} \prod_{k = 1}^K \theta_{dk}^{\alpha - 1} \Bigr) \\
&\propto \exp \Bigl( \sum_{d = 1}^D \sum_{n = 1}^{N_d} \sum_{k = 1}^K q_{dnk} \log \theta_{dk} + \sum_{d = 1}^D \sum_{k = 1}^K \log \theta_{dk}^{\alpha - 1} \Bigr) \\
&= \exp \Bigl( \sum_{d = 1}^D \sum_{k = 1}^K \log \theta_{dk}^{\sum \limits_{n = 1}^{N_d} q_{dnk}} + \sum_{d = 1}^D \sum_{k = 1}^K \log \theta_{dk}^{\alpha - 1} \Bigr) \\
&= \prod_{d = 1}^D \prod_{k = 1}^K \theta_{dk}^{\alpha + \sum \limits_{n = 1}^{N_d} q_{dnk} - 1} \\
q(\boldsymbol \Theta) &= \prod_{d = 1}^D {\rm Dirichlet}(\boldsymbol \theta_d \mid \alpha_{d1}, \cdots, \alpha_{dK}) \\
\end{align}
Here, $\alpha_{dk}$ is defined as follows.
\begin{align}
\alpha_{dk} &= \alpha + \sum_{n = 1}^{N_d} q_{dnk} \\
\end{align}
$q(\boldsymbol \Phi)$ can be calculated as follows.
\begin{align}
q(\boldsymbol \Phi) &\propto \exp \Bigl( \mathbb{E}_{q(\boldsymbol Z)} \bigl[ \log p(\boldsymbol W \mid \boldsymbol Z, \boldsymbol \Phi) \bigr] + \log p(\boldsymbol \Phi \mid \beta) \Bigr) \\
&= \exp \Bigl( \mathbb{E}_{q(\boldsymbol Z)} \bigl[ \sum_{d = 1}^D \sum_{n = 1}^{N_d} \log \phi_{z_{dn}w_{dn}} \bigr] + \log \prod_{k = 1}^K p(\boldsymbol \phi_k \mid \beta) \Bigr) \\
&= \exp \Bigl( \sum_{d = 1}^D \sum_{n = 1}^{N_d} \sum_{k = 1}^K q_{dnk} \log \phi_{kw_{dn}} + \sum_{k = 1}^K \log \cfrac{\Gamma(\beta V)}{\Gamma(\beta)^V} \prod_{v = 1}^V \phi_{kv}^{\beta - 1} \Bigr) \\
&\propto \exp \Bigl( \sum_{d = 1}^D \sum_{n = 1}^{N_d} \sum_{k = 1}^K q_{dnk} \log \phi_{kw_{dn}} + \sum_{k = 1}^K \sum_{v = 1}^V \log \phi_{kv}^{\beta - 1} \Bigr) \\
&= \exp \Bigl( \sum_{k = 1}^K \sum_{v = 1}^V \log \phi_{kv}^{\sum \limits_{d = 1}^D \sum \limits_{n: w_{dn}= v} q_{dnk}} + \sum_{k = 1}^K \sum_{v = 1}^V \log \phi_{kv}^{\beta - 1} \Bigr) \\
&= \prod_{k = 1}^K \prod_{v = 1}^V \phi_{kv}^{\beta + \sum \limits_{d = 1}^D \sum \limits_{n: w_{dn}= v} q_{dnk} - 1} \\
q(\boldsymbol \Phi) &= \prod_{k = 1}^K {\rm Dirichlet}(\boldsymbol \phi_k \mid \beta_{k1}, \cdots, \beta_{kV}) \\
\end{align}
Here, $\beta_{kv}$ is defined as follows.
\begin{align}
\beta_{kv} &= \beta + \sum_{d = 1}^D \sum_{n: w_{dn}= v} q_{dnk} \\
\end{align}
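Together, the two updates are count-weighted sums over the responsibilities; a minimal sketch, assuming responsibilities of shape (N_d, K) as computed in the earlier snippet (the helper name `update_parameters` is hypothetical):

```python
import numpy as np

def update_parameters(q_list, W, D, K, V, alpha0=1.0, beta0=1.0):
    """Variational parameter updates: q_list[d] has shape (N_d, K), W[d] holds word ids."""
    alpha = np.full((D, K), alpha0)
    beta = np.full((K, V), beta0)
    for d in range(D):
        alpha[d, :] += q_list[d].sum(axis=0)   # alpha_dk = alpha0 + sum_n q_dnk
        np.add.at(beta.T, W[d], q_list[d])     # beta_kv = beta0 + sum_{n: w_dn = v} q_dnk
    return alpha, beta
```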
The parameters are now estimated using the results derived above.
I implemented a toy program for variational Bayesian estimation of the topic model in Python. The formula derivation took some effort to follow, but the result is clean and the code is short.
```python
import numpy as np
from scipy.special import digamma

def normalize(ndarray, axis):
    return ndarray / ndarray.sum(axis = axis, keepdims = True)

def normalized_random_array(d0, d1):
    ndarray = np.random.rand(d0, d1)
    return normalize(ndarray, axis = 1)

if __name__ == "__main__":

    # initialize parameters
    D, K, V = 1000, 2, 6
    alpha0, beta0 = 1.0, 1.0
    alpha = alpha0 + np.random.rand(D, K)
    beta = beta0 + np.random.rand(K, V)
    theta = normalized_random_array(D, K)
    phi = normalized_random_array(K, V)

    # cumulative distributions used to generate documents
    _theta = np.array([theta[:, :k + 1].sum(axis = 1) for k in range(K)]).T
    _phi = np.array([phi[:, :v + 1].sum(axis = 1) for v in range(V)]).T

    # generate documents by inverse-CDF sampling of topics and words
    W, Z = [], []
    N = np.random.randint(100, 300, D)
    for (d, N_d) in enumerate(N):
        Z.append((np.random.rand(N_d, 1) < _theta[d, :]).argmax(axis = 1))
        W.append((np.random.rand(N_d, 1) < _phi[Z[-1], :]).argmax(axis = 1))

    # estimate parameters
    T = 30
    for t in range(T):

        # E[log theta_dk] and E[log phi_kv] under the current variational parameters
        dig_alpha = digamma(alpha) - digamma(alpha.sum(axis = 1, keepdims = True))
        dig_beta = digamma(beta) - digamma(beta.sum(axis = 1, keepdims = True))
        alpha_new = np.ones((D, K)) * alpha0
        beta_new = np.ones((K, V)) * beta0
        for (d, N_d) in enumerate(N):

            # q: responsibilities q_dnk, aggregated by word type
            q = np.zeros((V, K))
            v, count = np.unique(W[d], return_counts = True)
            q[v, :] = (np.exp(dig_alpha[d, :].reshape(-1, 1) + dig_beta[:, v]) * count).T
            q[v, :] /= q[v, :].sum(axis = 1, keepdims = True)

            # alpha, beta: count-weighted sums of the responsibilities
            alpha_new[d, :] += count.dot(q[v])
            beta_new[:, v] += count * q[v].T

        alpha = alpha_new.copy()
        beta = beta_new.copy()

    theta_est = np.array([np.random.dirichlet(a) for a in alpha])
    phi_est = np.array([np.random.dirichlet(b) for b in beta])
```
The $\boldsymbol \Phi$ generated from the $\boldsymbol \beta$ obtained by the toy program above is shown below. Properly, perplexity or log-likelihood should be used as evaluation measures, but I have not done so because it is a hassle.
```
phi
[[ 0.26631554  0.04657097  0.29425041  0.1746378   0.03077238  0.1874529 ]
 [ 0.2109456   0.01832505  0.30360253  0.09073456  0.14039401  0.23599826]]

phi_est
[[ 0.27591967  0.0424522   0.26088712  0.18220604  0.02874477  0.2097902 ]
 [ 0.19959096  0.02327517  0.34107528  0.08462041  0.14291539  0.20852279]]
```
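For reference, the skipped evaluation could look like this; a minimal sketch of perplexity on the training documents under the point estimates `theta_est` and `phi_est`, assuming `W` and `N` from the toy program are still in scope:

```python
import numpy as np

# per-word marginal likelihood p(w_dn) = sum_k theta_dk * phi_k,w_dn
log_lik = sum(np.log(theta_est[d, :].dot(phi_est[:, w_d])).sum()
              for d, w_d in enumerate(W))
perplexity = np.exp(-log_lik / N.sum())
print(perplexity)
```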
We were able to derive and implement variational Bayesian estimation of the topic model. Since the results in this article are not very interesting, I also looked at LDA for Pokemon analysis and tried classifying the Gold and Silver versions of Pokemon. I will summarize that on Qiita at a later date.