I’ve been reading about PPCA, and this post summarizes my understanding of it. I took a lot of this from Pattern Recognition and Machine Learning by Bishop.

The model behind the algorithm is quite simple. We have \(n\) observations of a random variable \(X\) that takes values in \(\mathbb R^m\), and we describe a latent representation \(Z\) of dimension at most \(m\) as follows. I'll assume that \(X\) has zero mean.

\[X \mid Z \sim \mathcal N(WZ, \sigma^2 I)\] \[Z \sim \mathcal N(0, I)\]

We can marginalize \(Z\) out:

\[X \sim \mathcal N(0, WW^T + \sigma^2 I)\]
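As a quick sanity check (a sketch of my own, not from the book), we can simulate from the two equations above and confirm that the sample covariance of \(X\) approaches \(WW^T + \sigma^2 I\); the dimensions and \(\sigma\) here are arbitrary choices for illustration:

set.seed(1)
m = 2; n = 1e5; sigma = 0.5                        # arbitrary choices for illustration
W = matrix(rnorm(m*m), m, m)

Z = matrix(rnorm(n*m), n, m)                       # rows are draws of Z ~ N(0, I)
X = Z %*% t(W) + sigma * matrix(rnorm(n*m), n, m)  # X | Z ~ N(WZ, sigma^2 I)

cov(X)                                             # ~= W %*% t(W) + sigma^2 * diag(m)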

Tipping & Bishop (1999) showed that the maximum likelihood solution for \(W\) is:

\[W_{ML} = U (L - \sigma^2 I)^{1/2} R\]

where \(L\) is a diagonal matrix of the largest eigenvalues of the data covariance, \(U\) is the matrix whose columns are the corresponding eigenvectors, and \(R\) is an arbitrary orthogonal matrix.
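To make this concrete, here's a small sketch of my own that builds \(W_{ML}\) directly from the eigendecomposition of the sample covariance; the toy data, the noise variance sigma2 and the choice \(R = I\) are all assumptions for illustration:

set.seed(1)
X = matrix(rnorm(500*2), 500, 2) %*% matrix(c(2, 1, 0, 1), 2, 2)  # toy data
sigma2 = 0.1                             # assumed noise variance
eig = eigen(cov(X))
U = eig$vectors                          # columns are eigenvectors
L = diag(eig$values)                     # eigenvalues on the diagonal
W_ML = U %*% sqrt(L - sigma2 * diag(2))  # U (L - sigma^2 I)^{1/2} with R = I;
                                         # sqrt is elementwise, fine for a diagonal matrix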

One can work back from an observation to the latent variables using the posterior:

\[M = W^T W + \sigma^2 I\] \[Z \mid X \sim \mathcal N(M^{-1} W^T X, \sigma^2 M^{-1})\]
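Here, posterior_z is a hypothetical helper of mine (assuming \(W\) and \(\sigma^2\) are known) that computes this posterior for a single observation x:

posterior_z = function(x, W, sigma2) {
	M = t(W) %*% W + sigma2 * diag(ncol(W))
	M_inv = solve(M)
	list(mean = M_inv %*% t(W) %*% x,  # E[Z | X = x]
	     cov = sigma2 * M_inv)         # Cov[Z | X = x] = sigma^2 M^{-1}
}
# e.g. posterior_z(x = X[1, ], W = W_ML, sigma2 = 0.1)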

Here's some Stan code to reproduce the maximum likelihood solution for \(W\), in the noiseless \(\sigma^2 \to 0\) limit (i.e. standard PCA). As expected, we recover the solution only up to rotations.

Stan code for PCA

library(rstan)
library(data.table)
library(ggplot2)

rstan_options(auto_write = TRUE)
options(mc.cores = parallel::detectCores())

m = 2; n = 100
Z = matrix(rnorm(m*n), n, m)             # latent variables, one row per observation
W = matrix(rnorm(m*m), m, m)             # true weights
X = Z %*% W                              # observations (noiseless)
pca = prcomp(X, center = F, scale. = F)  # classical PCA, for reference

model_string = "
data {
	int m;
	int n;
	matrix[n, m] X;
parameters {
	matrix[m, m] W;
model {
	vector[m] mu;
	matrix[m, m] L;
	mu = rep_vector(0, m);
	L = cholesky_decompose(W * W');

	for(i in 1:m) {
		W[, i] ~ normal(0, 2);

	for(i in 1:n) {
		X[i, ] ~ multi_normal_cholesky(mu, L);


model = stan_model(model_code = model_string)
data = list(m = m, n = n, X = X)

# optim_lik = function() optimizing(model, data = data)$par
samples = sampling(model, data = data, chains = 2, iter = 1000)

ml_samples = matrix(extract(samples)$W, 1000, 4)  # flatten 1000 posterior draws of the 2x2 W
ml_samples = as.data.table(ml_samples)
names(ml_samples) = c('W11', 'W21', 'W12', 'W22')

eig = eigen(cov(X))
U = eig$vectors               # eigenvectors of the sample covariance
L = eig$values                # corresponding eigenvalues
W_no_R = U %*% diag(sqrt(L))  # U L^{1/2}; sigma^2 = 0 here

simulate_rotation = function() {
	theta = runif(1, 0, 2*pi)
	R = matrix(c(cos(theta), sin(theta), -sin(theta), cos(theta)), 2, 2)  # random 2D rotation
	W_no_R %*% R
}

manual_calc_samples = t(replicate(1000, as.numeric(simulate_rotation())))
manual_calc_samples = as.data.table(manual_calc_samples)
names(manual_calc_samples) = c('MW11', 'MW21', 'MW12', 'MW22')

plot_frame = cbind(ml_samples, manual_calc_samples)

ggplot(plot_frame) +
	geom_hex(aes(W11, W21, fill = 'Stan_W[, 1]'), alpha = 0.3, bins = 100) +
	geom_hex(aes(W12, W22, fill = 'Stan_W[, 2]'), alpha = 0.3, bins = 100) +
	geom_hex(aes(MW11, MW21, fill = 'ML_W[, 1]'), alpha = 0.3, bins = 100) +
	geom_hex(aes(MW12, MW22, fill = 'ML_W[, 2]'), alpha = 0.3, bins = 100) +
	xlim(-3, 3) + ylim(-3, 3) + labs(x = 'x', y = 'y') +
	scale_fill_brewer(palette = 'Spectral') + theme_void()

