Random Projects

Inferring Gaussian Process Autocorrelation

It seems reasonable and intuitive to think that the sample autocovariance function of a stationary Gaussian process would be a sufficient statistic for its covariance function, and I read that this is indeed true for certain stationary GPs with a rational spectrum. This condition is quite similar to (if not the same as) the condition for a GP to possess a state space representation.

It’s also interesting that not all GPs are ergodic. GPs are mixing (and hence ergodic, I believe) if the covariance function decays to zero at large lags, or if the spectrum is absolutely continuous. Loosely, this means that the distribution of the process can be inferred from just one long sample. GPs with an exponentiated sine squared (ESS) covariance function, for example, aren’t ergodic.

As a consequence, the covariance function of a zero-mean GP with an ESS kernel, and of similar signals, cannot be inferred from a single sample. This is reasonable: no matter how long the signal is, there’s no new information in it after a certain point. Intuitively, a sample from a zero-mean GP with an ESS kernel might look like \( (3, 2.5, 3, 2.5, …) \). The ESS kernel is periodic, and the correlation between points spaced half a period apart is the smallest over all pairs of points, but still strictly positive. Another sample from the same GP may look like \( (-2, -1.7, -2, -1.7, …) \).

Due to the correlation, the first and second points are closer together within each sample than across samples, but given just one observation of the signal (and no knowledge of the mean of the process), the first and second points would appear to be negatively correlated.

The image below shows this; the black line is the true autocovariance function of the zero mean GP with an ESS kernel, and the boxplots show the sampling distribution of the unbiased sample autocovariance function based on single samples.

New York Conditional Taxi Dropoff Probabilities

I fit a twenty-component mixture of multivariate normals, using scikit-learn, to the four-dimensional New York taxi pickup/dropoff dataset.

The dimensions look like (pickup_lat, pickup_lon, dropoff_lat, dropoff_lon). The aim is to predict the distribution of (dropoff_lat, dropoff_lon) by conditioning on (pickup_lat, pickup_lon).

Fancy ways to do this might include fitting a neural net or some kind of GP to the conditional density, but here, I literally just fit the 4-d mixture to the whole dataset. To condition, we just plug the pickup position into each component and renormalize the weights (Bayes’ rule).
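A minimal sketch of the conditioning step, on synthetic stand-in data rather than the actual taxi dataset: each component is conditioned with the usual Gaussian formulas, and the component weights are reweighted by how well each component explains the observed pickup.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# stand-in for the 4-d (pickup_lat, pickup_lon, dropoff_lat, dropoff_lon) data
data = rng.standard_normal((2000, 4)) @ rng.standard_normal((4, 4)) \
       + rng.standard_normal(4)

gmm = GaussianMixture(n_components=20, covariance_type="full",
                      random_state=0).fit(data)

def condition(gmm, pickup):
    """Mixture over dims (2, 3) given dims (0, 1) = pickup."""
    means, covs, weights = [], [], []
    for pi, mu, S in zip(gmm.weights_, gmm.means_, gmm.covariances_):
        S11, S12, S21, S22 = S[:2, :2], S[:2, 2:], S[2:, :2], S[2:, 2:]
        gain = S21 @ np.linalg.inv(S11)
        means.append(mu[2:] + gain @ (pickup - mu[:2]))       # conditional mean
        covs.append(S22 - gain @ S12)                         # conditional cov
        # Bayes: reweight by the component's marginal likelihood of the pickup
        weights.append(pi * multivariate_normal.pdf(pickup, mu[:2], S11))
    weights = np.array(weights) / np.sum(weights)
    return weights, np.array(means), np.array(covs)

w, m, C = condition(gmm, pickup=np.array([0.5, -0.2]))
print(w.sum())  # a valid mixture over dropoff coordinates
```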

Envelope Modelling

Google’s Quick Draw dataset contains multiple observations of quickly drawn envelopes. I fit a 256-component restricted Boltzmann machine (heavily overparameterised; not a great model - I know) to the data, which represents a big nasty distribution over the random field that represents an envelope image. Now, starting off with a completely random image, using Gibbs sampling, we can make our way to the typical set of the distribution, which hopefully looks like an envelope. Here’s what the burn-in looks like:
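The burn-in procedure can be sketched with scikit-learn’s BernoulliRBM on random stand-in bitmaps (the real fit used the binarised Quick Draw envelope images; the image size and training settings here are assumptions):

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.default_rng(2)
# stand-in for binarised 28x28 envelope bitmaps
X = (rng.random((500, 784)) < 0.2).astype(float)

rbm = BernoulliRBM(n_components=256, learning_rate=0.05,
                   n_iter=5, random_state=0).fit(X)

# burn-in: start from a completely random image and run block Gibbs
v = (rng.random((1, 784)) < 0.5).astype(float)
chain = [v]
for _ in range(100):
    v = rbm.gibbs(v)  # one v -> h -> v Gibbs sweep
    chain.append(v)
# chain[-1] is (hopefully) near the typical set of the learned distribution
```

Plotting every few elements of `chain` as 28x28 images gives the burn-in strip shown above.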

Inferring the Extent of Differentiability

Let’s say that we have an observation of a noiseless function, but we don’t know how smooth it is. You could fit a Matern GP with different smoothness parameters to see which parameter maximises the log marginal likelihood (the Matern parameter corresponds to the number of times a sample from the GP can be differentiated).

Below, I’ve simulated a Matern GP with a particular parameter, and fit it using parameters ranging over \(\{0.5, …, 5\}\). The color and label correspond to the parameter used while sampling.

Modelling Audio using GPs

I used the S-PAD and the GP-PAD models from Richard Turner’s thesis to make these plots using some random audio data from the internet.

Sample Stan code for this:
// S-PAD
data {
	int n;     // len
	real x[n]; // audio
}
parameters {
	real<lower = 0, upper = 1> l;
	real<lower = 0, upper = 10> s;
	vector<lower = -10, upper = 2>[n] sigma;
}
model {
	sigma[1] ~ normal(0, s);
	sigma[2:n] ~ normal(l * sigma[1:(n - 1)], s*(1 - l^2)^0.5);
	x ~ normal(0, exp(sigma));
}

// GP-PAD
data {
	int n;
	int n_s; // nrow of S22
	real seg_a[n]; // audio sample
	matrix[n, n_s] factor; // mvn conditional distribution shift factor S12 * S22^(-1)
	cholesky_factor_cov[n_s] K22c; // cholesky factor of S22
}
parameters {
	real<lower = -7.5, upper = 10> mu;
	vector<lower = -7.5, upper = 10>[n_s] sigma;
}
transformed parameters {
	vector[n] sigma_vec;
	sigma_vec = rep_vector(mu, n) +
		factor*(sigma - rep_vector(mu, n_s));
}
model {
	sigma ~ multi_normal_cholesky(rep_vector(mu, n_s), K22c);
	seg_a ~ normal(0, log(1 + exp(sigma_vec)));
}

Modelling my d20 dice

I fit a spline on a sphere representing my d20. The color represents the model probabilities. The distance from the centre represents the proportion of times my dice fell on a particular number during six hundred trials.

Code to help with this (messy).

# I came up with a coordinate system to label the dice
# The params phi and theta are reversed w.r.t. polar convention
# phi is pi - phi w.r.t. the polar convention
vertices <- data.table(
	r = 1, theta = c(0, 0, 2 * pi * (0:9)/10),
	phi = c(pi/2, -pi/2, rep(c(-atan(0.5), atan(0.5)), 5)) + pi/2,
	label = letters[1:12])

dice_map <- list(
	"1" = "ihj", "2" = "dfe", "3" = "kjl", "4" = "beg",
	"5" = "gfh", "6" = "bck", "7" = "ahj", "8" = "cdl",
	"9" = "bki", "10" = "adl", "11" = "bgi", "12" = "adf",
	"13" = "gih", "14" = "bce", "15" = "afh", "16" = "ckl",
	"17" = "ajl", "18" = "egf", "19" = "kij", "20" = "ced")

cart <- function(polar) {
	# theta phi
	return(c(sin(polar[1]) * cos(polar[2]),
		     sin(polar[1]) * sin(polar[2]),
		     cos(polar[1])))
}
polar <- function(cart) {
	return(c(acos(cart[3]), atan2(cart[2], cart[1])))
}

my_coords <- function(pol_mine) {
	return(c(pi - pol_mine[2], pol_mine[1]))
}

arc_length <- function(polar_a, polar_b) {
	return(acos(sum(cart(polar_a) * cart(polar_b))))
}

query <- function(verts) {
	Reduce(`&`, lapply(verts, function(v) grepl(v, dice_map)))
}

rd20 <- function(n = 1) replicate(n, {
	z <- rnorm(3); z <- z/sqrt(sum(z^2)) # uniform direction on the sphere
	qd20(polar(z))
})

qd20 <- function(polar_coords) {
	dists <- sapply(1:12, function(i) arc_length(
		polar_coords, vertices[i, my_coords(c(theta, phi))]))
	min_three <- sort(dists)[1:3]
	verts <- vertices[which(dists %in% min_three), label]
	which(query(verts)) # the face whose three vertices are nearest
}


Efficient Gaussian Process Computation

I’ll try to give examples of efficient Gaussian process computation here, like the vec trick (Kronecker product trick), efficient Toeplitz and circulant matrix computations, RTS smoothing and Kalman filtering using state space representations, and so on.
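As a taste of the Toeplitz/circulant part: the covariance matrix of a stationary GP on a regular grid is symmetric Toeplitz, so a matrix-vector product can be done in \(O(n \log n)\) by embedding the matrix in a circulant one and multiplying via the FFT. A sketch (the kernel and grid are arbitrary):

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(c, x):
    """Product of the symmetric Toeplitz matrix with first column c, and x."""
    n = len(c)
    circ = np.concatenate([c, c[-2:0:-1]])  # circulant embedding, size 2n - 2
    # circulant matrices are diagonalised by the DFT, so multiply in Fourier space
    y = np.fft.ifft(np.fft.fft(circ) * np.fft.fft(x, len(circ)))
    return y[:n].real

t = np.linspace(0, 5, 512)
c = np.exp(-0.5 * (t - t[0]) ** 2)  # first column of a squared-exp kernel matrix
x = np.random.default_rng(4).standard_normal(512)

direct = toeplitz(c) @ x            # O(n^2) reference
fast = toeplitz_matvec(c, x)        # O(n log n)
print(np.max(np.abs(direct - fast)))
```

Combined with conjugate gradients, this gives fast solves against the kernel matrix without ever forming it densely.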


Gaussian Process Speech Synthesis (Draft)

Very untidy first working draft of the idea mentioned on the efficient computation page. Here, I fit a spectral mixture to some audio data to build a “generative model” for audio. I’ll implement efficient sampling later, and I’ll replace the arbitrary way this is trained with an LSTM-RNN to go straight from text/spectrograms to waveforms.



Gaussian Process Middle C

First of my experiments on audio modelling using Gaussian processes. Here, I construct a GP that, when sampled, plays middle C the way a grand piano would.
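The construction can be sketched with a generic quasi-periodic kernel: a periodic kernel at the middle C frequency multiplied by a squared-exponential envelope. The sample rate, lengthscales, and envelope here are arbitrary assumptions; a piano-like timbre needs harmonics and decay matched to the instrument.

```python
import numpy as np

rng = np.random.default_rng(5)
fs = 8000                        # assumed sample rate, Hz
t = np.arange(2000) / fs         # a quarter second of audio
f0 = 261.63                      # middle C, Hz

# quasi-periodic kernel: periodic at f0, with a squared-exp decay over lag
d = np.abs(t[:, None] - t[None, :])
k = (np.exp(-2 * np.sin(np.pi * f0 * d) ** 2 / 0.5 ** 2)
     * np.exp(-0.5 * (d / 0.05) ** 2))
K = k + 1e-6 * np.eye(len(t))

# one sample from the GP is one waveform at pitch f0
audio = np.linalg.cholesky(K) @ rng.standard_normal(len(t))
```

Writing `audio` out as a WAV file at `fs` plays a noisy tone at middle C.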


