… using Stan & HMC

Here, I sample from an Ising-like model (I treat the random variables as continuous, between -1 and 1, and add a term to the pseudo-likelihood that resembles a beta log density).

functions {
    real log_p(matrix m, real T, real alpha) {
        int n = rows(m);
        return (1/T) * sum(m[2:(n-1), 2:(n-1)] .* m[1:(n-2), 2:(n-1)] +
                           m[2:(n-1), 2:(n-1)] .* m[2:(n-1), 1:(n-2)] +
                           m[2:(n-1), 2:(n-1)] .* m[3:n    , 2:(n-1)] +
                           m[2:(n-1), 2:(n-1)] .* m[2:(n-1), 3:n    ]) +
               sum(log(m/2 + 0.5)*(alpha - 1) + log(0.5 - m/2)*(alpha - 1));
    }
}
data {
    int n;
    real T;
    real alpha;
}
parameters {
    matrix<lower = -1, upper = 1>[n, n] m;
}
model {
    target += log_p(m, T, alpha);
}

The matrix terms are a vectorised sum of products between each interior spin and its four nearest neighbours.
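As a sanity check on the slicing, here is a minimal NumPy analogue of the vectorised energy term, compared against an explicit double loop (the function names are mine, not from the Stan program):

```python
import numpy as np

def energy_vectorised(m, T):
    # Interior spins multiplied elementwise with each of their
    # four nearest neighbours, mirroring the Stan slices.
    inner = m[1:-1, 1:-1]
    return (1 / T) * np.sum(inner * m[:-2, 1:-1] +   # up
                            inner * m[1:-1, :-2] +   # left
                            inner * m[2:, 1:-1] +    # down
                            inner * m[1:-1, 2:])     # right

def energy_loop(m, T):
    # Reference implementation: loop over interior sites.
    n = m.shape[0]
    total = 0.0
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            total += m[i, j] * (m[i - 1, j] + m[i, j - 1] +
                                m[i + 1, j] + m[i, j + 1])
    return total / T

rng = np.random.default_rng(0)
m = rng.uniform(-1, 1, size=(6, 6))
assert np.isclose(energy_vectorised(m, 2.0), energy_loop(m, 2.0))
```

The vectorised form avoids the Python-level (or Stan-level) loop entirely, which is what makes HMC gradients over the whole lattice cheap.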



Efficient Gaussian Process Computation

I’ll try to give examples of efficient Gaussian process computation here, such as the vec trick (Kronecker product trick), efficient Toeplitz and circulant matrix computations, RTS smoothing and Kalman filtering using state-space representations, and so on.
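As a taste of one of these, here is a minimal sketch of the vec trick, using the identity (A ⊗ B) vec(X) = vec(B X Aᵀ); the function name is illustrative, not from the post:

```python
import numpy as np

def kron_mv(A, B, x):
    # Multiply by A ⊗ B without forming the Kronecker product.
    # x is vec(X) in column-major order, where X has shape
    # (rows of B, cols of A).
    n, m = A.shape[1], B.shape[0]
    X = x.reshape(m, n, order="F")
    return (B @ X @ A.T).reshape(-1, order="F")

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((3, 3))
x = rng.standard_normal(12)
assert np.allclose(kron_mv(A, B, x), np.kron(A, B) @ x)
```

For a GP with a Kronecker-structured covariance over a grid, this turns an O(n²m²) matrix-vector product into O(nm(n + m)) matrix-matrix work, without ever materialising the full covariance.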

4 min read

Gaussian Process Speech Synthesis (Draft)

Very untidy first working draft of the idea mentioned on the efficient computation page. Here, I fit a spectral mixture to some audio data to build a “generative model” for audio. I’ll implement efficient sampling later, and I’ll replace the arbitrary way this is trained with an LSTM-RNN to go straight from text/spectrograms to waveforms.
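For context, a one-dimensional spectral mixture kernel (in the style of Wilson & Adams) is just a weighted sum of Gaussians in the spectral domain; a minimal sketch, with parameter names (w, mu, v) of my own choosing:

```python
import numpy as np

def spectral_mixture(tau, w, mu, v):
    # k(tau) = sum_q w_q * exp(-2 pi^2 tau^2 v_q) * cos(2 pi mu_q tau),
    # i.e. each component is a Gaussian in frequency centred at mu_q
    # with variance v_q and weight w_q.
    tau = np.asarray(tau, dtype=float)[..., None]
    return np.sum(w * np.exp(-2 * np.pi**2 * tau**2 * v)
                    * np.cos(2 * np.pi * mu * tau), axis=-1)

# e.g. a single component centred at 440 Hz (illustrative values)
tau = np.linspace(0, 0.01, 5)
k = spectral_mixture(tau, w=np.array([1.0]),
                     mu=np.array([440.0]), v=np.array([100.0]))
```

At tau = 0 the kernel equals the sum of the weights, which is the prior variance of the audio signal.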

3 min read


Gaussian Process Middle C

First of my experiments on audio modelling using Gaussian processes. Here, I construct a GP that, when sampled, plays middle C the way a grand piano would.
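The basic idea can be sketched with a quasi-periodic kernel whose period matches middle C (~261.63 Hz); the lengthscales and jitter below are illustrative, not the post's actual values:

```python
import numpy as np

f0 = 261.63                               # middle C, Hz
t = np.linspace(0, 0.02, 400)             # 20 ms of "audio"
tau = t[:, None] - t[None, :]

# Periodic kernel locked to f0, damped by a squared-exponential
# envelope so distant oscillations decorrelate.
periodic = np.exp(-2 * np.sin(np.pi * f0 * tau)**2 / 0.5**2)
decay = np.exp(-tau**2 / (2 * 0.01**2))
K = periodic * decay + 1e-6 * np.eye(len(t))

# Draw one waveform sample from the GP prior.
sample = np.linalg.cholesky(K) @ np.random.default_rng(2).standard_normal(len(t))
```

Sampling from this prior gives a waveform that oscillates at the fundamental; capturing the timbre of a grand piano takes a richer kernel (overtones, attack/decay structure) than this sketch.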

~1 min read

