Musings on Statistics
Historically, the concept of uncertainty has centred on how frequently one observes an event of interest, but it has since expanded to account for and quantify many other, more intuitive forms of unknowability. Perhaps these could be explained mechanistically, i.e. by treating the uncertainty associated with a complex event as the aggregate of the component-wise uncertainties that make it up.
This way of thinking isn't strictly necessary though: in many scenarios one can simply adopt the axioms of probability, regardless of what a probability fundamentally means.
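To be concrete, the axioms I have in mind here are Kolmogorov's: for a probability measure $P$ over events in a sample space $\Omega$,

$$
P(A) \ge 0, \qquad P(\Omega) = 1, \qquad
P\Big(\bigcup_{i=1}^{\infty} A_i\Big) = \sum_{i=1}^{\infty} P(A_i) \quad \text{for pairwise disjoint } A_1, A_2, \dots
$$

Everything downstream (conditional probability, expectations, and so on) follows from these, whatever interpretation you attach to $P$.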
A model is perhaps just an idealisation or description: a collection of random variables linked to each other by distributional assumptions. Models still let us do a lot of cool stuff, e.g. describe how plausible things are.
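As a toy illustration of what I mean (my example here, not a model from any particular post), a simple hierarchical model links random variables through distributional assumptions,

$$
\mu \sim \mathcal{N}(0, 1), \qquad
y_i \mid \mu \sim \mathcal{N}(\mu, \sigma^2), \quad i = 1, \dots, n,
$$

and "the plausibility of things" can then be read off the posterior $p(\mu \mid y_{1:n})$.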
Another interesting thing about statistical reasoning is that a lot of ideas in the philosophy of science (e.g. falsification, the “strength” of induction, not being able to study hypotheses individually, the inability to separate evidence from theory, the simplicity of hypotheses, etc.) have statistical parallels.
I used to do a fair bit of astrophotography at university - it’s harder to find good skies now that I live in the city. Here are some of my old pictures. I kept making rookie mistakes (too high an ISO, too little exposure time, using a slow lens, bad stacking, …), for which I apologize!
I’ve been reading about probabilistic PCA (PPCA), and this post summarizes my understanding of it. Much of it is drawn from Pattern Recognition and Machine Learning by Bishop.
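For reference, the PPCA generative model as Bishop presents it is

$$
\mathbf{z} \sim \mathcal{N}(\mathbf{0}, \mathbf{I}), \qquad
\mathbf{x} \mid \mathbf{z} \sim \mathcal{N}(\mathbf{W}\mathbf{z} + \boldsymbol{\mu}, \sigma^2 \mathbf{I}),
$$

so that marginally $\mathbf{x} \sim \mathcal{N}(\boldsymbol{\mu}, \mathbf{W}\mathbf{W}^\top + \sigma^2 \mathbf{I})$, with ordinary PCA recovered in the limit $\sigma^2 \to 0$.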
The main objective of this post was just to write about my typical workflow and views rather than come up with a great model. The structure of this data is also outside my immediate domain so I thought it’d be fun to write up a small diary on making a model with it.
The main aim here was to morph space inside a square, such that the transformation preserves some ordering of the points. I wanted to use it to generate random graphs on a flat surface and introduce spatial deformation to make them more interesting.
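A minimal sketch of the kind of map I have in mind (purely illustrative; the exponential warp and the parameter `a` are stand-ins, not what the post actually uses): applying a smooth, strictly increasing function to each coordinate warps the unit square while preserving the coordinate-wise ordering of points.

```python
import numpy as np

def warp(points, a=3.0):
    """Warp points in [0, 1]^2 with a strictly increasing map per coordinate.

    Monotonicity in each coordinate preserves the coordinate-wise ordering
    of the points while concentrating them towards one corner of the square.
    The logistic-style map and the parameter `a` are illustrative only.
    """
    points = np.asarray(points, dtype=float)
    return (np.exp(a * points) - 1.0) / (np.exp(a) - 1.0)

# Example: deform uniformly scattered points before building a random graph.
rng = np.random.default_rng(0)
pts = rng.uniform(size=(200, 2))
warped = warp(pts)
```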
I had a go at a few SEIR models; this is a rough diary of the process.
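For reference, the basic deterministic SEIR system those models build on is sketched below using scipy's `solve_ivp`; the parameter values and initial state are placeholders, not anything I actually fit.

```python
import numpy as np
from scipy.integrate import solve_ivp

def seir(t, y, beta, sigma, gamma, N):
    """Standard SEIR ODEs: susceptible -> exposed -> infectious -> removed."""
    S, E, I, R = y
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return [dS, dE, dI, dR]

# Illustrative parameters (placeholders): beta = 0.3, 5-day latency, 10-day infectious period.
N = 1e6
y0 = [N - 10, 0, 10, 0]
sol = solve_ivp(seir, (0, 180), y0, args=(0.3, 1 / 5, 1 / 10, N),
                dense_output=True)
```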
The initial aim here was to model speech samples as realizations of a Gaussian process with some appropriate covariance function, by conditioning on the spectrogram. I fit a spectral mixture kernel to segments of audio data and concatenated the segments to obtain the full waveform. Partway into writing efficient sampling code (generating waveforms using the Gaussian process state space representation), I realized that it’s actually quite easy to obtain waveforms if you’ve already got a spectrogram.
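For reference, the spectral mixture kernel (in its one-dimensional form, due to Wilson & Adams) writes the covariance as a mixture of Gaussians over spectral densities,

$$
k(\tau) = \sum_{q=1}^{Q} w_q \exp\!\left(-2\pi^2 \tau^2 v_q\right) \cos\!\left(2\pi \tau \mu_q\right),
$$

where $\tau = t - t'$ and each component has weight $w_q$, mean frequency $\mu_q$, and bandwidth $v_q$; the mean frequencies are what get pulled towards the peaks of the spectrogram segment being fit.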
I’ll try to give examples of efficient Gaussian process computation here, like the vec trick (Kronecker product trick), efficient Toeplitz and circulant matrix computations, RTS smoothing and Kalman filtering using state space representations, and so on.
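As a taste of the Toeplitz part, here is a minimal sketch (my own illustration, not code from the post) of an O(n log n) symmetric-Toeplitz matrix-vector product via circulant embedding and the FFT, which is the workhorse behind fast GP computations on regular grids.

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(t, x):
    """O(n log n) product of a symmetric Toeplitz matrix with a vector.

    `t` is the first column of the Toeplitz matrix (e.g. a stationary kernel
    evaluated on a regular grid). The matrix is embedded in a 2n x 2n
    circulant matrix, whose matvec diagonalises under the FFT.
    """
    t = np.asarray(t, dtype=float)
    x = np.asarray(x, dtype=float)
    n = len(t)
    c = np.concatenate([t, [0.0], t[-1:0:-1]])  # first column of the circulant embedding
    v = np.concatenate([x, np.zeros(n)])        # zero-pad the input vector
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(v)).real
    return y[:n]

# Sanity check against the dense product.
rng = np.random.default_rng(0)
t = np.exp(-0.1 * np.arange(512))  # e.g. an exponential kernel on a grid
x = rng.standard_normal(512)
assert np.allclose(toeplitz_matvec(t, x), toeplitz(t) @ x)
```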
Minimal Working Examples
The first of my experiments on audio modeling using Gaussian processes. Here, I construct a GP that, when sampled, plays middle C the way a grand piano would.
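As a flavour of the construction (a toy sketch only; the harmonic weights, decay, and lengthscale below are illustrative stand-ins, not values fit to a real piano), a quasi-periodic kernel built from damped cosines at middle C (about 261.63 Hz) and its harmonics already produces recognisably pitched sample paths.

```python
import numpy as np

def piano_kernel(t1, t2, f0=261.63, n_harmonics=5, length=0.05):
    """Quasi-periodic covariance: damped cosines at f0 and its harmonics.

    f0 = 261.63 Hz is middle C; the 1/h harmonic weights and the exponential
    decay with lengthscale `length` are illustrative choices.
    """
    tau = t1[:, None] - t2[None, :]
    k = np.zeros_like(tau)
    for h in range(1, n_harmonics + 1):
        k += (1.0 / h) * np.cos(2 * np.pi * h * f0 * tau)
    return np.exp(-np.abs(tau) / length) * k

# Draw a short sample path (kept tiny: naive sampling costs O(n^3)).
fs = 8000
t = np.arange(0, 0.05, 1 / fs)
K = piano_kernel(t, t) + 1e-6 * np.eye(len(t))  # jitter for numerical stability
rng = np.random.default_rng(1)
waveform = np.linalg.cholesky(K) @ rng.standard_normal(len(t))
```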
… using Stan & HMC