Consider:\[X \mid P \sim \operatorname{Bernoulli}(P)\]
… where \(X, P\) are random variables. Then:\[\mathbb P(X = 1) = \int_0^1 \mathbb P(X = 1 \mid P = p) \, dF_P(p) = \int_0^1 p \, dF_P(p) = \mathbb E(P).\]
The distribution of \(X\) depends only on the expectation of \(P\).
Another way to see this:\[\mathbb V (X) = \mathbb E_P (\mathbb V (X \mid P)) + \mathbb V_P (\mathbb E (X \mid P))\] \[= \mathbb E_P (P(1 - P)) + \mathbb V_P (P)\] \[= \mathbb E_P(P) - \mathbb E_P (P^2) + \mathbb E_P (P^2) - \mathbb E_P^2 (P)\] \[= \mathbb E_P(P) (1 - \mathbb E_P(P)).\]
So, random probabilities, random hazard rates, or ‘random effects’ across groups that have just one observation each are probably meaningless to talk about: only their mean is identifiable from the data.
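A quick simulation makes the same point. Here is a minimal sketch (the Beta parameters are arbitrary choices of mine): two mixing distributions for \(P\) with the same mean but very different variances yield indistinguishable Bernoulli marginals.

```python
# Minimal simulation sketch: the marginal of X depends on P only through E(P).
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Beta(2, 2) and Beta(20, 20) both have mean 0.5 but very different variances.
p1 = rng.beta(2, 2, size=n)
p2 = rng.beta(20, 20, size=n)

x1 = rng.binomial(1, p1)  # X | P = p ~ Bernoulli(p)
x2 = rng.binomial(1, p2)

print(x1.mean(), x2.mean())  # both ≈ 0.5 = E(P); the marginals match
```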
Using einsum for vectorizing matrix ops
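As a quick illustration (toy examples of my own), einsum expresses reductions and batched products as index strings, avoiding explicit loops over matrices:

```python
# Toy einsum patterns for vectorized matrix ops.
import numpy as np

A = np.random.rand(10, 3, 4)  # a batch of ten 3x4 matrices
B = np.random.rand(10, 4, 5)  # a batch of ten 4x5 matrices

batched_prod = np.einsum('bij,bjk->bik', A, B)          # batched matrix multiply
traces = np.einsum('bii->b', np.random.rand(10, 3, 3))  # per-matrix trace
outer = np.einsum('i,j->ij', np.ones(3), np.arange(4))  # outer product

assert np.allclose(batched_prod, A @ B)
```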
I lay out the canonical GP interpretation of mgcv’s GAM parameters here. Prof. Wood updated the package with stationary GP smooths after a request. Stepping through the predict.gam source code in a debugger, the computation of predictions appears to be as follows:
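(As a sketch of the final step only: mgcv’s documentation describes an ‘lpmatrix’ \(X_p\) whose product with the coefficient vector gives the linear predictor. The variable names below are mine, not mgcv’s.)

```python
# Sketch of the last step of GAM prediction, assuming the design/lpmatrix
# X_pred has already been built from the basis expansions of the smooths.
import numpy as np

X_pred = np.random.rand(5, 8)  # hypothetical lpmatrix rows for 5 new points
beta = np.random.rand(8)       # hypothetical fitted coefficient vector

eta = X_pred @ beta            # linear predictor
mu = 1 / (1 + np.exp(-eta))    # inverse link, e.g. for a logit link
```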
I wanted to see how easy it would be to do photogrammetry (creating 3D models from photos) using PyTorch3D by Facebook AI Research.
This post was motivated by some R code that I came across (over a thousand lines of it) with a bunch of if-statements that were never triggered. I wanted an automatic way to extract a minimal reproducing example of a test from this file. While reading about how to do this, I came across Dead Code Elimination, which removes unused variables and unreachable code.
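A toy illustration of the idea (my own example, not the R file in question); a dead code elimination pass would strip the marked lines:

```python
# Toy example: what a dead code elimination pass can remove.
def f(x):
    unused = x ** 2      # dead store: assigned but never read
    if False:            # statically unreachable branch
        return -1
    return x + 1
    print("never runs")  # unreachable after the return


print(f(3))  # 4
```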
I used to do a fair bit of astrophotography in university; it’s harder to find good skies now that I live in the city. Here are some of my old pictures. I kept making rookie mistakes (too high an ISO, too little exposure time, a slow lens, bad stacking, …), for which I apologize!
I’ve been reading about probabilistic PCA (PPCA), and this post summarizes my understanding of it. Much of this is drawn from Pattern Recognition and Machine Learning by Bishop.
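As a reference point, here is a minimal sketch of the closed-form maximum-likelihood solution from Bishop (section 12.2); the function and variable names are mine:

```python
# PPCA via the eigendecomposition of the sample covariance (Bishop, PRML 12.2).
import numpy as np

def ppca_ml(X, q):
    """Maximum-likelihood PPCA with q latent dimensions (rotation R = I)."""
    mu = X.mean(axis=0)
    S = np.cov(X - mu, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(S)                # ascending order
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]  # make descending
    sigma2 = eigvals[q:].mean()                         # ML noise variance
    W = eigvecs[:, :q] * np.sqrt(eigvals[:q] - sigma2)  # ML loadings
    return mu, W, sigma2

mu, W, sigma2 = ppca_ml(np.random.randn(500, 5), q=2)
```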
The main objective of this post is just to write about my typical workflow and views. The structure of this data is also outside my immediate domain, so I thought it’d be fun to write up a small diary of working with it.
The main aim here was to morph space inside a square, but in such a way that the transformation preserves some kind of ordering of the points. I wanted to use it to generate random graphs on a flat surface and introduce spatial deformation to make them more interesting.
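One simple construction with this property (my sketch, not necessarily what the post ends up using): apply a strictly increasing map to each coordinate independently, which deforms the square while preserving the coordinatewise ordering of points.

```python
# Order-preserving deformation of the unit square via per-axis monotone maps.
import numpy as np

def warp(t, a=5.0):
    """A strictly increasing map from [0, 1] onto [0, 1]."""
    return (1 - np.exp(-a * t)) / (1 - np.exp(-a))

pts = np.random.rand(100, 2)  # points in the unit square
warped = np.column_stack([warp(pts[:, 0]), warp(pts[:, 1])])
# If p <= q coordinatewise before warping, the same holds afterwards.
```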
The model is described on the Compartmental Models Wikipedia Page.
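Assuming the standard SIR variant described there, a minimal sketch with made-up parameter values:

```python
# Standard SIR compartmental model; beta and gamma values are illustrative.
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    S, I, R = y
    dS = -beta * S * I
    dI = beta * S * I - gamma * I
    dR = gamma * I
    return [dS, dI, dR]

# Start with 1% infected; integrate over 160 time units.
sol = solve_ivp(sir, (0, 160), [0.99, 0.01, 0.0], args=(0.3, 0.1))
```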
The initial aim here was to model speech samples as realizations of a Gaussian process with an appropriate covariance function, conditional on the spectrogram. I fit a spectral mixture kernel to segments of audio data and concatenated the segments to obtain the full waveform.
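For reference, the one-dimensional stationary spectral mixture kernel of Wilson & Adams (2013) has the form \(k(\tau) = \sum_q w_q \exp(-2\pi^2 \tau^2 v_q) \cos(2\pi \tau \mu_q)\); a minimal sketch with made-up parameters:

```python
# Spectral mixture kernel (Wilson & Adams, 2013), 1-D stationary form.
import numpy as np

def sm_kernel(tau, weights, means, variances):
    """k(tau) = sum_q w_q exp(-2 pi^2 tau^2 v_q) cos(2 pi tau mu_q)."""
    tau = np.asarray(tau)[..., None]  # broadcast over mixture components
    return np.sum(np.asarray(weights)
                  * np.exp(-2 * np.pi**2 * tau**2 * np.asarray(variances))
                  * np.cos(2 * np.pi * tau * np.asarray(means)), axis=-1)

taus = np.linspace(0, 0.05, 200)
k = sm_kernel(taus, weights=[1.0, 0.5], means=[440.0, 880.0],
              variances=[10.0, 10.0])  # e.g. components near 440 and 880 Hz
```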
Minimal Working Example
… using Stan & HMC