Introduction

The log-likelihood is a function of the model parameters defined as:

$$\ell(\theta) = \log p(x \mid \theta)$$

… where $p$ is the generalized density under a model, evaluated on the dataset $x$.
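
In R, for instance, the log-likelihood of an i.i.d. sample under a normal model (with made-up data and parameter values, purely for illustration) is simply:

```r
x <- rnorm(25)        # some data
mu <- 0; sigma <- 1   # hypothetical model parameters

# log p(x | mu, sigma) = sum of pointwise log densities
ll <- sum(dnorm(x, mean = mu, sd = sigma, log = TRUE))
ll
```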

The log-likelihood is an interesting beast. It not only captures the complete model law and the dataset it's fed (or a sufficient statistic of it), but it also has deep connections with information entropy. It's exceptionally well-behaved numerically, and many modern model comparison methods, such as the evidence, measure its value or a bias-corrected expectation of it. Typically, with i.i.d. experiments, it's approximately normally distributed, with an expectation equal to the negative model entropy (times the number of observations).

Loss Functions

It's quite easy to see why minimizing a loss implicitly enforces a probabilistic model. When optimizing just about anything, one usually has a "loss" containing a function of the data and parameters, plus another term that's a function of the parameters alone. Everything else disappears when a derivative is calculated, and this raises an interesting question. If every loss can be written as:

$$L(\theta; x) = f(x, \theta) + g(\theta)$$

… then what really is the difference between a loss function and a negative log posterior, $-\log p(\theta \mid x) = -\log p(x \mid \theta) - \log p(\theta) + \text{const.}$? As far as I've been able to tell, there's no difference (unless we're talking about a decision-theoretic loss, which is a whole other matter).

Case in point: Regularization

Adding a penalty of:

$$\lambda \sum_j |\beta_j|^p$$

… to the regression framework forces "insignificant" parameters down towards zero (this introduces bias, but biased estimators can have a lower MSE than unbiased estimators). From an optimization perspective, $\lambda$ is a Lagrange multiplier; for each value it takes, a different constraint region centered on the origin is enforced, and the estimate is the point where the likelihood contours touch the constraint boundary:

p = 1 (LASSO) & p = 2 (Ridge)

… but this can also be interpreted as placing Laplace ($p = 1$) or Gaussian ($p = 2$) priors on the parameters and obtaining a MAP solution.
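
As a quick check of this interpretation, here's a minimal R sketch with simulated data and a made-up penalty strength, comparing the closed-form ridge solution to the mode of the corresponding posterior (Gaussian likelihood with unit noise variance, independent N(0, 1/λ) priors on the coefficients):

```r
set.seed(1)
n <- 100; p <- 5
X <- matrix(rnorm(n * p), n, p)
beta_true <- c(2, -1, 0, 0, 0.5)
y <- X %*% beta_true + rnorm(n)

lambda <- 5  # penalty strength = prior precision (assuming unit noise variance)

# Closed-form ridge estimate: (X'X + lambda * I)^(-1) X'y
beta_ridge <- drop(solve(crossprod(X) + lambda * diag(p), crossprod(X, y)))

# Negative log posterior (up to constants): Gaussian likelihood + N(0, 1/lambda) prior
neg_log_post <- function(beta)
  0.5 * sum((y - X %*% beta)^2) + 0.5 * lambda * sum(beta^2)

beta_map <- optim(rep(0, p), neg_log_post, method = "BFGS")$par

round(cbind(ridge = beta_ridge, map = beta_map), 4)  # agree up to optimizer tolerance
```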

Model Laws

By constructing logical model laws, one can obtain amazing results with probability theory. Examples:

  1. We can reason about the probability of failure in a repeated Bernoulli experiment even if no failures are observed! Assume that a process represented by $X_i \sim \text{Bernoulli}(\theta)$ produces a set of $n$ successes and no failures. We can still construct a posterior for the failure probability by writing out the likelihood of this model, $\theta^n$. This posterior would obviously include zero (and the highest mass would be there), but an interval with a 5% error rate would still have a non-trivial upper bound, roughly $3/n$ (the "rule of three"); see the sketch after this list.

  2. When dealing with censored data, we can re-frame the problem and derive a likelihood. Here's my attempt at that:

    Let a random variable $X$ have a cumulative distribution function $F_X$ and density $f_X$.

    Now, let $Y$ be the version of $X$ right-censored at a fixed point $c$:

    $$Y = \min(X, c)$$

    Then, it's easy to see that the cumulative distribution function of $Y$ is:

    $$F_Y(y) = \begin{cases} F_X(y) & y < c \\ 1 & y \geq c \end{cases}$$

    Let $\mu$ be a reference measure that is the sum of the Lebesgue measure and the Dirac measure on the set $\{c\}$. Then, the density of $Y$ with respect to $\mu$ will be a function $f_Y$ (the Radon-Nikodym derivative) such that:

    $$P(Y \in A) = \int_A f_Y \, d\mu$$

    A function that satisfies the form above is:

    $$f_Y(y) = \begin{cases} f_X(y) & y < c \\ 1 - F_X(c) & y = c \\ 0 & \text{otherwise} \end{cases}$$
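
For example 1 above, here's a minimal R sketch under an assumed uniform prior and a made-up number of trials: after $n$ successes and zero failures, the posterior for the failure probability is Beta(1, n + 1), and its 95% upper bound is close to the classic 3/n rule of thumb.

```r
n <- 50  # hypothetical number of trials, all successes, zero failures

# Posterior of the failure probability under a uniform Beta(1, 1) prior: Beta(1, n + 1)
upper_95 <- qbeta(0.95, shape1 = 1, shape2 = n + 1)

c(posterior_95_upper_bound = upper_95, rule_of_three = 3 / n)
# both are roughly 0.06: zero observed failures still leaves a non-trivial failure probability
```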

Dimensionality

Volumes in High Dimensions

A lot of probability theory uses the idea of volumes. For example, probabilities and expectations are usually defined as:

$$P(X \in A) = \int_A p(x) \, dV, \qquad \mathrm{E}[f(X)] = \int f(x) \, p(x) \, dV$$

… where the differential element $dV$ is usually a volume and $p$ is a density. But volumes are really weird in higher dimensions.

The main idea is that our intuitions about distance no longer make sense. Consider the following: I simulated sets of random normal points in many dimensions and plotted the average ratio of the distance to the nearest point to the distance to the farthest point (R code at the end of this post):

This ratio tends to one, implying that the nearest and farthest points are very “close” together / all points “look alike”.

On the same subject, consider a hypercube circumscribing a sphere of unit radius (so the cube has sides of length two) as the dimension grows:

The corners of the cube keep more and more of the volume away from the sphere. In fact, the volume of the unit-radius sphere itself starts decreasing after five dimensions.
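
Both effects are easy to compute directly: the volume of the unit-radius ball in $d$ dimensions is $\pi^{d/2} / \Gamma(d/2 + 1)$, and the circumscribing cube has volume $2^d$. A short R sketch:

```r
d <- 1:20

# Volume of the unit-radius ball in d dimensions
ball_vol <- pi^(d / 2) / gamma(d / 2 + 1)

# Fraction of the circumscribing cube (side length 2) occupied by the ball
fraction <- ball_vol / 2^d

round(cbind(d, ball_vol, fraction), 4)
# ball_vol peaks at d = 5 and then shrinks; the fraction collapses towards zero
```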

All of this has huge implications for computational statistics, as our sampling methods rely on a density defined on some volume. It also affects many other areas, for example optimization and cluster analysis, because points become harder and harder to tell apart as we move into higher dimensions.

Effect on Likelihoods

This discussion leads to one very interesting observation: the height of the density at a point is not a good measure of typicality beyond the first dimension.

Consider the following: in one dimension, if I draw from a standard normal distribution, numbers close to zero are the most reasonable guess at what the draw could be. In higher dimensions, this isn't really the case: if I draw from a 1000-dimensional standard normal, it's ridiculously unlikely that I'd see a vector close to (0, 0, 0, …); a more reasonable guess would be "a vector whose radius is around $\sqrt{1000} \approx 32$".

A very good analogy I've read on CrossValidated goes like this: say that the weights, heights, eyesight, athleticism, intelligence, … of people are each normally distributed. If I picked a random person on the street and asked their weight, it would be reasonable to expect something around the average. But if I asked that person about their height, weight, eyesight, and everything else, it's very unlikely that they'd turn out to be average at everything. I'm a bit heavier, about average in height, and more bookish than most people, for example.
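
A minimal simulation of the radius claim above (the dimension and number of draws are arbitrary choices):

```r
set.seed(1)
d <- 1000

# Norms of draws from a d-dimensional standard normal concentrate around sqrt(d)
radii <- replicate(1000, sqrt(sum(rnorm(d)^2)))

c(sqrt_d = sqrt(d), mean_radius = mean(radii), sd_radius = sd(radii))
# the radii sit in a thin shell near sqrt(1000) ~ 31.6; none of them are anywhere near zero
```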

Now consider the likelihood of an isolated repeated experiment:

$$L = \prod_{i = 1}^{n} p(x_i \mid \theta)$$

This is a function of many random variables (which collectively have dimension $n$), so it has its own sampling distribution, and the value it attains when maximized w.r.t. the $x$'s isn't a typical value of the likelihood at all.

Example:

```r
# hist_fancy is a custom plotting wrapper; base hist() would plot this just as well
hist_fancy(replicate(10000, sum(log(dnorm(rnorm(100))))),
           main = "Typical Values of the Loglikelihood", xlab = "ll")
```

The maximum possible value of the loglikelihood in this case is about -91.9, but the realized values are almost always much lower than that; values around -140 are typical for this experiment.
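
Where do these numbers come from? With $n = 100$ i.i.d. standard normals, the loglikelihood is maximized when every $x_i = 0$, while its expectation is $n$ times the negative entropy of the standard normal:

$$\max_{x} \sum_{i=1}^{100} \log \phi(x_i) = -\frac{100}{2} \log(2\pi) \approx -91.9, \qquad \mathrm{E}\!\left[\sum_{i=1}^{100} \log \phi(X_i)\right] = -\frac{100}{2}\left(1 + \log 2\pi\right) \approx -141.9.$$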

We call the values that lie in this region of typicality the typical set, and its geometry is expected to be weird. From the earlier results, we expect it to be a thin layer of mass lying on a shell somewhere in a high-dimensional space.

From Andrew Gelman's blog: interpolating between any two points of the typical set is rather unlikely to result in another typical sample. This is apparently why optimizers and "ensemble methods" (ones which initialize a bunch of points and move them around till they look like a sample from a density) are a bad way to sample and fail in higher-dimensional spaces. It's also why HMC is said to be so powerful: to sample from an arbitrary density, we can't rely on grid- or ensemble-based methods, so a logical alternative is to start with one sample and obtain another from it, which leads naturally to the idea of Markov chain Monte Carlo, at which the Hybrid/Hamiltonian method is rather efficient.

Michael Betancourt has an even more interesting example: instead of sampling, he first reparameterizes the target density and integrates out all of the nuisance parameters to obtain the marginal distribution of the radius of a set of normals. See the booklet "Logic, Probability, and Bayesian Inference" by Betancourt & the Stan Dev Team.

I believe that the sampling distribution of a model's log-likelihood can be very interesting as a comparison and validation tool, and that there's a link between MacKay's Bayesian Occam's razor and these sampling distributions.

R Code for Dimensionality Image
```r
library(mvtnorm); library(ggplot2)

# Average (over points) ratio of nearest-neighbour to farthest-neighbour distance
# for a sample from a d-dimensional standard normal
dim_dist <- function(d, samp_size = 10, ...){
  samp <- rmvnorm(samp_size, mean = rep(0, d), sigma = diag(1, d, d))
  dists <- as.matrix(dist(samp, diag = TRUE, upper = TRUE, ...))
  return(mean(sapply(1:samp_size, function(i) min(dists[-i, i]) / max(dists[-i, i]))))
}

qplot(x = 1:250,
      y = sapply(1:250, function(d) dim_dist(d)),
      main = "Euclidean Distance in High Dimensions",
      ylab = "Average Ratio of Nearest to Furthest Point",
      xlab = "Dimension", color = I("dark gray"), ylim = c(0, 1)) +
      geom_hline(yintercept = 1)
```

