-
Quantization for Neural Networks
a walkthrough of PTQ and QAT techniques for efficient on-device inference
-
Enforcing the Lipschitz Constant in Neural Networks
spectral normalization and other techniques for bounding the Lipschitz constant of neural networks
-
Langevin Dynamics for Bayesian Inference
connecting stochastic differential equations and the Fokker-Planck equation to Bayesian posterior sampling
-
Understanding and Implementing Asymmetric Numeral Systems (ANS)
from the theory of Asymmetric Numeral Systems to a working Python implementation
-
Arithmetic Coding (AC) Implementation
a from-scratch Python implementation of arithmetic coding for lossless data compression
-
Markov Chain Monte Carlo: Gibbs, Metropolis-Hastings, and Hamiltonian
a hands-on introduction to Gibbs sampling, Metropolis-Hastings, and Hamiltonian Monte Carlo
-
Normalizing Flows: understanding the change of variables equation
understanding how invertible transformations enable exact likelihood computation in normalizing flows
-
Understanding conventional HMM-based ASR training
how acoustic models, pronunciation lexicons, and language models combine in classical HMM-based speech recognition
-
Comparison of end-to-end ASR models
a probabilistic perspective on CTC, RNN-Transducer, and attention-based encoder-decoder models
-
Gumbel max and Gumbel softmax
the Gumbel-max trick, its continuous relaxation via Gumbel-softmax, and applications to discrete latent variables
-
An introduction to the Kalman filter and the particle filter
state estimation in linear and nonlinear dynamical systems via Kalman and particle filtering
-
Griffin-Lim algorithm for waveform reconstruction
an iterative phase estimation algorithm for reconstructing waveforms from magnitude spectrograms
-
A step-by-step guide to variational inference (4): variational autoencoder
how amortized variational inference gives rise to the encoder-decoder architecture of the VAE
-
A step-by-step guide to variational inference (3): mean-field approximation
approximating intractable posteriors by assuming factorized independence across latent variables
-
A step-by-step guide to variational inference (2): expectation-maximization
deriving the EM algorithm as coordinate ascent on the ELBO with a closed-form posterior
-
A step-by-step guide to variational inference (1): variational lower bound
deriving the ELBO from KL divergence and understanding why it underlies variational inference
-
A solution manual to The Elements of Statistical Learning (ESL)
solutions to selected exercises from Hastie, Tibshirani & Friedman's classic machine learning textbook