posts
just trying to learn and write
Categories: All (9), cs (1), deep learning (7), exercises (3), paper (4), quick intro (1), wolverine events (1)
TENT: Fully Test-Time Adaptation By Entropy Minimization
An attempted (partial) paper reproduction
deep learning, paper
Once a model is deployed, the feature (covariate) data distribution might shift from the one seen during training. These shifts push models out of distribution and worsen…
Dec 29, 2024 · 3 min
Building a university event embedding recommender
Part 1: MVP
wolverine events
I am currently a grad student at the University of Michigan and like to attend seminars, lectures, and social events when I can. However, I often attend the same weekly…
Oct 11, 2024 · 7 min
RNNs from scratch exercises
deep learning, exercises
Where I build up intuition for RNNs and attempt to solve the exercises in section 9.5 of the d2l book from scratch in PyTorch (without using the d2l library).
Sep 27, 2024 · 7 min
A Closer Look at Memorization in Deep Networks
An attempted (partial) paper reproduction
deep learning, paper
This paper argues that memorization is a behavior exhibited by networks trained on random data: in the absence of patterns, they can only rely on remembering examples…
Sep 7, 2024 · 4 min
Approximate Nearest Cosine Neighbors
cs, quick intro
Using Random Hyperplane LSH
Aug 9, 2024 · 4 min
Understanding Batch Normalization
An attempted (partial) paper reproduction
deep learning, paper
The paper experimentally investigates the cause of batch norm's benefits. The authors show that its main benefit is enabling larger learning rates during training. In…
Jul 17, 2024 · 8 min
Batch Norm exercises
deep learning, exercises
Where I attempt to solve the exercises in section 8.5 of the d2l book from scratch in PyTorch (without using the d2l library).
Jul 2, 2024 · 6 min
LeNet exercises
deep learning, exercises
Where I attempt to solve the exercises in section 7.6 of the d2l book from scratch in PyTorch (without using the d2l library).
Jun 21, 2024 · 3 min
Deep Learning is Robust to Massive Label Noise
An attempted (partial) paper reproduction
deep learning, paper
The paper shows that neural networks can keep generalizing when large numbers of (non-adversarially) incorrectly labeled examples are added to datasets (MNIST, CIFAR, and…
Jun 18, 2024 · 4 min