Nonlinearity: why it makes sense to think big
Thinking big has been widely discussed from the "attitude" perspective. However, what fascinates me are some physical observations that show it actually makes sense to have big…
A notebook
Essays, notes, and experiments on machine learning, AI, and the non-linear nature of things — by Hamidreza Saghir.
Featured
All posts →
Recent writing
Archive →
Barbara Minto's pyramid: put the answer at the top, let the reader's questions drive the hierarchy, and choose deduction or induction at each branch.
machine learning · unified views
From spectral clustering to Gaussian processes to transformer attention — the same primitive, a similarity matrix between points, keeps showing up as the load-bearing piece of very different models.
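A minimal sketch of that shared primitive, with illustrative function names of my own (the post's actual examples may differ): the same n×n similarity matrix appears as a Gaussian kernel in spectral clustering and Gaussian processes, and as a row-normalized score matrix in transformer attention.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gaussian similarity matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2),
    the building block of spectral clustering affinities and GP priors."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq_dists)

def attention_weights(Q, K):
    """Transformer attention: a similarity matrix Q K^T / sqrt(d),
    row-normalized with softmax so each row is a distribution over points."""
    scores = Q @ K.T / np.sqrt(K.shape[1])
    scores -= scores.max(axis=1, keepdims=True)  # subtract row max for stability
    w = np.exp(scores)
    return w / w.sum(axis=1, keepdims=True)
```

Both functions take a set of points and return an n×n matrix of pairwise similarities; the models differ mainly in what they do with that matrix afterwards.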
BFS, Dijkstra, and A* differ by one line: the data structure you pop the next node from. A worked maze example that converts each into the next.
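A hedged sketch of that one-line difference (my own generic loop, not code from the post): one search routine whose behavior depends only on the priority used when pushing onto the frontier, demonstrated on a small open 3×3 grid.

```python
import heapq

def search(start, goal, neighbors, cost=lambda a, b: 1, heuristic=None):
    """Generic best-first search. With unit costs and no heuristic the pop
    order matches BFS; priority = d gives Dijkstra; d + h(n) gives A*."""
    frontier = [(0, start)]  # (priority, node); stale entries are skipped lazily
    dist = {start: 0}
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            return dist[node]
        for nxt in neighbors(node):
            d = dist[node] + cost(node, nxt)
            if nxt not in dist or d < dist[nxt]:
                dist[nxt] = d
                priority = d + (heuristic(nxt) if heuristic else 0)
                heapq.heappush(frontier, (priority, nxt))
    return None

# Worked example: shortest path across an open 3x3 grid.
def grid_neighbors(p):
    x, y = p
    return [(x + dx, y + dy)
            for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if 0 <= x + dx < 3 and 0 <= y + dy < 3]

steps = search((0, 0), (2, 2), grid_neighbors)  # BFS/Dijkstra on unit costs
astar = search((0, 0), (2, 2), grid_neighbors,
               heuristic=lambda p: (2 - p[0]) + (2 - p[1]))  # A*, Manhattan h
```

The frontier here is a single heap for brevity; a literal BFS would use a FIFO queue, but with unit edge costs the heap pops nodes in the same order.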
Backpropagation, belief propagation, the Viterbi algorithm, and matrix-chain multiplication all solve the same problem: summing over exponentially many paths in a graph by reusing work.
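A minimal sketch of that shared trick, under my own toy setup (a weighted chain, not code from the post): the sum over all S^T state paths of a product of edge weights collapses into T−1 matrix-vector products by reusing partial sums, exactly the message-passing step in belief propagation (swap sum for max and it becomes Viterbi).

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
T, S = 5, 3                    # chain length, states per step
A = rng.random((T - 1, S, S))  # A[t][i, j]: edge weight from state i to j

# Brute force: enumerate all S**T paths and sum their weight products.
brute = sum(
    np.prod([A[t][p[t], p[t + 1]] for t in range(T - 1)])
    for p in product(range(S), repeat=T)
)

# Dynamic programming: sweep backward, reusing the sum over all suffixes.
msg = np.ones(S)                       # base case: empty suffix has weight 1
for t in reversed(range(T - 1)):
    msg = A[t] @ msg                   # msg[i] = total weight of paths from i
dp = msg.sum()                         # the two computations agree
```

The brute-force loop touches 3^5 = 243 paths; the DP does the same sum with four matrix-vector products, which is the entire reason these algorithms scale.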
I love the simplicity of autoencoders as a very intuitive unsupervised learning method. In the simplest case, they are a three-layer neural network. In the first layer the data…
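That simplest case can be sketched in a few lines of NumPy (layer sizes, learning rate, and iteration count are illustrative choices of mine, not from the post): an input layer, a narrow hidden layer, and a reconstruction layer, trained by gradient descent on squared error.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 8, 3                         # compress 8 dims into 3
W1 = rng.normal(0, 0.1, (n_in, n_hidden))     # encoder weights
W2 = rng.normal(0, 0.1, (n_hidden, n_in))     # decoder weights

def forward(X):
    H = np.tanh(X @ W1)       # hidden layer: the compressed code
    return H, H @ W2          # output layer: linear reconstruction

X = rng.normal(size=(100, n_in))
H, Xhat = forward(X)
loss_before = np.mean((Xhat - X) ** 2)

lr = 0.05
for _ in range(500):
    H, Xhat = forward(X)
    err = Xhat - X                       # gradient of squared error w.r.t. Xhat
    gW2 = H.T @ err / len(X)
    gH = err @ W2.T * (1 - H ** 2)       # backprop through tanh
    gW1 = X.T @ gH / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2

H, Xhat = forward(X)
loss_after = np.mean((Xhat - X) ** 2)    # lower than loss_before
```

Because the hidden layer is narrower than the input, the network cannot simply copy its input; minimizing reconstruction error forces it to learn a compressed representation.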
What is a neural network? To get started, it's helpful to keep in mind that modern neural networks began as an attempt to model the way the brain performs computations. We…
I typically use my computers at home to connect to my work computer. I set up xRDP to remote-desktop into my work computer (Linux), which is OK but slow at times depending on the…