Published 9 months ago by Emergent Garden

Gradient Descent vs Evolution | How Neural Networks Learn

Explore two learning algorithms for neural networks: stochastic gradient descent and an evolutionary algorithm known as local search. They fundamentally solve the same problem in similar ways, but one has the advantage. Step by step they find a way down Loss Mountain. Watch real neural networks maximize the fitness of curve fitting. We've got Dogson here! (A short code sketch contrasting the two learners follows the timestamps below.)

Special thanks to Andrew Carr and Josh Greaves for reviewing this with their human neurons, and to the artificial neurons of Grok, o3-mini, and Claude. Grok thought the gay joke was funny; o3 thought it wasn't inclusive lol. It is inclusive!

~Webtoys~
Hill Climbers:
Neuron Tuner:

Subscribe to my music guy NOW:

~Links~
Patreon:
Kofi:
My Twitter:
My Bluesky:
My Other NN videos:
Webtoy Source:
Animation Source:
Image Approximators:
FUNCTIONS DESCRIBE THE WORLD:
Dawkins Climbing Mount Improbable:
But he's gay:

~Citations~
Unfortunately many of these are behind paywalls.
NNs are Universal Function Approximators: ~epxing/Class/10715/reading/
Backpropagation:
Loss Surfaces of MLPs:

~Timestamps~
(0:00) Learning Learning
(1:20) Neural Network Space
(3:40) The Loss Landscape
(7:21) The Blind Mountain Climber
(8:37) Evolution (Local Search)
(13:07) Gradient Descent
(18:40) The Gradient Advantage
(20:48) The Evolutionary (dis)advantage
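
~Code Sketch~
A minimal sketch (not the video's own code; every name and hyperparameter here is an illustrative assumption) of the two learners on the same curve-fitting task: a tiny one-hidden-layer network approximating sin(x). For brevity it uses plain full-batch gradient descent rather than stochastic gradient descent, and a single-parent hill climber as the evolutionary local search.

import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)   # training inputs
Y = np.sin(X)                                        # target curve to fit

def init_params(hidden=16):
    return [rng.normal(0, 0.5, (1, hidden)),   # input -> hidden weights
            np.zeros(hidden),                  # hidden biases
            rng.normal(0, 0.5, (hidden, 1)),   # hidden -> output weights
            np.zeros(1)]                       # output bias

def forward(p, x):
    w1, b1, w2, b2 = p
    h = np.tanh(x @ w1 + b1)                   # hidden activations
    return h @ w2 + b2, h

def loss(p):
    pred, _ = forward(p, X)
    return np.mean((pred - Y) ** 2)            # mean squared error: height on "Loss Mountain"

# Evolution as local search (hill climbing): mutate the weights, keep the child only if it is fitter.
def local_search(steps=20000, sigma=0.05):
    p = init_params()
    best = loss(p)
    for _ in range(steps):
        child = [w + rng.normal(0, sigma, w.shape) for w in p]   # blind random mutation
        l = loss(child)
        if l < best:                            # greedy selection: step downhill only if the guess helped
            p, best = child, l
    return best

# Gradient descent: read the slope of the loss surface directly (backprop written out by hand).
def gradient_descent(steps=20000, lr=0.05):
    p = init_params()
    for _ in range(steps):
        w1, b1, w2, b2 = p
        pred, h = forward(p, X)
        d_out = 2 * (pred - Y) / len(X)         # dL/d(pred)
        g_w2 = h.T @ d_out
        g_b2 = d_out.sum(0)
        d_h = (d_out @ w2.T) * (1 - h ** 2)     # backprop through tanh
        g_w1 = X.T @ d_h
        g_b1 = d_h.sum(0)
        p = [w1 - lr * g_w1, b1 - lr * g_b1, w2 - lr * g_w2, b2 - lr * g_b2]
    return loss(p)

print("local search MSE:    ", local_search())
print("gradient descent MSE:", gradient_descent())

Both walk down the same loss landscape: the hill climber samples random directions and keeps only improvements, while gradient descent follows the slope computed by backpropagation, which is why it typically reaches a low loss in far fewer steps.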