
My most recent interactive project, Neurotic Neurons, has a lot of simplifications. I think this can be a good thing - a street map is useful not just despite simplifying the city, but because it simplifies the city. Likewise, this model throws away details that may distract from learning about the core principles of anxiety and therapy. Nevertheless, for the sake of intellectual honesty, here's everything I know I lied about, why I simplified them the way I did, and what I know I don't even know.

1. Thoughts don’t live in individual neurons. You have no “dog” neuron or “pain” neuron. I was using those phrases as linguistic shorthand, because explaining the world through symbolic spoken language is like eating a steak through a straw.

I wish I didn't have to explicitly debunk the idea that thoughts live inside individual neurons (to be fair, nobody knows how thoughts emerge from neural connections, they just do), but considering crappy popsci like this, and the fact so many people still believe that “we only use 10% of our brain”, I just gotta cover all my bases here.

2. One neuron doesn't directly cause the next connected neuron to fire - instead, it raises (or lowers) the next neuron's action potential, merely making it more likely (or less likely) to fire. However, adding unpredictability to a model makes it harder to learn, and if I could get away without it, that's fine. My goal with Neurons was teachin' peeps the general gist, rather than specific details. However, making my model deterministic meant I had to add another inaccuracy:

3. Neural signals don't have varying strengths. All neural electrical signals have more-or-less the exact same voltage. What real neurons do to vary the intensity of a nerve signal is change the frequency of signals: getting poked sends a few signals per millisecond, getting punched sends a lot more signals per millisecond. Anyway, since I made my model deterministic, I couldn't use probability to make signals "die out". So, I pretended that signals get weaker the more they're passed down from neuron to neuron, and that only the strongest signals trigger the Hebbian & Anti-Hebbian learning rules.
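Here's a toy sketch of that workaround - to be clear, this isn't the project's actual code; the `propagate()` helper, the decay factor, and the learning threshold below are all made up just to show the idea:

```python
LEARN_THRESHOLD = 0.5  # below this, a firing is too weak to trigger learning
DECAY = 0.75           # fraction of signal strength that survives each hop

def propagate(chain, strength=1.0):
    """Push one deterministic signal down a chain of neurons, weakening it
    at every hop, and return which neurons fired strongly enough for the
    Hebbian & Anti-Hebbian rules to apply."""
    strong_enough = []
    for neuron in chain:
        if strength >= LEARN_THRESHOLD:
            strong_enough.append(neuron)
        strength *= DECAY  # the lie: strength fades as the signal is passed on
    return strong_enough

print(propagate(["poke", "A", "B", "C"]))  # ['poke', 'A', 'B']
```

A real neuron, by contrast, fires all-or-nothing spikes, encodes "louder" inputs by spiking more often, and only makes its neighbours more (or less) likely to fire, rather than guaranteeing it.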

4. The Hebbian & Anti-Hebbian rules themselves are actually pretty good, but sort of incomplete. Hebbian & Anti-Hebbian Learning, even though proposed long before we knew much about the anatomy of neurons, turned out to have a solid biological basis! It's called spike-timing-dependent plasticity (STDP), one of many things I don't actually know deeply about. Anyway, the Hebb & Anti-Hebb rules also explain a lot of the weird results from classical conditioning experiments, even explaining why backwards conditioning doesn't work, and extinction of conditioned responses. (Given a connection A→B, firing B without A weakens the connection, since A no longer "predicts" B.) There's a tiny sketch of these rules at the bottom of this post. What those rules don't explain, however, is the Zero Contingency Procedure. I don't know what model explains that, which brings me to my final bulletpoint:

5. …that's a connection from neuron A to neuron B, where A *anti-*predicts B? If A fires, make B firing less likely. Not sure how that kind of connection gets learnt. And finally, while it would be nice…

Everything I Know I Don't Know

As I just mentioned, I don't know what model explains the Zero Contingency Procedure. Maybe some variant on the Anti-Hebbian rule? Also, other than the most basic details I gleaned from Crash Course, I don't know much about neural anatomy, especially not the actual chemical mechanism by which Hebb/Anti-Hebb/STDP works. Furthermore, I don't know where or how inhibitory neural connections come into play.
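And here's the Hebb & Anti-Hebb idea from bulletpoint 4, boiled down to a toy sketch. Again, this isn't the project's real code: it ignores the precise spike timing that STDP cares about, and the `Connection` class, its starting weight, and the learning rate are all made up for illustration.

```python
class Connection:
    """One A→B connection whose weight is nudged by the two rules."""

    def __init__(self, weight=0.5, rate=0.125):
        self.weight = weight
        self.rate = rate

    def observe(self, a_fired, b_fired):
        """Update the A→B weight after one moment of activity."""
        if a_fired and b_fired:
            self.weight += self.rate   # Hebb: A "predicted" B, so strengthen
        elif b_fired and not a_fired:
            self.weight -= self.rate   # Anti-Hebb: B fired without A, so weaken
        self.weight = max(0.0, min(1.0, self.weight))  # keep it in [0, 1]

a_to_b = Connection()
for _ in range(5):                     # conditioning: A and B keep firing together
    a_to_b.observe(a_fired=True, b_fired=True)
print(a_to_b.weight)                   # 1.0 - the association is learned

for _ in range(5):                     # extinction: B keeps firing without A
    a_to_b.observe(a_fired=False, b_fired=True)
print(a_to_b.weight)                   # 0.375 - the association fades again
```

Backwards conditioning falls out of the same sketch: if B fires before A ever does, those moments look like "B without A", so the connection never gets built up in the first place.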
