This post was first shown on Zach's Hackaday Projects page and is available here. As noted in that entry, the unmodified text comes from Andrew Salveson in early 2014.
A previous note from Zach: Andrew wrote this up in early April of 2014. I've pasted it here without modification.
Zach asked me to write a little about the prehistory of the Neuron project, so here goes.
This Neuron project goes back a few years, to architecture grad school and a brief obsession with the idea of computer-generated architecture. I wrote programs that generated forms based on random walks, fractal games, and genetic iterators, but what I really wanted was to mimic the accidents and discoveries of design by writing a program that would iterate process rather than form. Instead of always generating form with fractals or by applying a pattern, I wanted to codify the decision to use a tool in the first place, or to change the order of operations performed. I could write code myself and change it myself, but I wanted code that would rewrite itself.
I began casting around for a way to generate randomized algorithms that were guaranteed to run. The solution as I conceived it was to simplify everything to binary--to treat functions as nodes, normalize their inputs and outputs to either 1 or 0, and then generate a random directed graph from a collection of function nodes, creating random loops and sequences of functions.
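The graph-generation idea can be sketched in a few lines of Ruby. This is a minimal illustration, not the original program; the `Node` struct, the fan-out parameter, and all names here are my own assumptions.

```ruby
# Sketch: build a random directed graph of binary 'function nodes'.
# Node structure and wiring strategy are illustrative assumptions,
# not taken from the original SketchUp program.
Node = Struct.new(:id, :downstream)

def random_graph(count, fan_out, rng: Random.new)
  nodes = (0...count).map { |i| Node.new(i, []) }
  nodes.each do |node|
    # Wire each node to a few randomly chosen targets. Repeats and
    # self-loops are allowed, which is what produces the random
    # loops and sequences described above.
    fan_out.times { node.downstream << nodes[rng.rand(count)] }
  end
  nodes
end

graph = random_graph(12, 3)
```

Because every edge points at an existing node and every node's behavior is normalized to 1 or 0, any wiring the generator produces is a valid, runnable network--which was the point of reducing everything to binary.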
I'm not sure what led me to start thinking of the nodes as neurons--maybe I saw something like Randall Beer's Periplaneta computatrix and recognized its applicability; I also had a friend who was getting a degree in physiology, with whom I probably had a few conversations about biopsychology.
Whatever the motivation, I wrote a program in Ruby using the SketchUp API that would generate a random cloud of 'neurons.' I didn't want the code to execute all at once, and I wanted it to step slowly enough that I could watch it, so I wrote an environment that kept track of each 'neuron,' its upstream and downstream connections, and the messages it was sending. I then had the environment generate a model of the neurons in SketchUp so that I could see the connections, and had it color the neurons according to their potential. On each step of the environment, every neuron's inputs were summed and checked against its internal threshold; if the potential (the sum of the inputs) was greater than the threshold, the neuron sent a message to all neurons downstream.
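The step rule described above--sum the inputs, compare to a threshold, fire downstream--can be sketched in plain Ruby. This is a reconstruction under my own assumptions (class and method names are mine, and the SketchUp visualization is omitted), not the original code.

```ruby
# Sketch of one environment step: sum each neuron's inbox, compare
# against its threshold, and send messages downstream if it fires.
# Names are illustrative; the original ran inside SketchUp.
class Neuron
  attr_reader :threshold, :downstream
  attr_accessor :inbox

  def initialize(threshold)
    @threshold = threshold
    @downstream = []
    @inbox = []   # binary messages (1s) received during the last step
  end
end

def step(neurons)
  # Decide who fires from the current inboxes first, so every neuron
  # sees the same step's messages (a synchronous update).
  firing = neurons.select { |n| n.inbox.sum > n.threshold }
  neurons.each { |n| n.inbox = [] }
  # Each firing neuron sends a message to everything downstream.
  firing.each { |n| n.downstream.each { |d| d.inbox << 1 } }
  firing
end
```

Stepping the whole population at once, rather than letting each neuron update as soon as it receives a message, is what makes it possible to pause the environment and watch activity propagate one tick at a time.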
Some of the neurons were designated 'actor neurons,' and when activated would move the insertion point or drop geometry into the 3D SketchUp environment. Some neurons were designated 'input neurons,' and would receive messages from the environment.
Results were positive--the process certainly generated things, and each generation of randomized algorithm certainly had its own personality.
The neurons and the networks they created became more interesting to me than what they actually produced. I read more about neurons, trying to understand what made them work in our own brains and the brains of animals, and how the model might better mimic organic neurons.
I learned about excitatory and inhibitory inputs, long-term potentiation, and Rosenblatt's Perceptron, and started to refine the model to be more generalized.
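One way that generalization can be pictured: replace the all-or-nothing binary inputs with signed weights, so an input can be excitatory (positive) or inhibitory (negative), as in Rosenblatt's perceptron. The sketch below is my own illustration of that idea, not code from the project.

```ruby
# Illustrative generalization: a weighted sum of inputs, where a
# negative weight makes a connection inhibitory. Names are mine.
def activate?(inputs, weights, threshold)
  # Dot product of inputs and weights gives the potential;
  # the unit fires only if the potential exceeds the threshold.
  potential = inputs.zip(weights).sum { |x, w| x * w }
  potential > threshold
end

activate?([1, 0], [1.0, -2.0], 0)   # excitatory input alone fires
activate?([1, 1], [1.0, -2.0], 0)   # inhibitory input suppresses firing
```

Setting every weight to 1.0 recovers the original binary model, which is what makes this a strict generalization rather than a different mechanism.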
Early in 2014 I decided I wanted to make physical versions of the software neurons. I knew absolutely nothing about electronics. Zach and I got to talking at a gathering; he saw potential in the idea and proposed we work together.