Stochastic Neural Analog Reinforcement Calculator (SNARC)
The Stochastic Neural Analog Reinforcement Calculator (SNARC) is a neural-net machine designed by Marvin Lee Minsky.[1][2] Prompted by a letter from Minsky, George Armitage Miller gathered funding for the project from the Air Force Office of Scientific Research in the summer of 1951, with the work to be carried out by Minsky, who was then a graduate student in mathematics at Princeton University. Dean S. Edmonds, at the time a physics graduate student at Princeton,[3] volunteered that he was good with electronics, so Minsky brought him onto the project.[4]
During his undergraduate years, Minsky had been inspired by the 1943 Warren McCulloch and Walter Pitts paper on artificial neurons and decided to build such a machine. The learning rule was Skinnerian reinforcement learning, and Minsky talked with Skinner extensively during the development of the machine. They tested the machine on a copy of Shannon's maze and found that it could learn to solve the maze. Unlike Shannon's maze, this machine did not control a physical robot; instead it simulated rats running in a maze. The simulation was displayed as an "arrangement of lights", and the circuit was reinforced each time a simulated rat reached the goal, as illustrated by the sketch below.
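The reward-at-the-goal scheme can be pictured in a few lines of code. The following is only an illustrative sketch, not Minsky's circuit or a description of SNARC's actual wiring: a simulated rat walks a small grid maze, each (cell, move) choice carries a probability, and every choice made on a successful run is strengthened when the "reward" is given. The grid size, reward step, and all function names are assumptions made for the example.

```python
import random

GOAL = (3, 3)                       # assumed 4x4 grid with goal in a corner
MOVES = [(0, 1), (0, -1), (1, 0), (-1, 0)]
prefs = {}                          # one probability per (cell, move) pair

def prob(cell, move):
    # Unvisited choices start with an equal, uniform preference.
    return prefs.setdefault((cell, move), 0.25)

def run_trial(start=(0, 0), max_steps=100, reward_step=0.05):
    cell, path = start, []
    for _ in range(max_steps):
        weights = [prob(cell, m) for m in MOVES]
        move = random.choices(MOVES, weights=weights)[0]
        path.append((cell, move))
        cell = (min(3, max(0, cell[0] + move[0])),
                min(3, max(0, cell[1] + move[1])))
        if cell == GOAL:
            # "Reward button": strengthen every choice made on this run.
            for c, m in path:
                prefs[(c, m)] = min(1.0, prob(c, m) + reward_step)
            return True
    return False

if __name__ == "__main__":
    successes = sum(run_trial() for _ in range(200))
    print(f"goal reached in {successes} of 200 trials")
```

Over repeated trials, choices that lie on successful paths accumulate higher probabilities, so the simulated rat tends to repeat routes that previously reached the goal.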
The machine surprised its creators. "The rats actually interacted with one another. If one of them found a good path, the others would tend to follow it."[4]
The machine itself is a randomly connected network of approximately 40 Hebb synapses. Each synapse has a memory that holds the probability that a signal arriving at its input will be passed on to its output. A knob, graduated from 0 to 1, displays this probability of the signal propagating. If a signal does get through, a capacitor remembers this and engages an electromagnetic clutch. The operator can then press a button to reward the machine. This activates a motor taken from a surplus Minneapolis-Honeywell C-1 gyroscopic autopilot from a B-24 bomber.[5] The motor turns a chain that runs to all 40 synapse machines and, wherever a clutch is engaged, adjusts that synapse's probability knob. Because the capacitor can only "remember" for a certain amount of time, the chain catches only the most recently active synapses when updating the probabilities.
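To make the knob, capacitor, and clutch interaction concrete, here is a minimal software analogue of a single synapse. This is a hedged sketch under stated assumptions, not a model of the actual hardware: the class name, decay constant, reward step, and threshold are invented for illustration, and the electromechanical parts are replaced by a decaying activity trace.

```python
import random

class Synapse:
    """Software stand-in for one SNARC synapse (illustrative only)."""

    def __init__(self, p=0.5, trace_decay=0.8):
        self.p = p                    # knob position: chance a signal propagates
        self.trace = 0.0              # capacitor charge: recent-activity memory
        self.trace_decay = trace_decay

    def fire(self, signal_in):
        # Propagate an input signal with probability p; charge the
        # "capacitor" (engage the clutch) whenever the signal gets through.
        if signal_in and random.random() < self.p:
            self.trace = 1.0
            return True
        return False

    def tick(self):
        # The capacitor leaks charge, so only recent activity is remembered.
        self.trace *= self.trace_decay

    def reward(self, step=0.05, threshold=0.5):
        # The chain advances the knob only where the clutch is still
        # engaged, i.e. where the capacitor charge remains high.
        if self.trace > threshold:
            self.p = min(1.0, self.p + step)

# A small bank of synapses, rewarded shortly after activity.
network = [Synapse() for _ in range(40)]
for synapse in network:
    synapse.fire(True)
    synapse.tick()
for synapse in network:               # operator presses the reward button
    synapse.reward()
```

In this analogue, only synapses whose trace has not yet decayed below the threshold are strengthened, mirroring how the chain catches only the most recently engaged clutches.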
Each neuron contained 6 vacuum tubes and a motor. The entire machine was "the size of a grand piano" and contained 300 vacuum tubes. The tubes failed regularly, but the machine kept working despite the failures.[4]
This machine is considered one of the first pioneering attempts at the field of artificial intelligence. Minsky went on to be a founding member of MIT's Project MAC, which later split into the MIT Laboratory for Computer Science and the MIT Artificial Intelligence Lab; the two have since been merged into the MIT Computer Science and Artificial Intelligence Laboratory. In 1985 Minsky became a founding member of the MIT Media Laboratory.
According to Minsky, he loaned the machine to students at Dartmouth, and it was subsequently lost, except for a single neuron.[6] A photograph of this last surviving neuron shows 6 vacuum tubes, one of which is a Sylvania JAN-CHS-6H6GT/G/VT-90A.