The McCulloch-Pitts Model of Neuron
- The earliest model of an artificial neuron was introduced by Warren McCulloch and Walter Pitts in 1943. The McCulloch-Pitts neural model is also known as the linear threshold gate.
- These neurons are connected by directed weighted paths. A connection path can be either excitatory or inhibitory.
- All excitatory connections entering a neuron carry the same weight. The connection weights from x1, x2, ..., xn are excitatory, denoted by 'w', and the connection weights from xn+1, xn+2, ..., xn+m are inhibitory, denoted by '-p'.
-> The McCulloch-Pitts neuron Y has the activation function
f(yin) = 1 if yin >= Θ
f(yin) = 0 if yin < Θ
where Θ is the threshold value and yin = Σ xiwi is the total net input signal received by neuron Y.
-> The McCulloch-Pitts neuron will fire if it receives k or more excitatory inputs and no inhibitory inputs, i.e. the threshold must satisfy
kw >= Θ > (k-1)w
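The behaviour above can be sketched in a few lines of Python. This is a minimal illustration, not a library implementation; the function name `mp_neuron` and its parameters are chosen here for clarity. It assumes the common convention that any active inhibitory input absolutely prevents firing.

```python
def mp_neuron(excitatory, inhibitory, w=1, theta=2):
    """McCulloch-Pitts neuron: fire (return 1) iff no inhibitory
    input is active and the net excitatory input reaches theta."""
    if any(inhibitory):          # absolute inhibition: a single active
        return 0                 # inhibitory input blocks firing
    y_in = w * sum(excitatory)   # net input y_in = sum(x_i * w)
    return 1 if y_in >= theta else 0

# AND gate: two excitatory inputs with w = 1 and theta = 2,
# which satisfies kw >= theta > (k-1)w for k = 2.
print(mp_neuron([1, 1], []))   # fires -> 1
print(mp_neuron([1, 0], []))   # does not fire -> 0
print(mp_neuron([1, 1], [1]))  # inhibited -> 0
```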
Single-layer Neural Networks (Perceptrons)
- Input is multi-dimensional (i.e. the input can be a vector): x = (I1, I2, ..., In).
- Input nodes (or units) are connected (typically fully) to a node (or multiple nodes) in the next layer.
- A node in the next layer takes a weighted sum of all its inputs:
Summed Input = Σ wiIi
- The output node has a threshold t. If the summed input >= t, the node "fires" (output y = 1); else (summed input < t) it does not fire (output y = 0):
if Σ wiIi >= t then y = 1
else (if Σ wiIi < t) then y = 0
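The threshold rule above can be written directly as a short function. This is an illustrative sketch (the name `perceptron_output` and the example weights are chosen here, not taken from any library):

```python
def perceptron_output(x, w, t):
    """Single-layer perceptron rule: output 1 if the weighted
    sum of the inputs reaches the threshold t, else 0."""
    summed_input = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if summed_input >= t else 0

# Example: weights [0.5, 0.5] and threshold 0.7 implement logical AND.
print(perceptron_output([1, 1], [0.5, 0.5], 0.7))  # -> 1
print(perceptron_output([1, 0], [0.5, 0.5], 0.7))  # -> 0
```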
- The input to the response unit is the output from the associator unit, which is a binary vector.
- The input layer consists of input neurons x1, x2, ..., xi, ..., xn. There is also a common bias input of '1'.
- The input neurons are connected to the output neuron through weighted interconnections.
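A sketch of this structure, where the constant bias input of 1 carries its own weight b (the function name and example values are illustrative assumptions, not from the source):

```python
def perceptron_with_bias(x, w, b, theta=0.0):
    """Perceptron whose net input includes the bias weight b
    multiplied by the constant bias input 1."""
    y_in = b * 1 + sum(wi * xi for wi, xi in zip(w, x))
    return 1 if y_in >= theta else 0

# With weights [1.0, 1.0] and bias weight -1.5 this computes AND.
print(perceptron_with_bias([1, 1], [1.0, 1.0], b=-1.5))  # -> 1
print(perceptron_with_bias([0, 1], [1.0, 1.0], b=-1.5))  # -> 0
```

The bias plays the role of the threshold: moving Θ to the left-hand side of yin >= Θ turns it into an extra weighted input that is always 1.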
Introduction to Artificial Neural Networks | Set 1
ANN learning is robust to errors in the training data and has been successfully applied to learning real-valued, discrete-valued, and vector-valued functions, in problems such as interpreting visual scenes, speech recognition, and learning robot control strategies. The study of artificial neural networks (ANNs) has been inspired in part by the observation that biological learning systems are built of very complex webs of interconnected neurons in brains. The human brain contains a densely interconnected network of approximately 10^11-10^12 neurons, each connected, on average, to 10^4-10^5 other neurons. Despite this complexity, the human brain takes only approximately 10^-1 seconds to make surprisingly complex decisions.
ANN systems are motivated by the goal of capturing this kind of highly parallel computation based on distributed representations. Generally, ANNs are built out of a densely interconnected set of simple units, where each unit takes a number of real-valued inputs and produces a single real-valued output. However, ANNs are only loosely modeled on biological neural systems: there are many complexities of biological neural systems that ANNs do not capture.