What is the Formula for Artificial Neural Network?

Published in Artificial Neural Network Formula

An Artificial Neural Network (ANN) isn't defined by a single overarching formula for the entire network, but rather by the fundamental computations performed within each of its constituent neurons and how these neurons are interconnected. Based on the core principles described, the "formula" pertains to the calculation performed by a single neuron before its output is propagated through the network.

Core Computation Within a Neuron

Each neuron in an artificial neural network performs a two-step process:

  1. Linear Combination (Weighted Sum + Bias): The neuron receives inputs from other neurons (or the initial input data). Each input is multiplied by an associated weight, and these weighted inputs are summed together. A bias term is then added to this sum.

    The reference provides the formula for this step for a neuron with two inputs (X1 and X2):

    Y = W1 * X1 + W2 * X2 + b

    • X1, X2: These are the input values to the neuron.
    • W1, W2: These are the weights associated with inputs X1 and X2, respectively. Weights determine the strength and importance of each input.
    • b: This is the bias term, which allows the neuron to shift the output of the activation function.
    • Y: This is the result of the weighted sum plus bias.
  2. Activation Function: The result of the linear combination (Y) is then passed through a non-linear function called the activation function.

    As stated in the reference: "This summed function (Y=W1X1+W2X2+b) is applied over an Activation function."

    The activation function determines the neuron's output signal. Activation functions such as ReLU, Sigmoid, and Tanh introduce non-linearity, enabling the network to learn complex patterns. The choice of activation function typically varies with the layer and the task; hidden layers, for example, commonly use ReLU.

    So, the final output of a single neuron can be generalized as:

    Neuron Output = ActivationFunction(Y)

    Or, substituting Y:

    Neuron Output = ActivationFunction(W1*X1 + W2*X2 + b) (for the 2-input case)
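The two-step computation above can be sketched in a few lines of Python. Sigmoid is used here purely as an illustrative activation function; the reference does not fix a particular one, and the weights and inputs are made-up example values:

```python
import math

def sigmoid(y):
    # Squashes any real-valued sum into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-y))

def neuron_output(x1, x2, w1, w2, b):
    # Step 1: linear combination (weighted sum plus bias)
    y = w1 * x1 + w2 * x2 + b
    # Step 2: apply the activation function to Y
    return sigmoid(y)

# Example: Y = 0.5*1.0 + (-0.25)*2.0 + 0.1 = 0.1, then sigmoid(0.1)
print(neuron_output(1.0, 2.0, 0.5, -0.25, 0.1))
```

With all weights, inputs, and bias set to zero, Y is 0 and the sigmoid output is exactly 0.5, which is a handy sanity check.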

Connecting Neurons Across Layers

The reference further explains how neurons connect: "The output from this neuron is multiplied with the weight W3 and supplied as input to the output layer."

This illustrates the fundamental architecture:

  • The output of a neuron (after applying the activation function) becomes an input to neurons in the next layer.
  • This output is multiplied by a new weight (like W3 mentioned for the connection to the output layer) specific to that connection before being fed into the subsequent neuron's linear combination step.
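This layer-to-layer propagation can be sketched as a minimal two-layer forward pass, assuming (as the reference describes) a single hidden neuron whose activated output is scaled by W3 before reaching the output neuron. The sigmoid activation, the output-neuron bias, and all numeric values are illustrative assumptions:

```python
import math

def sigmoid(y):
    # Illustrative activation; the reference does not fix a specific one
    return 1.0 / (1.0 + math.exp(-y))

def forward(x1, x2, w1, w2, b_hidden, w3, b_out):
    # Hidden neuron: weighted sum plus bias, then activation
    hidden = sigmoid(w1 * x1 + w2 * x2 + b_hidden)
    # The hidden neuron's output is multiplied by W3 and fed into
    # the output neuron's own linear combination step
    return sigmoid(w3 * hidden + b_out)

# Example forward pass with made-up weights and inputs
print(forward(1.0, 2.0, 0.5, -0.25, 0.1, 0.8, -0.3))
```

Note that the same compute pattern (weighted sum, bias, activation) repeats at every layer; only the weights and the number of inputs change.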

In Summary

While there isn't one single formula representing an entire complex neural network, the core calculation within each neuron, based on the provided information, involves a weighted sum of inputs plus a bias, followed by applying an activation function. The result of this operation in one neuron then serves as a weighted input to neurons in the subsequent layer, propagating information through the network to ultimately produce an output.

| Component | Description | Example from Reference |
| --- | --- | --- |
| Inputs | Values fed into the neuron. | X1, X2 |
| Weights | Multipliers for inputs, learned during training. | W1, W2 (for inputs), W3 (for the output connection) |
| Bias | An additive term that shifts the output. | b |
| Linear Combination | Weighted sum of inputs plus bias. | Y = W1*X1 + W2*X2 + b |
| Activation Function | Non-linear function applied to the linear output. | Applied over Y |
| Neuron Output | The final output of the neuron. | ActivationFunction(Y) |