feed_forward()


This function feeds an input value forward through the neural network, computing the weighted sum and the activated value at every layer.

kero.multib.NeuralNetwork.py

class NetworkUpdater:
  def feed_forward(self, weights, biases, a_1, AF,
		verbose=False,
		matrix_formatting="%6.2f"):
    return a_l_set, z_l_set

Arguments/Return

weights The collection of weights in the neural network.

weights is a list [w_l], where w_l is the collection of weights between the (l-1)-th and l-th layers, for l=2,3,…,L, where l=1 is the input layer, l=2 the first hidden layer, and l=L the output layer.

w_l is a matrix (list of lists) such that w_l[i][j] is the weight between neuron j at layer l-1 and neuron i at layer l. A sketch of how feed_forward() consumes this layout is shown after this section.

biases The collection of biases in the neural network.

biases is a list [b_l], where b_l is the collection of biases in the l-th layer, for l=2,3,…,L.

a_1 numpy matrix. The input layer, as a column vector.
AF An activationFunction object. It is assumed to have been initialized.
verbose False or a non-negative integer. The higher the number, the more information is printed when this function is called.

Default = False

matrix_formatting When matrices are printed in verbose mode, their decimal representation is set by this parameter.

Default = “%6.2f”

a_l_set (return value) The list of activated values a_l computed at each layer during the forward pass.
z_l_set (return value) The list of weighted sums z_l = w_l a_(l-1) + b_l, for l=2,3,…,L.
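
Conceptually, feed_forward() computes, for each layer l=2,3,…,L, the weighted sum z_l = w_l a_(l-1) + b_l followed by the activated value a_l = AF(z_l). The following is a minimal stand-alone NumPy sketch of that loop, assuming a Sigmoid activation; it illustrates the idea and is not the kero implementation itself.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feed_forward_sketch(weights, biases, a_1):
    # weights and biases use the nested-list layout described above;
    # a_1 is a column vector (the input layer).
    a_l_set, z_l_set = [a_1], []
    a = a_1
    for w_l, b_l in zip(weights, biases):
        z = np.array(w_l) @ a + np.array(b_l).reshape(-1, 1)  # z_l = w_l a_(l-1) + b_l
        a = sigmoid(z)                                        # a_l = AF(z_l)
        z_l_set.append(z)
        a_l_set.append(a)
    # assumption: a_l_set also carries the input layer a_1
    return a_l_set, z_l_set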

Example Usage 1

testNNupdater1.py

import kero.multib.NeuralNetwork as nn
import kero.utils.utils as ut
import numpy as np

#----------------------------------------
# weights : the collection of weights in the neural network
#   weights is a list [w_l], where w_l is the collection of weights between 
#     the (l-1)-th and l-th layer
#     for l=2,3,...,L where l=1 is the input layer, l=2 the first hidden layer
#     and l=L is the output layer
#   w_l is a matrix (list of list)
#     so that w_l[i][j] is the weight between neuron j at layer l-1 and 
#     neuron i at layer l 
# biases : the collection of biases in the neural network
#   biases is a list [b_l], where b_l is the collection of biases in the l-th layer
#     for l=2,3,...,L

# arbitrary choices
arb0 = [0.1,0.1,0.1]
arb1 = [0.1,0.2,0.3]
arb2 = [0.4,0.5,0.6]
arb3 = [-0.1,-0.1,-0.1] # just to show negative weight is okay

# Weights and biases
# Input layer - Hidden layer 1
# (layer 1 - layer 2)
# -------------------------------------
# An example: w_2[0][1] == 0.2. This means the weight between
#   neuron j=1 of the input layer and neuron i=0 of hidden layer 1 is 0.2
# Note that w_2 is a 3x3 matrix. The input layer and hidden layer 1 both have 3 neurons
w_2 = [arb1, arb2, arb3] 
b_2 = [0, 0, 0]
# Hidden layer 1 - Hidden layer 2
# (layer 2 - layer 3)
# --------------------------------------
# w_3 is a 2x3 matrix.
#   Hidden layer 1 (layer 2) has 3 neurons 
#   Hidden layer 2 (layer 3) has 2 neurons
w_3 = [arb0, arb0]
b_3 = [0,0]
# Hidden layer 2 - Output layer
# (layer 3 - layer 4)
w_4 = [[0.1,0.1],[0.1,0.1]]
b_4 = [0,0.1]

net1 = nn.NeuralNetwork() # prints "Initializing a Neural Network object."
bulk={
	"weights" : [w_2,w_3,w_4],
	"biases" : [b_2,b_3,b_4]
}


a_1 = np.transpose(np.matrix([0.5,0.5,0.5]))
print("a_1 = ")
ut.print_numpy_matrix(a_1,formatting="%6.2f",no_of_space=6)
AF = nn.activationFunction(func = "Sigmoid")
nu = nn.NetworkUpdater()
weights = bulk["weights"]
biases = bulk["biases"]
a_l_set, z_l_set = nu.feed_forward( weights, biases, a_1, AF,
		verbose=31,matrix_formatting="%6.2f")

The output is as follows. We can see the values of the weights between layers and the biases of each layer. To understand the notation, refer to Neural Network and Back Propagation. Each line of the form "-> i : z : a" shows, for neuron i of the current layer, the weighted sum z and its activated value a; the activated values of the whole layer are then printed as a_l_act.

Initializing a Neural Network object.
a_1 =
        0.50
        0.50
        0.50
 --+ feed_forward()
     ------------------------------------
     layer  0 to layer 1
     w_l =
                      0.10   0.20   0.30
                      0.40   0.50   0.60
                     -0.10  -0.10  -0.10
     a_l_minus_1 =
                      0.50
                      0.50
                      0.50
     b_l =
                      0.00
                      0.00
                      0.00
      ->  0  :  0.30000000000000004  :  0.574442516811659
      ->  1  :  0.75  :  0.679178699175393
      ->  2  :  -0.15000000000000002  :  0.46257015465625045
     a_l_act =
                      0.57
                      0.68
                      0.46
     ------------------------------------
     layer  1 to layer 2
     w_l =
                      0.10   0.10   0.10
                      0.10   0.10   0.10
     a_l_minus_1 =
                      0.57
                      0.68
                      0.46
     b_l =
                      0.00
                      0.00
      ->  0  :  0.17161913706433024  :  0.5427997868295662
      ->  1  :  0.17161913706433024  :  0.5427997868295662
     a_l_act =
                      0.54
                      0.54
     ------------------------------------
     layer  2 to layer 3
     w_l =
                      0.10   0.10
                      0.10   0.10
     a_l_minus_1 =
                      0.54
                      0.54
     b_l =
                      0.00
                      0.10
      ->  0  :  0.10855995736591324  :  0.5271133663878375
      ->  1  :  0.20855995736591326  :  0.551951812279805
     a_l_act =
                      0.53
                      0.55
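
The numbers above can be verified by hand. For instance, for neuron 0 of hidden layer 1, z = 0.1(0.5) + 0.2(0.5) + 0.3(0.5) + 0 = 0.3, and Sigmoid(0.3) ≈ 0.5744, matching the first "-> 0 : … : …" line. Below is a quick stand-alone NumPy check of the first layer, with the Sigmoid written out explicitly and independent of kero.

import numpy as np

w_2 = np.array([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6], [-0.1, -0.1, -0.1]])
b_2 = np.zeros((3, 1))
a_1 = np.array([[0.5], [0.5], [0.5]])

z_2 = w_2 @ a_1 + b_2              # weighted sums of hidden layer 1
a_2 = 1.0 / (1.0 + np.exp(-z_2))   # Sigmoid activation
print(z_2.flatten())               # [ 0.3   0.75 -0.15]
print(a_2.flatten())               # approximately [0.5744 0.6792 0.4626]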

Example Usage 2

See example usage 2 of initiate_neural_network().

kero version: 0.6.3 and above