A simple fully connected ANN module

I've written a simple module that creates a fully connected neural network of any size. The train function takes a list of tuples, each with a training example array first and an array containing its class second; a list with the number of neurons in every layer, including the input and output layers; a learning rate; and a number of epochs. To run a trained network I've written the run function, whose arguments are an input and the weights from the trained network. Since I'm a beginner at programming and machine learning, I'd be very happy to get advice regarding computational efficiency and optimization.
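For reference, the weight update the module performs for every training example is one step of online gradient descent on the squared error, sketched here in the code's own NumPy notation (the delta, delta_next, W_next and out_prev names are illustrative, not identifiers from the module):

    delta = (target - out) * out * (1 - out)            # output layer
    delta = delta_next.dot(W_next.T) * out * (1 - out)  # hidden layers, moving backwards
    W += rate * out_prev.T.dot(delta_next)              # weight update at every layer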

import numpy as np

def weights_init(inSize, outSize):  # initialize the weights uniformly in [-1, 1)
    return 2 * np.random.random((inSize, outSize)) - 1
    
def Sigmoid(input, weights):  # apply a sigmoid layer; return its output along with its derivative
    out = 1 / (1 + np.exp(-np.dot(input, weights)))
    derivative = out * (1 - out)  # sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z))
    return out, derivative
    
def backProp(layers, weights, deriv, size, rate=1):
    derivative = deriv.pop()  # get the cost function derivative
    # reverse all the lists because we need to go backwards
    deriv = deriv[::-1]
    layers = layers[::-1]
    weights = weights[::-1]
    new_weights = []
    # backpropagate
    # the output layer does not fit the algorithm inside the for loop, so it's handled outside of it
    new_weights.append(weights[0] + layers[1].T.dot(derivative * rate))
    for i in range(len(size) - 2):
        derivative = derivative.dot(weights[i].T) * deriv[i]
        new_weights.append(weights[i + 1] + layers[i + 2].T.dot(derivative * rate))
    return new_weights[::-1]

def train(input, size, rate=1, epochs=1):  # train the network
    layers = []
    weights = []
    derivs = []
    for i in range(len(size) - 1):  # weights initialization
        weights.append(weights_init(size[i], size[i + 1]))
    for epoch in range(epochs):  # the training process
        for example, target in input:  # online learning
            layers.append(example)
            for i in range(len(size) - 1):
                layer, derivative = Sigmoid(layers[i], weights[i])  # calculate the layer and its derivative
                layers.append(layer)
                derivs.append(derivative)

            loss_deriv = target - layers[-1]  # derivative of the loss function

            derivs[-1] = loss_deriv * derivs[-1]  # multiply the loss derivative by the final layer's derivative
            weights = backProp(layers, weights, derivs, size, rate)  # update the weights, passing the learning rate through
            layers = []
            derivs = []
    return weights

def run(input, weights):  # run a trained neural network
    layers = [input]
    for i in range(len(weights)):
        layer, derivative = Sigmoid(layers[i], weights[i])
        layers.append(layer)
    return layers
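If you want runs to be repeatable while experimenting, seeding NumPy's random generator before calling train makes the random weight initialization deterministic (an optional line, not part of the module itself):

    np.random.seed(1)  # any fixed seed gives reproducible initial weights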

An example:

X = [(np.array([[0, 0, 1]]), np.array([[0]])),
     (np.array([[0, 1, 1]]), np.array([[1]])),
     (np.array([[1, 0, 1]]), np.array([[1]])),
     (np.array([[1, 1, 1]]), np.array([[0]]))]

weights = train(X, [3, 4, 1], epochs=60000)
run(X[0][0], weights)
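To sanity-check the trained network, one can compare every prediction against its target (a minimal sketch; the exact numbers vary between runs because the weights start out random):

    for example, target in X:
        prediction = run(example, weights)[-1]  # the final layer's activation is the prediction
        print(target.ravel(), prediction.ravel())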