Neural network: Using genetic algorithms to train and deploy neural networks: The derivatives of the transfer functions



The two most common transfer functions (activation functions) are the logistic sigmoid function and the ReLU function. Their derivatives are worked out as follows.

1) Sigmoid function

Set

u(x) = 1 + e^(-x)

so that f(u) = 1/u = u^(-1), and therefore

f'(u) = -u^(-2)

On the other hand,

u'(x) = -e^(-x)

Therefore, by the chain rule (the two minus signs cancel),

f'(x) = f'(u) * u'(x) = (-u^(-2)) * (-e^(-x)) = e^(-x) / (1 + e^(-x))^2 = f(x) * (1 - f(x)) = y * (1 - y)

in which y = f(x) is the output.
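
As a quick sanity check, here is a minimal NumPy sketch (the names sigmoid and sigmoid_derivative are mine, not from this article) that compares the closed form y * (1 - y) with a central finite difference:

```python
import numpy as np

def sigmoid(x):
    # f(x) = 1 / (1 + e^(-x))
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(x):
    # Closed form from the derivation above: f'(x) = y * (1 - y)
    y = sigmoid(x)
    return y * (1.0 - y)

# Compare against a central finite difference at an arbitrary point.
x, h = 0.5, 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2.0 * h)
print(sigmoid_derivative(x), numeric)  # both are approximately 0.2350
```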

2) ReLU function (rectified linear unit)
This function passes through the positive part of its input:

y = max(0, x)

The derivative is

y' = 1 for x > 0 and y' = 0 for x < 0 (at x = 0 the derivative is undefined; in practice it is usually taken to be 0)
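
A short NumPy sketch of the same (the function names are illustrative, not from this article):

```python
import numpy as np

def relu(x):
    # y = max(0, x), applied elementwise
    return np.maximum(0.0, x)

def relu_derivative(x):
    # 1 for x > 0, 0 elsewhere (the conventional choice of 0 at x = 0)
    return np.where(x > 0, 1.0, 0.0)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))             # [0.  0.  0.  0.5 2. ]
print(relu_derivative(x))  # [0. 0. 0. 1. 1.]
```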

A smooth approximation of ReLU is the SmoothReLU (softplus) function:

y = log(1 + e^x)

The derivative of the SmoothReLU function is the sigmoid function. Indeed, set

u = 1 + e^x

Then y(u) = log(u), so y'(u) = 1/u, and

u'(x) = e^x

So by the chain rule,

y'(x) = y'(u) * u'(x) = e^x / u = e^x / (1 + e^x) = 1 / (1 + e^(-x))

which is exactly the sigmoid function.
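
A minimal NumPy sketch of this identity (names are mine), again verified against a finite difference:

```python
import numpy as np

def smooth_relu(x):
    # y = log(1 + e^x); log1p improves accuracy when e^x is small
    return np.log1p(np.exp(x))

def smooth_relu_derivative(x):
    # e^x / (1 + e^x), rewritten as the sigmoid 1 / (1 + e^(-x))
    return 1.0 / (1.0 + np.exp(-x))

# Finite-difference check that the derivative really is the sigmoid.
x, h = 1.5, 1e-6
numeric = (smooth_relu(x + h) - smooth_relu(x - h)) / (2.0 * h)
print(smooth_relu_derivative(x), numeric)  # both are approximately 0.8176
```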
