# Neural network: Using genetic algorithms to train and deploy neural networks: The derivatives of the transfer functions


The two most common transfer functions (activation functions) are the sigmoid (logistic) function and the ReLU function. Their derivatives are obtained as follows.

1) sigmoid function

Set

u(x) = 1 + e^(-x)

so that f(u) = 1/u = u^(-1), which gives

f'(u) = -u^(-2)

On the other hand,

u'(x) = -e^(-x)

Therefore

f'(x) = f'(u) * u'(x) = u^(-2) * e^(-x) = e^(-x) / (1 + e^(-x))^2 = f(x) * (1 - f(x)) = y * (1 - y)

in which y is the output.
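The identity f'(x) = y * (1 - y) can be checked numerically. Below is a minimal sketch (the function names `sigmoid` and `sigmoid_derivative` are illustrative, not from the original article) that compares the closed-form derivative against a central finite-difference approximation:

```python
import math

def sigmoid(x):
    """Logistic sigmoid: f(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x):
    """Derivative expressed through the output: y * (1 - y)."""
    y = sigmoid(x)
    return y * (1.0 - y)

# Compare against a central finite-difference approximation.
x, h = 0.5, 1e-6
numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
print(abs(sigmoid_derivative(x) - numeric) < 1e-8)  # True
```

Expressing the derivative through the output y is what makes backpropagation cheap here: the forward pass already computed y, so no extra exponential is needed.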

2) ReLU function (Rectified linear unit)
This function takes the positive part of the input signal:

y = max(0, x)

The derivative is

y'(x) = 1 for x > 0 and y'(x) = 0 for x < 0 (it is undefined at x = 0, where it is conventionally set to 0)
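The ReLU rule above can be sketched in a few lines (function names are illustrative; the convention y'(0) = 0 is an assumption noted in the comments):

```python
def relu(x):
    """Rectified linear unit: max(0, x)."""
    return max(0.0, x)

def relu_derivative(x):
    """Derivative of ReLU: 1 for x > 0, else 0 (convention chosen at x = 0)."""
    return 1.0 if x > 0 else 0.0

print(relu(2.5), relu(-1.0))                         # 2.5 0.0
print(relu_derivative(2.5), relu_derivative(-1.0))   # 1.0 0.0
```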

A smooth version of ReLU, called the SmoothReLU (or softplus) function, is

y = log(1 + e^x)

The derivative of the SmoothReLU function is exactly the sigmoid function. Indeed:

We set u = 1 + e^x

y(u) = log(u) => y'(u) = 1/u

u'(x) = e^x

So y'(x) = y'(u) * u'(x) = e^x / u = e^x / (1 + e^x) = 1 / (1 + e^(-x))
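The claim that SmoothReLU differentiates to the sigmoid can be verified numerically. A minimal sketch (the names `smooth_relu` and `sigmoid` are illustrative) comparing a finite-difference derivative against the sigmoid:

```python
import math

def smooth_relu(x):
    """SmoothReLU (softplus): log(1 + e^x)."""
    return math.log(1.0 + math.exp(x))

def sigmoid(x):
    """Logistic sigmoid: 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

# The numeric derivative of SmoothReLU should match the sigmoid.
x, h = 0.3, 1e-6
numeric = (smooth_relu(x + h) - smooth_relu(x - h)) / (2 * h)
print(abs(numeric - sigmoid(x)) < 1e-8)  # True
```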
