Many data scientists have trouble distinguishing between a Classification neural network and a Regression neural network.
So how is the Classification neural network different from the Regression neural network?
In principle, they are no different: the same neural network can serve as either a Classification network or a Regression network. The difference lies only in the output data. A Classification neural network predicts classes, i.e. discrete data, while a Regression neural network predicts continuous data such as house prices or income. The split into these two types is purely an application-level distinction, because they serve two different kinds of problems.
Processing continuous data is actually easier than processing classes, because the problem's output value can be transformed directly into the destination output of the network. The "Regression" part of the name derives from linear regression: a Regression neural network can be viewed as the neural-network counterpart of linear regression.
The Recurrent neural network, by contrast, really is a different type of neural network. It handles an input sequence over time: previous information is remembered and fed back into the network as new samples arrive. In terms of purpose it solves the same kinds of problems as neural networks in general, but on sequential, real-time data, whereas ordinary Classification and Regression neural networks handle spatial samples. When deployed, the latter pass information in only one direction, from input to output, which is why they are called feedforward networks.
While a Classification neural network needs n output nodes for n output classes, a Regression neural network needs only one output node per continuous output value.
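As a minimal sketch of this point (hypothetical helper names, using NumPy), the two network types can share exactly the same hidden layers and differ only in the shape of the output layer:

```python
import numpy as np

def logistic(x):
    # Logistic transfer function, output in (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, w_hidden, w_out):
    # One-hidden-layer feedforward pass; the architecture is identical
    # for classification and regression -- only w_out's shape differs.
    h = logistic(x @ w_hidden)
    return logistic(h @ w_out)

rng = np.random.default_rng(0)
w_hidden = rng.normal(size=(4, 8))   # 4 inputs, 8 hidden nodes

w_class = rng.normal(size=(8, 3))    # 3 output nodes for 3 classes
w_regr = rng.normal(size=(8, 1))     # 1 output node for 1 continuous value

x = rng.normal(size=(1, 4))
print(forward(x, w_hidden, w_class).shape)  # (1, 3)
print(forward(x, w_hidden, w_regr).shape)   # (1, 1)
```

The weight matrices and sizes here are illustrative only; the point is that switching between the two problem types changes nothing except the output layer.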
Transfer function
The common transfer function is the logistic function:

f(x) = 1 / (1 + e^(−x))

in which e is the base of the natural logarithm (≈ 2.718281828).
The logistic function is an S-shaped function.
Its output is compressed into the range (0, 1); it is close to 1 when x > 5 and close to 0 when x < −5. This keeps output values in a uniform range for the next layer and for the network as a whole.
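The compression into (0, 1) is easy to check directly (plain Python, nothing assumed beyond the formula above):

```python
import math

def logistic(x):
    # f(x) = 1 / (1 + e^(-x)); e is the natural logarithm base
    return 1.0 / (1.0 + math.exp(-x))

print(logistic(0))   # 0.5, the midpoint of the S-curve
print(logistic(5))   # ~0.993, already close to 1
print(logistic(-5))  # ~0.007, already close to 0
```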
Representation of the output of the Regression neural network
A simple way is to scale the continuous output value of the problem into the nearly linear range of the transfer function, using target values in the range [0.1, 0.9].
The destination output T of the network for a sample is calculated as follows:

T = Tmin + (Val − Vmin) × (Tmax − Tmin) / (Vmax − Vmin)

Where
Tmin = 0.1
Tmax = 0.9
Val is the output value of the sample
Vmin is the smallest output value in the sample set
Vmax is the largest output value in the sample set
The value T then participates, together with the actual output of the network, in the usual error calculation of the network.
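The scaling above, and its inverse for mapping the trained network's output back to the problem's value range, can be sketched as follows (the house-price numbers are made-up example values):

```python
def to_target(val, vmin, vmax, tmin=0.1, tmax=0.9):
    # Linearly scale a sample's output value Val into [Tmin, Tmax]
    return tmin + (val - vmin) * (tmax - tmin) / (vmax - vmin)

def from_target(t, vmin, vmax, tmin=0.1, tmax=0.9):
    # Inverse mapping: network output back to the original value range
    return vmin + (t - tmin) * (vmax - vmin) / (tmax - tmin)

# Hypothetical house prices spanning [100, 500] (thousands):
print(to_target(100, 100, 500))    # 0.1 -- smallest value maps to Tmin
print(to_target(500, 100, 500))    # 0.9 -- largest value maps to Tmax
print(to_target(300, 100, 500))    # 0.5 -- midpoint maps to the middle
print(from_target(0.5, 100, 500))  # 300.0
```

Vmin and Vmax are computed from the training sample set, so values outside that range at prediction time would map slightly outside [0.1, 0.9].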