Along with mini-batching, convolution can be used to preprocess the data before training. Both techniques are intended to reduce the size of the training data. Mini-batching reduces the sample set vertically, i.e. by taking a small number of samples at a time. Convolution reduces it horizontally, i.e. by narrowing the number of feature columns, which shrinks the input to the network.
Two-dimensional convolution is commonly used in image processing because images are two-dimensional. This article deals with one-dimensional convolution, which applies to any neural network problem whose input is a vector.
The method applies a filter Filters over a sliding window on the input vector Input; the result is an output vector Output. The window starts at the first cell of Input and slides to the right with step stride. At each step, the corresponding element of the output vector is the scalar product of the filter and the sub-vector of the input inside the window:

Output[i] = Input[i * stride] * Filters[0] + Input[i * stride + 1] * Filters[1] + ... + Input[i * stride + len_w - 1] * Filters[len_w - 1]

where len_w is the window size, equal to the size of Filters.
First we consider the case stride = 1, i.e. at each step the window shifts one cell to the right.
Let len_in be the size of the Input vector and len_out the size of the Output vector. At the starting position the window occupies len_w cells, which leaves len_in - len_w cells, so the window can take len_in - len_w further steps. The starting position itself already contributes 1 element to Output. So the output size is:
len_out = len_in - len_w + 1
The general case
Setting up the output size
Our goal is to reduce the size of the input, so we need to set the parameters to obtain a given output size from the convolution. The filter size then depends on the others: with a general stride the formula above becomes len_out = (len_in - len_w) / stride + 1, from which we can deduce
len_w = len_in - (len_out - 1) * stride
In the example above, we need to reduce the input size from 9 to 3
len_in = 9
len_out = 3
stride = 1
So the window size needs to be
len_w = 9 - (3 - 1) * 1 = 9 - 2 = 7
The filter is very simple: we just take the average, so all filter elements are equal to 1 / len_w, which in this example is 1/7.
Padding
The window size must be positive, which requires
len_in - (len_out - 1) * stride > 0
If len_in is not large enough, we need to pad the input with padding zero-valued elements, appended at the end of the input vector. The general window size formula becomes:
len_w = len_in + padding - (len_out - 1) * stride
Source code
#include <cassert>
#include <cstring>

// Averaging convolution: reduce in[len_in] to out[len_out] with the given
// stride, appending `padding` zeros to the input when needed.
void convol(double *in, double *out, int len_in, int len_out,
            int stride = 1, int padding = 0) {
    double *in_padded;
    int len_in_padded = len_in + padding;
    if (padding == 0)
        in_padded = in;
    else {
        in_padded = new double[len_in_padded];
        memcpy(in_padded, in, sizeof(double) * len_in);
        for (int i = len_in; i < len_in_padded; i++)
            in_padded[i] = 0;
    }
    int len_window = len_in_padded - (len_out - 1) * stride;
    assert(len_window > 0);
    for (int i = 0; i < len_out; i++) {
        double val = 0;
        for (int j = i * stride; j < len_window + i * stride; j++)
            val += in_padded[j];
        out[i] = val / len_window;    // averaging filter: each weight is 1/len_window
    }
    if (padding)
        delete[] in_padded;           // array form matches new double[]
}