This is a great version. The program uses the ID3 algorithm to generate a set of checking samples. Several neural network populations are processed, each with a different number of hidden nodes and a number of different weight initializations. The populations evolve toward their fixed points; once all the networks have converged, the error on the checking sample set is used to select the population with the smallest error.
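The selection step could be sketched roughly as follows. This is only an illustration under my own assumptions: `checking_error` and the network objects are hypothetical stand-ins, since the original program's internals are not shown here.

```python
def checking_error(network, checking_samples):
    # Hypothetical: fraction of ID3-generated checking samples
    # that the network misclassifies.
    wrong = sum(1 for x, label in checking_samples if network(x) != label)
    return wrong / len(checking_samples)

def select_population(populations, checking_samples):
    """After all populations have converged, pick the one whose best
    network has the smallest error on the checking sample set."""
    best_pop, best_err = None, float("inf")
    for pop in populations:
        err = min(checking_error(net, checking_samples) for net in pop)
        if err < best_err:
            best_pop, best_err = pop, err
    return best_pop, best_err

# Toy usage: two "populations" of trivial networks on samples (x, label).
samples = [(0, 0), (1, 1)]
pop_a = [lambda x: 0]        # always predicts 0: errs on (1, 1)
pop_b = [lambda x: x]        # identity: matches both samples
winner, err = select_population([pop_a, pop_b], samples)
```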
Interestingly, for the tennis problem, the result of Football Predictions 2.0 is 100% identical to the result of ID3. I am publishing this scientific result here for the world to see.
In the snapshot above, Echecking is approximately 0. This shows that none of the ID3 samples produce an error when passed through the neural network, which means the network is guaranteed to give the same results as ID3, at least on those samples.
The value of Etraining is also approximately 0, which means the entire main sample set produces no errors (in the sense that each sample gets the right result). You do not need to verify this yourself; the result is certain.
But Etraining is not exactly zero, and that very small error is precious: it indicates that the network has not simply memorized the training samples, but has achieved generalization over the whole problem space.