The data range used by the evaluation phase is not specified. The training data range is split into In Sample (IS) and Out of Sample (OOS). What range of data does the Evaluation (Evaluate tab) phase use? Does it use any portion of the OOS defined in the training phase, and if so, which portion (e.g., the last 20% of the OOS)?
The Evaluation tab uses both In Sample and Out of Sample data; there are columns for each.
Is this the same way the training tab uses the data?
The training tab uses the In Sample data to train the network, then every few generations runs the network through the Out of Sample data to show the OOS error.
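As a rough illustration of that loop, here is a minimal sketch. It is not the application's actual API: `train_step` and `predict` are hypothetical stand-ins for whatever the software does internally each generation.

```python
import numpy as np

# Hypothetical sketch of the train-on-IS / score-on-OOS loop described
# above. "model" is assumed to expose train_step() and predict(); these
# are stand-ins, not the application's real API.
def train_with_oos_check(model, X_is, y_is, X_oos, y_oos,
                         generations=100, check_every=5):
    for gen in range(1, generations + 1):
        model.train_step(X_is, y_is)         # weights updated from IS data only
        if gen % check_every == 0:
            oos_pred = model.predict(X_oos)  # OOS data never adjusts the weights
            oos_mse = float(np.mean((np.asarray(oos_pred) - y_oos) ** 2))
            print(f"generation {gen}: OOS MSE = {oos_mse:.6f}")
```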
Yes, that is my understanding. My thinking was that the data is divided into training, testing, and evaluation segments, and that the evaluation segment is untouched during training.
If I have two models, how do I compare their accuracies? How do I know which model is better in terms of prediction accuracy?
It's not always a 1 or a 0 that is "better". You make a judgment based on the output of the indicator across its range, paying close attention to the number of observations. If you have 20,000 observations across the output range but the model predicted [large] gains only twice, it's not very useful.
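One hypothetical way to do that inspection programmatically (a sketch, not anything built into the software) is to bin the indicator's output and count observations per bin:

```python
import numpy as np

def bin_predictions(pred, actual, n_bins=10):
    """Bin the indicator output across its range and report the number
    of observations and the mean actual outcome in each bin. Bins with
    very few observations (e.g., two large-gain predictions out of
    20,000) carry little evidential weight."""
    pred = np.asarray(pred, dtype=float)
    actual = np.asarray(actual, dtype=float)
    edges = np.linspace(pred.min(), pred.max(), n_bins + 1)
    idx = np.clip(np.digitize(pred, edges) - 1, 0, n_bins - 1)
    for b in range(n_bins):
        mask = idx == b
        n = int(mask.sum())
        avg = float(actual[mask].mean()) if n else float("nan")
        print(f"bin [{edges[b]:+.3f}, {edges[b+1]:+.3f}): "
              f"{n:6d} obs, mean actual = {avg:+.4f}")
```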
Maybe one of the ANN/ML scientists out there could tell us a good way to measure the accuracy (for lack of a better word) in a normalized way, in order to compare multiple models.
Thanks, Cone. I am a beginner with NNs; however, one of the methods for assessing the prediction accuracy of a regression-type model is to calculate and compare its mean squared error (MSE) or similar measures, also called losses. This is already done on the training tab: the loss function is calculated and updated during training.
Could the loss (MSE) also be calculated and displayed on the Evaluation tab? This information would show how accurate the model's predictions are on OOS data and would allow comparison between different models.
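In the meantime, the OOS MSE can be computed outside the application from exported actuals and predictions. A minimal sketch of one standard approach (not a feature of the product); the normalized variant divides by the variance of the actuals, which addresses the earlier question about a normalized comparison across models:

```python
import numpy as np

def oos_mse(actual, predicted):
    """Mean squared error of a model's predictions on the OOS segment."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean((actual - predicted) ** 2))

def normalized_mse(actual, predicted):
    """MSE divided by the variance of the actuals. Scale-free, so models
    built on differently scaled targets become roughly comparable;
    values below 1.0 beat a constant predict-the-mean baseline."""
    actual = np.asarray(actual, dtype=float)
    return oos_mse(actual, predicted) / float(np.var(actual))

# Compare two models on the *same* OOS data; the lower value wins:
# mse_a = normalized_mse(y_oos, model_a_predictions)
# mse_b = normalized_mse(y_oos, model_b_predictions)
```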