Running with nthreads = 4
DataSetInfo : [dataset] : Added class "Signal"
: Add Tree sig_tree of type Signal with 1000 events
DataSetInfo : [dataset] : Added class "Background"
: Add Tree bkg_tree of type Background with 1000 events
Factory : Booking method: BDT
:
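The BDT option string is not echoed in this log. A minimal, hypothetical booking consistent with the 400 trees reported during training further down could look like the sketch below, where factory and loader stand for the usual TMVA::Factory and TMVA::DataLoader objects and every option besides NTrees=400 is purely illustrative:

   // Hypothetical BDT booking; only NTrees=400 is implied by this log.
   factory->BookMethod(loader, TMVA::Types::kBDT, "BDT",
                       "!V:NTrees=400:MaxDepth=2:BoostType=AdaBoost:nCuts=20");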
: Rebuilding Dataset dataset
: Building event vectors for type 2 Signal
: Dataset[dataset] : create input formulas for tree sig_tree
: Using variable vars[0] from array expression vars of size 256
: Building event vectors for type 2 Background
: Dataset[dataset] : create input formulas for tree bkg_tree
: Using variable vars[0] from array expression vars of size 256
DataSetFactory : [dataset] : Number of events in input trees
:
:
: Number of training and testing events
: ---------------------------------------------------------------------------
: Signal -- training events : 800
: Signal -- testing events : 200
: Signal -- training and testing events: 1000
: Background -- training events : 800
: Background -- testing events : 200
: Background -- training and testing events: 1000
:
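The dataset summary above (two input trees of 1000 events each, one 256-element array variable, and an 800/200 train/test split per class) corresponds to a DataLoader configuration along the following lines. Only the tree names, the array size, and the split numbers are taken from the log; the input file name and all object handling are assumptions:

   TFile *inputFile = TFile::Open("images_data_16x16.root");   // hypothetical input file name
   TTree *sig_tree  = inputFile->Get<TTree>("sig_tree");        // 1000 signal events
   TTree *bkg_tree  = inputFile->Get<TTree>("bkg_tree");        // 1000 background events

   auto *loader = new TMVA::DataLoader("dataset");
   loader->AddSignalTree(sig_tree, 1.0);
   loader->AddBackgroundTree(bkg_tree, 1.0);
   loader->AddVariablesArray("vars", 256);    // expands to vars[0] ... vars[255]
   loader->PrepareTrainingAndTestTree("",     // no selection cuts
      "nTrain_Signal=800:nTrain_Background=800:SplitMode=Random:NormMode=NumEvents:!V");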
Factory : Booking method: TMVA_DNN_CPU
:
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIER:Layout=DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,DENSE|100|RELU,DENSE|1|LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.9,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0.,MaxEpochs=10:Architecture=CPU"
: The following options are set:
: - By User:
: <none>
: - Default:
: Boost_num: "0" [Number of times the classifier will be boosted]
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIER:Layout=DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,DENSE|100|RELU,DENSE|1|LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.9,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0.,MaxEpochs=10:Architecture=CPU"
: The following options are set:
: - By User:
: V: "True" [Verbose output (short form of "VerbosityLevel" below - overrides the latter one)]
: VarTransform: "None" [List of variable transformations performed before training, e.g., "D_Background,P_Signal,G,N_AllClasses" for: "Decorrelation, PCA-transformation, Gaussianisation, Normalisation, each for the given class of events ('AllClasses' denotes all events of all classes, if no class indication is given, 'All' is assumed)"]
: H: "False" [Print method-specific help message]
: Layout: "DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,DENSE|100|RELU,DENSE|1|LINEAR" [Layout of the network.]
: ErrorStrategy: "CROSSENTROPY" [Loss function: Mean squared error (regression) or cross entropy (binary classification).]
: WeightInitialization: "XAVIER" [Weight initialization strategy]
: Architecture: "CPU" [Which architecture to perform the training on.]
: TrainingStrategy: "LearningRate=1e-3,Momentum=0.9,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0.,MaxEpochs=10" [Defines the training strategies.]
: - Default:
: VerbosityLevel: "Default" [Verbosity level]
: CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
: IgnoreNegWeightsInTraining: "False" [Events with negative weights are ignored in the training (but are included for testing and performance evaluation)]
: InputLayout: "0|0|0" [The Layout of the input]
: BatchLayout: "0|0|0" [The Layout of the batch]
: RandomSeed: "0" [Random seed used for weight initialization and batch shuffling]
: ValidationSize: "20%" [Part of the training data to use for validation. Specify as 0.2 or 20% to use a fifth of the data set as validation set. Specify as 100 to use exactly 100 events. (Default: 20%)]
: Will now use the CPU architecture with BLAS and IMT support !
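The Layout and TrainingStrategy strings parsed above can be passed straight to the Factory when booking the dense network. In this sketch only the factory and loader object names are assumptions; the option string is the one echoed by the log:

   factory->BookMethod(loader, TMVA::Types::kDL, "TMVA_DNN_CPU",
      "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIER:"
      "Layout=DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,DENSE|100|RELU,BNORM,"
      "DENSE|100|RELU,DENSE|1|LINEAR:"
      "TrainingStrategy=LearningRate=1e-3,Momentum=0.9,Repetitions=1,ConvergenceSteps=5,"
      "BatchSize=100,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,"
      "Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0.,MaxEpochs=10:Architecture=CPU");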
Factory : Booking method: TMVA_CNN_CPU
:
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIER:InputLayout=1|16|16:Layout=CONV|10|3|3|1|1|1|1|RELU,BNORM,CONV|10|3|3|1|1|1|1|RELU,MAXPOOL|2|2|1|1,RESHAPE|FLAT,DENSE|100|RELU,DENSE|1|LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.9,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0.0,MaxEpochs=10:Architecture=CPU"
: The following options are set:
: - By User:
: <none>
: - Default:
: Boost_num: "0" [Number of times the classifier will be boosted]
: Parsing option string:
: ... "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIER:InputLayout=1|16|16:Layout=CONV|10|3|3|1|1|1|1|RELU,BNORM,CONV|10|3|3|1|1|1|1|RELU,MAXPOOL|2|2|1|1,RESHAPE|FLAT,DENSE|100|RELU,DENSE|1|LINEAR:TrainingStrategy=LearningRate=1e-3,Momentum=0.9,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0.0,MaxEpochs=10:Architecture=CPU"
: The following options are set:
: - By User:
: V: "True" [Verbose output (short form of "VerbosityLevel" below - overrides the latter one)]
: VarTransform: "None" [List of variable transformations performed before training, e.g., "D_Background,P_Signal,G,N_AllClasses" for: "Decorrelation, PCA-transformation, Gaussianisation, Normalisation, each for the given class of events ('AllClasses' denotes all events of all classes, if no class indication is given, 'All' is assumed)"]
: H: "False" [Print method-specific help message]
: InputLayout: "1|16|16" [The Layout of the input]
: Layout: "CONV|10|3|3|1|1|1|1|RELU,BNORM,CONV|10|3|3|1|1|1|1|RELU,MAXPOOL|2|2|1|1,RESHAPE|FLAT,DENSE|100|RELU,DENSE|1|LINEAR" [Layout of the network.]
: ErrorStrategy: "CROSSENTROPY" [Loss function: Mean squared error (regression) or cross entropy (binary classification).]
: WeightInitialization: "XAVIER" [Weight initialization strategy]
: Architecture: "CPU" [Which architecture to perform the training on.]
: TrainingStrategy: "LearningRate=1e-3,Momentum=0.9,Repetitions=1,ConvergenceSteps=5,BatchSize=100,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0.0,MaxEpochs=10" [Defines the training strategies.]
: - Default:
: VerbosityLevel: "Default" [Verbosity level]
: CreateMVAPdfs: "False" [Create PDFs for classifier outputs (signal and background)]
: IgnoreNegWeightsInTraining: "False" [Events with negative weights are ignored in the training (but are included for testing and performance evaluation)]
: BatchLayout: "0|0|0" [The Layout of the batch]
: RandomSeed: "0" [Random seed used for weight initialization and batch shuffling]
: ValidationSize: "20%" [Part of the training data to use for validation. Specify as 0.2 or 20% to use a fifth of the data set as validation set. Specify as 100 to use exactly 100 events. (Default: 20%)]
: Will now use the CPU architecture with BLAS and IMT support !
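The convolutional network is booked in the same way; InputLayout=1|16|16 reshapes the 256 flat inputs into a single-channel 16x16 image before the CONV layers. The option string below is the one parsed above; the surrounding object names are again assumptions:

   factory->BookMethod(loader, TMVA::Types::kDL, "TMVA_CNN_CPU",
      "!H:V:ErrorStrategy=CROSSENTROPY:VarTransform=None:WeightInitialization=XAVIER:"
      "InputLayout=1|16|16:"
      "Layout=CONV|10|3|3|1|1|1|1|RELU,BNORM,CONV|10|3|3|1|1|1|1|RELU,MAXPOOL|2|2|1|1,"
      "RESHAPE|FLAT,DENSE|100|RELU,DENSE|1|LINEAR:"
      "TrainingStrategy=LearningRate=1e-3,Momentum=0.9,Repetitions=1,ConvergenceSteps=5,"
      "BatchSize=100,TestRepetitions=1,WeightDecay=1e-4,Regularization=None,"
      "Optimizer=ADAM,DropConfig=0.0+0.0+0.0+0.0,MaxEpochs=10:Architecture=CPU");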
Factory : Train all methods
Factory : Train method: BDT for Classification
:
BDT : #events: (reweighted) sig: 800 bkg: 800
: #events: (unweighted) sig: 800 bkg: 800
: Training 400 Decision Trees ... patience please
: Elapsed time for training with 1600 events: 1.3 sec
BDT : [dataset] : Evaluation of BDT on training sample (1600 events)
: Elapsed time for evaluation of 1600 events: 0.0138 sec
: Creating xml weight file: dataset/weights/TMVA_CNN_Classification_BDT.weights.xml
: Creating standalone class: dataset/weights/TMVA_CNN_Classification_BDT.class.C
: TMVA_CNN_ClassificationOutput.root:/dataset/Method_BDT/BDT
Factory : Training finished
:
Factory : Train method: TMVA_DNN_CPU for Classification
:
: Start of deep neural network training on CPU using MT, nthreads = 4
:
: ***** Deep Learning Network *****
DEEP NEURAL NETWORK: Depth = 8 Input = ( 1, 1, 256 ) Batch size = 100 Loss function = C
Layer 0 DENSE Layer: ( Input = 256 , Width = 100 ) Output = ( 1 , 100 , 100 ) Activation Function = Relu
Layer 1 BATCH NORM Layer: Input/Output = ( 100 , 100 , 1 ) Norm dim = 100 axis = -1
Layer 2 DENSE Layer: ( Input = 100 , Width = 100 ) Output = ( 1 , 100 , 100 ) Activation Function = Relu
Layer 3 BATCH NORM Layer: Input/Output = ( 100 , 100 , 1 ) Norm dim = 100 axis = -1
Layer 4 DENSE Layer: ( Input = 100 , Width = 100 ) Output = ( 1 , 100 , 100 ) Activation Function = Relu
Layer 5 BATCH NORM Layer: Input/Output = ( 100 , 100 , 1 ) Norm dim = 100 axis = -1
Layer 6 DENSE Layer: ( Input = 100 , Width = 100 ) Output = ( 1 , 100 , 100 ) Activation Function = Relu
Layer 7 DENSE Layer: ( Input = 100 , Width = 1 ) Output = ( 1 , 100 , 1 ) Activation Function = Identity
: Using 1280 events for training and 320 for testing
: Compute initial loss on the validation data
: Training phase 1 of 1: Optimizer ADAM (beta1=0.9,beta2=0.999,eps=1e-07) Learning rate = 0.001 regularization 0 minimum error = 21.7848
: --------------------------------------------------------------
: Epoch | Train Err. Val. Err. t(s)/epoch t(s)/Loss nEvents/s Conv. Steps
: --------------------------------------------------------------
: Start epoch iteration ...
: 1 Minimum Test error found - save the configuration
: 1 | 0.843006 0.829066 0.10266 0.010295 12991.9 0
: 2 Minimum Test error found - save the configuration
: 2 | 0.650141 0.753836 0.102389 0.0101217 13005.7 0
: 3 Minimum Test error found - save the configuration
: 3 | 0.540495 0.675284 0.102187 0.0101408 13036.9 0
: 4 Minimum Test error found - save the configuration
: 4 | 0.462877 0.659557 0.102043 0.0101383 13057 0
: 5 | 0.395879 0.662325 0.102084 0.00977045 12999.1 1
: 6 | 0.344088 0.689132 0.101771 0.00986159 13056.4 2
: 7 | 0.299179 0.66233 0.101818 0.00979074 13039.7 3
: 8 | 0.255894 0.683555 0.101916 0.00988138 13038.6 4
: 9 | 0.234279 0.724881 0.101858 0.00981647 13037.6 5
: 10 | 0.201766 0.732147 0.102077 0.00981217 13006 6
:
: Elapsed time for training with 1600 events: 1.04 sec
: Evaluate deep neural network on CPU using batches with size = 100
:
TMVA_DNN_CPU : [dataset] : Evaluation of TMVA_DNN_CPU on training sample (1600 events)
: Elapsed time for evaluation of 1600 events: 0.0512 sec
: Creating xml weight file: dataset/weights/TMVA_CNN_Classification_TMVA_DNN_CPU.weights.xml
: Creating standalone class: dataset/weights/TMVA_CNN_Classification_TMVA_DNN_CPU.class.C
Factory : Training finished
:
Factory : Train method: TMVA_CNN_CPU for Classification
:
: Start of deep neural network training on CPU using MT, nthreads = 4
:
: ***** Deep Learning Network *****
DEEP NEURAL NETWORK: Depth = 7 Input = ( 1, 16, 16 ) Batch size = 100 Loss function = C
Layer 0 CONV LAYER: ( W = 16 , H = 16 , D = 10 ) Filter ( W = 3 , H = 3 ) Output = ( 100 , 10 , 10 , 256 ) Activation Function = Relu
Layer 1 BATCH NORM Layer: Input/Output = ( 10 , 256 , 100 ) Norm dim = 10 axis = 1
Layer 2 CONV LAYER: ( W = 16 , H = 16 , D = 10 ) Filter ( W = 3 , H = 3 ) Output = ( 100 , 10 , 10 , 256 ) Activation Function = Relu
Layer 3 POOL Layer: ( W = 15 , H = 15 , D = 10 ) Filter ( W = 2 , H = 2 ) Output = ( 100 , 10 , 10 , 225 )
Layer 4 RESHAPE Layer Input = ( 10 , 15 , 15 ) Output = ( 1 , 100 , 2250 )
Layer 5 DENSE Layer: ( Input = 2250 , Width = 100 ) Output = ( 1 , 100 , 100 ) Activation Function = Relu
Layer 6 DENSE Layer: ( Input = 100 , Width = 1 ) Output = ( 1 , 100 , 1 ) Activation Function = Identity
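The 2250 inputs of the first dense layer follow directly from the shapes above: each 3x3 convolution with stride 1 and zero padding 1 keeps the 16x16 spatial size at depth 10, the 2x2 max-pool with stride 1 reduces it to 15x15, and flattening gives 10 x 15 x 15 = 2250 values per event.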
: Using 1280 events for training and 320 for testing
: Compute initial loss on the validation data
: Training phase 1 of 1: Optimizer ADAM (beta1=0.9,beta2=0.999,eps=1e-07) Learning rate = 0.001 regularization 0 minimum error = 64.5068
: --------------------------------------------------------------
: Epoch | Train Err. Val. Err. t(s)/epoch t(s)/Loss nEvents/s Conv. Steps
: --------------------------------------------------------------
: Start epoch iteration ...
: 1 Minimum Test error found - save the configuration
: 1 | 2.74457 1.17146 0.78935 0.0658618 1658.63 0
: 2 Minimum Test error found - save the configuration
: 2 | 1.00078 0.977604 0.774454 0.0651126 1691.71 0
: 3 Minimum Test error found - save the configuration
: 3 | 0.76596 0.812146 0.771421 0.0649372 1698.55 0
: 4 Minimum Test error found - save the configuration
: 4 | 0.743833 0.76986 0.777253 0.0651526 1685.16 0
: 5 Minimum Test error found - save the configuration
: 5 | 0.71901 0.726784 0.786355 0.0643919 1662.13 0
: 6 Minimum Test error found - save the configuration
: 6 | 0.684743 0.713478 0.780813 0.0648941 1676.17 0
: 7 Minimum Test error found - save the configuration
: 7 | 0.655841 0.703797 0.7859 0.0644417 1663.3 0
: 8 Minimum Test error found - save the configuration
: 8 | 0.636634 0.697292 0.792591 0.0648482 1648.93 0
: 9 Minimum Test error found - save the configuration
: 9 | 0.61928 0.680888 0.771377 0.064983 1698.77 0
: 10 | 0.608945 0.692196 0.779353 0.063853 1677.15 1
:
: Elapsed time for training with 1600 events: 7.88 sec
: Evaluate deep neural network on CPU using batches with size = 100
:
TMVA_CNN_CPU : [dataset] : Evaluation of TMVA_CNN_CPU on training sample (1600 events)
: Elapsed time for evaluation of 1600 events: 0.338 sec
: Creating xml weight file: dataset/weights/TMVA_CNN_Classification_TMVA_CNN_CPU.weights.xml
: Creating standalone class: dataset/weights/TMVA_CNN_Classification_TMVA_CNN_CPU.class.C
Factory : Training finished
:
: Ranking input variables (method specific)...
BDT : Ranking result (top variable is best ranked)
: --------------------------------------
: Rank : Variable : Variable Importance
: --------------------------------------
: 1 : vars : 1.058e-02
: 2 : vars : 9.408e-03
: 3 : vars : 9.323e-03
: 4 : vars : 9.219e-03
: 5 : vars : 9.057e-03
: 6 : vars : 8.201e-03
: 7 : vars : 8.191e-03
: 8 : vars : 8.137e-03
: 9 : vars : 7.703e-03
: 10 : vars : 7.703e-03
: 11 : vars : 7.546e-03
: 12 : vars : 7.163e-03
: 13 : vars : 7.131e-03
: 14 : vars : 7.054e-03
: 15 : vars : 7.007e-03
: 16 : vars : 7.001e-03
: 17 : vars : 6.995e-03
: 18 : vars : 6.921e-03
: 19 : vars : 6.875e-03
: 20 : vars : 6.777e-03
: 21 : vars : 6.769e-03
: 22 : vars : 6.703e-03
: 23 : vars : 6.614e-03
: 24 : vars : 6.525e-03
: 25 : vars : 6.490e-03
: 26 : vars : 6.489e-03
: 27 : vars : 6.448e-03
: 28 : vars : 6.429e-03
: 29 : vars : 6.393e-03
: 30 : vars : 6.371e-03
: 31 : vars : 6.331e-03
: 32 : vars : 6.298e-03
: 33 : vars : 6.280e-03
: 34 : vars : 6.266e-03
: 35 : vars : 6.262e-03
: 36 : vars : 6.223e-03
: 37 : vars : 6.172e-03
: 38 : vars : 6.122e-03
: 39 : vars : 6.014e-03
: 40 : vars : 5.963e-03
: 41 : vars : 5.960e-03
: 42 : vars : 5.855e-03
: 43 : vars : 5.824e-03
: 44 : vars : 5.808e-03
: 45 : vars : 5.803e-03
: 46 : vars : 5.789e-03
: 47 : vars : 5.755e-03
: 48 : vars : 5.750e-03
: 49 : vars : 5.643e-03
: 50 : vars : 5.637e-03
: 51 : vars : 5.613e-03
: 52 : vars : 5.560e-03
: 53 : vars : 5.554e-03
: 54 : vars : 5.526e-03
: 55 : vars : 5.490e-03
: 56 : vars : 5.404e-03
: 57 : vars : 5.392e-03
: 58 : vars : 5.391e-03
: 59 : vars : 5.357e-03
: 60 : vars : 5.334e-03
: 61 : vars : 5.228e-03
: 62 : vars : 5.220e-03
: 63 : vars : 5.195e-03
: 64 : vars : 5.155e-03
: 65 : vars : 5.123e-03
: 66 : vars : 5.104e-03
: 67 : vars : 5.087e-03
: 68 : vars : 5.045e-03
: 69 : vars : 5.006e-03
: 70 : vars : 4.972e-03
: 71 : vars : 4.960e-03
: 72 : vars : 4.903e-03
: 73 : vars : 4.883e-03
: 74 : vars : 4.870e-03
: 75 : vars : 4.867e-03
: 76 : vars : 4.858e-03
: 77 : vars : 4.820e-03
: 78 : vars : 4.792e-03
: 79 : vars : 4.784e-03
: 80 : vars : 4.774e-03
: 81 : vars : 4.759e-03
: 82 : vars : 4.755e-03
: 83 : vars : 4.733e-03
: 84 : vars : 4.718e-03
: 85 : vars : 4.714e-03
: 86 : vars : 4.695e-03
: 87 : vars : 4.683e-03
: 88 : vars : 4.601e-03
: 89 : vars : 4.565e-03
: 90 : vars : 4.563e-03
: 91 : vars : 4.551e-03
: 92 : vars : 4.509e-03
: 93 : vars : 4.445e-03
: 94 : vars : 4.432e-03
: 95 : vars : 4.431e-03
: 96 : vars : 4.406e-03
: 97 : vars : 4.400e-03
: 98 : vars : 4.343e-03
: 99 : vars : 4.320e-03
: 100 : vars : 4.281e-03
: 101 : vars : 4.262e-03
: 102 : vars : 4.259e-03
: 103 : vars : 4.250e-03
: 104 : vars : 4.229e-03
: 105 : vars : 4.218e-03
: 106 : vars : 4.180e-03
: 107 : vars : 4.146e-03
: 108 : vars : 4.134e-03
: 109 : vars : 4.127e-03
: 110 : vars : 4.125e-03
: 111 : vars : 4.088e-03
: 112 : vars : 4.074e-03
: 113 : vars : 4.065e-03
: 114 : vars : 4.035e-03
: 115 : vars : 4.018e-03
: 116 : vars : 3.991e-03
: 117 : vars : 3.990e-03
: 118 : vars : 3.977e-03
: 119 : vars : 3.961e-03
: 120 : vars : 3.944e-03
: 121 : vars : 3.917e-03
: 122 : vars : 3.894e-03
: 123 : vars : 3.870e-03
: 124 : vars : 3.869e-03
: 125 : vars : 3.864e-03
: 126 : vars : 3.863e-03
: 127 : vars : 3.863e-03
: 128 : vars : 3.856e-03
: 129 : vars : 3.836e-03
: 130 : vars : 3.806e-03
: 131 : vars : 3.770e-03
: 132 : vars : 3.756e-03
: 133 : vars : 3.748e-03
: 134 : vars : 3.742e-03
: 135 : vars : 3.704e-03
: 136 : vars : 3.703e-03
: 137 : vars : 3.691e-03
: 138 : vars : 3.644e-03
: 139 : vars : 3.587e-03
: 140 : vars : 3.574e-03
: 141 : vars : 3.567e-03
: 142 : vars : 3.559e-03
: 143 : vars : 3.554e-03
: 144 : vars : 3.553e-03
: 145 : vars : 3.543e-03
: 146 : vars : 3.541e-03
: 147 : vars : 3.513e-03
: 148 : vars : 3.465e-03
: 149 : vars : 3.412e-03
: 150 : vars : 3.401e-03
: 151 : vars : 3.398e-03
: 152 : vars : 3.372e-03
: 153 : vars : 3.359e-03
: 154 : vars : 3.356e-03
: 155 : vars : 3.344e-03
: 156 : vars : 3.280e-03
: 157 : vars : 3.251e-03
: 158 : vars : 3.239e-03
: 159 : vars : 3.238e-03
: 160 : vars : 3.238e-03
: 161 : vars : 3.231e-03
: 162 : vars : 3.209e-03
: 163 : vars : 3.185e-03
: 164 : vars : 3.183e-03
: 165 : vars : 3.178e-03
: 166 : vars : 3.155e-03
: 167 : vars : 3.121e-03
: 168 : vars : 3.119e-03
: 169 : vars : 3.112e-03
: 170 : vars : 3.078e-03
: 171 : vars : 3.059e-03
: 172 : vars : 3.038e-03
: 173 : vars : 3.036e-03
: 174 : vars : 3.017e-03
: 175 : vars : 3.010e-03
: 176 : vars : 2.998e-03
: 177 : vars : 2.997e-03
: 178 : vars : 2.972e-03
: 179 : vars : 2.929e-03
: 180 : vars : 2.928e-03
: 181 : vars : 2.928e-03
: 182 : vars : 2.853e-03
: 183 : vars : 2.853e-03
: 184 : vars : 2.847e-03
: 185 : vars : 2.799e-03
: 186 : vars : 2.750e-03
: 187 : vars : 2.741e-03
: 188 : vars : 2.740e-03
: 189 : vars : 2.694e-03
: 190 : vars : 2.688e-03
: 191 : vars : 2.688e-03
: 192 : vars : 2.674e-03
: 193 : vars : 2.651e-03
: 194 : vars : 2.643e-03
: 195 : vars : 2.640e-03
: 196 : vars : 2.620e-03
: 197 : vars : 2.605e-03
: 198 : vars : 2.514e-03
: 199 : vars : 2.487e-03
: 200 : vars : 2.456e-03
: 201 : vars : 2.439e-03
: 202 : vars : 2.394e-03
: 203 : vars : 2.381e-03
: 204 : vars : 2.349e-03
: 205 : vars : 2.332e-03
: 206 : vars : 2.325e-03
: 207 : vars : 2.324e-03
: 208 : vars : 2.237e-03
: 209 : vars : 2.224e-03
: 210 : vars : 2.201e-03
: 211 : vars : 2.174e-03
: 212 : vars : 2.149e-03
: 213 : vars : 2.134e-03
: 214 : vars : 2.115e-03
: 215 : vars : 2.034e-03
: 216 : vars : 2.012e-03
: 217 : vars : 1.977e-03
: 218 : vars : 1.966e-03
: 219 : vars : 1.918e-03
: 220 : vars : 1.874e-03
: 221 : vars : 1.795e-03
: 222 : vars : 1.795e-03
: 223 : vars : 1.791e-03
: 224 : vars : 1.700e-03
: 225 : vars : 1.688e-03
: 226 : vars : 1.646e-03
: 227 : vars : 1.619e-03
: 228 : vars : 1.602e-03
: 229 : vars : 1.566e-03
: 230 : vars : 1.561e-03
: 231 : vars : 1.550e-03
: 232 : vars : 1.547e-03
: 233 : vars : 1.530e-03
: 234 : vars : 1.480e-03
: 235 : vars : 1.397e-03
: 236 : vars : 1.149e-03
: 237 : vars : 8.397e-04
: 238 : vars : 7.974e-04
: 239 : vars : 0.000e+00
: 240 : vars : 0.000e+00
: 241 : vars : 0.000e+00
: 242 : vars : 0.000e+00
: 243 : vars : 0.000e+00
: 244 : vars : 0.000e+00
: 245 : vars : 0.000e+00
: 246 : vars : 0.000e+00
: 247 : vars : 0.000e+00
: 248 : vars : 0.000e+00
: 249 : vars : 0.000e+00
: 250 : vars : 0.000e+00
: 251 : vars : 0.000e+00
: 252 : vars : 0.000e+00
: 253 : vars : 0.000e+00
: 254 : vars : 0.000e+00
: 255 : vars : 0.000e+00
: 256 : vars : 0.000e+00
: --------------------------------------
: No variable ranking supplied by classifier: TMVA_DNN_CPU
: No variable ranking supplied by classifier: TMVA_CNN_CPU
TH1.Print Name = TrainingHistory_TMVA_DNN_CPU_trainingError, Entries= 0, Total sum= 4.2276
TH1.Print Name = TrainingHistory_TMVA_DNN_CPU_valError, Entries= 0, Total sum= 7.07211
TH1.Print Name = TrainingHistory_TMVA_CNN_CPU_trainingError, Entries= 0, Total sum= 9.1796
TH1.Print Name = TrainingHistory_TMVA_CNN_CPU_valError, Entries= 0, Total sum= 7.94551
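The training block above and the test and evaluation blocks that follow correspond to the three standard Factory calls; in this sketch the object name factory is an assumption:

   factory->TrainAllMethods();      // training, ranking and weight-file output above
   factory->TestAllMethods();       // "Test all methods" section below
   factory->EvaluateAllMethods();   // "Evaluate all methods" section below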
Factory : === Destroy and recreate all methods via weight files for testing ===
:
: Reading weight file: dataset/weights/TMVA_CNN_Classification_BDT.weights.xml
: Reading weight file: dataset/weights/TMVA_CNN_Classification_TMVA_DNN_CPU.weights.xml
: Reading weight file: dataset/weights/TMVA_CNN_Classification_TMVA_CNN_CPU.weights.xml
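Once the weight files exist they can also be applied outside the Factory. Below is a minimal, hypothetical sketch using the classic TMVA::Reader interface; the per-element variable names follow the vars[i] expansion used during training, while the buffer handling and event loop are assumptions:

   TMVA::Reader reader("!Color:!Silent");
   std::vector<float> vars(256);                          // input buffer for one event
   for (int i = 0; i < 256; ++i)
      reader.AddVariable(TString::Format("vars[%d]", i), &vars[i]);
   reader.BookMVA("BDT", "dataset/weights/TMVA_CNN_Classification_BDT.weights.xml");
   // ... fill vars from an event, then:
   double mvaValue = reader.EvaluateMVA("BDT");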
Factory : Test all methods
Factory : Test method: BDT for Classification performance
:
BDT : [dataset] : Evaluation of BDT on testing sample (400 events)
: Elapsed time for evaluation of 400 events: 0.00356 sec
Factory : Test method: TMVA_DNN_CPU for Classification performance
:
: Evaluate deep neural network on CPU using batches with size = 400
:
TMVA_DNN_CPU : [dataset] : Evaluation of TMVA_DNN_CPU on testing sample (400 events)
: Elapsed time for evaluation of 400 events: 0.0125 sec
Factory : Test method: TMVA_CNN_CPU for Classification performance
:
: Evaluate deep neural network on CPU using batches with size = 400
:
TMVA_CNN_CPU : [dataset] : Evaluation of TMVA_CNN_CPU on testing sample (400 events)
: Elapsed time for evaluation of 400 events: 0.0863 sec
Factory : Evaluate all methods
Factory : Evaluate classifier: BDT
:
BDT : [dataset] : Loop over test events and fill histograms with classifier response...
:
: Dataset[dataset] : variable plots are not produced! The number of variables is 256, which is larger than 200
Factory : Evaluate classifier: TMVA_DNN_CPU
:
TMVA_DNN_CPU : [dataset] : Loop over test events and fill histograms with classifier response...
:
: Evaluate deep neural network on CPU using batches with size = 1000
:
: Dataset[dataset] : variable plots are not produced! The number of variables is 256, which is larger than 200
Factory : Evaluate classifier: TMVA_CNN_CPU
:
TMVA_CNN_CPU : [dataset] : Loop over test events and fill histograms with classifier response...
:
: Evaluate deep neural network on CPU using batches with size = 1000
:
: Dataset[dataset] : variable plots are not produced! The number of variables is 256, which is larger than 200
:
: Evaluation results ranked by best signal efficiency and purity (area)
: -------------------------------------------------------------------------------------------------------------------
: DataSet MVA
: Name: Method: ROC-integ
: dataset BDT : 0.773
: dataset TMVA_DNN_CPU : 0.650
: dataset TMVA_CNN_CPU : 0.626
: -------------------------------------------------------------------------------------------------------------------
:
: Testing efficiency compared to training efficiency (overtraining check)
: -------------------------------------------------------------------------------------------------------------------
: DataSet MVA Signal efficiency: from test sample (from training sample)
: Name: Method: @B=0.01 @B=0.10 @B=0.30
: -------------------------------------------------------------------------------------------------------------------
: dataset BDT : 0.095 (0.345) 0.465 (0.695) 0.670 (0.875)
: dataset TMVA_DNN_CPU : 0.040 (0.162) 0.236 (0.427) 0.500 (0.718)
: dataset TMVA_CNN_CPU : 0.035 (0.030) 0.150 (0.268) 0.515 (0.563)
: -------------------------------------------------------------------------------------------------------------------
:
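The ROC integrals printed above can also be retrieved programmatically after EvaluateAllMethods(); a sketch, again assuming the same factory and loader objects:

   double rocBDT = factory->GetROCIntegral(loader, "BDT");            // 0.773 in this run
   double rocDNN = factory->GetROCIntegral(loader, "TMVA_DNN_CPU");   // 0.650 in this run
   double rocCNN = factory->GetROCIntegral(loader, "TMVA_CNN_CPU");   // 0.626 in this run
   auto *rocCanvas = factory->GetROCCurve(loader);                    // canvas with all ROC curves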
Dataset:dataset : Created tree 'TestTree' with 400 events
:
Dataset:dataset : Created tree 'TrainTree' with 1600 events
:
Factory : Thank you for using TMVA!
: For citation information, please visit: http://tmva.sf.net/citeTMVA.html
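After the run, the output file referenced in the log can be inspected interactively with the standard TMVA GUI; a short sketch, assuming the macro runs in a graphical ROOT session:

   if (!gROOT->IsBatch())
      TMVA::TMVAGui("TMVA_CNN_ClassificationOutput.root");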