Why does a 50-50 train/test split work best for this neural network on a dataset of 178 observations?

Dataset source: https://archive.ics.uci.edu/ml/datasets/wine
Full source code (requires NumPy and Python 3): https://github.com/nave01314/NNClassifier
From what I have read, a split of roughly 80% training / 20% validation data is close to optimal. As the size of the test dataset increases, the variance of the validation results decreases, at the cost of less effective training (lower validation accuracy).

Hence I am confused by the following results, which show the best accuracy and the lowest variance at TEST_SIZE=0.5 (each trial was run several times, and one run was selected as representative for each test size).
TEST_SIZE=0.1: this should train effectively thanks to the large training set, but the variance is high (five trials ranged from 16% to 50%). A sketch for estimating this spread over repeated splits follows the logs below.
Epoch 0, Loss 0.021541, Targets [ 1. 0. 0.], Outputs [ 0.979 0.011 0.01 ], Inputs [ 0.086 0.052 0.08 0.062 0.101 0.093 0.107 0.058 0.108 0.08 0.084 0.115 0.104]
Epoch 100, Loss 0.001154, Targets [ 0. 0. 1.], Outputs [ 0. 0.001 0.999], Inputs [ 0.083 0.099 0.084 0.079 0.085 0.061 0.02 0.103 0.038 0.083 0.078 0.053 0.067]
Epoch 200, Loss 0.000015, Targets [ 0. 0. 1.], Outputs [ 0. 0. 1.], Inputs [ 0.076 0.092 0.087 0.107 0.077 0.063 0.02 0.13 0.054 0.106 0.054 0.051 0.086]
Target Class 0, Predicted Class 0
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 0
Target Class 1, Predicted Class 0
Target Class 1, Predicted Class 0
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 0
Target Class 1, Predicted Class 0
Target Class 1, Predicted Class 0
Target Class 1, Predicted Class 0
Target Class 0, Predicted Class 0
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 0
Target Class 0, Predicted Class 0
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 0
Target Class 0, Predicted Class 0
Target Class 2, Predicted Class 2
50.0% overall accuracy for validation set.
TEST_SIZE=0.5
Epoch 0, Loss 0.547218, Targets [ 1. 0. 0.], Outputs [ 0.579 0.087 0.334], Inputs [ 0.106 0.08 0.142 0.133 0.129 0.115 0.127 0.13 0.12 0.068 0.123 0.126 0.11 ]
Epoch 100, Loss 0.002716, Targets [ 0. 1. 0.], Outputs [ 0.003 0.997 0. ], Inputs [ 0.09 0.059 0.097 0.114 0.088 0.108 0.102 0.144 0.125 0.036 0.186 0.113 0.054]
Epoch 200, Loss 0.002874, Targets [ 0. 1. 0.], Outputs [ 0.003 0.997 0. ], Inputs [ 0.102 0.067 0.088 0.109 0.088 0.097 0.091 0.088 0.092 0.056 0.113 0.141 0.089]
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 0, Predicted Class 0
Target Class 0, Predicted Class 0
Target Class 0, Predicted Class 0
Target Class 0, Predicted Class 0
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 0
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 0
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 0
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 0
Target Class 1, Predicted Class 1
97.75280898876404% overall accuracy for validation set.
TEST_SIZE=0.9
Epoch 0, Loss 2.448474, Targets [ 0. 0. 1.], Outputs [ 0.707 0.206 0.086], Inputs [ 0.229 0.421 0.266 0.267 0.223 0.15 0.057 0.33 0.134 0.148 0.191 0.12 0.24 ]
Epoch 100, Loss 0.017506, Targets [ 1. 0. 0.], Outputs [ 0.983 0.017 0. ], Inputs [ 0.252 0.162 0.274 0.255 0.241 0.275 0.314 0.175 0.278 0.135 0.286 0.36 0.281]
Epoch 200, Loss 0.001819, Targets [ 0. 0. 1.], Outputs [ 0.002 0. 0.998], Inputs [ 0.245 0.348 0.248 0.274 0.284 0.153 0.167 0.212 0.191 0.362 0.145 0.125 0.183]
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 2, Predicted Class 2
Target Class 0, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 0, Predicted Class 1
Target Class 1, Predicted Class 1
Target Class 2, Predicted Class 2
64.59627329192547% overall accuracy for validation set.
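To put a number on the spread that these single runs only hint at, the random split can be repeated several times per test size and the mean and standard deviation of the validation accuracy reported. The sketch below is only an illustration and not the code from the repository: it loads scikit-learn's bundled copy of the Wine data and uses an MLPClassifier as a stand-in for my Classifier, so the absolute accuracies will differ.

import numpy as np
from sklearn.datasets import load_wine
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True)   # 178 samples, 13 features, 3 classes

for test_size in (0.1, 0.5, 0.9):
    scores = []
    for trial in range(10):                          # repeat the random split
        X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=test_size)
        scaler = StandardScaler().fit(X_tr)          # scale features on the training split only
        clf = MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000)
        clf.fit(scaler.transform(X_tr), y_tr)
        scores.append(clf.score(scaler.transform(X_val), y_val))
    print('test_size=%.1f  mean accuracy=%.3f  std=%.3f'
          % (test_size, np.mean(scores), np.std(scores)))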
The key pieces of the code are below.

Imports and splits the dataset:
import numpy as np
from sklearn.preprocessing import normalize
from sklearn.model_selection import train_test_split

def readInput(filename, delimiter, inputlen, outputlen, categories, test_size):
    def onehot(num, categories):
        # class labels in the file are 1-based; map them to one-hot vectors
        arr = np.zeros(categories)
        arr[int(num[0]) - 1] = 1
        return arr

    with open(filename) as file:
        inputs = list()
        outputs = list()
        for line in file:
            assert len(line.split(delimiter)) == inputlen + outputlen
            # first column(s): class label -> one-hot target; remaining columns: features
            outputs.append(onehot(list(map(lambda x: float(x), line.split(delimiter)))[:outputlen], categories))
            inputs.append(list(map(lambda x: float(x), line.split(delimiter)))[outputlen:outputlen + inputlen])
    inputs = np.array(inputs)
    outputs = np.array(outputs)
    inputs_train, inputs_val, outputs_train, outputs_val = train_test_split(inputs, outputs, test_size=test_size)
    assert len(inputs_train) > 0
    assert len(inputs_val) > 0
    # each feature column is L2-normalized, separately for the training and the validation set
    return normalize(inputs_train, axis=0), outputs_train, normalize(inputs_val, axis=0), outputs_val
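As a side note on the preprocessing step above, sklearn.preprocessing.normalize with axis=0 rescales each feature column to unit L2 norm. A minimal, self-contained sketch (not part of the repository code) of what that does:

import numpy as np
from sklearn.preprocessing import normalize

X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [2.0, 20.0]])
# axis=0: every column is divided by its own L2 norm (here 3 and 30),
# so each column of the result has unit length
print(normalize(X, axis=0))   # approximately [[0.333 0.333], [0.667 0.667], [0.667 0.667]]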
Some parameters:

import numpy as np
import helper

FILE_NAME = 'data2.csv'
DATA_DELIM = ','
ACTIVATION_FUNC = 'tanh'
TESTING_FREQ = 100
EPOCHS = 200
LEARNING_RATE = 0.2
TEST_SIZE = 0.9
INPUT_SIZE = 13
HIDDEN_LAYERS = [5]
OUTPUT_SIZE = 3

Main program (the sample size is very small):
# Methods of the Classifier class (class definition omitted here; see the repository)
def step(self, x, targets, lrate):
    self.forward_propagate(x)
    self.backpropagate_errors(targets)
    self.adjust_weights(x, lrate)

def test(self, epoch, x, target):
    predictions = self.forward_propagate(x)
    print('Epoch %5i, Loss %2f, Targets %s, Outputs %s, Inputs %s'
          % (epoch, helper.crossentropy(target, predictions), target, predictions, x))

def train(self, inputs, targets, epochs, testfreq, lrate):
    xindices = [i for i in range(len(inputs))]
    for epoch in range(epochs):
        np.random.shuffle(xindices)           # reshuffle the training order every epoch
        if epoch % testfreq == 0:
            self.test(epoch, inputs[xindices[0]], targets[xindices[0]])
        for i in xindices:
            self.step(inputs[i], targets[i], lrate)
    self.test(epochs, inputs[xindices[0]], targets[xindices[0]])

def validate(self, inputs, targets):
    correct = 0
    targets = np.argmax(targets, axis=1)      # one-hot targets -> class indices
    for i in range(len(inputs)):
        prediction = np.argmax(self.forward_propagate(inputs[i]))
        if prediction == targets[i]: correct += 1
        print('Target Class %s, Predicted Class %s' % (targets[i], prediction))
    print('%s%% overall accuracy for validation set.' % (correct / len(inputs) * 100))

np.random.seed()
inputs_train, outputs_train, inputs_val, outputs_val = helper.readInput(
    FILE_NAME, DATA_DELIM, inputlen=INPUT_SIZE, outputlen=1,
    categories=OUTPUT_SIZE, test_size=TEST_SIZE)
nn = Classifier([INPUT_SIZE] + HIDDEN_LAYERS + [OUTPUT_SIZE], ACTIVATION_FUNC)
nn.train(inputs_train, outputs_train, EPOCHS, TESTING_FREQ, LEARNING_RATE)
nn.validate(inputs_val, outputs_val)
An 80/20 split is not optimal in every case; it depends on your data. It would also help to re-test your hypothesis a few more times, but this time shuffle the dataset. – ninesalt
Unfortunately, questions like this do not have an obvious answer, especially without access to the data. –
Coldspeed, I have provided the dataset (see the edit). Swailem95, the dataset is shuffled every epoch and (I believe) before splitting (see http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html). –
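(For reference on the shuffling point above, a minimal sketch, not taken from the repository: train_test_split shuffles the data before splitting by default, and the shuffle/random_state parameters control this.)

from sklearn.model_selection import train_test_split

data = list(range(10))
# shuffle=True is the default, so each call gives a different random split
print(train_test_split(data, test_size=0.5))
# fixing random_state makes the shuffled split reproducible
print(train_test_split(data, test_size=0.5, random_state=0))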